Observable Set context $V$…Hilbert space definiendum $\mathrm{Observable}(V)\equiv\mathrm{SelfAdjoint}(V)\cap\mathrm{End}(V)$ Discussion Observables are the linear self-adjoint operators. $\langle\psi|A\ \phi\rangle\in \mathbb C$ is called the transition amplitude. $\frac{|\langle\psi|A\ \phi\rangle|^2}{\Vert\psi\Vert^2\Vert\phi\Vert^2}\ge 0$ is called the transition probability. $\langle \psi | A\ \psi \rangle\in \mathbb R$ is called the expectation value. $\frac{\langle \psi | A\ \psi \rangle}{\Vert\psi\Vert^2}\in \mathbb R$ is called the mean value.
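As a quick numeric illustration (my own sketch, not part of the definition above): for a self-adjoint $A$ the expectation value is real and the transition probability is non-negative, which a small NumPy check confirms on random vectors.

```python
# Sanity check: symmetrize a random complex matrix into a self-adjoint A,
# then verify <psi|A psi> is real and the transition probability >= 0.
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2                     # self-adjoint (Hermitian) observable
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

amplitude = np.vdot(psi, A @ phi)            # <psi|A phi>, complex in general
expectation = np.vdot(psi, A @ psi)          # <psi|A psi>, must be real
prob = abs(amplitude) ** 2 / (np.vdot(psi, psi).real * np.vdot(phi, phi).real)

assert abs(expectation.imag) < 1e-12
assert prob >= 0
```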
Hello, I've never ventured into chat before but cfr suggested that I ask in here about a better name for the quiz package that I am getting ready to submit to ctan (tex.stackexchange.com/questions/393309/…). Is something like latex2quiz too audacious? Also, is anyone able to answer my questions about submitting to ctan, in particular about the format of the zip file and putting a configuration file in $TEXMFLOCAL/scripts/mathquiz/mathquizrc Thanks. I'll email first but it sounds like a flat file with a TDS included is the right approach. (There are about 10 files for the package proper and the rest are for the documentation -- all of the images in the manual are auto-generated from "example" source files. The zip file is also auto-generated so there's no packaging overhead...) @Bubaya I think luatex has a command to force "cramped style", which might solve the problem. Alternatively, you can lower the exponent a bit with f^{\raisebox{-1pt}{$\scriptstyle(m)$}} (modify the -1pt if need be). @Bubaya (gotta go now, no time for follow-ups on this one …) @egreg @DavidCarlisle I already tried to avoid ascenders. Consider this MWE: \documentclass[10pt]{scrartcl}\usepackage{lmodern}\usepackage{amsfonts}\begin{document}\noindent If all indices are even, then all $\gamma_{i,i\pm1}=1$. In this case the $\partial$-elementary symmetric polynomials specialise to those from at $\gamma_{i,i\pm1}=1$, which we recognise as the ordinary elementary symmetric polynomials $\varepsilon^{(n)}_m$. The induction formula from indeed gives \end{document} @PauloCereda -- okay. poke away. (by the way, do you know anything about glossaries? i'm having trouble forcing a "glossary" that is really an index, and should have been entered that way, into the required series style.) @JosephWright I'd forgotten all about it but every couple of months it sends me an email saying I'm missing out.
Oddly enough facebook and LinkedIn do the same, as did ResearchGate before I spam filtered RG :-) @DavidCarlisle Regarding github.com/ho-tex/hyperref/issues/37, do you think that \textNFSSnoboundary would be okay as a name? I don't want to use the suggested \textPUnoboundary as there is a similar definition in pdfx/l8uenc.def. And textnoboundary isn't imho good either, as it is more or less only an internal definition and not meant for users. @UlrikeFischer I think it should be OK to use @, I just looked at puenc.def and for example \DeclareTextCompositeCommand{\b}{PU}{\@empty}{\textmacronbelow}% so @ needs to be safe @UlrikeFischer that said I'm not sure it needs to be an encoding specific command, if it is only used as \let\noboundary\zzznoboundary when you know the PU encoding is going to be in force, it could just be \def\zzznoboundary{..} couldn't it? @DavidCarlisle But puarenc.def is actually only an extension of puenc.def, so it is quite possible to do \usepackage[unicode]{hyperref}\input{puarenc.def}. And while I used a lot of @ in the chess encodings, since I saw you do \input{tuenc.def} in an example I'm not sure if it was a good idea ... @JosephWright it seems to be the day for merge commits in pull requests. Does github's "squash and merge" make it all into a single commit anyway so the multiple commits in the PR don't matter, or should I be doing the cherry picking stuff (not that the git history is so important here)? github.com/ho-tex/hyperref/pull/45 (@UlrikeFischer) @JosephWright I really think I should drop all the generation of README and ChangeLog in html and pdf versions. It failed there as the xslt is version 1 and I've just upgraded to a version 3 engine, and it's dropped 1.0 compatibility :-)
Suppose we have $M$ a monoid. Define $E(M) = \{\alpha : M\rightarrow M : \alpha(xy)=\alpha(x) \cdot y \ \ \forall x,y \in M \}$. If $a \in M$, define $\alpha_{a}: M \rightarrow M$ by \begin{align*} \alpha_{a}(x)=ax \quad \forall x \in M \end{align*} Question is to prove that the function $\theta: M \rightarrow E(M)$ defined by $$\theta(a)=\alpha_{a} \quad \forall a \in M$$ is injective. My attempt: Suppose for some $a, b \in M$ \begin{align*} \theta(a)=\theta(b) \Rightarrow \alpha_{a}=\alpha_{b} & \Rightarrow \forall x \in M, ax=bx \end{align*} I am now stuck on this part. In a group, I would take the inverse of $x$; however, it is not necessarily the case that every element has an inverse. I guess the key question that I am asking is whether in every row/column of a Cayley table of a monoid, each element occurs exactly once?
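Regarding the final question, a quick computational sketch (my own example, not from the post) with the two-element monoid $(\{0,1\},\cdot)$ shows that Cayley-table rows of a monoid need not be permutations:

```python
# Counterexample to "each element occurs exactly once per row/column" in a
# monoid's Cayley table: ({0, 1}, *) with ordinary multiplication is a monoid
# with identity 1, but the row of 0 is (0, 0).
M = [0, 1]
op = lambda a, b: a * b

# identity element exists: 1
assert all(op(1, x) == x and op(x, 1) == x for x in M)
# associativity holds
assert all(op(op(a, b), c) == op(a, op(b, c)) for a in M for b in M for c in M)

row_of_zero = [op(0, x) for x in M]
assert row_of_zero == [0, 0]   # 0 occurs twice: rows need not be permutations
```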
Commonly encountered signals 1. Real Exponential Signals These signals are given by
$$x(t)=Be^{at}.\qquad(1)$$
When $a<0$: decaying exponential. When $a>0$: growing exponential. 2. Real Discrete-time Exponential Signal The continuous-time exponential signal is given by $x(t)=Be^{at}$. So, assuming the sampling time is $T_s$, its discrete-time version is $x[n]=Be^{aT_sn}$. Let $e^{aT_s}=r$. So,
$$x[n]=Br^{n}.\qquad(2)$$
3. Continuous-time Sinusoidal Signals These signals are given by
$$x(t)=A\cos(\omega t+\theta),\qquad(3)$$
where A: amplitude, $\theta$: phase shift, $\omega$: angular frequency (rad/s), with $\omega=2\pi f$ and f: frequency in Hz. Now $x(t+T)=A\cos(\omega t+\omega T+\theta)$. For this to be the same as $x(t)$, $\omega T=2\pi k$. For the fundamental period, $k=1$ $\Rightarrow T=\frac{2\pi}{\omega}$. So $x(t+T)=x(t)$, and therefore $T$ is the time period. 4. Discrete-time Sinusoidal Signal The continuous-time sinusoidal signal is given by $x(t)=A\cos(\omega t+\theta)$. So, assuming the sampling time is $T_s$, its discrete-time version is $x[n]=A\cos(\omega n T_s+\theta)$. Let $\Omega=\omega T_s$. So,
$$x[n]=A\cos(\Omega n+\theta).\qquad(4)$$
Now $x[n+N]=A\cos(\Omega n+\Omega N+\theta)$. For this to be the same as $x[n]$, we need $\Omega N=2\pi m$, with $m$ and $N$ integers, i.e. $\Omega=2\pi\frac{m}{N}$. Then $x[n+N]=x[n]$. Remark: $x(t)=A\cos(6t)$ is periodic but $x[n]=A\cos(6n)$ is not periodic, since $\frac{6}{2\pi}$ is irrational. 5. Complex Exponential Signal These signals are given by
$$x(t)=Ae^{st},\qquad(5)$$
where $A$ and $s$ are complex, $s=\sigma+j\omega$ and $A=|A|e^{j\theta}$. Substituting these, $x(t)=|A|\,e^{(\sigma+j\omega)t}\,e^{j\theta}$. For $A=1$, $x(t)=e^{st}=e^{(\sigma+j\omega)t}=e^{\sigma t}e^{j\omega t}=e^{\sigma t}\cos\omega t+je^{\sigma t}\sin\omega t$. In the above equation, $e^{\sigma t}\cos\omega t$ is the real part and $e^{\sigma t}\sin\omega t$ is the imaginary part. 6. Unit Step Function The continuous-time unit step function is given by
$$u(t)=\begin{cases}1,&t\ge0\\0,&t<0\end{cases}\qquad(10)$$
The discrete-time unit step function is given by
$$u[n]=\begin{cases}1,&n\ge0\\0,&n<0\end{cases}\qquad(11)$$
7.
Rectangular Function The rectangular function is given by
$$\mathrm{rect}(t)=\begin{cases}1,&|t|\le\frac12\\0,&|t|>\frac12\end{cases}\qquad(12)$$
8. Ramp Function The continuous-time ramp function is given by
$$\mathrm{ramp}(t)=\begin{cases}t,&t\ge0\\0,&t<0\end{cases}\qquad(13)$$
The discrete-time ramp function is given by
$$\mathrm{ramp}[n]=\begin{cases}n,&n\ge0\\0,&n<0\end{cases}\qquad(14)$$
* Relation between Unit Step Function and Rectangular Function The rectangular function can be expressed in terms of unit step functions as shown below: $\mathrm{rect}(t)=u(t+\frac{1}{2})-u(t-\frac{1}{2})$. In the same way, in discrete time, $\mathrm{rect}[n]=u[n+N]-u[n-(N+1)]$. * Relation between Ramp Function and Unit Step Function The ramp function can be expressed in terms of the unit step function as shown below: $\mathrm{ramp}(t)=\int_{-\infty}^{t} u(\tau)\, d\tau$. When $t<0$, $\int_{-\infty}^{t} u(\tau)\, d\tau= 0$. So, for $t\ge0$, $\mathrm{ramp}(t)= \int_{-\infty}^{t} u(\tau)\, d\tau=0+\int_{0}^{t} u(\tau)\, d\tau=\int_{0}^{t} 1\, d\tau=t$. In the same way, in discrete time,
$$\mathrm{ramp}[n]=\sum_{m=-\infty}^{n-1}u[m].\qquad(16)$$
9. Discrete-time Impulse or Delta Function The unit impulse function is also known as the Kronecker delta function:
$$\delta[n]=\begin{cases}1,&n=0\\0,&n\ne0\end{cases}\qquad(17)$$
So, $x[n]\,\delta[n]=x[0]\,\delta[n]$. Therefore,
$$\sum_{n=-\infty}^{\infty}x[n]\,\delta[n]=x[0].\qquad(19)$$
This is an important property, called the shifting (or sifting) property, used in many applications. So generalizing this we get
$$x[n]\,\delta[n-n_0]=x[n_0]\,\delta[n-n_0]\qquad(20)$$
$$\sum_{n=-\infty}^{\infty}x[n]\,\delta[n-n_0]=x[n_0].\qquad(21)$$
10. Continuous Time Impulse Function or Dirac Delta Function: A pointwise definition like (17) does not work in continuous time. Following Equation 1.45, the same thing can be written in continuous form as $\int_{-\infty}^{\infty} x(t)\,\delta(t)\,dt=x(0)$; the integrand is non-zero only for one value of $t$. So a formal definition can be $\delta(t)=\lim_{\Delta\to 0} w_\Delta(t)$, where $w_\Delta(t)=\frac{1}{\Delta}$ for $|t|\le\frac{\Delta}{2}$ and $0$ otherwise. Now,
$$\lim_{\Delta\to 0} \int_{-\infty}^{\infty} x(t)\,w_\Delta(t)\,dt= \lim_{\Delta\to 0} \int_{-\Delta/2}^{\Delta/2} x(t)\,\frac{1}{\Delta}\, dt = x(0)\cdot\lim_{\Delta\to 0} \int_{-\Delta/2}^{\Delta/2} \frac{1}{\Delta}\, dt=x(0),$$
since the last limit equals $1$. Taking $\delta(t)$ as $\lim_{\Delta\to 0} w_\Delta(t)$, then $\int_{-\infty}^{\infty} x(t)\,\delta(t)\,dt=x(0)$. Therefore, the two takeaways are:
$$\delta(t)=0\ \text{for}\ t\ne0,\qquad\int_{-\infty}^{\infty}\delta(t)\,dt=1.\qquad(22)$$
Similarly, in continuous time the shifting property can also be applied: $x(t)\,\delta(t-t_0)=x(t_0)\,\delta(t-t_0)$ and
$$\int_{-\infty}^{\infty}x(t)\,\delta(t-t_0)\,dt=x(t_0).\qquad(24)$$
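The discrete-time periodicity criterion $\Omega = 2\pi m/N$ from item 4 can be checked numerically. A small sketch (my own illustration; the tolerance thresholds are heuristic):

```python
# x[n] = cos(Ω n) is periodic iff Ω = 2π m / N for integers m, N.
import math

# Rational case: Ω = 2π·3/8 has period N = 8.
omega = 2 * math.pi * 3 / 8
assert all(abs(math.cos(omega * (n + 8)) - math.cos(omega * n)) < 1e-9
           for n in range(100))

# Irrational case: Ω = 6 (so m/N would have to equal 3/π) is never periodic.
def max_dev(N, omega=6.0, samples=100):
    """Largest |x[n+N] - x[n]| over a window of samples."""
    return max(abs(math.cos(omega * (n + N)) - math.cos(omega * n))
               for n in range(samples))

# no candidate period below 1000 comes close to repeating the signal
assert min(max_dev(N) for N in range(1, 1000)) > 1e-6
```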
Is there an exact form of $$f(s)=\sum_{n=0}^\infty {\frac{(-1)^n}{(2n+1)^s}}=1-\frac{1}{3^s}+\frac{1}{5^s}-\frac{1}{7^s}+\dots$$ when $s$ is odd? Discussion I have been exploring infinite series and will be spending my evening looking for patterns in this particular class. I invite the interested reader to join me and the not-so-interested to just move along. I will be updating this question with relevant facts as the evening unfolds. There must (there must!) be some closed form in terms of $\pi$ and $s$ when $s$ is odd, and there will definitely be something we can say about how this relates to the generalized zeta function. $f(1)=\frac{\pi}{4}$ $f(2)=$ Catalan's constant. [I will leave a remark about this below.] $f(3)= \frac{1}{64} (\zeta(3, 1/4) - \zeta(3, 3/4))=\frac{\pi^3}{32}$ $f(4)= \frac{1}{256} (\zeta(4, 1/4) - \zeta(4, 3/4))$ $f(5)= \frac{1}{1024}(\zeta(5, 1/4) - \zeta(5, 3/4)) =\frac{5\pi^5}{1536}$ $f(6)= \frac{1}{4096}(\zeta(6, 1/4) - \zeta(6, 3/4))$ $f(7)=\frac{1}{16384}(\zeta(7, 1/4) - \zeta(7, 3/4))= \frac{61 \pi^7}{184320}$ I thought about posting in Meta asking about this type of question. It's a "call to adventure" question: Come look at this with me if you so please. If you're not into it... downvote the question/let me know in the comments/move on to some other question that you do enjoy. Update 1: It looks like $$f(s)= \frac{1}{2^{2s}} \Bigg(\zeta(s, \frac{1}{4})-\zeta(s, \frac{3}{4}) \Bigg)$$ Update 2: A remark on Catalan's constant and on $s$ even in general. The wiki page claims it is unknown whether Catalan's constant is irrational, let alone transcendental. Come on guys? What do we pay you for? Let me just state for the conjectural record that $\sum_{n=1}^\infty\frac{a_n}{n^s}$ for a periodic sequence of integers $a_n$ just must be transcendental (it must!). I am very confident this is the case when $a_n$ has period a prime $p$ and $s=1$. It's surprising to me that I would need these conditions.
Note that for $f(2)$ the numerators of the series would be $1,0,-1,0,\dots$, which is not a prime period, and also $s \neq 1$, so we cannot use any of those tools to make statements about Catalan's constant. But also... one cannot deny that the conjecture really isn't too bold. Most numbers should be transcendental, and the periodic numerators of these series must be a push in the transcendental direction.
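A quick numeric check of the closed forms listed above (my own sketch; plain partial sums of the alternating series, whose truncation error is below the first omitted term):

```python
# f(s) = sum_{n>=0} (-1)^n / (2n+1)^s, compared against the conjectured
# closed forms f(3) = π³/32, f(5) = 5π⁵/1536, f(7) = 61π⁷/184320.
import math

def f(s, terms=100_000):
    """Partial sum of the alternating series; error < 1/(2*terms+1)^s."""
    return sum((-1) ** n / (2 * n + 1) ** s for n in range(terms))

assert abs(f(3) - math.pi ** 3 / 32) < 1e-12
assert abs(f(5) - 5 * math.pi ** 5 / 1536) < 1e-12
assert abs(f(7) - 61 * math.pi ** 7 / 184320) < 1e-12
```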
It looks like you need to use other NDSolve options to control the spatial grid size to help NDSolve a little. Trying things, I found that by increasing the number of grid points, the effect you showed goes away. The grid spacing needs to be much smaller when the diffusion coefficient is very small, since in the limit $k \to 0$ you are effectively no longer solving $\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}$ but $\frac{\partial u(x,t)}{\partial t}=0$. Using NDSolve with "MethodOfLines" options and increasing "MaxPoints" helped get rid of this problem. There might be other and better options to try. The function below takes the number of spatial grid points you want to use. The smaller $k$ is, the larger the number of grid points needs to be (i.e. the smaller the grid spacing) to get rid of that kink near the left boundary you had.

Manipulate[
 fun[k, lim, grid],
 {{k, 1, "k"}, 0.000001, 1, 0.000001, Appearance -> "Labeled"},
 {{grid, 39, "grid points"}, 39, 999, 2, Appearance -> "Labeled"},
 {{lim, 1, "plot x limit"}, 0.01, 1, 0.01, Appearance -> "Labeled"},
 SynchronousUpdating -> False,
 ContinuousAction -> False,
 SynchronousInitialization -> True,
 Initialization :> {
   fun[k_, lim_, nPoints_] := Module[{u, t, x, sol, ic, bc},
     bc = {u[t, 0] == t, u[t, 1] == 0};
     ic = u[0, x] == 0;
     sol = First@NDSolve[
        Flatten[{D[u[t, x], t] == k D[u[t, x], x, x], ic, bc}],
        {u[t, x]}, {t, 0, 1}, {x, 0, lim},
        Method -> {"MethodOfLines",
          "SpatialDiscretization" -> {"TensorProductGrid",
            "MaxPoints" -> nPoints, "MinPoints" -> nPoints},
          Method -> "Adams"},
        MaxSteps -> Infinity];
     Plot[(u[t, x] /. sol) /. t -> 1, {x, 0, lim},
      PlotRange -> All, ImageSize -> 300, Frame -> True,
      ImageMargins -> {{30, 10}, {20, 20}}]]
   }
 ]

which gives
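For comparison, here is a minimal finite-difference sketch of the same problem in Python (my own rough analogue, not a translation of the Mathematica above): an explicit FTCS scheme with the time step chosen inside the stability bound $\Delta t \le \Delta x^2/(2k)$, so grid resolution and stability are handled by hand.

```python
# u_t = k u_xx on [0, 1], u(t, 0) = t, u(t, 1) = 0, u(0, x) = 0.
import numpy as np

def solve_heat(k, n_points, t_end=1.0):
    x = np.linspace(0.0, 1.0, n_points)
    dx = x[1] - x[0]
    dt = 0.4 * dx * dx / k          # inside the explicit stability limit
    steps = int(np.ceil(t_end / dt))
    dt = t_end / steps
    u = np.zeros(n_points)
    t = 0.0
    for _ in range(steps):
        # FTCS update of the interior points
        u[1:-1] += k * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        t += dt
        u[0], u[-1] = t, 0.0        # boundary conditions
    return x, u

x, u = solve_heat(k=1e-4, n_points=201)
assert abs(u[0] - 1.0) < 1e-12      # left boundary reached u(1, 0) = 1
assert u[-1] == 0.0                 # right boundary pinned at 0
assert np.all(np.diff(u) <= 1e-9)   # profile decays monotonically from the hot side
```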
Question: Show that the collection of measurable rectangles forms an algebra. Let $\Sigma_A$ and $\Sigma_B$ denote the underlying $\sigma$-algebras. I found the following set operations. Let $A_1, A_2 \in \Sigma_A$, and $B_1, B_2 \in \Sigma_B$. Then, $(A_1 \times B_1) \setminus(A_2 \times B_2) = [(A_1 \cap A_2) \times (B_1 \setminus B_2)] \cup [(A_1 \setminus A_2) \times B_1]$. In addition, $(A_1 \times B_1) \cap (A_2 \times B_2) = (A_1 \cap A_2) \times (B_1 \cap B_2)$. Then, by De Morgan, I can show the finite union as well. These two set operations are intuitively understandable, but is there a simple way to prove them formally? Also, how can we show that the empty set (as a product, e.g. $\emptyset \times \emptyset$) is in the collection? Can we still use the Carathéodory criterion in this case?
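Since both identities are finite set-theoretic statements, they can at least be sanity-checked on small finite sets before proving them element-wise. A sketch (my own choice of sets; note the second factor of the last term is $B_1$, not $B_2$):

```python
# Check: (A1×B1)\(A2×B2) = [(A1∩A2)×(B1\B2)] ∪ [(A1\A2)×B1]
#    and (A1×B1)∩(A2×B2) = (A1∩A2)×(B1∩B2).
from itertools import product

def rect(A, B):
    return set(product(A, B))

A1, A2 = {1, 2, 3}, {2, 3, 4}
B1, B2 = {'a', 'b'}, {'b', 'c'}

lhs_diff = rect(A1, B1) - rect(A2, B2)
rhs_diff = rect(A1 & A2, B1 - B2) | rect(A1 - A2, B1)
assert lhs_diff == rhs_diff

lhs_cap = rect(A1, B1) & rect(A2, B2)
rhs_cap = rect(A1 & A2, B1 & B2)
assert lhs_cap == rhs_cap
```

The element-wise proof follows the same split the first identity encodes: a pair $(a,b)$ survives the set difference iff $a \notin A_2$ (any $b \in B_1$) or $a \in A_1 \cap A_2$ and $b \in B_1 \setminus B_2$.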
Following the idea that a group is its structure, and recalling Cayley's theorem, I'm wondering whether we can build up virtually any finite group $G=\lbrace a_0,\dots,a_{n-1} \rbrace$ by searching pairs of subgroups of $\operatorname{Sym}(G)$ (the symmetric group over the set $G$), say $\Theta=\lbrace\theta_i,i=0,\dots,n-1\rbrace$ and $\Gamma=\lbrace\gamma_j,j=0,\dots,n-1\rbrace$, such that: i) $\theta_i(a_j)=\gamma_j(a_i)$ for all $i,j=0,\dots,n-1$ ii) $\theta_i\gamma_j=\gamma_j\theta_i$ for all $i,j=0,\dots,n-1$ Supposing we have found such a pair, we could use their elements to define right and left multiplications, where i) would ensure that the two multiplications agree on the product $a_ia_j$ for all $i,j$, and ii) the associativity of the composition law "under construction". Moreover, the constraint i) entails that $\Theta=\Gamma \Rightarrow \theta_i=\gamma_i$ for all $i$, so that $G$ is abelian if and only if $\Theta=\Gamma$ [ Proof: $\Theta=\Gamma \Rightarrow$ $\exists \sigma \in \operatorname{Sym}(n)$ such that $\theta_i=\gamma_{\sigma(i)}$ for all $i \Rightarrow$ $\theta_i(a_j)=\gamma_{\sigma(i)}(a_j)$ for all $i,j \Rightarrow$ (by virtue of i)) $\gamma_j(a_i)=\gamma_{\sigma(i)}(a_j)$ for all $i,j \Rightarrow$ $\gamma_{\sigma(i)}(a_i)=\gamma_{\sigma(i)}(a_{\sigma(i)})$ for all $i \Rightarrow$ ($\gamma_{\sigma(i)}$ is 1-1) $a_i=a_{\sigma(i)}$ for all $i \Rightarrow$ ($a_k$ are distinct by hypothesis) $\sigma(i)=i$ for all $i \Rightarrow$ $\theta_i=\gamma_i$ for all $i$. #] As a first test for this approach, let's consider $\rho \in \operatorname{Sym}(G)$ defined by $\rho(a_k):=a_{k+1 \mod n}$, $k=0,\dots,n-1$. It is: $\rho^i(a_j)=a_{j+i \mod n}=a_{i+j \mod n}=\rho^j(a_i)$; therefore, if we set $\gamma_i=\theta_i:=\rho^i$, we have that both i) and ii) are fulfilled.
The subgroups of $\operatorname{Sym}(G)$ (here coincident) $\Theta=\lbrace \theta_i=\rho^i \rbrace$ and $\Gamma=\lbrace \gamma_i=\rho^i \rbrace$ define the (abelian) composition law $a_ia_j=a_{j+i \mod n}$, whence $a_i^k=a_{ki \mod n}$ and then $a_1^k=a_{k \mod n}=a_k$ for $k=0,\dots,n-1$. Thus, we are finally led to $G=\lbrace a_k, k=0,...,n-1 \rbrace= \lbrace a_1^k, k=0,...,n-1 \rbrace= \langle a_1 \rangle$, and $G$ is cyclic. This result holds irrespective of $n$, so cyclic groups exist for any order $n$ (not a surprising result, indeed, but here what I'm focused on is rather the approach to get it). Yet another way to start appreciating this approach, by rediscovering basic facts by means of it, could be the following. Condition ii) implies that $\Theta\Gamma=\Gamma\Theta$ and then $\Theta\Gamma \le \operatorname{Sym}(G)$. So, once we set $l:=|\Theta \cap \Gamma|$, $n:=|\Theta|$ (=$|G|$) and notice that $|\Theta\Gamma|=n^2/l$, we get: $l \le n \le n^2/l \le n!$, with (Lagrange) $l|n \wedge (n^2/l)|n!$. Now, $\Theta \ne \Gamma \Rightarrow l < n < n^2/l \le n!$. Then, if $|G|=n=p$, with $p$ prime, we have $l=1$ and then $p^2|p!$: contradiction. So we are left with $|G|=p$ ($p$ prime) $\Rightarrow \Theta=\Gamma \Rightarrow G$ abelian. Could this approach be used to search for other, less trivial group structures?
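The construction can be checked mechanically for a small order. A sketch (my own, with $n=6$ chosen arbitrarily):

```python
# With rho(a_k) = a_{k+1 mod n} and theta_i = gamma_i = rho^i, conditions
# i) and ii) hold, and the induced law a_i a_j = a_{i+j mod n} makes a_1
# a generator, so the group is cyclic.
n = 6

def rho_pow(i, n=n):
    """theta_i = gamma_i = rho^i, acting on the index k of a_k."""
    return lambda k: (k + i) % n

theta = [rho_pow(i) for i in range(n)]
gamma = [rho_pow(i) for i in range(n)]

# i)  theta_i(a_j) = gamma_j(a_i)
assert all(theta[i](j) == gamma[j](i) for i in range(n) for j in range(n))

# ii) theta_i and gamma_j commute as permutations of G
assert all(theta[i](gamma[j](k)) == gamma[j](theta[i](k))
           for i in range(n) for j in range(n) for k in range(n))

# powers of a_1 under a_i a_j = a_{i+j mod n} exhaust the group
powers, x = [0], 0
for _ in range(n - 1):
    x = (x + 1) % n          # multiply by a_1 under the induced law
    powers.append(x)
assert sorted(powers) == list(range(n))
```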
I am asked to evaluate the following integral: $\int_0^{2\pi} \cos^{10}\theta \,\mathrm{d}\theta$. I am using complex analysis. Setting $z = e^{i\theta}$, I get from Euler's formula: $\cos \theta = \frac{1}{2}\left(e^{i\theta} + e^{-i\theta}\right) = \frac{1}{2}\left(z + z^{-1}\right)$. Now as $\theta$ goes from $0$ to $2\pi$, $z = e^{i\theta}$ goes once around the unit circle. Therefore the problem is reduced to the following contour integral: $\oint_{C} \left(\frac{1}{2}(z + z^{-1})\right)^{10} \frac{dz}{iz}$, where $C$ is the unit circle. At this point, I don't know how to move forward. I am pretty sure I am to apply the residue theorem, and the function I am integrating clearly has a pole at $z = 0$. But I don't know how to calculate the residue of that, since the pole is of the 11th order. Is there another approach I should take, maybe find the Laurent series of the function? Any help is greatly appreciated!
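For what it's worth, expanding $(z+z^{-1})^{10}$ by the binomial theorem makes the Laurent series explicit: the $z^{-1}$ coefficient of the integrand comes from the $k=5$ term, so the residue at $0$ is $\binom{10}{5}/2^{10}$ and the integral is $\frac{1}{i}\cdot 2\pi i\cdot\binom{10}{5}/2^{10} = \frac{63\pi}{128}$. A numeric cross-check (my own sketch):

```python
# Closed form 2π·C(10,5)/2^10 = 63π/128 versus a direct Riemann sum of
# ∫_0^{2π} cos^10 θ dθ (for a trig polynomial the equispaced sum is exact
# up to floating-point error).
import math

closed_form = 2 * math.pi * math.comb(10, 5) / 2 ** 10   # = 63π/128

N = 4096
numeric = sum(math.cos(2 * math.pi * i / N) ** 10 for i in range(N)) * 2 * math.pi / N

assert abs(closed_form - 63 * math.pi / 128) < 1e-12
assert abs(numeric - closed_form) < 1e-9
```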
Some background. I was asked to find an arithmetic function $f$ such that $f*f=\mathbf 1$ where $\mathbf 1$ is the constant function 1 and $*$ denotes Dirichlet convolution. I was able to prove that there are two solutions $\pm f$ and that $f$ is multiplicative. Next, I would have to evaluate $f$ at prime powers. I computed a few values and my conjecture is that $$f(p^n)=\frac{{2n-1\choose n}}{2^{2n-1}}$$ for $p$ prime and $n>0$. To prove this, I only need to show that $$\sum_{k=1}^n{2k-1\choose k}{2n-2k+1\choose n-k+1}=4^n-{2n+1\choose n+1}\qquad\text{for }\;n\geq0.$$ (This is simply expressing $(f*f)(p^{n+1})=1$ explicitly, plugging in the conjecture.) For readers who don't really understand what I'm talking about and who are merely interested in the proof of the identity, you can just start reading from here. Hoping for a combinatorial proof, I interpreted the summation as follows. Given a set of $n+1$ indistinguishable marbles and $n+1$ distinguishable bags (say $b_1,\ldots,b_{n+1}$), the term ${2k-1\choose k}{2n-2k+1\choose n-k+1}$ counts the number of ways to put the marbles in the bags such that there are exactly $k$ marbles in the first $k$ bags $b_1,\ldots,b_k$. Equivalently, if we identify a configuration of the marbles with a monotonic path in an $(n+1)\times(n+1)$ grid such that the path starts in the bottom left corner and ends in the upper right corner, the sum $${2n+1\choose n+1}+\sum_{k=1}^n{2k-1\choose k}{2n-2k+1\choose n-k+1}$$ counts the number of times a path 'crosses' or 'touches' the main diagonal in a point that is not the 'origin', if we sum over all possible paths. (There are ${2n+2\choose n+1}$ such paths in total.) For example, the following path touches the main diagonal $4$ times: at $(2,2)$, $(3,3)$, $(4,4)$ and $(7,7)$. (We do not count $(0,0)$ because the summation doesn't.) However, interpreting the summation like this I can't get any further.
Any other ideas or suggestions on how to approach this problem? Edit: There are some errors in my reasoning above, let's try again. Using the identity ${2n-1\choose n}=\frac12{2n\choose n}$ it can be rewritten as $$\sum_{k=0}^n{2k\choose k}{2n-2k\choose n-k}=4^n$$ which looks much better and holds for all $n\geq0$, making it more natural. This form may give some ideas for combinatorial proofs but I don't really see any. The term ${2k\choose k}{2n-2k\choose n-k}$ counts the number of $n\times n$ monotonic paths intersecting the diagonal at $(k,k)$. So the summation counts the number of intersection points with the diagonal (all of them this time, including the origin), summing over all paths. As in Arthur's comment, it would suffice to find a bijection between all $2n$-monotonic paths (no matter their width or height) and the pairs $(p,s)$ where $p$ is an $n\times n$ path and $s$ an intersection point with the diagonal. Perhaps there is a weird bijection which would then solve the question. For the sake of a proof by induction, I considered all paths that intersect the diagonal for the first time at $(k,k)$ and all possible continuations and their intersection points, summed for $k$ from $1$ to $n$ using the induction hypothesis and a trick with Catalan numbers. (Writing out the details would be tedious, I think the reasoning becomes clear when you see the sum.) It turns out to be sufficient to prove $$\sum_{k=1}^n2C_{k-1}\left(4^{n-k}+{2n-2k\choose n-k}\right)=4^n$$ where $C_n=\frac1{n+1}{2n\choose n}$ denotes the $n$th Catalan number. However this doesn't seem to be a simplification.
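Both forms of the identity are easy to verify numerically for small $n$ before hunting for a bijection. A sketch using exact integer arithmetic:

```python
# Check sum_{k=0}^{n} C(2k,k) C(2n-2k,n-k) = 4^n, and the original form
# (via C(2n-1,n) = C(2n,n)/2) for a range of n.
from math import comb

for n in range(0, 60):
    assert sum(comb(2 * k, k) * comb(2 * n - 2 * k, n - k)
               for k in range(n + 1)) == 4 ** n

for n in range(0, 40):
    lhs = sum(comb(2 * k - 1, k) * comb(2 * n - 2 * k + 1, n - k + 1)
              for k in range(1, n + 1))
    assert lhs == 4 ** n - comb(2 * n + 1, n + 1)
```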
Optimization Optimization refers to finding the maximums or minimums of a function in calculus. Common types of optimization problems include: Optimization of Area & Perimeter Optimization of Volume & Surface Area Optimization of the Distance Between a Point on a Curve The first step is to create two or more equations to model the problem. The second step is to find the global maximum or minimum through the first derivative test. The third step is to substitute the variable back into one of the equations to find the other variables. Example 1 A rectangle has its base on the x-axis and its upper two vertices on the parabola \(y = 3 − x^2\), as shown in the figure below. What is the largest area the rectangle can have? Step 1 \(A = 2xy\) \(y = 3 - x^2\) Substitute \(y\) into the area equation: \(A = 2x(3-x^2)=6x-2x^3\) Step 2 Maximize the area by taking the first derivative and setting it equal to \(0\): \(A' = 6 - 6x^2 = 0 \rightarrow x = 1\) \begin{array}{c|ccc} & x<1 & x=1 & x>1 \\ \hline A' & + & 0 & - \end{array} \(A'(0) = 6 - 6(0)^2 = 6 \rightarrow \text{positive value}\) \(A'(2) = 6 - 6(2)^2 = -18 \rightarrow \text{negative value}\) Step 3 Substitute \(x = 1\) into \(y\): \(y = 3 - 1^2 = 2\) \(\text{Maximum Area} = 2xy = 2(1)(2) = 4\) Example 2 Find the dimensions of a right circular cylindrical can (with bottom and top closed) that has a volume of 1 liter and that minimizes the amount of material used. (Note: One liter corresponds to \(1000 \ \mathrm{cm}^3\).)
Step 1 \(V = \pi r^2h = 1000 \ \mathrm{cm}^3 \rightarrow h = \frac {1000} {\pi r^2} \) \(\text{Surface Area} = S_\text{top} + S_\text{bottom} + S_\text{side}\) \(\text{Surface Area} = \pi r^2 + \pi r^2 + 2\pi r h\) Step 2 Minimize the surface area: \(\text{Surface Area} = \pi r^2 + \pi r^2 + 2\pi r h\) \(= \pi r^2 + \pi r^2 + 2\pi r (\frac {1000} {\pi r^2})\) \(= 2 \pi r^2 + \frac {2000} {r} \) \(S' = 4\pi r - \frac {2000} {r^2}\) \(= \frac {4\pi r^3 - 2000} {r^2} = 0 \) \(4\pi r^3 - 2000 = 0 \rightarrow r^3 = \frac {2000} {4\pi}\) \(r \approx 5.42\) \begin{array}{c|ccc} & r<5.42 & r\approx 5.42 & r>5.42 \\ \hline S' & - & 0 & + \end{array} \(S'(1) \rightarrow \text{negative value}\) \(S'(6) \rightarrow \text{positive value}\) Step 3 Substitute \(r \approx 5.42\) back into the volume equation to get the value of \(h\): \(h = \frac {1000} {\pi (5.42)^2} \approx 10.84 \ \mathrm{cm} \)
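A quick numeric check of Steps 2 and 3 (my own sketch); it also exhibits the exact relation $h = 2r$ hidden in the rounded decimals:

```python
# Verify the critical radius from S'(r) = 0 and the resulting height.
import math

r = (2000 / (4 * math.pi)) ** (1 / 3)      # root of 4πr³ - 2000 = 0
h = 1000 / (math.pi * r ** 2)              # from the volume constraint

assert abs(r - 5.42) < 1e-2                # r ≈ 5.42 cm
assert abs(h - 2 * r) < 1e-9               # optimal closed can: height = diameter

# S'(r) really vanishes at the critical point
Sprime = 4 * math.pi * r - 2000 / r ** 2
assert abs(Sprime) < 1e-9
```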
I am working on one of How To Prove It by Velleman's exercises: "Is the following proof of the theorem correct? Suppose A, B, and C are sets and $A\subseteq B \cup C $. Then either $A\subseteq B$ or $A\subseteq C $. Proof: Let $x$ be an arbitrary element of $A$. Since $A\subseteq B \cup C$, it follows that either $x \in B$ or $x\in C$. Case 1. $x\in B$. Since $x$ was an arbitrary element of $A$, it follows that $\forall x \in A (x\in B)$, which means that $A \subseteq B$. Case 2. Similar step to case 1. Thus, either $A \subseteq B$ or $A\subseteq C$." Apparently it's incorrect, and a counterexample to the theorem is A={1,2}, B={1}, C={2}. The counterexample makes sense, and while I know that one counterexample is enough to invalidate the theorem, I don't really understand what went wrong in the proof; furthermore, even if the proof is wrong, I don't know why the theorem is incorrect. I drew up the logical form of this theorem: $$ \forall x((x\in A\to x \in B \lor x\in C) \to (x \in A \to x \in B) \lor (x \in A \to x \in C)) $$ where the antecedent corresponds to the supposition of the proof as given. I managed to derive a contradiction with an indirect proof. So the only logical conclusion is that either my logical form or my indirect proof is wrong (otherwise how could the theorem possibly be incorrect?), but I don't understand how my logical form is wrong either. Would anyone mind giving me a hand please? Thank you so much!

Edit: The following is how my indirect proof with the logical form went. $$ \forall x((x\in A\to x \in B \lor x\in C) \to (x \in A \to x \in B) \lor (x \in A \to x \in C)) $$ 1. Let $x$ be an arbitrary element, so that we have $(x\in A\to x \in B \lor x\in C) \to (x \in A \to x \in B) \lor (x \in A \to x \in C)$ to prove.
2. Assume $(x\in A\to x \in B \lor x\in C)$, and try to prove the consequent of the conditional, $(x \in A \to x \in B) \lor (x \in A \to x \in C)$. 3. Indirect proof: so let's assume the negated form of the consequent, which by De Morgan's law becomes $\sim(x \in A \to x \in B) \land \sim(x \in A \to x \in C)$. 4. The left side of the conjunction, by negated conditional, becomes $x\in A \land \sim x \in B$. Likewise for the right side, $x\in A \land \sim x \in C$. 5. We isolate $x\in A$ from either of the conjunctions in 4, and by modus ponens with the assumption of 2, we get $x \in B \lor x\in C$. 6. We isolate $\sim x\in B$ and $\sim x \in C$ from 4, and by disjunctive syllogism on 5 we get $x \in B$ and $x \in C$: contradiction.
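Two mechanical checks may help locate the issue (my own sketch, not part of the exercise): the pointwise propositional formula really is a tautology, so deriving a contradiction from its negation is expected; the theorem's actual form puts the disjunction outside the quantifier, and that version fails on the counterexample already given.

```python
# First: (P -> Q∨R) -> (P -> Q) ∨ (P -> R) is a propositional tautology.
# Second: ∀x(...) -> [∀x(...) ∨ ∀x(...)] is false for A={1,2}, B={1}, C={2}.
from itertools import product

def implies(p, q):
    return (not p) or q

# pointwise version: true in all 8 valuations
assert all(implies(implies(P, Q or R), implies(P, Q) or implies(P, R))
           for P, Q, R in product([False, True], repeat=3))

# quantified version on the counterexample from the question
A, B, C = {1, 2}, {1}, {2}
universe = A | B | C
hypothesis = all(implies(x in A, x in B or x in C) for x in universe)
conclusion = (all(implies(x in A, x in B) for x in universe)
              or all(implies(x in A, x in C) for x in universe))
assert hypothesis and not conclusion      # the quantified claim is false
```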
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE (Elsevier, 2017-11) Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
What are the initial and boundary conditions for a Butterfly Option? I want to write up a PDE program for it and I have a rough idea of what the payoff should be (is it just a call and a put at the strike price?) but if anyone can provide me with definitive answers then I'd greatly appreciate it. In particular, I'm after stuff like the time $T$ boundary condition (which is usually the option payoff and taken as the initial condition) which is written as $u(T,x)$, the boundary condition as $x \rightarrow 0$ i.e. $\lim_{x\rightarrow 0} u(t,x)$ (which I think should be equal to $0$) and the boundary condition as $x \rightarrow \infty$ i.e. $\lim_{x\rightarrow \infty} u(t,x)$ On a related note, I'm new to financial mathematics and every time I need to look for the conditions for options other than a call option I usually find it incredibly difficult (I have to google search everything for nearly an hour to find something relevant it seems). Does anyone have a resource which provides the initial and boundary conditions for a range of options? Thanks in advance. EDIT: Okay, a quick search showed me that the payoff to a Butterfly Spread is $(S - K_c)^+ + (K_{p} - S)^+ - (S - K_{atm})^+ - (K_{atm} - S)^+$ where $K_{atm} = \frac{K_c + K_p}{2}$, however, I still don't know what the boundary conditions are, can someone tell me what they are (and hopefully even how to derive them?) Thanks!
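Not authoritative, but here is a sketch of the standard long-call butterfly payoff (strikes $K_1<K_2<K_3$ with $K_2=(K_1+K_3)/2$; the numbers are my illustrative choices) together with the boundary data it suggests: since the payoff vanishes outside $[K_1,K_3]$ and is bounded, the usual conditions are $u(T,S)=\text{payoff}(S)$, $u(t,S)\to 0$ as $S\to 0$, and $u(t,S)\to 0$ as $S\to\infty$.

```python
# Long call at K1, short two calls at K2, long call at K3.
def butterfly_payoff(S, K1=90.0, K2=100.0, K3=110.0):
    call = lambda K: max(S - K, 0.0)
    return call(K1) - 2.0 * call(K2) + call(K3)

# terminal condition u(T, S) = payoff(S)
assert butterfly_payoff(100.0) == 10.0          # peak value K2 - K1 at S = K2
assert butterfly_payoff(95.0) == 5.0
# payoff vanishes outside [K1, K3], suggesting u -> 0 as S -> 0 and S -> ∞
assert butterfly_payoff(0.0) == 0.0
assert butterfly_payoff(1e9) == 0.0
```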
Returns on an asset are negatively correlated with their own variance, and I would like to set up a hedge with a variance swap (no options are traded). I need to decide on the notional of the swap: any ideas how I could calculate it? EDIT I do not want to trade variance; I want to (imperfectly) hedge the part of the return that is, by my gut feeling, low when the return variance is high. My attempt: I will try to set up a \$1 portfolio of the asset and variance swap that has return $r^p$: $$r^p_{t+1} = r_{t+1} +s_{t+1}, $$ where $r$ is the asset return and $s$ is the payoff of the variance swap: $$ s_{t+1} = N_t ( rv_{t+1} - iv_{t} ), $$ where $N_t$ is the notional, $rv_{t+1}$ is the realized variance in month $t+1$, and $iv_t$ is the swap price. I think I ultimately need the following to hold: $$ E \frac{\partial r^p_{t+1}}{\partial \sigma^2_{t+1}} = E \frac{\partial r_{t+1}}{\partial \sigma^2_{t+1}} + N_t = 0. $$ I thought of modelling the dependence between $r$ and $\sigma^2$ as a GARCH-in-mean process: $$ r_{t+1} = \alpha + \beta \color{red}{\sigma^2_{t+1}} + \varepsilon_{t+1} $$ $$ \varepsilon_{t+1} \sim N(0, \color{red}{\sigma^2_{t+1}}) $$ $$ \color{red}{\sigma^2_{t+1}} = \omega + \theta_1 \varepsilon_{t}^2 + \theta_2 \sigma^2_{t}, $$ from where it would follow that: $$ E \frac{\partial r_{t+1}}{\partial \sigma^2_{t+1}} = \beta = -N_t. $$ What would you say? Thanks.
Multiple solutions for a class of quasilinear problems 1. Departamento de Matemática, IMECC - UNICAMP, Caixa Postal 6065, 13081-970 Campinas-SP, Brazil We study the quasilinear problem $-\Delta_p u = g(x,u)$ in $\Omega$, $u = 0$ on $\partial \Omega$, where $\Omega \subset \mathbb{R}^N$ is an open bounded domain with smooth boundary $\partial \Omega$, and $g:\Omega\times\mathbb{R}\to \mathbb{R}$ is a Carathéodory function such that $g(x,0)=0$ and which is asymptotically linear. We suppose that $g(x,t)/t$ tends to an $L^r$-function, $r>N/p$ if $1 < p \le N$ and $r=1$ if $p>N$, which can change sign. We consider both the resonant and the nonresonant cases. Mathematics Subject Classification: 35J25, 58E05. Citation: Francisco Odair de Paiva. Multiple solutions for a class of quasilinear problems. Discrete & Continuous Dynamical Systems - A, 2006, 15 (2): 669-680. doi: 10.3934/dcds.2006.15.669
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would have not inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle 12 secs ago @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe, it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary; the package isn't clear what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish and given that the rest of the network regularly downvotes lots of new users will not know or not agree with a "-1" policy, I don't think it was ever really that regularly enforced just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies, when you're new and your question gets downvoted too much this might cause the wrong impressions.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer), that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes but many people just join for a while and come from other sites where downvoting is more common so I think it is impossible to expect there is no multiple downvoting, the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe" while it does have a technical meaning close to what you want is almost always used more casually to mean "talk about", I think I would say "Let k be a circle with centre M and radius r" @AlanMunn definitions.net/definition/describe gives a Websters definition: to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com, create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possibly harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin are kind of a hobby of mine. Needless to say that more than once the contemporary meaning didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx:\documentclass{book}\usepackage{showidx}\usepackage{imakeidx}\makeindex\begin{document}Test\index{xxxx}\printindex\end{document}generates the error:! Undefined control sequence.<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{... @EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date not the date it was last run through tex and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself calls from the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could as @yo' showed redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben", engl. "describe", comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means roughly "construct". That in turn comes from the original meaning of describe: "making a curved movement". This language is used in the literary style of the 19th and 20th centuries and in the GDR. You have that in English too: scribe (verb), to score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which holds the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with the base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude of the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the top. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the vertex. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described about $S$ of radius…
Skewness Formula Skewness formula is called so because the graph plotted is displayed in a skewed manner. Skewness is a measure used in statistics that helps reveal the asymmetry of a probability distribution. It can be either positive or negative. To calculate the skewness, we have to first find the mean and variance of the given data. The formula is: \[\large g=\frac{\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-\overline{x}\right)^{3}}{s^{3}}\] Where, $x_{i}$ is the observations, $\overline{x}$ is the mean, n is the total number of observations, s is the standard deviation Solved example Question. Find the skewness in the following data. Height (inches) Class Marks Frequency 59.5 – 62.5 61 5 62.5 – 65.5 64 18 65.5 – 68.5 67 42 68.5 – 71.5 70 27 71.5 – 74.5 73 8 To know how skewed these data are as compared to other data sets, we have to compute the skewness. Sample size and sample mean should be found out. N = 5 + 18 + 42 + 27 + 8 = 100 $\overline{x}=\frac{\left(61\times 5\right)+\left(64\times 18\right)+\left(67\times 42\right)+\left(70\times 27\right)+\left(73\times 8\right)}{100}$ $\overline{x}=\frac{6745}{100}=67.45$ Now with the mean we can compute the skewness. Class Mark, x Frequency, f xf $\left(x-\overline{x}\right)$ $\left(x-\overline{x}\right)^{2}\times f$ $\left(x-\overline{x}\right)^{3}\times f$ 61 5 305 -6.45 208.01 -1341.68 64 18 1152 -3.45 214.25 -739.15 67 42 2814 -0.45 8.51 -3.83 70 27 1890 2.55 175.57 447.70 73 8 584 5.55 246.42 1367.63 6745 n/a 852.75 -269.33 67.45 n/a 8.5275 -2.6933 Now, the skewness is $g_{1}=\frac{\frac{1}{n}\sum_{i=1}^{n}f_{i}\left(x_{i}-\overline{x}\right)^{3}}{s^{3}}=\frac{-2.6933}{8.5275^{\frac{3}{2}}}=-0.1082$ For interpreting we have the following rules, as per Bulmer (1979): If the skewness comes to less than -1 or greater than +1, the data distribution is highly skewed If the skewness comes to between -1 and $-\frac{1}{2}$ or between $+\frac{1}{2}$ and +1, the data distribution is moderately skewed. If the skewness is between $-\frac{1}{2}$ and $+\frac{1}{2}$, the distribution is approximately symmetric
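The grouped-data computation above can be reproduced in a few lines of Python; this is a direct transcription of the worked example (moment coefficient of skewness, dividing by n as in the tables):

```python
# Class marks and frequencies from the worked example above
marks = [61, 64, 67, 70, 73]
freqs = [5, 18, 42, 27, 8]

n = sum(freqs)                                        # 100
mean = sum(x * f for x, f in zip(marks, freqs)) / n   # 67.45

# second and third central moments about the mean
m2 = sum(f * (x - mean) ** 2 for x, f in zip(marks, freqs)) / n   # 8.5275
m3 = sum(f * (x - mean) ** 3 for x, f in zip(marks, freqs)) / n   # -2.6933

g1 = m3 / m2 ** 1.5   # moment coefficient of skewness, ~ -0.1082
```

The result, about -0.108, falls between -1/2 and +1/2, so by the rules above the distribution is approximately symmetric.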
There are several solvers for solving heat transfer problems in OpenFOAM. In this post, I will try to give a description of how to derive the governing equations of the following two solvers: buoyantBoussinesqSimpleFoam buoyantBoussinesqPimpleFoam In the presence of gravity body force, the mass and momentum conservation equations are: \begin{align} \frac{\partial \rho}{\partial t} + \nabla \cdot \left(\rho \boldsymbol{u}\right) = 0, \tag{1} \label{eq:massEqn} \end{align} \begin{align} \frac{\partial \left(\rho \boldsymbol{u} \right)}{\partial t} + \nabla \cdot \left(\rho\boldsymbol{u}\boldsymbol{u}\right) = &-\nabla p + \rho \boldsymbol{g} + \nabla \cdot \left(2 \mu_{eff} D \left( \boldsymbol{u} \right) \right) \\ &-\nabla \left( \frac{2}{3}\mu_{eff}\left(\nabla \cdot \boldsymbol{u} \right) \right), \tag{2} \label{eq:momEqn} \end{align} where \(\boldsymbol{u}\) is the velocity field, \(p\) is the pressure field, \(\rho\) is the density field, and \(\boldsymbol{g}\) is the gravitational acceleration. The effective viscosity \(\mu_{eff}\) is the sum of the molecular and turbulent viscosity, and the rate of strain tensor \(D\left(\boldsymbol{u}\right)\) is defined as \(D\left(\boldsymbol{u}\right) = \frac{1}{2}\left( \nabla \boldsymbol{u} + \left(\nabla \boldsymbol{u}\right)^{T} \right)\). If both density and gravitational acceleration are constant, the gravitational force can be expressed as a gradient: \begin{align} \rho \boldsymbol{g} = \nabla \left( \rho \boldsymbol{g} \cdot \boldsymbol{r} \right), \tag{3} \label{eq:Fgravity} \end{align} where \(\boldsymbol{r}\) is the position vector, so that the pressure gradient and gravity force can be lumped together as shown in the following equation: \begin{align} \nabla p \;-\; \rho \boldsymbol{g} = \nabla \left( p \;-\; \rho \boldsymbol{g} \cdot \boldsymbol{r} \right). \tag{4} \label{eq:conversion} \end{align} Boussinesq approximation Now we consider the case when the density is not constant.
The Boussinesq approximation is valid when the variation of the density induced by the temperature change is small, as Ferziger and Peric (2001, pp.14-15) state: In flows accompanied by heat transfer, the fluid properties are normally functions of temperature. The variations may be small and yet be the cause of the fluid motion. If the density variation is not large, one may treat the density as constant in the unsteady and convection terms, and treat it as variable only in the gravitational term. This is called the Boussinesq approximation. Hereafter, we denote the reference density by \(\rho_0\) at the reference temperature \(T_0\). If we replace \(\rho\) by \(\rho_0\) in Eqs.\eqref{eq:massEqn} and \eqref{eq:momEqn} except in the gravitational term, then we get: \begin{align} \nabla \cdot \boldsymbol{u} = 0, \tag{5} \label{eq:massEqn2} \end{align} \begin{align} \frac{\partial \left(\rho_0 \boldsymbol{u} \right)}{\partial t} + \nabla \cdot \left(\rho_0\boldsymbol{u}\boldsymbol{u}\right) = -\nabla p + \rho \boldsymbol{g} + \nabla \cdot \left(2 \mu_{eff} D \left( \boldsymbol{u} \right) \right). \tag{6} \label{eq:momEqn2} \end{align} Then divide both sides of Eq.\eqref{eq:momEqn2} by \(\rho_0\) and we get: \begin{align} \frac{\partial \boldsymbol{u}}{\partial t} + \nabla \cdot \left(\boldsymbol{u}\boldsymbol{u}\right) = -\frac{1}{\rho_0} \left( \nabla p \;-\; \rho\boldsymbol{g} \right) + \nabla \cdot \left(2 \nu_{eff} D \left( \boldsymbol{u} \right) \right). \tag{7} \label{eq:momEqn3} \end{align} Here, the density \(\rho\) in the gravitational term is expressed as a linear function of the temperature \(T\): \begin{align} \rho \approx \rho_0\left[ 1 \;-\; \beta\left( T \;-\; T_0 \right) \right].
\tag{8} \label{eq:rhoEqn} \end{align} This relation is derived from the definition of the volumetric thermal expansion coefficient \(\beta\): \begin{align} \beta \equiv -\frac{1}{\rho}\frac{\partial \rho}{\partial T} \approx -\frac{1}{\rho_0}\frac{\rho \;-\; \rho_0}{T \;-\; T_0}. \tag{9} \label{eq:beta} \end{align} We have to know when these approximations are valid. Ferziger and Peric (2001, p.15) state: This approximation introduces errors of the order of 1% if the temperature differences are below e.g. 2℃ for water and 15℃ for air. The error may be more substantial when temperature differences are larger; the solution may even be qualitatively wrong. In terms of the implementation, the pressure gradient and gravity force terms are rearranged in the following form: \begin{align} &-\nabla \left(\frac{p}{\rho_0}\right) + \left(\frac{\rho}{\rho_0}\right)\boldsymbol{g} \\ =&-\nabla \left(\frac{p \;-\; \rho \boldsymbol{g} \cdot \boldsymbol{r}}{\rho_0} + \frac{\rho \boldsymbol{g} \cdot \boldsymbol{r}}{\rho_0} \right) + \left(\frac{\rho}{\rho_0}\right)\boldsymbol{g} \tag{10a}\\ =&-\nabla p_{rgh} - \left( \boldsymbol{g} \cdot \boldsymbol{r} \right) \nabla \left(\frac{\rho}{\rho_0}\right), \tag{10b} \label{eq:final} \end{align} where we define a new variable \(p_{rgh} = \left(p-\rho\boldsymbol{g}\cdot\boldsymbol{r}\right)/\rho_0\) and finally we get: \begin{align} \frac{\partial \boldsymbol{u}}{\partial t} + \nabla \cdot \left(\boldsymbol{u}\boldsymbol{u}\right) =\; &- \nabla p_{rgh} - \left( \boldsymbol{g} \cdot \boldsymbol{r} \right) \nabla \left(\frac{\rho}{\rho_0}\right) \\ &+ \nabla \cdot \left(2 \nu_{eff} D \left( \boldsymbol{u} \right) \right). \tag{11} \label{eq:momEqn4} \end{align} This momentum equation \eqref{eq:momEqn4} is implemented in UEqn.H as shown in the following code:

solve
(
    UEqn
 ==
    fvc::reconstruct
    (
        (
          - ghf*fvc::snGrad(rhok)
          - fvc::snGrad(p_rgh)
        )*mesh.magSf()
    )
);

Eq.\eqref{eq:final} is calculated with fvc::reconstruct from the face flux fields, and the variable rhok is defined in createFields.H as \(\rho/\rho_0\):

// Kinematic density for buoyancy force
volScalarField rhok
(
    IOobject
    (
        "rhok",
        runTime.timeName(),
        mesh
    ),
    1.0 - beta*(T - TRef)   // Eq. (8): rho/rho0 = 1 - beta*(T - T0)
);

Reference Please wait till the next update!
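As a sanity check on the rearrangement in Eq. (10b), the identity \(-\nabla(p/\rho_0) + (\rho/\rho_0)\boldsymbol{g} = -\nabla p_{rgh} - (\boldsymbol{g}\cdot\boldsymbol{r})\nabla(\rho/\rho_0)\) can be verified numerically in one dimension. The profiles \(p(z)\) and \(\rho(z)\) below are arbitrary smooth test functions, not physical data:

```python
import math

# Hypothetical smooth 1-D test profiles (z is the vertical coordinate)
rho0 = 1.2           # reference density
gz = -9.81           # g = gz * z_hat, so g.r = gz * z
p = lambda z: math.sin(z) + 2.0 * z        # test pressure profile
rho = lambda z: 1.0 + 0.1 * math.cos(z)    # test density profile

def ddz(f, z, h=1e-5):
    """Central finite difference approximation of df/dz."""
    return (f(z + h) - f(z - h)) / (2.0 * h)

def lhs(z):
    # -grad(p/rho0) + (rho/rho0) g
    return -ddz(p, z) / rho0 + rho(z) / rho0 * gz

def rhs(z):
    # -grad(p_rgh) - (g.r) grad(rho/rho0), with p_rgh = (p - rho*(g.r))/rho0
    p_rgh = lambda s: (p(s) - rho(s) * gz * s) / rho0
    return -ddz(p_rgh, z) - gz * z * ddz(lambda s: rho(s) / rho0, z)

# lhs(z) and rhs(z) agree pointwise up to finite-difference error
```

The two forms agree to the accuracy of the finite-difference stencil, confirming that lumping \(\rho\boldsymbol{g}\cdot\boldsymbol{r}\) into \(p_{rgh}\) leaves the momentum balance unchanged.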
Asymptotic completeness is a strong constraint on quantum field theories that rules out generalized free fields, which otherwise satisfy the Wightman axioms. If we were to take a limit of a sequence of continuous mass distributions $\rho_n(k^2)$ that approaches a distribution in some topological sense, however, is there an analysis anywhere of how the behavior of the $\rho_n(k^2)$ would approach the behavior of the free field? The following statement seems too bald (from the review "Outline of axiomatic relativistic quantum field theory" by R F Streater, Rep. Prog. Phys. 38, 771-846 (1975)): "If the Källén-Lehmann weight function is continuous, there are no particles associated with the corresponding generalized free field; the interpretation in terms of unstable particles is not adequate". Surely as we take the support of a generalized Lorentz invariant free field to be arbitrarily small we could expect that the behavior, at least as characterized by the VEVs, which constitute complete knowledge of a Wightman field, would eventually be arbitrarily close to the behavior we would expect from a free field? Classical thermodynamics has a complicated relationship with infinity, in that the analytic behavior of phase transitions does not emerge unless we take an infinite number of particles, but the behavior of very large numbers of particles can nonetheless approximate thermodynamic behavior rather well. By this elementary analogy, it seems premature to rule out generalized free fields. It also seems telling, although weakly, that the Källén-Lehmann weight function of an interacting field is nontrivial in quasiparticle approaches.
Being able to derive an S-matrix requires a theory to be asymptotically complete; however, real measurements are always at finite time-like separation from state preparations, with the interaction presumably not adiabatically switched off at the times of the preparation and measurement, so that something less than analytically perfect asymptotic completeness ought to be adequate. EDIT: To make this more concrete, the imaginary component of the mass $1$ propagator in real space at time-like separation is $I(t)=\frac{J_1(t)}{8\pi t}$. If we take a smooth unit-weight mass distribution$$w_\beta(m)=\frac{\exp(-\frac{\beta}{m}-\beta m)}{2m^2K_1(2\beta)}\ \mathrm{for}\ m>0,\ \mathrm{zero\ for}\ m\le 0,$$ then for large $\beta$ this weight function is concentrated near $m=1$, with maximum value $\sqrt{\frac{\beta}{\pi}}$. For this weight function, the imaginary component of the propagator in real space at time-like separation is (using Gradshteyn&Ryzhik 6.635.3) $$I_\beta(t)=\int\limits_0^\infty w_\beta(m)\frac{mJ_1(mt)}{8\pi t}\mathrm{d}m= \frac{J_1\left(\sqrt{2\beta(\sqrt{\beta^2+t^2}-\beta)}\right) K_1\left(\sqrt{2\beta(\sqrt{\beta^2+t^2}+\beta)}\right)}{8\pi tK_1(2\beta)}.$$Asymptotically, this expression decreases faster than any polynomial for large $t$ (because the weight function is smooth), which is completely different from the asymptotic behavior of $I(t)$, $-\frac{\cos(t+\pi/4)}{4\sqrt{2\pi^3t^3}}$; however by choosing $\beta$ very large, we can ensure that $I_\beta(t)$ is close to $I(t)$ out to a large time-like separation that is approximately proportional to $\sqrt{\beta}$. Graphing $I(t)$ and $I_\beta(t)$ near $t=1000$, for example, and for $\beta=2^{20},2^{21},2^{22},2^{23},2^{24}$, we find that $I(t)$ and $I_\beta(t)$ are very closely in phase until $t$ is of the order of $\beta^{2/3}$ in wavelength units.
We can take $\beta$ to be such that this approximation is very close out to billions of years (for which, taking an inverse mass of $10^{-15}m$, $\sqrt{\beta}\approx \frac{10^{25}m}{10^{-15}m}=10^{40}$), or to whatever distance is necessary not to be in conflict with experiment (perhaps more or less than $10^{40}$). This is of course quite finely tuned, however something on the order of the age of the universe would seem necessary for what is essentially a stability parameter, and the alternative is to take the remarkably idealized distribution-equivalent choice $\beta=\infty$ as usual. [I would like to be able to give the real component of the propagator at time-like and space-like separations for this weight function, however Gradshteyn&Ryzhik does not offer the necessary integrals, and nor does my version of Maple.] EDIT(2): It turns out that by transforming Gradshteyn&Ryzhik 6.653.2 we obtain $$R_\beta(r)\!=\!\int\limits_0^\infty\!\!w_\beta(m)\frac{mK_1(mr)}{4\pi^2 r}\mathrm{d}m= \frac{K_1\left(\!\sqrt{2\beta(\beta-\sqrt{\beta^2-r^2})}\right) K_1\left(\!\sqrt{2\beta(\beta+\sqrt{\beta^2-r^2})}\right)}{4\pi^2 rK_1(2\beta)},$$ which is real valued for $r>\beta$. As for $I_\beta(t)$, the approximation to the mass $1$ propagator at space-like separation $r$, $R(r)=\frac{K_1(r)}{4\pi^2 r}$, is close for $r$ less than approximately $\sqrt{\beta}$. For the real component at time-like separation, it is almost certain that one simply replaces the Bessel function $J_1(...)$ by $Y_1(...)$. This post has been migrated from (A51.SE)
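The closed form for $I_\beta(t)$ above can be checked numerically against $I(t)=\frac{J_1(t)}{8\pi t}$. This sketch assumes SciPy, and uses modest values of $\beta$ (rather than the astronomically large ones discussed above) so that the Bessel factors stay within floating-point range; the agreement should improve as $\beta$ grows:

```python
import math
from scipy.special import j1, k1   # Bessel J_1 and modified Bessel K_1

def I_free(t):
    """Imaginary part of the mass-1 propagator at time-like separation t."""
    return j1(t) / (8.0 * math.pi * t)

def I_beta(t, beta):
    """Smeared propagator for the smooth weight w_beta (G&R 6.635.3)."""
    s = math.sqrt(beta * beta + t * t)
    a = math.sqrt(2.0 * beta * (s - beta))   # ~ t      for beta >> t
    b = math.sqrt(2.0 * beta * (s + beta))   # ~ 2*beta for beta >> t
    return j1(a) * k1(b) / (8.0 * math.pi * t * k1(2.0 * beta))

# For beta >> t, I_beta(t) / I_free(t) approaches 1
```

For example, at $t=5$ the ratio $I_\beta(t)/I(t)$ is within about $6\%$ of unity already at $\beta=100$, and closer still at $\beta=300$, consistent with agreement out to $t\sim\sqrt{\beta}$.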
The functions described in this chapter accelerate the convergence of a series using the Levin u-transform. This method takes a small number of terms from the start of a series and uses a systematic approximation to compute an extrapolated value and an estimate of its error. The u-transform works for both convergent and divergent series, including asymptotic series. These functions are declared in the header file `gsl_sum.h'. The following functions compute the full Levin u-transform of a series with its error estimate. The error estimate is computed by propagating rounding errors from each term through to the final extrapolation. These functions are intended for summing analytic series where each term is known to high accuracy, and the rounding errors are assumed to originate from finite precision. They are taken to be relative errors of order GSL_DBL_EPSILON for each term. The calculation of the error in the extrapolated value is an O(N^2) process, which is expensive in time and memory. (A faster but less reliable method which estimates the error from the convergence of the extrapolated value is described in the next section. For the method described here a full table of intermediate values and derivatives through to O(N) must be computed and stored, but this does give a reliable error estimate.) The term-by-term sum is returned in w->sum_plain. The algorithm calculates the truncation error (the difference between two successive extrapolations) and round-off error (propagated from the individual terms) to choose an optimal number of terms for the extrapolation. The functions described in this section compute the Levin u-transform of series and attempt to estimate the error from the "truncation error" in the extrapolation, the difference between the final two approximations. Using this method avoids the need to compute an intermediate table of derivatives because the error is estimated from the behavior of the extrapolated value itself.
Consequently this algorithm is an O(N) process and only requires O(N) terms of storage. If the series converges sufficiently fast then this procedure can be acceptable. It is appropriate to use this method when there is a need to compute many extrapolations of series with similar convergence properties at high speed. For example, when numerically integrating a function defined by a parameterized series where the parameter varies only slightly. A reliable error estimate should be computed first using the full algorithm described above in order to verify the consistency of the results. The term-by-term sum is returned in w->sum_plain. The algorithm terminates when the difference between two successive extrapolations reaches a minimum or is sufficiently small. The difference between these two values is used as estimate of the error and is stored in *abserr_trunc. The following code calculates an estimate of \zeta(2) = \pi^2 / 6 using the series, \zeta(2) = 1 + 1/2^2 + 1/3^2 + 1/4^2 + ... After N terms the error in the sum is O(1/N), making direct summation of the series converge slowly.
#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_sum.h>

#define N 20

int
main (void)
{
  double t[N];
  double sum_accel, err;
  double sum = 0;
  int n;

  gsl_sum_levin_u_workspace * w
    = gsl_sum_levin_u_alloc (N);

  const double zeta_2 = M_PI * M_PI / 6.0;

  /* terms for zeta(2) = \sum_{n=1}^{\infty} 1/n^2 */

  for (n = 0; n < N; n++)
    {
      double np1 = n + 1.0;
      t[n] = 1.0 / (np1 * np1);
      sum += t[n];
    }

  gsl_sum_levin_u_accel (t, N, w, &sum_accel, &err);

  printf("term-by-term sum = % .16f using %d terms\n", sum, N);
  printf("term-by-term sum = % .16f using %d terms\n",
         w->sum_plain, w->terms_used);
  printf("exact value     = % .16f\n", zeta_2);
  printf("accelerated sum = % .16f using %d terms\n",
         sum_accel, w->terms_used);
  printf("estimated error = % .16f\n", err);
  printf("actual error    = % .16f\n", sum_accel - zeta_2);

  gsl_sum_levin_u_free (w);
  return 0;
}

The output below shows that the Levin u-transform is able to obtain an estimate of the sum to 1 part in 10^10 using the first thirteen terms of the series. The error estimate returned by the function is also accurate, giving the correct number of significant digits. bash$ ./a.out term-by-term sum = 1.5961632439130233 using 20 terms term-by-term sum = 1.5759958390005426 using 13 terms exact value = 1.6449340668482264 accelerated sum = 1.6449340668166479 using 13 terms estimated error = 0.0000000000508580 actual error = -0.0000000000315785 Note that a direct summation of this series would require 10^10 terms to achieve the same precision as the accelerated sum does in 13 terms. The algorithms used by these functions are described in the following papers, The theory of the u-transform was presented by Levin, A review paper on the Levin Transform is available online,
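The u-transform itself is compact enough to sketch directly. The following illustrative Python version (standard Levin formula with remainder estimates $\omega_j=(j+1)a_j$, i.e. $\beta=1$, $n=0$; this is not the GSL implementation and has none of its error propagation) reproduces the acceleration of $\zeta(2)$:

```python
import math
from math import comb

def levin_u(terms):
    """Levin u-transform of the partial sums of `terms`, using all
    k+1 = len(terms) terms (beta = 1, n = 0)."""
    a = list(terms)
    k = len(a) - 1
    # partial sums s_0 .. s_k
    s, tot = [], 0.0
    for x in a:
        tot += x
        s.append(tot)
    num = den = 0.0
    for j in range(k + 1):
        w = (j + 1) * a[j]                 # u-variant remainder estimate
        c = (-1) ** j * comb(k, j) * ((j + 1) / (k + 1)) ** (k - 1)
        num += c * s[j] / w
        den += c / w
    return num / den

# Accelerate zeta(2) = pi^2/6 with the first 13 terms
est = levin_u([1.0 / (i * i) for i in range(1, 14)])
```

With 13 terms the plain partial sum is still off by about 1/13, while the transformed value agrees with $\pi^2/6$ to many digits, mirroring the GSL example output above.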
A Partial Differential Equation, commonly denoted as PDE, is a differential equation containing partial derivatives of the dependent variable (one or more) with respect to more than one independent variable. A PDE for a function $u(x_1,\dots,x_n)$ is an equation of the form $f(x_1,\dots,x_n, u, u_{x_1},\dots,u_{x_n})=0$. The PDE is said to be linear if $f$ is a linear function of $u$ and its derivatives. A simple PDE is given by: $\frac{\partial u}{\partial x}(x,y) = 0$. This relation implies that the function $u(x,y)$ is independent of $x$, and it is a reduced form of the general PDE stated above. The order of a PDE is the order of the highest derivative term of the equation. Representation of Partial Differential Equation In PDEs, we denote the partial derivatives using subscripts, such as $u_{x}=\frac{\partial u}{\partial x}$, $u_{xx}=\frac{\partial^{2} u}{\partial x^{2}}$, $u_{xy}=\frac{\partial^{2} u}{\partial x\,\partial y}$. In some cases, as in Physics when we learn about wave equations or the sound equation, the partial derivative $\partial$ also appears alongside $\nabla$ (del or nabla). Classification of Partial Differential Equations (PDEs) Each type of PDE has certain properties that help to determine whether a particular finite element approach is appropriate to the problem being described by the PDE. There are three types of second-order PDEs in mechanics. They are Elliptic PDE Parabolic PDE Hyperbolic PDE Consider the example $au_{xx}+2bu_{xy}+cu_{yy}=0$, $u=u(x,y)$. For a given point $(x,y)$, the equation is said to be elliptic if $b^{2}-ac<0$; elliptic equations are used, for example, to describe the equations of elasticity without inertial terms. Hyperbolic PDEs describe the phenomena of wave propagation and satisfy the condition $b^{2}-ac>0$. Parabolic PDEs satisfy the condition $b^{2}-ac=0$; the heat conduction equation is an example of a parabolic PDE. First-Order Partial Differential Equation In Maths, a first-order partial differential equation contains only the first derivatives of the unknown function of $m$ variables.
It is expressed in the form $F(x_1,\dots,x_m, u, u_{x_1},\dots,u_{x_m})=0$. Partial Differential Examples Some examples of second-order PDEs are given below. Linear Partial Differential Equation If the dependent variable and all its partial derivatives occur linearly in a PDE then such an equation is called a linear PDE; otherwise it is a nonlinear PDE. In the above examples, (1) and (2) are linear equations whereas (3) and (4) are non-linear equations. Quasi-Linear Partial Differential Equation A PDE is said to be quasi-linear if all the terms with the highest order derivatives of the dependent variables occur linearly, that is, the coefficients of those terms are functions of only lower-order derivatives of the dependent variables. Terms with lower-order derivatives, however, can occur in any manner. Example (3) in the above list is a quasi-linear equation. Homogeneous Partial Differential Equation If all the terms of a PDE contain the dependent variable or its partial derivatives then such a PDE is called a homogeneous partial differential equation; otherwise it is non-homogeneous. In the above four examples, example (4) is non-homogeneous whereas the first three equations are homogeneous. Partial Differential Equation Solved Question Show that if $a$ is a constant, then $u(x,t)=\sin(at)\cos(x)$ is a solution to \(\frac{\partial ^{2}u}{\partial t^{2}}=a^{2}\frac{\partial ^{2}u}{\partial x^{2}}\). Solution Since $a$ is a constant, the partials with respect to $t$ are \(\frac{\partial u}{\partial t}=a\cos (at)\cos (x)\); \(\frac{\partial^{2} u}{\partial t^{2}}=-a^{2}\sin (at)\cos (x)\). Moreover, $u_{x} = -\sin(at)\sin(x)$ and $u_{xx}= -\sin(at)\cos(x)$, so that \(a^{2}\frac{\partial ^{2}u}{\partial x^{2}}=-a^{2}\sin(at)\cos(x)=\frac{\partial ^{2}u}{\partial t^{2}}\). Therefore, $u(x,t)=\sin(at)\cos(x)$ is a solution to \(\frac{\partial ^{2}u}{\partial t^{2}}=a^{2}\frac{\partial ^{2}u}{\partial x^{2}}\). Hence proved.
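The elliptic/parabolic/hyperbolic classification described above is mechanical enough to encode directly; a minimal sketch for the model equation $au_{xx}+2bu_{xy}+cu_{yy}+(\text{lower-order terms})=0$:

```python
def classify_second_order_pde(a, b, c):
    """Classify a*u_xx + 2b*u_xy + c*u_yy + (lower-order terms) = 0
    at a point, using the discriminant b^2 - a*c."""
    disc = b * b - a * c
    if disc < 0:
        return "elliptic"
    if disc > 0:
        return "hyperbolic"
    return "parabolic"

# Laplace equation u_xx + u_yy = 0:  a=1, b=0, c=1  -> elliptic
# Wave equation    u_tt - u_xx = 0:  a=1, b=0, c=-1 -> hyperbolic
# Heat equation    u_t  = u_xx (no u_tt or u_xt):  a=1, b=0, c=0 -> parabolic
```

Note the classification is pointwise: if $a$, $b$, $c$ depend on $(x,y)$, the type of the equation can change from region to region.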
Discrete & Continuous Dynamical Systems - B, March 2009, Volume 11, Issue 2 (ISSN: 1531-3492, eISSN: 1553-524X) Abstract: In this paper we prove the existence of a global attractor with respect to the weak topology of a suitable Banach space for a parabolic scalar differential equation describing a non-Newtonian flow. More precisely, we study a model proposed by Hébraud and Lequeux for concentrated suspensions. Abstract: Designing trajectories for a submerged rigid body motivates this paper. Two approaches are addressed: the time optimal approach and the motion planning approach using a concatenation of kinematic motions. We focus on the structure of singular extremals and their relation to the existence of rank-one kinematic reductions; thereby linking the optimization problem to the inherent geometric framework. Using these kinematic reductions, we provide a solution to the motion planning problem in the under-actuated scenario, or equivalently, in the case of actuator failures. We finish the paper comparing a time optimal trajectory to one formed by a concatenation of pure motions. Abstract: In this paper we construct and study discretizations of an extension of the Zakharov system occurring in plasma physics. This system is intermediate between Euler-Maxwell and Zakharov systems. The usual Zakharov system can be recovered by taking a singular limit. We introduce two numerical schemes that take into account this singular limit process and that are asymptotic preserving. We prove some stability and convergence results and we perform some numerical tests showing that the range of validity of the extended system is wider than that of the usual Zakharov system. Abstract: In this paper we study an optimal boundary control problem for the 3D steady-state Navier-Stokes equation in a cylindrically perforated domain $\Omega_{\epsilon}$.
The control is the boundary velocity field supported on the 'vertical' sides of thin cylinders. We minimize the vorticity of the viscous flow through the thick perforated domain. We show that an optimal solution to some limit problem in a non-perforated domain can be used as a basis for the construction of suboptimal controls for the original control problem. It is worth noticing that the limit problem may take the form of either a calculus of variations problem or an optimal control problem for Brinkman's law with another cost functional, depending on the cross-size of the thin cylinders. Abstract: In this paper, we investigate the mixed generalized Laguerre-Fourier spectral method and its applications to exterior problems of partial differential equations of fourth order. Some basic results on the mixed generalized Laguerre-Fourier orthogonal approximation are established, which play important roles in designing and analyzing various spectral methods for exterior problems of fourth order. As an important application, a mixed spectral scheme is proposed for the stream function form of the Navier-Stokes equations outside a disc. The numerical solution fulfills the compressibility automatically and keeps the same conservation property as the exact solution. The stability and convergence of the proposed scheme are proved. Numerical results demonstrate its spectral accuracy in space, and coincide with the analysis very well. Abstract: For three-dimensional competitive Lotka-Volterra systems, Zeeman identified 33 stable equivalence classes. Among these, only classes 26-31 may have limit cycles. We construct two limit cycles without a heteroclinic cycle (classes 30 and 31 in Zeeman's classification). Our construction together with Hofbauer and So [9] and Lu and Luo [10] gives a complete answer to Hofbauer's and So's problem [9] concerning two limit cycles for three-dimensional competitive Lotka-Volterra systems.
Abstract: We present two a priori $L^p$-stability estimates for the discrete velocity Boltzmann models. In a close-to-global Maxwellian regime, we derive a local-in-time $L^2$-stability estimate using a macro-micro decomposition and dispersion estimates for smooth perturbations, and as a direct application, we establish that classical solutions in Kawashima's framework [22, 24] are uniformly $L^2$-stable. In a close-to-vacuum regime, we also obtain a local-in-time $L^p$-stability estimate for classical solutions near vacuum. Abstract: We describe a Lohner-type algorithm for the computation of rigorous upper bounds for reachable sets of control systems, solutions of ordinary differential inclusions and perturbations of ODEs. Abstract: The semi and fully discrete finite element methods are proposed for investigating vibration analysis of elastic plate-plate structures. In the space directions, the longitudinal displacements on plates are discretized by conforming linear elements, and the corresponding transverse displacements are discretized by the Morley element, leading to a semi-discrete finite element method for the problem under consideration. Applying the second order central difference to discretize the time derivative, a fully discrete scheme is obtained, and two approaches for choosing the initial functions are also introduced. The error analysis in the energy norm for the semi and fully discrete methods is established, and some numerical examples are included to validate the theoretical analysis. Abstract: We establish the variational principle of Kolmogorov-Petrovsky-Piskunov (KPP) front speeds in a one dimensional random drift which is a mean zero stationary ergodic process with mixing property and local Lipschitz continuity. To prove the variational principle, we use the path integral representation of solutions, hitting time and large deviation estimates of the associated stochastic flows.
The variational principle allows us to derive upper and lower bounds of the front speeds which decay according to a power law in the limit of large root mean square amplitude of the drift. This scaling law is different from that of the effective diffusion (homogenization) approximation, which is valid for front speeds in incompressible periodic advection. Abstract: We consider the growth of an epitaxial thin film on a continuously supplied substrate using both the Burton-Cabrera-Frank (BCF) mean-field model and kinetic Monte-Carlo (KMC) simulation. Of particular interest are effects due to the finite size of the deposition zone, which is modeled by imposing an up- and downwind adatom density equal to the adatom density on an infinite terrace in equilibrium with a step. For the BCF model, we find this scenario admits a steady-state pattern with a specific number of steps separated by alternating widths. The specific spacing between the steps depends sensitively on the processing speed and on whether the number of steps is odd or even, with the range of velocities admitting an odd number of steps typically much narrower. These predictions are only partially confirmed by KMC simulations, however, with particularly poor agreement for an odd number of steps. To investigate further, we consider alternative KMC simulations with the interactions between random walkers on the terraces neglected so as to conform more closely with the mean field model. The latter simulations also more readily allow one to disable the step detachment mechanism, in which case they agree well with the predictions of the BCF model. Abstract: In this paper, a self-adaptive proportional control method for an economic chaotic system is discussed. The method we use does not require the fixed point to have a stable manifold. One can stabilize chaos via time-dependent adjustments of control parameters; one can also suppress chaos by adjusting external control signals.
Two kinds of chaos in duopoly output systems are stabilized in a neighborhood of an unstable fixed point by using this chaos-control method. The results show that the performance of the system is improved by controlling chaos. Furthermore, their applications in practice are presented. The results also show that players can control chaos by adjusting their planned output or variable cost per unit according to the sign of marginal profit. Abstract: In this paper, a new transmission model of human malaria in a partially immune population is formulated. We establish the basic reproduction number $\tilde{R}_0$ for the model. The existence and local stability of the equilibria are studied. Our results suggest that, if the disease-induced death rate is large enough, there may be an endemic equilibrium when $\tilde{R}_0 < 1$ and the model undergoes a backward bifurcation and saddle-node bifurcation, which implies that bringing the basic reproduction number below 1 is not enough to eradicate malaria. Explicit subthreshold conditions in terms of parameters are obtained beyond the basic reproduction number, which provide further guidelines for assessing control of the spread of malaria. Abstract: We provide a comprehensive study on the planar (2D) orientational distributions of nematic polymers under an imposed shear flow of arbitrary strength. We extend previous analysis for persistence of equilibria in steady shear and for transitions to unsteady limit cycles, from closure models [21] to the Doi-Hess 2D kinetic equation. A variation on the Boltzmann distribution analysis of Constantin et al. [3, 4, 5] and others [8, 22, 23] for potential flow is developed to solve for all persistent steady equilibria, and to characterize parameter boundaries where steady states cease to exist, which predicts the transition to tumbling limit cycles.
Abstract: The goal of this paper is to examine the evaluation of interfacial stresses using a standard, finite difference based, immersed boundary method (IMBM). This calculation is not trivial for two fundamental reasons. First, the immersed boundary is represented by a localized boundary force which is distributed to the underlying fluid grid by a discretized delta function. Second, this discretized delta function is used to impose a spatially averaged no-slip condition at the immersed boundary. These approximations can cause errors in interpolating stresses near the immersed boundary. To identify suitable methods for evaluating stresses, we investigate three model flow problems at very low Reynolds numbers. We compare the results of the immersed boundary calculations to those achieved by the boundary element method (BEM). The stress on an immersed boundary may be calculated either by direct evaluation of the fluid stress (FS) tensor or, for the stress jump, by direct evaluation of the locally distributed boundary force (wall stress or WS). Our first model problem is Poiseuille channel flow. Using an analytical solution of the immersed boundary formulation in this simple case, we demonstrate that FS calculations should be evaluated at a distance of approximately one grid spacing inward from the immersed boundary. For a curved immersed boundary we present a procedure for selecting representative interfacial fluid stresses using the concepts from the Poiseuille flow test problem. For the final two model problems, steady state flow over a bump in a channel and unsteady peristaltic pumping, we present an 'exclusion filtering' technique for accurately measuring stresses. Using this technique, these studies show that the immersed boundary method can provide reliable approximations to interfacial stresses.
L # 1 Show that
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
IPBLE: Increasing Performance By Lowering Expectations.
L # 2 If
Let log x = x', log y = y', log z = z'. Then: x' + y' + z' = 0. Rewriting in terms of x' gives:
Well done, krassi_holmz!
L # 3 If x²y³ = a and log(x/y) = b, then what is the value of (log x)/(log y)?
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x/log y = (log a + 3b)/(log a - 2b).
Very well done, krassi_holmz!
L # 4
You are not supposed to use a calculator or log tables for L # 4. Try again!
No, I didn't. I remember.
You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again: no calculators or log tables to be used (directly or indirectly) at all!!
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x/log y = (log a + 3b)/(log a - 2b)
Hi ganesh, for L # 1: since log_b(a) = 1/log_a(b) and log_a(a) = 1, we have
1/log_a(abc) + 1/log_b(abc) + 1/log_c(abc) = log_{abc}(a) + log_{abc}(b) + log_{abc}(c) = log_{abc}(abc) = 1.
Best Regards, Riad Zaidan
Hi ganesh, for L # 2 I think that the following proof is easier: Assume log(x)/(b-c) = log(y)/(c-a) = log(z)/(a-b) = t. So log(x) = t(b-c), log(y) = t(c-a), log(z) = t(a-b). So log(x) + log(y) + log(z) = tb - tc + tc - ta + ta - tb = 0. So log(xyz) = 0, so xyz = 1. Q.E.D.
Best Regards, Riad Zaidan
Gentlemen, thanks for the proofs. Regards.
\log_2(16) = \log_2 \left( \frac{64}{4} \right) = \log_2(64) - \log_2(4) = 6 - 2 = 4, \quad \log_2(\sqrt[3]{4}) = \frac{1}{3} \log_2(4) = \frac{2}{3}.
L # 4 I don't want a method that will rely on defining certain functions, taking derivatives, noting concavity, etc.
Change of base: Each side is positive, and multiplying by the positive denominator keeps the alleged inequality pointing in the same direction:
On the right-hand side, the first factor is equal to a positive number less than 1, while the second factor is equal to a positive number greater than 1. These facts are by inspection, combined with the nature of exponents/logarithms.
Because of (log A)B = B(log A) = log(A^B), I may turn this into:
I need to show that
Then
Then 1 (on the left-hand side) will be greater than the value on the right-hand side, and the truth of the original inequality will be established.
I want to show
Raise a base of 3 to each side:
Each side is positive, and I can square each side:
-----------------------------------------------------------------------------------
Then I want to show that when 2 is raised to a number equal to (or less than) 1.5, then it is less than 3.
Each side is positive, and I can square each side:
Signature line: I wish I had a more interesting signature line.
Hi reconsideryouranswer, this problem was posted by JaneFairfax. I think it would be appropriate that she verify the solution.
Hi all, I saw this post today and saw the problems on log. Well, they are not bad, they are good. But you can also try these problems here by me (credit: to a book): http://www.mathisfunforum.com/viewtopic … 93#p399193
Practice makes a man perfect.
There is no substitute for hard work. All of us do not have equal talents, but everybody has equal opportunities to build their talents. - APJ Abdul Kalam
JaneFairfax, here is a basic proof of L4:
For all real a > 1, y = a^x is a strictly increasing function.
log_2(3) versus log_3(5)
2*log_2(3) versus 2*log_3(5)
log_2(9) versus log_3(25)
2^3 = 8 < 9, so 2^(>3) = 9, i.e. the exponent needed for 9 is greater than 3
3^3 = 27 > 25, so 3^(<3) = 25, i.e. the exponent needed for 25 is less than 3
So, the left-hand side is greater than the right-hand side, because its logarithm is a larger number.
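As a quick numerical sanity check of the L # 4 inequality (a check only, not a proof, and certainly not calculator-free):

```python
import math

lhs = math.log(3, 2)  # log base 2 of 3, approximately 1.585
rhs = math.log(5, 3)  # log base 3 of 5, approximately 1.465
assert lhs > rhs  # consistent with the claimed inequality
```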
I'm trying to take the limit of a series involving $\sum_{j=1}^\infty\frac{1}{3^j}$ and thinking that this might have a partial sum representation. Here it says that $\sum_{i=1}^n \frac{1}{3^{i-1}} = \frac{3}{2}\left(1-\frac{1}{3^n}\right)$: http://tutorial.math.lamar.edu/Classes/CalcII/ConvergenceOfSeries.aspx But how is this derived?
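The standard derivation (the usual geometric-series argument, not taken from the linked page) multiplies the partial sum by the common ratio and subtracts. Writing $s_n = \sum_{i=1}^n \frac{1}{3^{i-1}}$:

```latex
s_n = 1 + \frac{1}{3} + \frac{1}{3^2} + \cdots + \frac{1}{3^{n-1}}, \qquad
\frac{1}{3}\,s_n = \frac{1}{3} + \frac{1}{3^2} + \cdots + \frac{1}{3^{n}}.
```

Subtracting the second line from the first, all middle terms cancel: $\left(1-\frac{1}{3}\right)s_n = 1 - \frac{1}{3^n}$, hence $s_n = \frac{3}{2}\left(1-\frac{1}{3^n}\right)$.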
As background, I don't know whether this kind of calculation is in the literature and/or is interesting. I can prove that, for a fixed integer $\lambda\geq 1$, $n$ is perfect if and only if $$2n=\left(\prod_{p\mid n}\frac{p^{e_p+1}-1}{p^{\lambda e_p+1}-1}\right)\sigma\left(n^\lambda\right),$$ where we suppose that $n$ has the factorization $n=\prod_{p\mid n}p^{e_p}$. Inspired by the fact that the following claim is easy to prove (and that simple cases such as $\lambda=1,n=6$ or $\lambda=2,n=6$ don't work), I made a conjecture, which is my question. Claim. Let $n=\prod_{p\mid n}p^{e_p}$ be an odd perfect number and, as before, $\lambda\geq 1$ a fixed integer. Then $$\sigma(\xi)\prod_{p\mid n}\left(p^{e_p+1}-1\right)=\left(2^{\lambda+1}-1\right)2n\prod_{p\mid n}\left(p^{\lambda e_p+1}-1\right)$$ holds, where $\xi_{\lambda}=\xi=2^{\lambda}n^{\lambda}$. Question. Let $n\geq 1$ be an integer, and take $\lambda\geq 1$ a fixed integer. Prove or refute: if $n$ satisfies $$\sigma(\xi)\prod_{p\mid n}\left(p^{e_p+1}-1\right)=\left(2^{\lambda+1}-1\right)2n\prod_{p\mid n}\left(p^{\lambda e_p+1}-1\right),$$ where $\xi=2^{\lambda}n^{\lambda}$, then $n$ is an odd perfect number. Thanks. My attempt (to establish the statement as true): the method is to prove the statement by contradiction, assuming that our $n$ has the form $2^{\alpha}m$ for integers $\alpha\geq 1$ and $m\geq 1$ with $(2,m)=1$. My deduction was then, if there are no typos, that $$\left(2^{\lambda(\alpha+1)+1}-1\right)\left(2^{\alpha+1}-1\right)\sigma(m)=\left(2^{\lambda+1}-1\right)2^{\alpha+1}m\left(2^{\lambda\alpha+1}-1\right),$$ in order to compare $\sigma(m)$ with $\operatorname{something}\cdot m$ and try to deduce a contradiction, but I don't know how to deduce one. As motivation, one could thus obtain such a characterization of odd perfect numbers (which, as I said, I don't know whether it is well known) if the proof of the statement can be finished.
How do I show that: $$\frac{1}{\sin^{2}\frac{\pi}{14}} + \frac{1}{\sin^{2}\frac{3\pi}{14}} + \frac{1}{\sin^{2}\frac{5\pi}{14}} = 24$$ This is actually problem B $4371$ given at this link. Looks like a very interesting problem. My attempts: Well, I have been thinking about this for the whole day, and I have got some insights. I don't believe my insights will lead me to a $\text{complete}$ solution. First, I wrote $\sin\frac{5\pi}{14}$ as $\sin\frac{9 \pi}{14}$, so that if I put $A = \frac{\pi}{14}$ the given equation becomes $$\frac{1}{\sin^{2}{A}} + \frac{1}{\sin^{2}{3A}} + \frac{1}{\sin^{2}{9A}} =24$$ Then I tried working with this by taking the $\text{lcm}$ and multiplying and doing something, which appeared futile. Next, I actually didn't work it out, but I think we have to look for an equation whose roots are these $\sin$ values and then use the $\text{sum of roots}$ formulas to get $24$. I think I haven't explained this clearly. $\text{Thirdly, is there a trick for proving such identities using Gauss sums?}$ One post related to this is: How to prove that: $\tan(3\pi/11) + 4\sin(2\pi/11) = \sqrt{11}$ I don't know how this will help as I haven't studied anything yet regarding Gauss sums.
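Before hunting for a proof, one can at least confirm the identity numerically (a check, not a solution):

```python
import math

# 1/sin^2(pi/14) + 1/sin^2(3*pi/14) + 1/sin^2(5*pi/14)
total = sum(1.0 / math.sin(k * math.pi / 14) ** 2 for k in (1, 3, 5))
assert abs(total - 24.0) < 1e-9  # agrees with 24 to machine precision
```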
The prime number theorem says that the $n$th prime number is $p_n = \Theta(n \log n)$, so the series $\sum_n 1/(p_n \log p_n)$ should converge by comparison to $\sum_n 1/n (\log n)^2$. However this seems like overkill using deep mathematics. Is there a more elementary proof that $\sum_n 1/(p_n \log p_n)$ converges? A combinatorial proof of the bounds $$\frac{x}{\log x} \ll \pi(x)\ll \frac{x}{\log x}$$ can be found in the following two Math.StackExchange answers: How to prove Chebyshev's result: $\sum_{p\leq n} \frac{\log p}{p} \sim\log n $ as $n\to\infty$? and Are there any Combinatoric proofs of Bertrand's postulate? Also see this answer for an application: Chebyshev: Proof $\prod \limits_{p \leq 2k}{\;} p > 2^k$ The bound is obtained by looking at the central binomial coefficient $\binom{2n}{n}$, and the primes dividing it. This approach is due to Erdos. It is not hard to see that the convergence of your series follows from these bounds. We need not use the prime number theorem to obtain $c_1n\log n <p_n<c_2n\log n$. There are quite elementary arguments for this, see e.g. How prove that $P_{2n}<(n+1)^2$. Then your argument should show that the series $\sum_n \frac{1}{p_n\log n}$ converges. Let $k = p_n, k > 2$ and $n = \pi(k).$ Then $$n = \pi(k) < \frac{6k}{\log k}= \frac{6p_n}{\log p_n} $$ so $$p_n > \frac{1}{6}n \log p_n > \frac{1}{6}n \log n.\tag1$$ This relies on another fact from Apostol, that $\pi(n) < \frac{6n}{\log n}.$ Now the inequality (1) gets you the comparison in the OP. Proof of the line above is given at p. 83 of Apostol and does not rely on the prime number theorem. I wouldn't call it 'easy.'
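Purely as an illustration (numerics cannot prove convergence), here is a sketch computing the partial sum $\sum_{p\le 10^5} \frac{1}{p\log p}$ with a simple sieve; the helper name is my own:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

partial = sum(1.0 / (p * math.log(p)) for p in primes_up_to(100_000))
# The partial sums grow very slowly (roughly 1.5 here), consistent with
# a tail of size O(1/log x) for the primes beyond x.
```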
Found the solution in a 1972 paper (George R. Price, "Extension of covariance selection mathematics", Ann. Hum. Genet., Lond., 1972, pp. 485-490). Biased weighted sample covariance: $\Sigma=\frac{1}{\sum_{i=1}^{N}w_i}\sum_{i=1}^N w_i \left(x_i - \mu^*\right)^T\left(x_i - \mu^*\right)$ And the unbiased weighted sample covariance given by applying the Bessel correction: $\Sigma=\frac{1}{\sum_{i=1}^{N}w_i - 1}\sum_{i=1}^N w_i \left(x_i - \mu^*\right)^T\left(x_i - \mu^*\right)$ Where $\mu^*$ is the (unbiased) weighted sample mean: $\mathbf{\mu^*}=\frac{\sum_{i=1}^N w_i \mathbf{x}_i}{\sum_{i=1}^N w_i}$ Important note: this works only if the weights are "repeat"-type weights, meaning that each weight represents the number of occurrences of one observation, and that $\sum_{i=1}^N w_i=N^*$, where $N^*$ represents the real sample size (the real total number of samples, accounting for the weights). I have updated the article on Wikipedia, where you will also find the equation for the unbiased weighted sample variance: https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_covariance Practical note: I advise you to first multiply column-by-column $w_i$ and $\left(x_i - \mu^*\right)$, and then do a matrix multiplication with $\left(x_i - \mu^*\right)$ to wrap things up and automatically perform the summation.
E.g. in Python Pandas/NumPy code:

import pandas as pd
import numpy as np

# X is the dataset, as a Pandas DataFrame; w is the vector of weights
mean = np.ma.average(X, axis=0, weights=w)  # compute the weighted sample mean (fast, efficient and precise)
mean = pd.Series(mean, index=list(X.keys()))  # convert to a Pandas Series (just aesthetic and more ergonomic, no difference in computed values)
xm = X - mean  # xm = X diff to mean
xm = xm.fillna(0)  # fill NaN with 0 (a variance of 0 is just void anyway, but this keeps the other covariance values computed correctly)
sigma2 = 1. / (w.sum() - 1) * xm.mul(w, axis=0).T.dot(xm)  # compute the unbiased weighted sample covariance

I did a few sanity checks using a non-weighted dataset and an equivalent weighted dataset, and it works correctly.
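The sanity check mentioned above can be reproduced: with repeat-type weights, the weighted estimator should match the ordinary covariance of the dataset expanded row-by-row according to the weights (the toy data and variable names here are mine):

```python
import numpy as np
import pandas as pd

# Toy dataset with repeat-type weights: row i occurs w[i] times
X = pd.DataFrame({'a': [1.0, 2.0, 4.0], 'b': [0.0, 2.0, 3.0]})
w = pd.Series([2, 1, 3])

# Unbiased weighted sample covariance, as in the snippet above
mean = pd.Series(np.average(X, axis=0, weights=w), index=X.columns)
xm = X - mean
cov_weighted = 1. / (w.sum() - 1) * xm.mul(w, axis=0).T.dot(xm)

# Expand each row according to its weight and use the plain estimator
X_expanded = X.loc[X.index.repeat(w.to_numpy())].reset_index(drop=True)
cov_plain = X_expanded.cov()  # pandas uses the Bessel correction by default

assert np.allclose(cov_weighted, cov_plain)
```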
\red\large e^x=\sum_{n=0}^\infty\frac{x^n}{n!}
\blue\normal\left\{\begin{array}{l}x+\frac{3x-y}{x^2+y^2}=3\\y-\frac{x+3y}{x^2+y^2}=0\end{array}\right.
\normal f^\prime(x)\ = \lim_{\Delta x\to0}\frac{f(x+\Delta x)-f(x)}{\Delta x}
R=\normal \frac{\displaystyle{\sum_{i=1}^n (x_i-\bar{x})(y_i- \bar{y})}}{\displaystyle{\left[ \sum_{i=1}^n(x_i-\bar{x})^2 \right]^{1/2}\left[ \sum_{i=1}^n(y_i-\bar{y})^2 \right]^{1/2}}}
\large \sum_{i=1}^{n}\frac{a_ib_i}{a_i+b_i} \leq \frac{(\sum_{i=1}^{n}a_i)(\sum_{i=1}^{n}b_i)}{\sum_{i=1}^{n}a_i+\sum_{i=1}^{n}b_i}
Some example expressions rendered by MimeTeX (it's good to appear to be smarter than you are 😬!) If an expression fails to be rendered, you would see an error image like this:
Recently, the wonderful yourequations.com site (which I've been using to occasionally render mathematical expressions on web pages) ceased its service due to heavy traffic. I was thinking about running my own LaTeX rendering server, and things turned out to be pretty easy as follows, thanks to the excellent MimeTeX package, a reduced LaTeX subset. It's also interesting to experiment with the "stone age" technique of CGI. First, download and compile the package:

unzip mimetex.zip
cd mimetex
cc -DAA mimetex.c gifsave.c -lm -o latex.cgi
# test the binary, view the 'fermat.gif' image
./latex.cgi -i "a^2+b^2=c^2" -e fermat.gif

Uploaded to the host, latex.cgi runs without any dependencies. The ugly thing with my (free) Linux host is that although it allows CGI, it doesn't allow CGI to return documents of type 'image/gif', no matter what. To work around this, I wrote a small PHP script, which parses the GET input, calls the CGI to generate and save the image in a cache directory, then redirects the request to the LaTeX image. This also helps not to expose your CGI directly on the web!

$cmd = "$mimetex_path -e ".$full_filename;
$cmd = $cmd." ".escapeshellarg($formula);
system($cmd, $status_code);
$text = $pictures_path."/".$filename;
return $text;

I've always loved CGI for its simplicity. CGI, Perl, Python… old things never die!
Although I have almost no experience with them, they let you do whatever you want given a little tweaking know-how. You can go and test my LaTeX rendering server at http://tkxuyen.com/latex.php, the URL syntax: latex.php?formula=[tex] latex code [/tex], here is an example. Please note that MimeTeX is not as full-featured as LaTeX: it can't render some overly complex expressions and it uses an ugly bitmap font. If your server has some LaTeX support, consider using MathTeX, a more advanced version from the same author.
What is a signal? The word 'signal' has been used in different contexts in the English language and it has several different meanings. In this class, we will use the term signal to mean a function of an independent variable that carries some information or describes some physical phenomenon. Often (not always) the independent variable will be time, and the signals will describe phenomena that change with time. Such a signal can be denoted by ${x}(t)$, where $t$ is the independent variable and ${x}(t)$ denotes the function of $t$. Notice that this is slightly in contrast to the notation that you may have been used to from your calculus courses. There, you may have used $y=f(x)$ to denote a function of $x$, where $x$ is the independent variable and $y$ is the dependent variable. In this course, since signals will be referred to as ${x}(t)$, ${x}$ typically refers to the dependent variable. Here are two examples of such signals. This notation, even though fairly standard in the literature, is potentially confusing since $x(t)$ is used to refer to two related but different things. Consider the sentence "A recording of John's speech will be denoted by $x(t)$ and a recording of Adele's music will be denoted by $y(t)$". Here $x(t)$ and $y(t)$ refer to the entire signals, i.e., the audio waveforms. However, if you consider the sentence "find all values of $t$ for which $x(t) < 2$", here $x(t)$ refers to the value taken by the signal at time $t$. To elaborate further, it is the function $x$ evaluated at time $t$. In Example 2 above, $x(\pi)= \pi \cos\pi = -\pi$. This terminology is fairly standard in all textbooks, but in my opinion this leads to confusion. Therefore, we will use underlined variables to denote signals, and variables without underlines will refer to values of the signals.
With this notation, the signal will be denoted by $\underline{x}(t)$ and the value taken by this signal at time $t$ will be denoted by $x(t)$. Continuous-time (CT) and Discrete-time (DT) signals We will encounter two classes of signals in this course. The first class of signals are those for which the independent variable changes in a continuous manner or, equivalently, the signal $\underline{x}(t)$ is defined for every real value (or a continuum of values) of $t$ in the range $(a,b)$ ($a$ can be $-\infty$ and $b$ can be $\infty$). The two examples considered above are examples of CT signals. In contrast, we will also be interested in signals which are defined only for integer values of the independent variable. These signals are called discrete-time (DT) signals and will be denoted by $\underline{x}[n]$. Such signals arise in two situations: (i) the phenomenon that is being modeled is naturally one for which the independent variable takes only integer values, or (ii) we can obtain a DT signal from a CT signal by 'sampling' the CT signal. For example, we can choose to keep only the values of the signal $\underline{x}(t)$ at time instants $nT_s, \forall n$ for a fixed sampling interval $T_s$. From the sampled values we can construct a DT signal $\underline{x}[n]$ by assigning $x[n] = x(nT_s)$. The following two examples elaborate on these two methods. How to specify or describe signals There are two ways in which we will specify or describe signals in this course. The first way is to provide an explicit mathematical description of the signals such as $x(t) = \sin(200\pi t)$ or $x(t) = e^{-t}$. Sometimes, these signals may have to be described piecewise. Often, it will be easier to describe signals by sketching the function described by the signals or "drawing a picture of the signal". One of the skills that a student should develop from this part of the course is to be able to write a mathematical description for a signal defined pictorially and vice versa.
The following examples illustrate these ideas.
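As a small script illustrating the sampling relation $x[n] = x(nT_s)$ described above (the particular signal is the $\sin(200\pi t)$ example from the text; the choice of $T_s$ is mine):

```python
import math

Ts = 0.001  # sampling interval, an arbitrary choice for illustration

def x_ct(t):
    """A CT signal, x(t) = sin(200*pi*t), as in the text."""
    return math.sin(200 * math.pi * t)

# DT signal obtained by sampling: x[n] = x(n * Ts)
x_dt = [x_ct(n * Ts) for n in range(11)]

# The DT sample at n = 5 is exactly the CT signal evaluated at t = 5 * Ts
assert x_dt[5] == x_ct(5 * Ts)
```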
Studying formulas about velocity and acceleration, I came up with a question: if I throw an object in the air with a velocity $v_0$ (suppose I throw it vertically), in how much time will its final velocity $v_f$ reduce to $0$ due to the force of gravity? Here is how I tried to solve the problem: Calculation of the time I know that the final velocity of an object that receives an acceleration is: $$v_f=v_0+at$$ where $a$ is the acceleration and $t$ is the time during which the acceleration acts. I supposed that $v_f$ after a negative acceleration (the gravitational acceleration on Earth, $g$) will reduce to $0$, and so I set up the following equation: $$0=v_0-g t$$ and solving the equation for $t$ I got \begin{equation} t=\frac{v_0}{g}\tag{1} \end{equation} Calculation of the distance I know that the formula for the distance traveled by an object moving with an acceleration is $$S=v_0t+\frac12 at^2$$ Now I can apply $(1)$ to the equation: $$S=v_0\cdot \frac{v_0}{g}-\frac12 g\left(\frac{v_0}{g}\right)^2$$ $$S=\frac{v_0^2}{g}-\frac{v_0^2}{2g}=\frac{v_0^2}{2g}\tag{2}$$ That would be the formula for the distance. Summing up: an object thrown in the air with a velocity $v$ will stop moving upward after a time $t=\frac{v}{g}$, after traveling a distance $S=\frac{v^2}{2g}$. Is this correct?
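The algebra checks out; here is a quick numerical sanity check of $(1)$ and $(2)$ (the values of $v_0$ and $g$ are arbitrary):

```python
g = 9.81   # m/s^2, gravitational acceleration
v0 = 20.0  # m/s, initial upward speed (arbitrary)

t_stop = v0 / g  # time at which the velocity reaches zero, equation (1)
S = v0 * t_stop - 0.5 * g * t_stop ** 2  # distance covered up to t_stop

# The velocity is zero at t_stop, and S matches the closed form v0^2/(2g), equation (2)
assert abs(v0 - g * t_stop) < 1e-12
assert abs(S - v0 ** 2 / (2 * g)) < 1e-12
```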
A Markov View of the Phase Vocoder Part 2

Introduction

Last post we motivated the idea of viewing the classic phase vocoder as a Markov process. This was due to the fact that the input signal's features are unknown to the computer, and the phase advancement for the next synthesis frame is entirely dependent on the phase advancement of the current frame. We will dive a bit deeper into this idea, and flesh out some details which we left untouched last week. This includes the effect our discrete Fourier transform has on the transition graph, different ways to interpret the correlation data, the effects which decisions we make have on the correlation data, and outlining a transition graph of our own. Let's remind ourselves of the purpose of this investigation: sinusoidal movement is a reality that must be dealt with when affecting the time information of a signal. However, we don't want to go so far as coding every time detail of a signal into our phase vocoder algorithm, as we would then have completely relinquished precious real-time control. Our goal, then, is to find some kind of middle ground between complete real-time control of an arbitrary signal, and rigid custom phase vocoders per signal. We hope to outline a transition graph which arises from thinking about the phase vocoder as a Markov chain, and which will be used to inform our phase vocoder time/frequency modification decisions.

Properties

Let's start by looking at the idea of a transition matrix and some of the implications of using a discrete Fourier transform. First, the size of the transition graph depends on the size of the FFT, whose frequency and time resolution are inversely proportional. So the larger the transition graph, the more time its frequency data represents. For musical signals, the longer the time span a frame represents, the more opportunity a sinusoidal component has to change location within it.
So if we consider a certain frequency range, the correlation of that frequency range to itself should be smaller the larger the $N$ we choose. This property is illustrated in the following equation. \begin{equation}\label{eq1} N_{2} > N_{1} \Rightarrow R^{p,N_{2}}_{\omega_{1},\omega_{2}} < R^{p,N_{1}}_{\omega_{1},\omega_{2}} \end{equation} where $R^{p}$ is the proportional correlation of a given frequency range $\omega_{1}$ with another frequency range, $\omega_{2}$, sufficiently close to $\omega_{1}$. We can see this if we compare two proportional correlation matrices, one of size $N = 1024$ and another of size $N = 4096$. In order to properly compare these, each cell must represent the same frequency range. So we add every $4$ columns and every $4$ rows, and divide the sum by $4$ (since each row is normalized to sum to $1$ and we are adding $4$ rows). Now we can see how the same frequency range correlates to every other frequency range for two different choices of $N$, and compare how the choice of $N$ affects this correlation. Here is the proportional correlation of each frequency bin with itself. Below we look at the proportional correlation of bin $k$ with bin $k \pm 1, k \pm 2, k \pm 3$. As we can see, for each frequency range, the proportional correlation matrix from the smaller $N$ (red discs) correlates higher with frequency ranges close to it than the proportional correlation matrix from the larger $N$ (blue circles). This is most pronounced for frequency ranges near the one we are comparing against, as we see above. This is the behavior we should expect given the above inequality. DSP choices regarding time affect the correlation matrix and should be considered when designing a phase vocoder transition graph. If we think about what the phase vocoder's main purpose is, time stretching/contracting, perhaps we should expect the time modification factor $\alpha$ to also have some kind of effect on the correlation matrix, and consequently the transition graph.
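The row/column aggregation described above (add every $4$ columns and every $4$ rows, divide by $4$) can be sketched as follows; the matrix here is random stand-in data, not the actual correlation data from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a proportional correlation matrix from the larger FFT:
# rows normalized to sum to 1, as in the post (the values are made up here).
M = rng.random((16, 16))
M /= M.sum(axis=1, keepdims=True)

# Aggregate every 4 rows and every 4 columns, then divide by 4, so each cell
# of the reduced matrix covers the same frequency range as the smaller FFT.
k = 4
reduced = M.reshape(16 // k, k, 16 // k, k).sum(axis=(1, 3)) / k

print(reduced.shape)          # the reduced matrix is 4 x 4
print(reduced.sum(axis=1))    # each row still sums to 1
```

Dividing by $4$ preserves the row normalization, so the reduced matrix remains row-stochastic and directly comparable to the smaller-$N$ matrix.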
Consider the correlation matrix of a phase vocoder signal of Bach's violin sonata in G minor [1] which has been time stretched by a factor of 2. The correlation matrix of the regular signal is on the left, and the half-speed signal is on the right. Not surprisingly, both matrices look similar. However, there are some differences that should be noted. Because we have stretched the duration of the signal by a factor of 2, any sinusoidal movement present in the signal occurs over a period of time twice as long as in the original signal. Moreover, any frequency distance traveled by a sinusoid in an analysis time period is necessarily less than in the original signal. We have now made it more likely that a given sinusoidal component will stay in the same frequency range than in the original signal. This has a similar effect on the correlation matrix as we saw above when we increased the size of the FFT. If we look at the correlation matrix of the time stretched signal, in general a given bin correlates higher with the bins closer to it than it does in the original signal. We can see this in the mean amplitude difference data, and in the amplitude difference standard deviation per frequency bin. The original speed signal (blue) has a larger average amplitude difference (left) and larger amplitude difference standard deviation (right) per frequency bin than the phase vocoder signal which has been time stretched by a factor of 2 (red). Since we are interpreting amplitude difference as one of the markers of sinusoidal movement, once again this implies that the slower we make our input signal, the more likely sinusoidal components are to stay in frequency bins close to where they currently are. Consequently, the transition graph should tend towards sinusoids staying in neighboring frequency bins.

Considerations

If we think about the features of this time stretched signal, we can start drawing conclusions about how we should construct our transition graph.
We see that the slower the sinusoidal movement present in the signal, the greater the correlation each bin has with the bins around it. So when we are modifying the timing of our signal by a factor of $\alpha$, our transition graph should also reflect our choice of $\alpha$. The following equation illustrates the properties we just explored. \begin{equation} \label{eq2}p(x,y,\alpha) = \frac{\alpha \lceil \log_{2}(\frac{x}{64}+1)\rceil }{(x-y)^{2}}, \quad x \neq y,\ \alpha > 0\end{equation} First, we will say that the probability that a sinusoid in a given channel, $x$, will move to another given channel, $y$, is inversely proportional to the squared difference between those channels. Thus the further in the spectrum you look from a given location, the less likely it is as a destination. Secondly, this falloff is a function of where the sinusoid is in the spectrum. The log function in the numerator creates a boundary which favors certain bins over others. Since we are using the discrete Fourier transform, the ceiling operator is used when drawing the boundary lines. For bins within the boundary, the probability that they are future destinations of a given sinusoid goes up compared to the plain inverse-square distance. For bins outside that boundary, the probability that they are a future destination goes down. We include the time modification factor in the numerator to contract/expand this boundary, thus mimicking the behavior we saw above. The more we slow down our signal ($\alpha$ is smaller) the smaller we draw the boundary. Conversely, the more we speed up our signal ($\alpha$ is larger) the larger we draw the boundary. The effect of larger $N$ that we observed at the beginning conveniently takes care of itself. While the boundary is allowed to get larger as we choose bigger $N$, the frequency distance per bin is smaller the larger $N$ is. Thus our inequality from the beginning still holds true.
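A hedged sketch of the equation above; note that this yields unnormalized transition weights (a real transition matrix would still need each row normalized to sum to $1$), and the bin indices used in the example are hypothetical.

```python
import math

def p(x, y, alpha):
    """Unnormalized transition weight from bin x to bin y, per the post's
    equation: alpha * ceil(log2(x/64 + 1)) / (x - y)**2, for x != y, alpha > 0."""
    assert x != y and alpha > 0
    return alpha * math.ceil(math.log2(x / 64 + 1)) / (x - y) ** 2

# The weight falls off with the squared bin distance: a neighboring bin is a
# far more likely destination than one ten bins away.
w_near = p(100, 101, 1.0)
w_far = p(100, 110, 1.0)
print(w_near, w_far)
```

Scaling $\alpha$ scales every weight uniformly before normalization, which is how the boundary contraction/expansion described above enters the graph.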
Lastly, we have to define the probability that a given sinusoid will stay in the same place, since in the above equation $x$ cannot equal $y$. We will let the probability that a sinusoid will stay in a given bin be one minus the average proportional amplitude change. Bins with a lower average proportional amplitude change will have a higher probability that sinusoids already occupying them will stay there. The behavior of this function is illustrated in the following graph. When implementing this behavior in a transition graph, we will limit the scope over which a sinusoid can travel. For $N = 1024$, we will consider a generous $24$ bins ($\pm 1076$ Hz) around a given bin as the limit for serious destinations of a sinusoid in that location. In order to motivate this, just consider an input signal, and think about how many times a single voice jumps more than 1000 Hz: not many. Ballpark of $3$ times a minute, at most, in general. So if we assume once every $20$ seconds, that's once every $882{,}000$ samples, or once every $861$ FFT frames, which shakes out to a probability of $.0012$ distributed over the remaining $412$ bins per frame (as must be the case for a Markov process), or $.00000282$ per bin. So for now let's just call it some small probability $\epsilon$ and worry about the more likely frequency destinations. The following transition graph in Table 1 is based on the data from the linked MATLAB code which can be found at the end of this post. Lastly, let's dig a bit deeper into some of the signal features we saw at the end of the previous post. It was pretty clear that an intentional signal's correlation matrix displayed that intention, and conversely that an unintentional signal's correlation matrix displayed that lack of intention. Clearly there are more than these two categories of signals, for instance, intentional signals corrupted by noise.
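The back-of-envelope arithmetic for $\epsilon$ above can be reproduced directly; the 44.1 kHz sample rate is an assumption inferred from the sample counts in the post, not stated there.

```python
# Back-of-envelope from the post: a large (>1000 Hz) jump once every 20 s.
# The 44.1 kHz sample rate and N = 1024 frame size are assumptions from context.
fs = 44_100
N = 1024

samples_per_jump = 20 * fs               # 882,000 samples between jumps
frames_per_jump = samples_per_jump / N   # ~861 FFT frames between jumps
p_jump_per_frame = 1 / frames_per_jump   # ~0.0012 per frame
p_per_far_bin = p_jump_per_frame / 412   # ~2.8e-6, spread over 412 distant bins

print(samples_per_jump, round(frames_per_jump), p_per_far_bin)
```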
Let's consider the correlation matrix for the first $7$ seconds of Bach's violin sonata in G minor which has been corrupted with Gaussian noise. As we can see, the less clear a signal is, the less informative the correlation matrix is. Specifically, the areas which showed little spectral activity have now been filled with junk. We should keep this in mind as we move forward to the next post.

Conclusion

We have proposed the outline of a phase vocoder transition graph based on the correlation data from Bach's Violin Sonata in G minor. We have shown some of the effects on phase vocoder output of the choices we make. In the world of real-time audio DSP, there are few aspects that we have complete control over. So when we are presented with the fact that a decision we want to make may actually improve performance, we should take full advantage of it. This will be explored next post as we start wrapping up this investigation. Previous post by Christian Yost: A Markov View of the Phase Vocoder Part 1 Next post by Christian Yost: The Phase Vocoder Transform
Given the coefficients below, if the Diophantine equation $Axy + Bx + Cy + D = \lfloor\frac{n}{3}\rfloor$ has exactly one solution in nonnegative integers, then $n$ is prime; otherwise $n$ is composite. In a sense, this equation models primes by formalizing the factorization of $n$. The nonnegative solutions for $y$ uniquely encode the exhaustive set of odd factors of $n$. Here are the assumptions: $n=2x+1$ where $x \in \mathbb{N}$, $d=2a+1$ is the divisor of $n$ corresponding to $y$, where $a=y+b$ if $3 \mid n$, otherwise $a=3y+b$ (thus $d$ is decoded from $y$), and where the following coefficients are used: $$\begin{array}{c|c|c|c|c|c} & \text{A} & \text{B} & \text{C} & \text{D} & \text{b} \\ \hline \text{1.a} & 6 & 5 & 2 & 1 & 2 \\ \hline \text{1.b} & 6 & 7 & 2 & 2 & 3 \\ \hline \text{2.a} & 6 & 5 & 4 & 3 & 2 \\ \hline \text{2.b} & 6 & 7 & 4 & 4 & 3 \\ \hline \text{3} & 2 & 5 & 0 & 0 & 2 \\ \hline \end{array}$$ For each $n$ the equation must be solved once or twice in order to complete the list of factors. Values of $n$ of the form $6j-1$, for any integer $j>0$, must use the $1.a$ and $2.b$ coefficients, those of the form $6j+1$ must use the $1.b$ and $2.a$ coefficients, and multiples of 3 must use the case-3 coefficients. For case 3, the factor 3 itself does not appear among the solutions (but choosing the case-3 coefficients already implies that 3 is a factor). Here are the solutions for $n=99$: 3 $\rightarrow$ 11 and 33; for $n=119$: 1.a $\rightarrow$ 119, 17 and 2.b $\rightarrow$ 7; and for $n=157$: 1.b $\rightarrow$ 157. The value $x=0$ for case-1 coefficients is a trivial solution, since that solution corresponds to $d=n$. No values of $y$ correspond to $d=1$.
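A brute-force sketch of the criterion: count the nonnegative solutions of $Axy+Bx+Cy+D=\lfloor n/3\rfloor$ for the applicable cases. The naive enumeration below is for illustration only, not an efficient method, and the bound `target` on $x$ and $y$ is a safe over-estimate.

```python
def count_solutions(A, B, C, D, target):
    """Count nonnegative integer solutions (x, y) of A*x*y + B*x + C*y + D = target."""
    count = 0
    for x in range(target + 1):
        for y in range(target + 1):
            if A * x * y + B * x + C * y + D == target:
                count += 1
    return count

# n = 99 is a multiple of 3, so the case-3 coefficients (2, 5, 0, 0) apply:
n = 99
target = n // 3
print(count_solutions(2, 5, 0, 0, target))   # 2 solutions (y = 3 and y = 14): 99 is composite
```

Decoding the two solutions with $a=y+b=y+2$ gives $d=2a+1 = 11$ and $33$, matching the factors listed in the question.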
The first four cases simplify to two equations, both of which may be evaluated for any $n=6j\pm1$, where $z=c \bmod 2$, $n=f(c)=2g+1$, $g=c+\lceil\frac{c}{2}\rceil$, and $c=\lfloor\frac{n}{3}\rfloor$: $6xy + 5x + 4y + 3 - 2z(y+1) = c$ $6xy + 7x + 2y + 2 + 2z(y+1) = c$ My question is: does this qualify as one of the prime-representing Diophantine equations revealed by Matiyasevich? In either case, please explain what aspects of the above equation meet or violate Matiyasevich's criteria.
This question by caveman is popular, but there were no attempted answers for months until my controversial one. It may be that the actual answer below is not, in itself, controversial, merely that the questions are "loaded" questions, because the field seems (to me, at least) to be populated by acolytes of AIC and BIC who would rather use OLS than each other's methods. Please look at all the assumptions listed, and the restrictions placed on data types and methods of analysis, and please comment on them; fix this, contribute. Thus far, some very smart people have contributed, so slow progress is being made. I acknowledge contributions by Richard Hardy and GeoMatt22, kind words from Antoni Parellada, and valiant attempts by Cagdas Ozgenc and Ben Ogorek to relate K-L divergence to an actual divergence. Before we begin, let us review what AIC is; one source for this is Prerequisites for AIC model comparison and another is from Rob J Hyndman. Specifically, AIC is calculated to be equal to $$2k - 2 \log(L(\theta))\,,$$ where $k$ is the number of parameters in the model and $L(\theta)$ is the likelihood function. AIC compares the trade-off between variance ($2k$) and bias ($2\log(L(\theta))$) from modelling assumptions. From Facts and fallacies of the AIC, point 3: "The AIC does not assume the residuals are Gaussian. It is just that the Gaussian likelihood is most frequently used. But if you want to use some other distribution, go ahead." The AIC is the penalized likelihood, whichever likelihood you choose to use. For example, to resolve AIC for Student's-t distributed residuals, we could use the maximum-likelihood solution for Student's-t.
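The definition above (and the small-sample variant AIC$_c$ discussed further below) is a one-liner in code; here is a hedged sketch with made-up log-likelihood values, purely to show how the penalty trades off against fit.

```python
def aic(log_likelihood, k):
    """AIC = 2k - 2 log L(theta)."""
    return 2 * k - 2 * log_likelihood

def aic_c(log_likelihood, k, n):
    """Small-sample correction: AIC_c = AIC + 2k(k+1)/(n - k - 1)."""
    return aic(log_likelihood, k) + 2 * k * (k + 1) / (n - k - 1)

# Two hypothetical models fit to the same n = 30 observations: model 2 fits
# slightly better (higher log L) but spends three extra parameters.
n = 30
a1 = aic_c(log_likelihood=-45.0, k=2, n=n)
a2 = aic_c(log_likelihood=-43.5, k=5, n=n)
print(a1, a2)   # the lower value is preferred
```

The log-likelihood values are invented for illustration; with real models they would come from the fitted likelihood of each model on the same dataset.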
The log-likelihood usually applied for AIC is derived from the Gaussian log-likelihood and given by $$ \log(L(\theta)) =-\frac{|D|}{2}\log(2\pi) -\frac{1}{2} \log(|K|) -\frac{1}{2}(x-\mu)^T K^{-1} (x-\mu), $$ $K$ being the covariance structure of the model, $|D|$ the sample size (the number of observations in the dataset), $\mu$ the mean response and $x$ the dependent variable. Note that, strictly speaking, it is unnecessary for AIC to correct for the sample size, because AIC is not used to compare datasets, only models using the same dataset. Thus, we do not have to investigate whether the sample size correction is done correctly, but we would have to worry about this if we could somehow generalize AIC to be useful between datasets. Similarly, much is made about $K\gg|D|>2$ to ensure asymptotic efficiency. A minimalist view might consider AIC to be just an "index," making $K>|D|$ relevant and $K\gg|D|$ irrelevant. However, some attention has been given to this in the form of proposing an altered AIC for $K$ not much larger than $|D|$, called AIC$_c$; see the second paragraph of the answer to Q2 below. This proliferation of "measures" only reinforces the notion that AIC is an index. However, caution is advised when using the "i" word, as some AIC advocates equate use of the word "index" with the same fondness as might be attached to referring to their ontogeny as extramarital. Q1: But a question is: why should we care about this specific fitness-simplicity trade-off? Answer in two parts. First, the specific question. You should only care because that was the way it was defined. If you prefer, there is no reason not to define a CIC, a caveman information criterion; it will not be AIC, but CIC would produce the same answers as AIC, and it does not affect the tradeoff between goodness-of-fit and positing simplicity.
Any constant that could have been used as an AIC multiplier, including unity, would have to have been chosen and adhered to, as there is no reference standard to enforce an absolute scale. However, adhering to a standard definition is not arbitrary in the sense that there is room for one and only one definition, or "convention," for a quantity, like AIC, that is defined only on a relative scale. Also see AIC assumption #3, below. The second answer to this question pertains to the specifics of the AIC tradeoff between goodness-of-fit and positing simplicity, irrespective of how its constant multiplier would have been chosen. That is, what actually affects the "tradeoff"? One of the things that affects it is the degrees-of-freedom adjustment for the number of parameters in a model; this led to defining a "new" AIC, called AIC$_c$, as follows: $$\begin{align}AIC_c &= AIC + \frac{2k(k + 1)}{n - k - 1}\\&= \frac{2kn}{n-k-1} - 2 \ln{(L)}\end{align} \,,$$ where $n$ is the sample size. Since the weighting is now slightly different when comparing models having different numbers of parameters, AIC$_c$ selects models differently from AIC itself, and identically to AIC when the two models have the same number of parameters. Other methods will also select models differently; for example, "The BIC [sic, Bayesian information criterion] generally penalizes free parameters more strongly than the Akaike information criterion, though it depends..." ANOVA would also penalize supernumerary parameters differently, using partial probabilities of the indispensability of parameter values, and in some circumstances would be preferable to AIC use. In general, any method of assessing the appropriateness of a model will have its advantages and disadvantages. My advice would be to test the performance of any model selection method for its application to the data regression methodology more vigorously than testing the models themselves. Any reason to doubt?
Yup, care should be taken when constructing or selecting any model test, to select methods that are methodologically appropriate. AIC is useful for a subset of model evaluations; for that, see Q3, next. For example, extracting information with model A may be best performed with regression method 1, and for model B with regression method 2, where model B and method 2 sometimes yield non-physical answers, where neither regression method is MLR, where the residuals are a multi-period waveform with two distinct frequencies for either model, and the reviewer asks "Why don't you calculate AIC?" Q3: How does this relate to information theory? MLR assumption #1. AIC is predicated upon the applicability of maximum likelihood regression (MLR) to a regression problem. There is only one circumstance in which ordinary least squares regression and maximum likelihood regression have been pointed out to me as being the same. That would be when the residuals from ordinary least squares (OLS) linear regression are normally distributed and MLR has a Gaussian loss function. In other cases of OLS linear regression, for nonlinear OLS regression, and for non-Gaussian loss functions, MLR and OLS may differ. There are many other regression targets than OLS or MLR or even goodness of fit, and frequently a good answer has little to do with either, e.g., for most inverse problems. There are highly cited attempts (e.g., 1100 times) to generalize AIC for quasi-likelihood so that the dependence on maximum likelihood regression is relaxed to admit more general loss functions. Moreover, MLR for Student's-t, although not in closed form, is robustly convergent. Since Student's-t residual distributions are both more common and more general than, as well as inclusive of, Gaussian conditions, I see no special reason to use the Gaussian assumption for AIC. MLR assumption #2. MLR is an attempt to quantify goodness of fit. It is sometimes applied when it is not appropriate.
For example, for trimmed range data, when the model used is not trimmed. Goodness-of-fit is all fine and good when we have complete information coverage. In time series, we do not usually have fast enough information to understand fully what physical events transpire initially, or our models may not be complete enough to examine very early data. Even more troubling is that one often cannot test goodness-of-fit at very late times, for lack of data. Thus, goodness-of-fit may only be modelling 30% of the area under the fitted curve, and in that case we are judging an extrapolated model on the basis of where the data is, without examining what that means. In order to extrapolate, we need to look not only at the goodness of fit of 'amounts' but also at the derivatives of those amounts, failing which we have no "goodness" of extrapolation. Thus, fit techniques like B-splines find use because they can more smoothly predict what the data is doing when the derivatives are fit, as can inverse problem treatments, e.g., ill-posed integral treatment over the whole model range, like error-propagation adaptive Tikhonov regularization. Another complication: the data can tell us what we should be doing with it. What we need for goodness-of-fit (when appropriate) is residuals that are distances in the sense that a standard deviation is a distance. That is, goodness-of-fit would not make much sense if a residual that is twice as long as a single standard deviation were not also of length two standard deviations. Selection of data transforms should be investigated prior to applying any model selection/regression method. If the data has proportional-type error, typically taking the logarithm before selecting a regression is not inappropriate, as it then transforms standard deviations into distances. Alternatively, we can alter the norm to be minimized to accommodate fitting proportional data.
The same would apply for a Poisson error structure: we can either take the square root of the data to normalize the error, or alter our norm for fitting. There are problems that are much more complicated, or even intractable, if we cannot alter the norm for fitting, e.g., Poisson counting statistics from nuclear decay, where the radionuclide decay introduces an exponential time-based association between the counting data and the actual mass that would have been emanating those counts had there been no decay. Why? If we decay back-correct the count rates, we no longer have Poisson statistics, and residuals (or errors) from the square root of corrected counts are no longer distances. If we then want to perform a goodness-of-fit test of decay-corrected data (e.g., AIC), we would have to do it in some way that is unknown to my humble self. Open question to the readership: if we insist on using MLR, can we alter its norm to account for the error type of the data (desirable), or must we always transform the data to allow MLR usage (not as useful)? Note, AIC does not compare regression methods for a single model; it compares different models for the same regression method. AIC assumption #1. It would seem that MLR is not restricted to normal residuals; for example, see this question about MLR and Student's-t. Next, let us assume that MLR is appropriate to our problem, so that we can track its use for comparing AIC values in theory. Next we assume that we have 1) complete information and 2) the same type of distribution of residuals (e.g., both normal, both Student's-t) for at least 2 models. That is, it is something of an accident when two models have the same type of distribution of residuals. Could that happen? Yes, probably, but certainly not always. AIC assumption #2. AIC relates the negative logarithm of the quantity (number of parameters in the model divided by the Kullback-Leibler divergence). Is this assumption necessary?
In the general loss functions paper a different "divergence" is used. This leads us to ask: if that other measure is more general than the K-L divergence, why are we not using it for AIC as well? The mismatched information for AIC from the Kullback-Leibler divergence is: "Although ... often intuited as a way of measuring the distance between probability distributions, the Kullback–Leibler divergence is not a true metric." We shall see why shortly. The K-L argument gets to the point where the difference between two things, the model ($P$) and the data ($Q$), is $$D_{\mathrm{KL}}(P\|Q) = \int_X \log\!\left(\frac{{\rm d}P}{{\rm d}Q}\right) \frac{{\rm d}P}{{\rm d}Q} \, {\rm d}Q \,,$$ which we recognize as the entropy of $P$ relative to $Q$. AIC assumption #3. Most formulas involving the Kullback–Leibler divergence hold regardless of the base of the logarithm. The constant multiplier might have more meaning if AIC were relating more than one data set at a time. As it stands, when comparing methods, if $AIC_{data,model 1}<AIC_{data,model 2}$, then multiplying both sides by any positive number preserves the $<$. Since the multiplier is arbitrary, setting the constant to a specific value as a matter of definition is also not inappropriate. AIC assumption #4. That would be that AIC measures Shannon entropy or "self-information." What we need to know is: "Is entropy what we need for a metric of information?" To understand what "self-information" is, it behooves us to normalize information in a physical context; any one will do. Yes, I want a measure of information to have properties that are physical. So what would that look like in a more general context? The Gibbs free-energy equation ($\Delta G = \Delta H - T\Delta S$) relates the change in energy to the change in enthalpy minus the absolute temperature times the change in entropy.
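The asymmetry that keeps the Kullback–Leibler divergence from being a true metric, as quoted above, is easy to see numerically; here is a minimal discrete sketch with two made-up distributions.

```python
import math

def kl(p, q):
    """Discrete Kullback-Leibler divergence D_KL(P || Q), natural logarithm."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

P = [0.5, 0.5]
Q = [0.9, 0.1]

# D_KL is not symmetric, one of the metric axioms it fails:
print(kl(P, Q), kl(Q, P))   # the two directions disagree
```

It does satisfy $D_{\mathrm{KL}}(P\|P)=0$ and nonnegativity, which is why it is still useful as a relative-entropy measure even though it is not a distance.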
Temperature is an example of a successful type of normalized information content, because if one hot and one cold brick are placed in contact with each other in a thermally closed environment, then heat will flow between them. Now, if we jump at this without thinking too hard, we say that heat is the information. But it is the relative information that predicts the behaviour of the system. Information flows until equilibrium is reached, but equilibrium of what? Temperature, that's what; not heat as in the particle velocities of certain particle masses. I am not talking about molecular temperature, I am talking about the gross temperature of two bricks, which may have different masses, be made of different materials, have different densities, etc., and none of that do I have to know; all I need to know is that the gross temperature is what equilibrates. Thus if one brick is hotter, it has more relative information content, and when colder, less. Now, if I am told one brick has more entropy than the other, so what? That, by itself, will not predict whether it will gain or lose entropy when placed in contact with another brick. So, is entropy alone a useful measure of information? Yes, but only if we are comparing the same brick to itself, thus the term "self-information." From that comes the last restriction: to use K-L divergence, all bricks must be identical. Thus, what makes AIC an atypical index is that it is not portable between data sets (e.g., different bricks), which is not an especially desirable property, and one that might be addressed by normalizing information content. Is K-L divergence linear? Maybe yes, maybe no. However, that does not matter; we do not need to assume linearity to use AIC, and, for example, entropy itself is, I think, not linearly related to temperature. In other words, we do not need a linear metric to use entropy calculations. One good source of information on AIC is in this thesis.
On the pessimistic side this says, "In itself, the value of the AIC for a given data set has no meaning." On the optimistic side it says that models that have close results can be differentiated by smoothing to establish confidence intervals, and much, much more.
Based on the previous question and the comment in it, imagine two different mean-field Hamiltonians $H=\sum(\psi_i^\dagger\chi_{ij}\psi_j+H.c.)$ and $H'=\sum(\psi_i^\dagger\chi_{ij}'\psi_j+H.c.)$. We say that $H$ and $H'$ are gauge equivalent if they have the same eigenvalues and the same projected eigenspaces. And the Wilson loop $W(C)$ can be defined as the trace of the matrix product $P(C)$ (see the notations here). Now my questions are: (1) "$H$ and $H'$ are gauge equivalent" if and only if "$W(C)=W'(C)$ for all loops $C$ on the 2D lattice". Is this true? How to prove or disprove it? (2) If the system is on a 2D torus, is $W(L)$ always a positive real number? This would mean that the 'total flux' (the phase of $W(L)$) through the torus is quantized as $2\pi\times\text{integer}$, where $L$ is the boundary of the 2D lattice. (3) If the Hamiltonian contains extra terms, say $H=\sum(\psi_i^\dagger\chi_{ij}\psi_j+\psi_i^T\eta_{ij}\psi_j+H.c.+\psi_i^\dagger h_i\psi_i)$, is the Wilson loop still defined as $W(C)=\operatorname{tr}(P(C))$? Thanks a lot. This post imported from StackExchange Physics at 2014-03-09 08:40 (UCT), posted by SE-user K-boy
How is the current related to $nF$? Current and $nF$ have different dimensions. $nF$ is the charge $C$ transferred during the reaction, while the current $i$ is the rate of charge transfer ($dC/dt$). The following is a derivation, now edited to be more rigorous. The equation you provide is an expression for electrical Joule heating, which follows in the steady state from the expression for the power (rate of work) generated by the battery. From Ohm's law, $P=\left(\frac{dw_{rev}}{dt}\right)=-i^2R=-i E_0 \tag{1}$ where $E_0$ is the (open-circuit) electric potential, $i$ the current, and $w_{rev}$ the reversible electrical work. Note that $E_0$ is the potential when reversible work is done. Now assume a steady state, under constant pressure and temperature constraints, with the work done and the heat generated cancelling (so that the internal energy $U$ is constant), that is, $Q_{rev} = -w_{rev} \tag{2}$ and therefore the power and the rate of heat dissipation are also equal: $\left(\frac{dQ_{rev}}{dt}\right)=-\left(\frac{dw_{rev}}{dt}\right)=i E_0 \tag{3}$ Now according to the Nernst equation, $\Delta G=w_{rev} =-nFE_0 \tag{4}$ In the steady state the dissipated heat is equal to the electrical work, so that, combining (2) and (4), we have $\frac{Q_{rev}}{nF}=-\frac{w_{rev}}{nF}=E_0 \tag{5}$ which leads, combined with (3), to the equation in the book: $\left(\frac{dQ_{rev}}{dt}\right)=i \frac{Q_{rev}}{nF} \tag{6}$ Aside Note that I apply the opposite sign convention for work from that in the OP (apologies): in the convention I use, work is positive when performed on the system (charging a battery is positive work, discharging negative work). This does not affect the result of the derivation, since I apply the same sign convention for heat (for an exothermal process heat is negative).
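A numeric sketch of equations (5) and (6): all of the values below ($n$, $E_0$, $i$) are made-up examples, not from the question; only the Faraday constant is physical.

```python
# Numeric sketch of equations (5)-(6). F is the Faraday constant in C/mol;
# n, E0 and i are hypothetical example values.
F = 96485.0
n = 2          # electrons transferred per reaction (assumption)
E0 = 1.10      # open-circuit potential in volts (assumption)
i = 0.50       # current in amperes (assumption)

Q_rev = n * F * E0                  # reversible heat per mole, J/mol, from (5)
rate_direct = i * E0                # dQ_rev/dt = i * E0, from (3)
rate_via_6 = i * Q_rev / (n * F)    # dQ_rev/dt = i * Q_rev / (nF), from (6)

print(Q_rev, rate_direct, rate_via_6)   # the two rate expressions agree
```

The agreement of the last two quantities is just equation (5) restated: $Q_{rev}/(nF)=E_0$, so (3) and (6) are the same statement.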
On $\Delta H$ and an alternative (longer) derivation For a process at constant $p$, it is common to encounter the expression $$\Delta H = Q_p$$ However, if there is non-$pV$ work, the more general form of this equation is $$\Delta H = Q_p + w_{non\text{-}pV}$$ This expression is general and applies when the process is carried out either reversibly or irreversibly. In differential form, $$d\Delta H = dQ_p + dw_{non\text{-}pV} \tag{a1}$$ Note also that in the preceding derivation the non-$pV$ work is electrical. When the process is carried out reversibly, the maximum amount of work is done and $$w_{non\text{-}pV,rev} =\Delta G = \Delta H - T \Delta S = \Delta H - Q_{rev}$$ This leads to the following expression at constant $T$, equal to that in the linked problem (but note the different work sign convention here): $$ d\Delta H = dQ_{rev} + dw_{rev} = d\Delta G + Td\Delta S \tag{a2}$$ where I dropped the "non-$pV$" subscript since the work is assumed to be electrical. Since $H$ is a state function it must be equal for reversible and irreversible processes, and we can equate (a1) and (a2), leading to the following general expression: $$ d\Delta H = dQ_p + dw_{ele}= d\Delta G + Td\Delta S $$ which leads finally to $$ dQ_p = - dw_{ele} + d\Delta G + Td\Delta S \tag{a3}$$ Taking the time derivative of this equation results in the final expression in the linked problem (making sure to account for differences in the work sign convention): $$\dot{Q} = \dot{Q}_\text{rev} + \dot{Q}_\text{irrev} = IT\,\frac{\mathrm dE_0}{\mathrm dT} + I(E-E_0) \tag{a4}$$ To obtain the equation in this problem it is only necessary to apply the reversibility condition starting from either (a3) or (a4). Starting from (a3), $$ dQ_{rev} = - dw_{ele,rev} + d\Delta G + Td\Delta S $$ But $ dw_{ele,rev} = d\Delta G $, which leads to (somewhat trivially) $$ dQ_{rev} = Td\Delta S $$ Taking the time derivative and inserting the Nernst expression for $\Delta S$ (twice!)
gives $$\dot{Q}_\text{rev} = \left(\frac{dQ_{rev}}{dt}\right) = IT\,\frac{\mathrm dE_0}{\mathrm dT} = I\frac{Q_{rev}}{nF}$$ which is the desired expression. The same result can be arrived at by applying the reversibility condition to equation (a4).
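As a quick numerical illustration of (3), (5) and (6): with hypothetical cell values ($E_0$, $i$ and $n$ below are made-up numbers, not taken from the problem), the two expressions for the reversible heat rate agree:

```python
# Numerical illustration of dQ_rev/dt = i*E0 = i*Q_rev/(nF).
# E0, i and n are hypothetical values chosen only for illustration.
F  = 96485.0   # Faraday constant, C/mol
n  = 2         # electrons transferred per reaction (hypothetical)
E0 = 1.10      # open-circuit potential, V (hypothetical)
i  = 0.50      # current, A (hypothetical)

Q_rev = n * F * E0            # reversible heat per mole of reaction, J/mol, from (5)
rate1 = i * E0                # dQ_rev/dt via eq. (3), W
rate2 = i * Q_rev / (n * F)   # dQ_rev/dt via eq. (6), W
print(rate1, rate2)           # both give 0.55 W
```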
Real numbers Framework $\dots,\,-\frac{\pi}{2},\,0,\,1,\dots$ ${\mathbb R}$ Discussion axiomatics and cardinality Specker sequence Choose a programming language and index all Turing machines (or executable programs) by $i$. Let $h(i,s)$ be $0$ or $1$, depending on whether the machine with index $i$ halts within $s$ steps. For any given pair of numbers, you can indeed compute $h(i,s)$ by just running the program and waiting $s$ time steps. Define a sequence of rationals by having the $n$-th term given by the following sum (where you run through $i$ and run each machine to step $s=n-i$) $a_n = \sum_{i+s=n} \dfrac {h(i,s)} {2^i} $ This is some number between 0 and 1. But computing the limit as $n \to \infty$ (and thus $s \to \infty$ for all $i$) requires knowing whether Turing machines ever halt, which has been known to be impossible since the 1930s. Thus this real number is not computable. And most real numbers are worse, really, because this number is at least definable. Most reals aren't even that.
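The contrast — each $h(i,s)$ is computable while the limit is not — can be sketched in code. This is a toy stand-in: the genuine enumeration of Turing machines is replaced by a hard-coded table of halting times, and for real machines that table is exactly what cannot be computed.

```python
from fractions import Fraction

# Toy enumeration of "machines": halting_time[i] is the number of steps
# machine i runs before halting, or None if it never halts. For genuine
# Turing machines this table is uncomputable; here it is hard-coded.
halting_time = [3, None, 1, None, 7, 2]

def h(i, s):
    """1 if machine i halts within s steps, else 0 -- computable by running it."""
    t = halting_time[i]
    return 1 if t is not None and t <= s else 0

def a(n):
    """n-th term of the sequence: sum over i+s=n of h(i,s)/2^i."""
    return sum(Fraction(h(i, n - i), 2**i)
               for i in range(min(n, len(halting_time) - 1) + 1))

print(a(4), a(10))  # each term is exactly computable; the limit is not
```

The sequence is non-decreasing and bounded by 1, so it converges; but naming its limit would require deciding which `None` entries are real.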
Can I mix formation enthalpies and bond enthalpies in the same calculation? I think the answer is no, since they are relative measurements, and possibly relative to different things, but I haven't seen it stated explicitly anywhere. You can combine this data. You just need to find a sensible reaction cycle, and you have to make sure the temperatures (usually 298 K) match. To give a simple example, consider $\ce{CO2(g)}$. What is the $\ce{CO}$ bond enthalpy $\mathrm{\epsilon_{C=O}}$ in $\ce{CO2(g)}$? It is one half of the enthalpy for the reaction $$\ce{ CO2(g)->C(g) + 2O(g)}~~~~~\mathrm{\Delta H^\circ = 2 \epsilon_{C=O}}$$ The enthalpy of formation of $\ce{CO2(g)}$ is equal to the enthalpy of combustion of graphite: $$\ce{C(s) + O2(g) -> CO2(g)} ~~~~~~\mathrm{\Delta _f H^\circ = -393.51}\pu{kJ mol^{-1}}$$ The heat of formation of $\ce{C(g)}$ is: $$\ce{C(s) -> C(g) }~~~~~~\mathrm{\Delta _f H^\circ = 716.68}\pu{kJ mol^{-1}}$$ We also need a value of the $\ce{OO}$ bond enthalpy in $\ce{O2}$: $$\ce{O_2(g) -> 2O(g) }~~~~\mathrm{\Delta H^\circ = \epsilon_{O=O} = 498.3}\pu{kJ mol^{-1}}$$ Adding things up we obtain $\mathrm{\epsilon_{C=O}}$: $$\mathrm{\epsilon_{C=O}} = \frac12(393.51+716.68+498.3)\,\pu{kJ mol^{-1}} = 804.2\,\pu{kJ mol^{-1}}$$
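The cycle arithmetic above can be checked in a few lines (using the same quoted values; the variable names are ad hoc):

```python
# Hess-cycle check for the C=O bond enthalpy in CO2(g), using the values
# quoted above (all in kJ/mol at 298 K).
dHf_CO2   = -393.51   # C(s) + O2(g) -> CO2(g)
dHf_C_gas =  716.68   # C(s) -> C(g)
eps_OO    =  498.3    # O2(g) -> 2 O(g)

# CO2(g) -> C(g) + 2 O(g): reverse the formation, then atomize C(s) and O2(g)
atomization = -dHf_CO2 + dHf_C_gas + eps_OO
eps_CO = atomization / 2
print(round(eps_CO, 1))  # 804.2 kJ/mol per C=O bond
```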
I have $B_s = $ brownian motion at time $s$. $$ \int_0 ^t B_s \, dB_s$$ $$0 \leq t \leq T$$ And want to check if it is a martingale, first from its closed form expression, and then via conditions on the Ito integral. From exercises immediately before this one, it is known to me that the closed form expression is $$ \int _0 ^t B_s \, dB_s = \frac{1}{2}(B_t^2 - t) $$ But - is there a way to actually derive the closed form expression, without prior knowledge, and working from the integral itself?
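Not a derivation, but the closed form is easy to check numerically with a left-endpoint Riemann sum; the discrete identity $\sum_k B_k\,\Delta B_k = \tfrac12\big(B_t^2 - \sum_k(\Delta B_k)^2\big)$, with $\sum_k(\Delta B_k)^2 \to t$, is also the standard route to it. A sketch (step and path counts are arbitrary):

```python
import numpy as np

# Numerical sanity check: approximate int_0^t B_s dB_s by a left-endpoint
# Riemann sum on simulated paths and compare with (B_t^2 - t)/2.
rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 2000, 500
dt = t / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # B_s at left endpoints
ito_sum = np.sum(B_left * dB, axis=1)
closed_form = 0.5 * (B[:, -1] ** 2 - t)
# By the identity above, the error is (t - sum dB^2)/2, which shrinks
# as the grid is refined.
print(np.max(np.abs(ito_sum - closed_form)))
```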
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Search Now showing items 1-10 of 18 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... 
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Does anyone here understand why he set the Velocity of Center Mass = 0 here? He keeps setting the velocity of the center of mass, and the acceleration of the center of mass (in other questions), to zero, which I don't comprehend. @amanuel2 Yes, this is a conservation of momentum question. The initial momentum is zero, and since there are no external forces, after she throws the 1st wrench the sum of her momentum plus the momentum of the thrown wrench is zero, and the centre of mass is still at the origin. I was just reading a sci-fi novel where physics "breaks down". While of course fiction is fiction and I don't expect this to happen in real life, when I tried to contemplate the concept I found that I cannot even imagine what it would mean for physics to break down. Is my imagination too limited o... The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing, in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product. The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe...
not exactly identical however Also typo: Wavefunction does not really have an energy, it is the quantum state that has a spectrum of energy eigenvalues Since Hamilton's equation of motion in classical physics is $$\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla H(x,p) \, ,$$ why does everyone make a big deal about Schrodinger's equation, which is $$\frac{d}{dt} \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat H \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} \, ?$$ Oh by the way, the Hamiltonian is a stupid quantity. We should always work with $H / \hbar$, which has dimensions of frequency. @DanielSank I think you should post that question. I don't recall many looked at the two Hamilton equations together in this matrix form before, which really highlight the similarities between them (even though technically speaking the schroedinger equation is based on quantising Hamiltonian mechanics) and yes you are correct about the $\nabla^2$ thing. I got too used to the position basis @DanielSank The big deal is not the equation itself, but the meaning of the variables. The form of the equation itself just says "the Hamiltonian is the generator of time translation", but surely you'll agree that classical position and momentum evolving in time are a rather different notion than the wavefunction of QM evolving in time. If you want to make the similarity really obvious, just write the evolution equations for the observables. The classical equation is literally Heisenberg's evolution equation with the Poisson bracket instead of the commutator, no pesky additional $\nabla$ or what not The big deal many introductory quantum texts make about the Schrödinger equation is due to the fact that their target audience are usually people who are not expected to be trained in classical Hamiltonian mechanics. No time remotely soon, as far as things seem. 
Just the amount of material required for an undertaking like that would be exceptional. It doesn't even seem like we're remotely near the advancement required to take advantage of such a project, let alone organize one. I'd be honestly skeptical of humans ever reaching that point. It's cool to think about, but so much would have to change that trying to estimate it would be pointless currently (lol) talk about raping the planet(s)... re dyson sphere, solar energy is a simplified version right? which is advancing. what about orbiting solar energy harvesting? maybe not as far away. kurzgesagt also has a video on a space elevator, its very hard but expect that to be built decades earlier, and if it doesnt show up, maybe no hope for a dyson sphere... o_O BTW @DanielSank Do you know where I can go to wash off my karma? I just wrote a rather negative (though well-deserved, and as thorough and impartial as I could make it) referee report. And I'd rather it not come back to bite me on my next go-round as an author o.o
Any integer $p$ greater than 1 is called a prime number if and only if its only positive factors are 1 and the number $p$ itself. The basic ideology involved in this post is flawed and the post has now been moved to Archives. – The Editor Prime Generating Formulas We all know how hard it is to predict a formula for prime numbers! They have… Consider two natural numbers $n_1$ and $n_2$, out of which one is twice as large as the other. We are not told whether $n_1$ or $n_2$ is the larger one, but we can state the following two propositions: PROPOSITION 1: The difference $n_1-n_2$, if $n_1 >n_2$, is different from the difference $n_2-n_1$, if $n_2 >n_1$. PROPOSITION 2: The difference $n_1-n_2$, if $n_1 >n_2$, is the same… 11th December 2013 (or in short 11-12-13) is just a few hours away here. It's the last date of the 21st century with such an extraordinary pattern of numbers. In any calendar of the world, such a date will next be seen in the 22nd century, after 87 years and 54 days from today — on February 3, 2101, if all kind of date… The Abel Prize is one of the most prestigious awards given for outstanding contribution in mathematics, often considered the Nobel Prize of mathematics. The Niels Henrik Abel Memorial Fund, established on 1 January 2002, awards the Abel Prize for outstanding scientific work in the field of mathematics. The prize amount is 6 million NOK (about 1010000 USD) and was awarded for the first… Every student or graduate knows how hard the first experience of passing exams is. Preliminary preparation strains the nervous system and the physical condition of the human body; the exam itself, however, is always a stressful situation, which requires of a candidate a great manifestation of mental and physical abilities. Therefore, just knowledge of a subject is not enough for… Exams have been haunting students since forever, and although you're willing to do whatever you can to retain essential information, sometimes you end up spending weeks studying with useless revision techniques.
We're accustomed to employing our own techniques when it comes to studying, such as making sticky notes, highlighting, or drawing charts. However, recent studies conducted in the US have… Just discovered Barry Martin's Hopalong Orbits Visualizer — an excellent abstract visualization, which is rendered in 3D using the Hopalong Attractor algorithm, WebGL and Mrdoob's three.js project. Hop to the source website using your desktop browser (with WebGL and JavaScript support) and enjoy the magic. PS: Hopalong Attractor Algorithm: the Hopalong Attractor predicts the locus of points in 2D using this algorithm… This mathematical fallacy is due to a simple assumption, that $ -1=\dfrac{-1}{1}=\dfrac{1}{-1}$ . Proceeding with $ \dfrac{-1}{1}=\dfrac{1}{-1}$ and taking square roots of both sides, we get: $ \dfrac{\sqrt{-1}}{\sqrt{1}}=\dfrac{\sqrt{1}}{\sqrt{-1}}$ Now, as the imaginary unit $ i= \sqrt{-1}$ and $ \sqrt{1}=1$ , we can have $ \dfrac{i}{1}=\dfrac{1}{i} \ldots \{1 \}$ $ \Rightarrow i^2=1 \ldots \{2 \}$ . This is in complete contradiction to the… People really like to twist numbers and digits, bringing fun into life. For example, someone asks, "how much is two and two?": the answer should be four according to basic (decimal based) arithmetic. But the same with base three (in the ternary number system) equals 11. Two and Two also equals Twenty Two. Similarly there are many ways you can… Here is an interesting mathematical puzzle-like problem involving the use of Egyptian fractions, whose solution uses basic algebra. Problem Let a, b, c, d and e be five non-zero complex numbers, and; $ a + b + c + d + e = -1$ … (i) $ a^2+b^2+c^2+d^2+e^2=15$ …(ii) $ \dfrac{1}{a} + \dfrac{1}{b} +\dfrac{1}{c} +\dfrac{1}{d} +\dfrac{1}{e}= -1$… The greatest number theorist in the mathematical universe, Leonhard Euler, discovered some formulas and relations in number theory which were based on practice and were correct only to a limited extent, but still stun mathematicians.
The prime generating equation by Euler is a very specific binomial equation on prime numbers and yields more primes than any other relations out there in… “Irrational numbers are those real numbers which are not rational numbers!” Def. 1: Rational Number A rational number is a real number which can be expressed in the form $\frac{a}{b}$, where $ a$ and $ b$ are both integers relatively prime to each other and $ b$ is non-zero. The following two statements are equivalent to Definition 1: 1. $ x=\frac{a}{b}$… If you are aware of elementary facts of geometry, then you might know that the area of a disk with radius $ R$ is $ \pi R^2$ . The radius is actually the measure (length) of a line joining the center of the disk and any point on the circumference of the disk or any other circular lamina. Radius for a disk… The triangle inequality takes its name from the geometrical fact that the length of one side of a triangle can never be greater than the sum of the lengths of the other two sides of the triangle. If $ a$ , $ b$ and $ c$ be the three sides of a triangle, then neither $ a$ can be greater than $… Ramanujan (1887-1920) discovered some formulas on algebraic nested radicals. This article is based on one of those formulas. The main aim of this article is to discuss and derive them intuitively. Nested radicals have many applications in number theory as well as in numerical methods. The simple binomial theorem of degree 2 can be written as: $ {(x+a)}^2=x^2+2xa+a^2 \… This is the last month of the glorious prime year 2011. We are all set to welcome the upcoming 2012, which is not a prime but a leap year. Calendars have quite interesting stories, and since this blog takes a mathematical approach, let us talk about the mathematical aspects of calendars. The international calendar we use is called the Gregorian Calendar,… This is a puzzle which I told to my classmates during a talk a few days ago. I did not present it as a puzzle, but as a talk suggesting the importance of math in general life.
This is partially solved for me, and I hope you will run your brain-horse to help me solve it completely. If you didn't notice,… This is not just math, but a very good test of linguistic reasoning. If you are serious about this test and think that you've a sharp [at least average] brain, then read the statement (only) below – summarize it – find the conclusion, and then answer whether the summary of the statement is Yes or No. [And if you're not serious about…
All about the theory of light absorption on the basis of the Jablonski diagram. According to the Grotthus–Draper law of photo-chemical activation: only that light which is absorbed by a system can bring about a photo-chemical change. However, it is not true that all the light that is absorbed brings about a photo-chemical change. The absorption of light may result in a number of other phenomena as well. For instance, the light absorbed may cause only a decrease in the intensity of the incident radiation. This event is governed by the Beer-Lambert Law. Secondly, the light absorbed may be re-emitted almost instantaneously, within $10^{-8}$ seconds, in one or more steps. This phenomenon is well known as fluorescence. Sometimes the light absorbed is given out slowly, even long after the removal of the source of light. This phenomenon is known as phosphorescence. The phenomena of fluorescence and phosphorescence are best explained with the help of the Jablonski diagram. What is Jablonski's Diagram? In order to understand the Jablonski diagram, we first need to go through some basic facts. Many molecules have an even number of electrons and thus in the ground state all the electrons are spin paired. The quantity $ \mathbf {2S+1} $ , where $ S $ is the total electronic spin, is known as the spin multiplicity of a state. When the spins are paired $ \uparrow \downarrow $ as shown in the figure, the upward orientation of the electron spin is cancelled by the downward orientation, so that the total electronic spin $ \mathbf {S=0} $ . That makes the spin multiplicity of the state 1: $ s_1= + \frac {1}{2}$ ; $ s_2= - \frac {1}{2}$ so that $ \mathbf{S}=s_1+s_2 =0$ . Hence, $ \mathbf {2S+1}=1 $ . When, by the absorption of a photon of suitable energy $ h \nu $ , one of the paired electrons goes to a higher energy level (excited state), the spin orientation of the single electrons may be either parallel or anti-parallel.
[see image] • If the spins are parallel, $ \mathbf {S=1} $ or $ \mathbf {2S+1=3} $ , i.e., the spin multiplicity is 3. This is expressed by saying that the molecule is in the triplet excited state. • If the spins are anti-parallel, then $ \mathbf{S=0} $ so that $ \mathbf {2S+1=1} $ , which is the singlet excited state, as already discussed. Since the electron can jump to any of the higher electronic states depending upon the energy of the photon absorbed, we get a series of singlet excited states $ {S_n} $ and a series of triplet excited states $ {T_n}$ , where $ n =1, 2, 3 \ldots $ . Thus $ S_1, S_2, S_3, \ldots $ are respectively known as the first singlet excited state, the second singlet excited state, and so on. Similarly, $ T_1, T_2, \ldots $ are respectively known as the first triplet excited state, the second triplet excited state, and so on. Make sure you do not confuse $ \mathbf{S}$ with $ S_n $ . Feel free to ask questions, send feedback and even point out mistakes.
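The multiplicity arithmetic can be spelled out as a trivial check of the $2S+1$ values above:

```python
from fractions import Fraction

# Spin multiplicity 2S+1 for two electrons with s = 1/2 each.
s1 = Fraction(1, 2)
s2 = Fraction(1, 2)
S_antiparallel = s1 - s2        # paired (anti-parallel) spins: S = 0
S_parallel     = s1 + s2        # parallel spins:               S = 1
print(2*S_antiparallel + 1, 2*S_parallel + 1)  # 1 3 -> singlet and triplet
```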
Good evening all, I am determined to determine this determinant: $$D = \det{\left[x_j^{n-i} - x_j^{2n-i}\right]_{i,j=1}^{n}}$$ Looking at the smaller cases, leads me to believe that $$D = \prod_{1 \leq i < j \leq n}\left(x_i-x_j\right)\prod_{i=1}^n \left(1-{x_i}^n\right)$$ although I am having trouble showing this. I know that, since the determinant is an alternating function in the variables $x_1,\dots x_n$ it follows that $$ \frac{D}{\displaystyle\prod_{1 \leq i < j \leq n}\left(x_i-x_j\right)} $$ is a symmetric polynomial of degree $n^2$ (the degree of D minus the degree of the Vandermonde part). How can I show that this symmetric polynomial is exactly $\prod_{i=1}^n \left(1-{x_i}^n\right)$ ? Your help is, as always, much appreciated.
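One observation: each entry factors as $x_j^{n-i}\,(1-x_j^n)$, so $(1-x_j^n)$ can be pulled out of column $j$, leaving the Vandermonde-type matrix $[x_j^{n-i}]$. This is consistent with a symbolic check for small $n$ (a sanity check, not a proof):

```python
import sympy as sp

# Check the conjectured factorization
#   D = prod_{i<j} (x_i - x_j) * prod_i (1 - x_i^n)
# for small n.
for n in (2, 3):
    xs = sp.symbols(f'x1:{n + 1}')
    M = sp.Matrix(n, n, lambda i, j: xs[j]**(n - (i + 1)) - xs[j]**(2*n - (i + 1)))
    D = M.det()
    conj = sp.Mul(*[xs[i] - xs[j] for i in range(n) for j in range(i + 1, n)]) \
         * sp.Mul(*[1 - x**n for x in xs])
    assert sp.simplify(D - conj) == 0
print("factorization verified for n = 2, 3")
```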
# 1 Problem with understanding the proof of Sauer Lemma I will replicate the proof here, which is from the book "Learning from Data". Sauer Lemma: $B(N,k) \leq \sum_{i=0}^{k-1}{N\choose i}$ Proof: The statement is true whenever k = 1 or N = 1, by inspection. The proof is by induction on N. Assume the statement is true for all $N \leq N_0$ and for all k. We need to prove the statement for $N = N_0 + 1$ and for all k. Since the statement is already true when k = 1 (for all values of N) by the initial condition, we only need to worry about $k \geq 2$. By a result proven in the book, $B(N_0 + 1, k) \leq B(N_0, k) + B(N_0, k-1)$, and applying the induction hypothesis to each term on the RHS, we get the result. **My Concern** From what I see, this proof only shows that the statement for $B(N, k)$ implies the statement for $B(N+1, k)$. I can't see how it shows that the statement for $B(N, k)$ implies the statement for $B(N, k+1)$. This problem arises because the $k$ in $B(N_0 + 1, k)$ and $B(N_0, k)$ are the same, so I think I need to prove the other induction too. Why is the author able to prove it this way? # 2 Re: Problem with understanding the proof of Sauer Lemma OK I think I will just post it below. I can't find an edit button. I mean, for a two-variable induction, shouldn't we prove that B(N,k) implies both B(N+1,k) and B(N, k+1)? # 3 Re: Problem with understanding the proof of Sauer Lemma You can take the induction hypothesis to be that the inequality holds for "all k" at $N \leq N_0$; the inductive step then establishes the inequality for "all k" at $N_0 + 1$ too. Hope this helps. __________________ When one teaches, two learn.
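The bound can also be checked numerically. Below, the recursion from the proof is taken with equality (the extreme case), together with the base cases $B(N,1)=1$ and $B(1,k)=2$ for $k\ge 2$; this is a sketch assuming those base cases, as in the book:

```python
from math import comb
from functools import lru_cache

# B(N, k) from the recursion in the proof, taken with equality,
# with base cases B(N, 1) = 1 and B(1, k) = 2 for k >= 2.
@lru_cache(maxsize=None)
def B(N, k):
    if k == 1:
        return 1
    if N == 1:
        return 2
    return B(N - 1, k) + B(N - 1, k - 1)

def bound(N, k):
    return sum(comb(N, i) for i in range(k))

# The inequality holds on a grid (with equality in this extreme case).
assert all(B(N, k) <= bound(N, k) for N in range(1, 12) for k in range(1, 8))
print(B(5, 3), bound(5, 3))  # 16 16
```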
I have a perhaps stupid question about Noether's theorem. In Abelian gauge theory, say $$\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\bar{\Psi}(iD\!\!\!\!/-m)\Psi, \tag{1.0} $$ where $D_{\mu}=\partial_{\mu}+iA_{\mu}$ is the covariant derivative, the equations of motion are $$\partial_{\mu}F^{\mu\nu}=J^{\nu} \tag{1.1}$$ $$(iD\!\!\!\!/-m)\Psi=0 \tag{1.2} $$ where $$J^{\mu}=\bar{\Psi}\gamma^{\mu}\Psi \tag{1.3} $$ is a conserved current, $$\partial_{\mu}J^{\mu}=0 \tag{1.4} $$ following (1.1). In fact, the spinor current (1.3) is also the Noether current associated with a global $U(1)$ symmetry. One can check that the current (1.3) is indeed gauge invariant. In nonabelian case, we consider the spinor-Yang-Mills theory $$\mathcal{L}=-\frac{1}{4}\mathrm{Tr}(F_{\mu\nu}F^{\mu\nu})+\bar{\Psi}(iD\!\!\!\!/-m)\Psi. \tag{2.0} $$ One finds the equations of motion $$D_{\mu}F^{\mu\nu}=J^{\nu} \tag{2.1} $$ $$(iD\!\!\!\!/-m)\Psi=0 \tag{2.2} $$ where $$J^{\mu}=\sum_{a=1}^{N}\left(\bar{\Psi}\gamma^{\mu}T_{a}\Psi\right)T_{a} \tag{2.3} $$ is a Lie-algebra-valued spinor current, and $\left\{T_{a}\right\}_{a=1,\cdots,N}$ is the set of generators of the gauge group. In this case, the current $J^{\mu}$ is "covariantly conserved", $$D_{\mu}J^{\mu}=0 \tag{2.4} $$ because $$D\star J=DD\star F=F\wedge\star F-\star F\wedge F=F^{a}\wedge\star F^{b}[T_{a},T_{b}]\equiv 0$$ where the last equality holds because $F^{a}\wedge\star F^{b}$ is an inner-product $\langle F^{a},F^{b}\rangle$, which is symmetric. However, there should be no constant of motion (i.e. locally conserved charge) associated with the spinor current (2.3) because of the covariant derivative. On the other hand, there still can be a global part of the gauge symmetry of (2.0). 
Considering a "global gauge transformation" acting on (2.0), Noether's theorem for this global symmetry yields a locally conserved current $$J^{\mu}=\sum_{a=1}^{N}\left(\bar{\Psi}\gamma^{\mu}T_{a}\Psi\right)T_{a} \tag{*} $$ which is identical to the spinor current (2.3). Why would Noether's theorem fail in such a case?
Suppose we know that the dynamics of a theory with chiral fermions (say, left) and a gauge field (for simplicity, abelian) leads us to the presence of an anomalous commutator of the canonical momentum $\mathbf E(\mathbf x)$: $$ [E_{i}(\mathbf x), E_{j}(\mathbf y)] \sim \Delta (A_{i}(\mathbf x), A_{j}(\mathbf y)), \qquad (1) $$ where $\mathbf A$ is the canonical coordinate. Is this information all we need to conclude that the corresponding fermion current $J_{\mu}^{L}$ isn't conserved, i.e., $$ \partial_{\mu}J^{\mu}_{L} \neq 0? $$ In particular, if the answer is "yes", for the left charge $Q_{L} = \int d^{3}\mathbf r\, J_{L}^{0}$ there must be $$ \frac{dQ_{L}}{dt} \sim [H, Q_{L}] \neq 0 $$ due to the anomalous commutator. Another formulation of the question: does the presence of the anomalous commutator $(1)$ guarantee the presence of a gauge anomaly, i.e., current non-conservation? An edit It seems that the answer is obviously yes. First, even if I don't know the precise structure of the anomalous commutators, it is enough to know that they are non-zero. Thus, in particular, the "Gauss laws" $G(x) = \nabla \cdot \mathbf E - J^{0}$ don't commute with each other. This means that physical states $|\psi\rangle$ of the theory no longer satisfy the relation $$ G(x)|\psi\rangle = 0 $$ This means violation of unitarity, and hence a gauge anomaly. In the case when we know the precise form $(1)$ of the anomalous commutators, it's elementary to compute the anomalous conservation law: using the Gauss law to write $Q_{L} = \int d^{3}\mathbf r\, \nabla \cdot \mathbf E$, we have $$ \frac{dQ_{L}}{dt} \sim [H,Q_L] = \left[H, \int d^{3}\mathbf r\, \nabla \cdot \mathbf E(\mathbf r)\right] = \left[\frac{1}{2}\int d^{3}\mathbf y\, \mathbf E^{2}(\mathbf y),\int d^{3}\mathbf r\, \nabla \cdot \mathbf E(\mathbf r)\right]= \int d^{3}\mathbf r\, E_{i}(\mathbf r)\partial_{j}\Delta^{ij}(\mathbf A,\mathbf r) $$
Using a modified foam evaluation, we give a categorification of the Alexander polynomial of a knot. We also give a purely algebraic version of this knot homology which makes it appear as the... We provide a finite dimensional categorification of the symmetric evaluation of $\mathfrak{sl}_N$-webs using foam technology. As an output we obtain a symmetric link homology theory categorifying... We present state sums for quantum link invariants arising from the representation theory of $U_q(\mathfrak{gl}_{N|M})$. We investigate the case of the $N$-th exterior power of the standard... Recently, we constructed the first-principle derivation of the holographic dual of $\mathcal{N}=4$ SYM in the double-scaled $\gamma$-deformed limit directly from the CFT side. The dual fishchain model is a novel... We study various non-perturbative approaches to the quantization of the Seiberg-Witten curve of $\mathcal{N}=2$, $SU(2)$ super Yang-Mills theory, which is closely related to the modified Mathieu... We explore the heat current in the quantum Hall edge at filling factors $\nu = 1$ and $\nu = 2$ in the presence of dissipation. Dissipation arises in the compressible strip forming at the edge in... Proteins are a matter of dual nature. As a physical object, a protein molecule is a folded chain of amino acids with multifarious biochemistry. But it is also an instantiation along an... By using the notion of a rigid R-matrix in a monoidal category and the Reshetikhin-Turaev functor on the category of tangles, we review the definition of the associated invariant of long knots... Future galaxy clustering surveys will probe small scales where non-linearities become important. Since the number of modes accessible on intermediate to small scales is very high, having a... Recently, string theory on $\text{AdS}_3 \times \text{S}^3 \times \mathbb{T}^4$ with one unit of NS-NS flux ($k=1$) was argued to be exactly dual to the symmetric orbifold of $\mathbb{T}^4$, and in...
We study metastable behavior in a discrete nonlinear Schrödinger equation from the viewpoint of Hamiltonian systems theory. When there are $n < \infty$ sites in this equation, we consider... In this short paper we determine the effects of structure on the cosmological consistency relation which is valid in a perfect Friedmann Universe. We show that within $\Lambda$CDM the consistency...
Sims and Uhlig argue: although classical (frequentist) $p$-values are asymptotically equivalent to Bayesian posteriors, they should not be interpreted as probabilities. This is because the equivalence breaks down in non-stationary models. The paper uses small sample sizes; this post examines how the results change with larger samples, when the asymptotic behavior kicks in. The Setup Consider a simpler AR(1) model: \begin{equation} y_t=\rho y_{t-1} + \epsilon_t \end{equation} To simplify things, suppose $\epsilon_t \sim N(0,1)$ and $y_0 = 0$. Classical inference suggests that for $|\rho| < 1$, the estimator $\hat{\rho}$ is asymptotically normal and converges at rate $\sqrt{T}$: \begin{equation} \sqrt{T}(\hat{\rho}-\rho) \rightarrow^{L} N(0,(1-\rho^2)) \end{equation} For $\rho = 1$, however, we get a totally different distribution, which converges at rate $T$ instead of rate $\sqrt{T}$: \begin{equation} T(\hat{\rho}-\rho)=T(\hat{\rho}-1) \rightarrow^{L} \frac{(1/2)([W(1)]^2-1)}{\int_0^1 [W(r)]^2 dr} \end{equation} where $W$ is a Brownian motion. Although it looks complicated, it is easier to visualize when you see that $[W(1)]^2$ is actually a $\chi^2_1$ variable. This is left skewed, as the probability that a $\chi^2_1$ variable is less than one is 0.68, and large realizations of $[W(1)]^2$ in the numerator get down-weighted by a large denominator (it is the same Brownian motion in the numerator and denominator). In the paper, the authors choose 31 values of $\rho$ between 0.8 and 1.1 in increments of 0.01. For each $\rho$ they simulate 10,000 samples of the AR(1) model described above. Finally, they run an OLS regression of $y_t$ on $y_{t-1}$ to get the distributions for $\hat{\rho}$ (the OLS estimator of $\rho$). Below I show the distribution of $\hat{\rho}$ for selected values of $\rho$: Another way to think about the data is to look at the distribution of $\rho$ given observed values of $\hat{\rho}$.
This is symmetric about 0.95: Their problem with using $p$-values as probabilities is that if we observe $\hat{\rho} = 0.95$, we can reject the null of $\rho = 0.9$, but we fail to reject the null of $\rho = 1$ (think about the area in the tails after normalizing the distribution to integrate to 1), even though the distribution of $\rho$ given $\hat{\rho} = 0.95$ is roughly symmetric about 0.95: The problem is distortion by irrelevant information: values of $\hat{\rho}$ much below 0.95 are more likely given $\rho = 1$ than are values of $\hat{\rho}$ much above 0.95 given $\rho = 0.9$. This is irrelevant, as we have already observed $\hat{\rho}$, so we know it is not far above or below. The prior required to generate these results (i.e. the prior that would let us interpret $p$-values as posterior probabilities) is sample dependent. Usually, classical inference is asymptotically equivalent to Bayesian inference using a flat prior, but that is not the case here. The authors show that classical analysis implicitly puts progressively more weight on values of $\rho$ above one as $\hat{\rho}$ gets closer to 1. Testing with Larger Samples At first, I found the results counter-intuitive. The first figure above shows that the skewness arrives gradually in finite samples. This is strange, because the asymptotic properties of $\hat{\rho}$ are only non-normal for $\rho = 1$. I figured this was the result of using small samples. Under a flat prior, the distribution of $\rho$ given the data (with errors of variance 1) is: \begin{equation} \rho \sim N(\hat{\rho}, (\sum\limits_{t=1}^T y_{t-1}^2)^{-1}) \end{equation} This motivates my intuition for why the skewness arrives slowly: even for small samples, as $\rho$ gets close to 1, $\sum_{t} y_{t-1}^2$ can be very large. I repeat their analysis with a much larger sample size. As you can see, the asymptotic behavior kicks in and the skewness arrives only at $\rho = 1$: I also found that, in large samples, the distribution of $\rho$ conditional on $\hat{\rho}$ does not spread out more for smaller values of $\hat{\rho}$; that is a small-sample result.
Conclusion The point of this paper is to show that the classical way of dealing with unit roots implicitly makes undesirable assumptions: you need a sample-dependent prior that puts more weight on high values of $\rho$. To a degree, the authors' results are driven by the short length of the simulated series. The example where you reject $\rho = 0.9$ but fail to reject $\rho = 1$ wouldn't happen in large samples, as the asymptotics kick in and the faster rate of convergence at $\rho = 1$ gives the distribution less spread. For now, however, the authors' criticism is still valid. With quarterly data from 1950 to the present you get about 260 observations. Macroeconomics will have to survive until the year 4,450 for there to be 10,000 observations, and that's a long way off.
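The sampling experiment described above is easy to reproduce. Here is a minimal sketch (plain Python, fewer replications than the paper's 10,000, and an assumed sample size of $T = 100$) showing the left skew of $\hat\rho$ under $\rho = 1$:

```python
import random

def ols_rho_hat(T, rho, rng):
    """Simulate y_t = rho * y_{t-1} + eps_t (with y_0 = 0) and return the
    OLS slope estimate from regressing y_t on y_{t-1}."""
    y_prev, sxy, sxx = 0.0, 0.0, 0.0
    for _ in range(T):
        y = rho * y_prev + rng.gauss(0, 1)
        sxy += y_prev * y
        sxx += y_prev * y_prev
        y_prev = y
    return sxy / sxx

rng = random.Random(0)
draws = sorted(ols_rho_hat(100, 1.0, rng) for _ in range(2000))
mean = sum(draws) / len(draws)
median = draws[len(draws) // 2]
# Left skew under rho = 1: the mean falls below the median, and both sit below 1.
print(mean < median < 1.0)
```

With a histogram of `draws` you get the skewed finite-sample distribution shown in the first figure.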
1. Introduction MathJax is an open source JavaScript display engine for mathematics that works in all modern browsers. It allows the author of a web page to use familiar \(\LaTeX\) notation to describe their maths. Here are a few famous equations to give you a taste: \[ E = mc^2 \] Probably the most famous of them all: Einstein's equation relating mass \(m\) and energy \(E\) via the speed of light \(c\). Since \(c^2\) is such a big number (about 90,000,000,000,000,000) even a tiny piece of matter equates to a huge amount of energy. \[ e^{i\pi}+1=0 \] This astonishing equation connects five major constants of mathematics: \(e\), Euler's number; \(\pi\), the ratio of the circumference of a circle to its diameter; \(i\), the square root of \(-1\); \(0\), the additive identity; and \(1\), the multiplicative identity. \[ e = \sum_{n=0}^\infty \frac{1}{n!} \] Euler's number \(e\), defined as the sum of an infinite series. This guide is derived from the one written by Michael Downes of the American Mathematical Society. It has been edited slightly to reflect the capabilities of the MathJax JavaScript system for showing maths in web pages.
Even before quantization, charged bosonic fields exhibit a certain "self-interaction". The body of this post demonstrates this fact, and the last paragraph asks the question. Notation / Lagrangians Let me first provide the respective Lagrangians and elucidate the notation. I am talking about complex scalar QED with the Lagrangian $$\mathcal{L} = \frac{1}{2} D_\mu \phi^* D^\mu \phi - \frac{1}{2} m^2 \phi^* \phi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ where $D_\mu \phi = (\partial_\mu + ie A_\mu) \phi$, $D_\mu \phi^* = (\partial_\mu - ie A_\mu) \phi^*$ and $F^{\mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$. I am also mentioning usual QED with the Lagrangian $$\mathcal{L} = \bar{\psi}(iD_\mu \gamma^\mu-m) \psi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ and "vector QED" (a U(1) coupling to the Proca field) $$\mathcal{L} = - \frac{1}{4} (D^\mu B^{* \nu} - D^\nu B^{* \mu})(D_\mu B_\nu-D_\nu B_\mu) + \frac{1}{2} m^2 B^{* \nu}B_\nu - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ The four-currents are obtained from Noether's theorem. Natural units $c=\hbar=1$ are used. $\Im$ means imaginary part. Noether currents of particles Consider the Noether current of the complex scalar $\phi$ $$j^\mu = \frac{e}{m} \Im(\phi^* \partial^\mu\phi)$$ Introducing a local $U(1)$ gauge coupling we have $\partial_\mu \to D_\mu=\partial_\mu + ie A_\mu$ (with $-ieA_\mu$ for the complex conjugate). The new Noether current is $$\mathcal{J}^\mu = \frac{e}{m} \Im(\phi^* D^\mu\phi) = \frac{e}{m} \Im(\phi^* \partial^\mu\phi) + \frac{e^2}{m} |\phi|^2 A^\mu$$ Similarly, for a Proca field $B^\mu$ (massive spin 1 boson) we have $$j^\nu = \frac{e}{m} \Im(B^*_\mu(\partial^\mu B^\nu-\partial^\nu B^\mu))$$ which by the same procedure leads to $$\mathcal{J}^\nu = \frac{e}{m} \Im(B^*_\mu(\partial^\mu B^\nu-\partial^\nu B^\mu))+ \frac{e^2}{m} |B|^2 A^\nu$$ Similar $e^2$ terms also appear in the Lagrangian itself as $e^2 A^2 |\phi|^2$.
On the other hand, for a bispinor $\psi$ (spin 1/2 massive fermion) we have the current $$j^\mu = \mathcal{J}^\mu = e \bar{\psi} \gamma^\mu \psi$$ since it does not have any $\partial_\mu$ in it. "Self-charge" Now consider very slowly moving or even static particles: we have $\partial_0 \phi, \partial_0 B \to \pm im\phi, \pm im B$ and the current is essentially $(\rho,0,0,0)$. For $\phi$ we thus have approximately $$\rho = e (|\phi^+|^2-|\phi^-|^2) + \frac{e^2}{m} (|\phi^+|^2 + |\phi^-|^2) \Phi$$ where $A^0 = \Phi$ is the electrostatic potential and $\phi^\pm$ are the "positive and negative frequency parts" of $\phi$ defined by $\partial_0 \phi^\pm = \pm im \phi^\pm$. A similar term appears for the Proca field. For the interpretation let us pass back to SI units; in this case we only get a $1/c^2$ factor. The "extra density" is $$\Delta \rho = e\cdot \frac{e \Phi}{mc^2}\cdot |\phi|^2$$ That is, there is an extra density proportional to the ratio of the energy of the electrostatic field $e \Phi$ to the rest mass of the particle $mc^2$. The sign of this extra density depends only on the sign of the electrostatic potential, and both frequency parts contribute with the same sign (which is super weird). This would mean that classically, the "bare" charge of bosons in strong electromagnetic fields is not conserved; only this generalized charge is. After all, it seems a bad convention to call $\mathcal{J}^\mu$ the electric charge current. By multiplying it by $mc^2/e$ it becomes a matter density current, with the extra term corresponding to mass gained by electrostatic energy. However, that does not change the fact that the "bare charge density" $j^0$ seems not to be conserved for bosons. Now to the questions: On a theoretical level, is charge conservation at least temporarily or virtually violated for bosons in strong electromagnetic fields?
(Charge conservation will quite obviously not be violated in the final S-matrix, and as an $\mathcal{O}(e^2)$ effect it will probably not be reflected in first order processes.) Is there an intuitive physical reason why such a violation does not arise for fermions even on a classical level? Charged bosons do not have a high abundance in fundamental theories, but they do often appear in effective field theories. Is this "bare charge" non-conservation reflected in them in any way, and does it have associated experimental phenomena? Extra clarifying question: Say we have $10^{23}$ bosons with charge $e$ so that their charge is $e\,10^{23}$. Now let us bring these bosons from very far away to very close to each other. As a consequence, they will be in a much stronger field $\Phi$. Does their measured charge change from $e\,10^{23}$? If not, how do the bosons compensate in terms of $\phi, B, e, m$? If this is different for bosons rather than fermions, is there an intuitive argument why? This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Void
Since TMs are equivalent to algorithms, they must be able to perform algorithms like, say, mergesort. But the formal definition allows only for decision problems, i.e., acceptance of languages. So how can we cast the performance of mergesort as a decision problem? Usually, Turing machines are explained as calculating functions $f:A \rightarrow B$, of which decision problems are a special case where $B = \mathbb{B}$. You can define two kinds of Turing machines, transducers and acceptors. Acceptors have two final states (accept and reject) while transducers have only one final state and are used to calculate functions. Let $\Sigma$ be the alphabet of the Turing machine. Transducers take an input $x \in \Sigma^*$ on an input tape and compute a function $f(x) \in \Sigma^*$ that is written on another tape (called the output tape) when (and if) the machine halts. There are various results that link together acceptors and transducers. For example: Let $\Sigma=\{0, 1\}$. Given a language $L \subseteq \Sigma^*$ you can always define $f : \Sigma^* \to \{0,1\}$ to be the characteristic function of $L$, i.e. $$ f(x) = \begin{cases} 1 & \mbox{if $x \in L$} \\ 0 & \mbox{if $x \not\in L$} \end{cases} $$ In this case an acceptor machine for $L$ is essentially the same as a transducer machine for $f$ and vice versa. For more details you can see Introduction to the Theory of Complexity by Crescenzi (download link at the bottom of the page). It is licensed under Creative Commons. We focus on decision problems in undergrad complexity theory courses because they are simpler, and because questions about many other kinds of computational problems can be reduced to questions about decision problems. However, there is nothing in the definition of a Turing machine by itself that restricts it to decision problems. Take for example function computation problems: we want to compute some function $f:\{0,1\}^*\to\{0,1\}^*$.
We say that a Turing machine computes this function if on every input $x$, the machine halts with $f(x)$ left on the tape. It is easy to define what it means for a Turing machine to compute a function: the input is written on the tape at the beginning, and the output is whatever is on the tape when the machine halts (if it halts). Most complexity classes we introduce initially, like $\mathsf{P}$, are defined for decision problems. However, there are also complexity classes for function problems, like $\mathsf{FP}$. We can convert between a decision problem and a function problem pretty easily in most cases. In a function problem, you are given input $x$ and want to compute $f(x) = y$. In the equivalent decision problem, you are given input $(x,y)$, and you want to decide whether $f(x) = y$ or not. (Or even whether $f(x) \leq y$, or so on.) Often it turns out that we can solve the first in polynomial time if and only if we can solve the second in polynomial time, so complexity-wise, the distinction is not very important.
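To make the mergesort example concrete, here is a small Python sketch of that conversion: the function problem "sort $x$" becomes the decision problem "given the pair $(x, y)$, is $y$ the sorted version of $x$?"

```python
def merge_sort(xs):
    """The function problem: input a list, output its sorted version."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    # Merge the two sorted halves.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def decide_sorted_pair(x, y):
    """The decision version: accept the pair (x, y) iff y = merge_sort(x)."""
    return merge_sort(x) == y

print(decide_sorted_pair([3, 1, 2], [1, 2, 3]))  # → True (accept)
print(decide_sorted_pair([3, 1, 2], [3, 1, 2]))  # → False (reject)
```

An acceptor for this paired language is polynomial-time exactly when the sorting function itself is polynomial-time computable, which is the equivalence mentioned above.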
Machine $1$ is currently working. Machine $2$ will be put in use at time $t$ from now. If the lifetime of machine $i$ is exponential with rate $\lambda_i$, $i=1,2$, what is the probability that machine $1$ is the first machine to fail? I tried a few things but I cannot get the answer. Let $X_1\sim \exp(\lambda_1)$ and $X_2\sim \exp(\lambda_2)$. Maybe I'm going about it wrong, but I think the exercise asks for $$P(X_1<X_2|X_2=t)=\frac{P(X_1<X_2,X_2=t)}{P(X_2=t)}$$ I believe the lifetimes of the machines are independent, but I don't see how to use that. Can anyone give me a hand? EDIT: The correct answer is $$1-e^{-\lambda_1t}+e^{-\lambda_1t}\frac{\lambda_1}{\lambda_1+\lambda_2}$$
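A Monte Carlo sanity check of the stated answer (a sketch with arbitrary illustrative rates; the key modeling point is that machine 2 only starts its exponential clock at time $t$, so it fails at time $t + X_2$):

```python
import math
import random

def p_machine1_first(lam1, lam2, t, reps=200_000, seed=0):
    # Machine 1 fails first iff X1 < t + X2 (machine 2 starts at time t).
    rng = random.Random(seed)
    wins = 0
    for _ in range(reps):
        x1 = rng.expovariate(lam1)
        x2 = t + rng.expovariate(lam2)
        wins += x1 < x2
    return wins / reps

lam1, lam2, t = 1.0, 2.0, 0.5
# P(X1 < t) + P(X1 >= t) * lam1/(lam1+lam2), the second factor by memorylessness.
closed_form = 1 - math.exp(-lam1 * t) + math.exp(-lam1 * t) * lam1 / (lam1 + lam2)
estimate = p_machine1_first(lam1, lam2, t)
print(abs(estimate - closed_form) < 0.01)
```

The decomposition in the comment is the standard route to the posted answer: either machine 1 dies before machine 2 even starts, or it survives to time $t$ and then the usual race-of-exponentials probability applies.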
AFM has three distinct modes of operation: contact mode, tapping mode and non-contact mode. Contact mode In contact mode the tip contacts the surface through the adsorbed fluid layer on the sample surface. The detector monitors the changing cantilever deflection and the force is calculated using Hooke's law: $F = -kx$ ($F$ = force, $k$ = spring constant, $x$ = cantilever deflection). The feedback circuit adjusts the probe height to maintain a constant force and deflection on the cantilever. This is known as the deflection setpoint. Tapping mode In tapping mode the cantilever oscillates at or slightly below its resonant frequency. The amplitude of oscillation typically ranges from 20 nm to 100 nm. The tip lightly "taps" on the sample surface during scanning, contacting the surface at the bottom of its swing. Because the forces on the tip change as the tip-surface separation changes, the resonant frequency of the cantilever depends on this separation: \[\omega = \omega_0 \sqrt{ 1 - \frac{1}{k} \frac{\mathrm{d}F}{\mathrm{d}z} }\] The oscillation is also damped when the tip is closer to the surface. Hence changes in the oscillation amplitude can be used to measure the distance between the tip and the surface. The feedback circuit adjusts the probe height to maintain a constant amplitude of oscillation, i.e. the amplitude setpoint. Non-contact mode In non-contact mode the cantilever oscillates near the surface of the sample, but does not contact it. The oscillation is at slightly above the resonant frequency. Van der Waals and other long-range forces decrease the resonant frequency just above the surface. This decrease in resonant frequency causes the amplitude of oscillation to decrease. In ambient conditions the adsorbed fluid layer is often significantly thicker than the region where van der Waals forces are significant.
So the probe is either out of range of the van der Waals forces it attempts to measure, or becomes trapped in the fluid layer. Therefore non-contact mode AFM works best under ultra-high vacuum conditions. Comparison of modes (A table comparing the advantages and disadvantages of contact, tapping and non-contact modes appeared here.)
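The resonant-frequency formula above is straightforward to evaluate. A minimal sketch with illustrative (assumed) numbers for the free resonance $\omega_0$, spring constant $k$ and force gradient $\mathrm{d}F/\mathrm{d}z$:

```python
import math

def shifted_resonance(omega0, k, dF_dz):
    # omega = omega0 * sqrt(1 - (1/k) * dF/dz); a positive force gradient
    # lowers the resonant frequency under this sign convention.
    return omega0 * math.sqrt(1 - dF_dz / k)

omega0 = 2 * math.pi * 300e3  # free resonant frequency, 300 kHz (illustrative)
k = 40.0                      # cantilever spring constant, N/m (illustrative)

omega = shifted_resonance(omega0, k, 0.4)  # dF/dz = 0.4 N/m near the surface
print(omega < omega0)  # the resonance shifts down as the tip nears the surface
```

This downward shift is what both tapping and non-contact modes detect, via the resulting change in oscillation amplitude.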
(I apologize if this question is too theoretical for this site.) This is related to the answer here, although I came up with it independently of that. Suppose we have a unit mass planet at each integer point in 1-d space. As described in that answer, the sum of the forces acting on any particular planet is absolutely convergent. Suppose we move planet_0 to the point $\epsilon$, where $0< \epsilon< \frac12$. For similar reasons, those sums will still be absolutely convergent. Now we let Newtonian gravity apply. What will happen? If it's unclear what an answer might look like, you could consider the following more specific questions: planet_0 will start out moving right, and all of the other planets will start out moving to the left. Will there be a positive amount of time before any of them turn around? (As opposed to, for example, each planet_n for $n\neq 0$ turning around at time $1/|n|$.) Will there be a positive amount of time before any collisions occur? "Obviously" (at least, I hope I'm right), planet_0 will collide with planet_1. Will that be the first collision? How long will it be before there are any collisions? (Perhaps just an approximation for small $\epsilon$.)
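As a numerical sanity check on the setup (a sketch with $G$ and all masses set to 1): the net force on the displaced planet is the absolutely convergent series $\sum_{n\ge 1}\left[(n-\epsilon)^{-2} - (n+\epsilon)^{-2}\right]$, and it is positive, i.e. directed toward planet_1.

```python
def net_force_on_displaced_planet(eps, n_max=100_000):
    """Net force (G = m = 1) on the planet moved to eps, 0 < eps < 1/2.
    The planet at +n pulls right with 1/(n-eps)^2, the one at -n pulls
    left with 1/(n+eps)^2; the series of differences converges absolutely."""
    return sum(1.0 / (n - eps) ** 2 - 1.0 / (n + eps) ** 2
               for n in range(1, n_max + 1))

# Positive net force: planet_0 initially accelerates toward planet_1, and
# (since each term is increasing in eps) the pull grows with the displacement.
print(net_force_on_displaced_planet(0.1) > 0)
print(net_force_on_displaced_planet(0.25) > net_force_on_displaced_planet(0.1))
```

This confirms the first claim in the list of specific questions (planet_0 starts out moving right); the harder dynamical questions are of course not settled by a one-instant force computation.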
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Increase in Water Pressure As a Result of a Boat Entering a Lock According to Archimedes' principle, 1000 kg of water is displaced. Water has a density of 1000 kg per cubic metre, so 1 cubic metre of water is displaced. The increase in the water level is \[\Delta h= \frac{\Delta V}{A}=\frac{1}{10 \times 5}=0.02 m\] or 2 cm. The increase in pressure is due solely to this extra 2 cm depth of water. The increase in pressure is \[\Delta P= \rho g \Delta h = 1000 \times 9.8 \times 0.02 =196 Pa\]. This is a comparatively small increase in pressure, less than 0.2% of atmospheric pressure ( \[p_{Atmospheric} \simeq 10^5 Pa\] ). The increase in the force on the bottom of the lock is \[\Delta F = A \Delta P= 10 \times 5 \times 196=9800 N\] and this is equal to the weight of the boat ( \[W= mg=1000 \times 9.8=9800 N\] ).
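The arithmetic above can be checked in a few lines (a sketch; the 1000 kg boat mass and 10 m × 5 m lock area are taken from the worked example):

```python
rho_w, g = 1000.0, 9.8         # water density (kg/m^3), gravity (m/s^2)
m_boat, area = 1000.0, 10 * 5  # boat mass (kg), lock surface area (m^2)

dV = m_boat / rho_w            # displaced volume, m^3
dh = dV / area                 # rise in water level, m
dP = rho_w * g * dh            # extra pressure at the bottom, Pa
dF = area * dP                 # extra force on the lock floor, N

print(dh, round(dP), round(dF))  # → 0.02 196 9800
```

The final equality $\Delta F = W$ is no accident: the extra bottom force is exactly the weight of the displaced water, which by Archimedes' principle equals the weight of the boat.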
Let $$a_n = \frac{(-1)^n}{\sin n + \ln n }$$ We wish to show that $\sum a_n$ converges. In order to do this, first note that $a_n$ is negative when $n$ is odd, and positive when $n$ is even. We will write $\sum a_n < \sum b_i + \sum c_j$, where the $b_i$ are negative terms (with only odd indices $i$) that are smaller in absolute value than the negative terms $a_i$, and the $c_j$ are positive terms (with only even indices $j$) that are larger in absolute value than the positive terms $a_j$. We want to choose $\{b_i\}$ and $\{c_j\}$ to satisfy the following conditions: $\sum b_i$ (sum taken over odd $i$) converges by the alternating series test; $\sum c_j$ (sum taken over even $j$) converges by the alternating series test; $|a_i| > |b_i|$, or equivalently $a_i < b_i$, for all odd $i$; and $a_j < c_j$ for all even $j$. For $i$ odd, let $$b_i = \frac{-1}{2 + \ln i } > a_i$$ For $j$ even, let $$c_j = \frac{1}{-2 + \ln j } > a_j$$ (I am choosing $2$ and $-2$ here because they are greater than the maximum of $\sin n$ and less than the minimum of $\sin n$, respectively.) Note that $b_i$ and $c_j$ are both monotonically decreasing in absolute value for $i,j > 10$. (I am choosing $10$ here because it is greater than $e^2$, to avoid negative denominators due to $-2 + \ln j$.) Therefore, by the alternating series test, we know that the following sums must converge: $$\displaystyle\sum_{i=11,\, i \text{ odd}}^\infty b_i$$ $$\displaystyle\sum_{j=10,\, j \text{ even}}^\infty c_j$$ Therefore, $\sum a_n$ is bounded above by the sum of two convergent series: $$\displaystyle\sum_{n=10}^\infty a_n < \left(\displaystyle\sum_{i=11,\, i \text{ odd}}^\infty b_i \right) + \left(\displaystyle\sum_{j=10,\, j \text{ even}}^\infty c_j\right)$$
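The term-wise bounds (conditions 3 and 4) can be spot-checked numerically; a quick sketch:

```python
import math

def a(n): return (-1) ** n / (math.sin(n) + math.log(n))
def b(i): return -1 / (2 + math.log(i))
def c(j): return 1 / (-2 + math.log(j))

# Check the claimed bounds on a range of indices:
# a_i < b_i for odd i (both negative), and a_j < c_j for even j (both positive).
odd_ok = all(a(i) < b(i) for i in range(11, 2001, 2))
even_ok = all(a(j) < c(j) for j in range(10, 2001, 2))
print(odd_ok, even_ok)
```

The inequalities also follow directly: for odd $i$, $\sin i + \ln i < 2 + \ln i$ gives $a_i < b_i$, and for even $j$, $-2 + \ln j < \sin j + \ln j$ (with $-2+\ln j>0$ for $j\ge 10$) gives $a_j < c_j$.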
Evolution, a change in the trait distribution of a population, occurs by four mechanisms: drift, mutation, migration, and selection. These mechanisms cause, e.g., a change in the mean of a trait, or the variance in a trait. Just to summarise the conditions you gave: a) trait $i$ has variation; b) trait $i$ covaries with fitness ($\omega$); c) trait $i$ is heritable from parent to offspring. Genetic drift is the random loss of genetic variance from the population by stochastic processes. Mutation (generally) generates new genetic variation within populations. Migration can lead to loss (emigration) or gain (immigration) of genetic variation from a population, as variants come or go from a population. Selection is the mechanism underlying adaptation; this is what is in condition b of the conditions you gave. We can predict the response to selection using some simple quantitative genetics in the form of the (multivariate) breeders' equation: $\Delta z = G\beta$ where $\Delta z$ is the response in the trait, $G$ is the genetic (co)variance matrix, and $\beta$ is the selection gradient. In a simple toy example we could look at a single trait, $i$. We allow $i$ to satisfy all of the above conditions. This means that both $G$ and $\beta$ are non-zero, and therefore $\Delta z_i$ will also be non-zero. Just putting some random numbers into the equation to make it clear: if $G = 0.5$ and $\beta = 0.8$ then $\Delta z_i = 0.5 \times 0.8 = 0.4$. Mutation, drift, and migration could all oppose or distort the effect of selection. In other words, they can cause the actual and predicted response to differ from one another (i.e. $\Delta z_i \neq 0.4$), and there will not necessarily be a change in the phenotypic distribution. However, selection on other traits can also drive a response in trait $i$. This is because genetic correlations can arise among traits, either by linkage (close proximity in the DNA) or pleiotropy (when one gene affects more than one trait).
For example, we may see no response at all in $i$ ($\Delta z_i = 0$) when both $G_i$ and $\beta_i$ are non-zero, if selection on a second unmeasured trait, $j$, opposes selection on $i$ ($\beta_i = -\beta_j$), and trait $j$ has equal variance ($G_i = G_j$) and is perfectly covarying with $i$ ($\mathrm{cov}_{i,j} = 1$). This is another factor that can cause a difference between the predicted and actual response to selection. Approximately, this would be $\Delta z_i = (G_i \times \beta_i) + (G_{i,j} \times \beta_j)$, so $\Delta z_i = (0.5 \times 0.8) + (0.5 \times -0.8) = 0$. This is a major issue in the study of evolutionary biology. Genetic correlations can cause severe disparity between the actual and predicted response to selection, and univariate methods are insufficient as a result. More studies adopt multivariate methods these days, though these are generally limited to just a handful of traits, so it's still not perfect. In reality, methodological and logistical constraints present a huge obstacle to being able to predict $\Delta z_i$.
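The two-trait example can be written out with the breeders' equation directly (a sketch using the toy numbers above, with the perfect genetic covariance entered as $G_{i,j} = 0.5$):

```python
# Multivariate breeders' equation, delta_z = G * beta, for two traits with
# equal variances, perfect genetic covariance, and opposing selection.
G = [[0.5, 0.5],
     [0.5, 0.5]]          # genetic (co)variance matrix
beta = [0.8, -0.8]        # selection gradients on traits i and j

delta_z = [sum(G[r][c] * beta[c] for c in range(2)) for r in range(2)]
print(delta_z)  # → [0.0, 0.0]: no predicted response in either trait
```

Dropping the covariance terms (the univariate analysis) would instead predict responses of $0.4$ and $-0.4$, which is exactly the disparity described above.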
Urban traffic flow in one area or between any pair of locations can be approximated by a linear combination of three steady basis flows corresponding to commuting, business, and other purposes, with random perturbations. Motivation: knowledge of urban human mobility is essential in traffic modeling for simulation, forecasting and control; the mobility pattern and the consequential traffic flow can also interact with the land use; better understanding of human mobility can help to control the spreading of contagious diseases by limiting the contact among individuals; previous statistical inferences of urban human mobility mostly focus on the individual level, while this work analyzes the collective dynamics. Basis Traffic Flows: The Constancy. Hot locations: 1 Central Railway Station, 2 Municipal Square, 3 Finance & Trade Zone, 4 South Railway Station, 5 International Airport. Notation: $S_{i,j}$ is the traffic flow time series at location $(i,j)$; $B^{(k)}$ is the $k$-th basis time series; $P_{i,j}$ is the traffic power for the basis series at location $(i,j)$. Method: non-negative matrix factorization and linear optimization, $\min\sum_{t=1}^h|\mathbf{S}_{i,j}^{(t)}-\mathbf{P}_{i,j}\times\mathbf{B}^{(t)}|$. Result: the trips fall into three purpose-based categories: $B^{(1)}$, commuting between home and workplace; $B^{(2)}$, business traveling between two workplaces; $B^{(3)}$, trips from or to other places. Daily Traffic Power: The Variation. Data: the traffic power $P$ from trip data $S$ and the three basis series $B$. Notation: $a$ is the relative deviation; $v$ is a vector of average traffic flow; $r, n$ are parameters of the binomial functions satisfying $v_k = rn$; $\sigma$ gives the weights for the distribution components. Model: based on a series of binomial distributions, $D_{\alpha,v_k}(a)=\sum_{k=0}^{|a\times v_k|}\left(\begin{array}{c}n\\ k\end{array}\right)r^k(1-r)^{n-k}$ and $D_\alpha(a)=\sum_k\sigma_kD_{\alpha,v_k}(a)$. Results: the model describes the daily traffic deviation correctly. Travel Purposes vs. Land Use Map. References: M. Gonzalez, C. Hidalgo, and A.
Barabasi, ``Understanding Individual Human Mobility Patterns", Nature, 453:779--782 (2008). C. Peng, X. Jin, K. Wong, M. Shi, and P. Lio', ``Collective Human Mobility Pattern from Taxi Trips in Urban Area", under review.
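As an illustration of the headline claim (a location's flow is a weighted sum of three basis flows plus a perturbation), here is a toy sketch; the basis shapes, weights and noise level are entirely made up, not the poster's data:

```python
import random

# Hypothetical hourly basis flows: B1 commuting (morning/evening peaks),
# B2 business (midday bump), B3 other (flat background).
hours = list(range(24))
B1 = [max(0.0, 1 - abs(h - 8) / 3) + max(0.0, 1 - abs(h - 18) / 3) for h in hours]
B2 = [max(0.0, 1 - abs(h - 13) / 5) for h in hours]
B3 = [0.3] * 24
P = (120.0, 40.0, 25.0)  # assumed "traffic power" weights for one location

rng = random.Random(42)
S = [P[0] * b1 + P[1] * b2 + P[2] * b3 + rng.gauss(0, 1)
     for b1, b2, b3 in zip(B1, B2, B3)]

# Relative deviation of the observed flow from the basis reconstruction:
recon = [P[0] * b1 + P[1] * b2 + P[2] * b3 for b1, b2, b3 in zip(B1, B2, B3)]
a = [(s - r) / max(r, 1e-9) for s, r in zip(S, recon)]
print(len(S), max(abs(x) for x in a) < 1.0)
```

In the poster the weights $P_{i,j}$ are of course not known in advance; they are recovered from the observed $S$ by non-negative matrix factorization, and the residual deviations $a$ are what the binomial mixture model describes.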
I have often encountered (and used myself) the following technique: $$\int \sin x \,\mathrm{d}x = \int \operatorname{Im}(e^{ix}) \,\mathrm{d}x = \operatorname{Im} \left( \int e^{ix} \,\mathrm{d}x \right) = \operatorname{Im}( -ie^{ix}) + C = -\cos x + C$$ Not only in this case, but I've used this kind of transformation many times, instinctively, to solve many of those monster trig integrals (and it works like a miracle), but never justified it. Why and how is this interchange of the integral and the imaginary part justified? At first, I thought it might always be true that we can do this type of interchange anywhere, so I tried the following: $\operatorname{Im}(f(z)) = f(\operatorname{Im}(z))$. But this is clearly not true, as the LHS is always real but the RHS can, possibly, be complex too. Second thoughts. I realized that we are dealing with operators here, not functions really. Both the integral and the imaginary part are operators. So we have a composition of operators, and we want to check when these operators commute. I couldn't really draw any further conclusions from here and am stuck with the following questions: When and why is the following true: $\int \operatorname{Im}(f(z)) \,\mathrm{d}z= \operatorname{Im} \left( \int f(z) \,\mathrm{d}z \right)$? (Provided that $f$ is integrable.) Is it always true? (Because I've used it so many times and never found any counterexample.) Edit: I am unfamiliar with integration of complex-valued functions, but what I have in mind is that while doing such a thing, I tend to think of $i$ just as some constant (ah, I hope this doesn't sound really weird), as I stated in the example in the beginning.
To be more precise, I have something like this in mind: a complex-valued function $f(z)$ can be thought of as $f(z) = f(x+iy) = u(x,y) + iv(x,y)$, where $u$ and $v$ are real-valued functions, and we can now use our definition for integration of real-valued functions, as $$\int f(z) \,\mathrm{d}z = \int (u(x,y) + iv(x,y)) \,\mathrm{d}(x+iy) = \left(\int u\,\mathrm{d}x - \int v\,\mathrm{d}y\right) +i\left(\int v\,\mathrm{d}x + \int u\,\mathrm{d}y\right)$$
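For the case actually used in the opening example (integration over a real variable $x$), taking imaginary parts commutes with the Riemann sums, since $\operatorname{Im}$ is an $\mathbb{R}$-linear operation. A quick numeric sketch of $\int_0^2 \operatorname{Im}(e^{ix})\,\mathrm{d}x = \operatorname{Im}\big(\int_0^2 e^{ix}\,\mathrm{d}x\big)$:

```python
import cmath

def midpoint(f, a, b, n=10_000):
    # Simple midpoint-rule quadrature; works for real- or complex-valued f.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a, b = 0.0, 2.0
lhs = midpoint(lambda x: cmath.exp(1j * x).imag, a, b)  # integrate Im(f)
rhs = midpoint(lambda x: cmath.exp(1j * x), a, b).imag  # Im of the integral
print(abs(lhs - rhs) < 1e-9)
```

Each Riemann sum of the complex integrand has imaginary part equal to the Riemann sum of the imaginary parts, which is exactly why the two computations agree.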
My question is somewhat similar to Dimension of a Finite Field Extension is a Power of 2, but there's a small complication. Claim 1: Let $K$ be a field of characteristic $\neq 2$, and suppose that every polynomial of odd degree in $K[x]$ has a root in $K$. Let $L/K$ be a finite extension. Then $[L:K] = 2^n$ for some $n$. Claim 2: Let $K,L$ be as above. If $L = L^2$, then $L$ is algebraically closed. For Claim 1, I have the following reasoning, but it doesn't quite get me all the way there. Since every odd degree polynomial has a root, it factors into a linear factor and an even degree polynomial, so all irreducible polynomials of degree greater than 1 have even degree. Since $L/K$ is finite, it is finitely generated, so $L = K(\alpha_1,\ldots, \alpha_n)$ for some $\alpha_1,\ldots, \alpha_n \in L$. Then we have a tower of fields $$ K \subset K(\alpha_1) \subset K(\alpha_1,\alpha_2) \subset \ldots \subset K(\alpha_1,\ldots, \alpha_n) = L $$ In each step of the tower, $K(\alpha_1, \ldots, \alpha_{i+1}) = K(\alpha_1, \ldots, \alpha_i)(\alpha_{i+1})$, and the irreducible polynomial of $\alpha_{i+1}$ has even degree, so each extension has even degree. Thus $[L:K]$ is even. If I could show that each step of the tower is of degree 2, then I would have proved Claim 1, but I don't know why that would be the case. Is there a reason that we can't have irreducible quartics over $K$?
This is going to be a very long question. A 1 kg block situated on a rough incline is connected to a spring of spring constant 100 N/m as shown in the figure. The block is released from rest with the spring in the unstretched position. The block moves 10 cm down the incline before coming to rest. Find the coefficient of friction between the block and the incline. Assume the spring has negligible mass and the pulley is frictionless. The way I tackled the question was by using forces acting at equilibrium. The forces acting on the block are: 1. The force due to the spring, up the incline. 2. The force due to friction, up the incline. 3. The components of gravity. The normal force is $R=mg \cos 37$. The force of friction $f$ up the incline is $f=\mu mg\cos 37$. The component of gravity along the downward slope is $mg \sin 37$. Putting it all together: $mg\sin 37 - kx - \mu mg \cos 37 = 0$, so $mg \sin 37-kx = \mu mg \cos 37$. Here things get messed up: $\mu$ turns out to be negative once you put the values in. If we change the direction of the frictional force, the answer comes out to be 0.5. But the real solution floating around on the internet solves the question using work done: work done by friction $-\mu xmg \cos 37$, work done by gravity $xmg \sin 37$, energy stored $\frac{kx^2}{2}$. Therefore $xmg \sin 37 - \mu xmg \cos 37 = \frac{kx^2}{2}$. Putting in values gives the answer around 0.125. Where did I go wrong?
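Running the numbers both ways (a sketch with $g = 10\ \mathrm{m/s^2}$ and the usual textbook approximations $\sin 37^\circ \approx 0.6$, $\cos 37^\circ \approx 0.8$):

```python
m, g, k, x = 1.0, 10.0, 100.0, 0.10
sin37, cos37 = 0.6, 0.8

# Energy method: gravity's work over the 10 cm slide equals the stored
# spring energy plus the friction losses.
mu_energy = (m * g * sin37 * x - 0.5 * k * x**2) / (m * g * cos37 * x)
print(round(mu_energy, 3))  # → 0.125

# Force balance at the lowest point (the approach in the question): it
# assumes equilibrium there, but the block is only momentarily at rest,
# not in equilibrium, so mu comes out negative.
mu_static = (m * g * sin37 - k * x) / (m * g * cos37)
print(mu_static < 0)  # → True, the sign of the contradiction described above
```

The numerical contradiction in the second computation is the same one the question runs into: at $x = 10$ cm the spring force ($10$ N) already exceeds the gravity component ($6$ N), so no choice of friction direction makes the static balance work there.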
GR stands alone in its ability to pass both weak and strong field tests of gravity. From 1905 to 1915, there was renewed interest in a somehow modified scalar field theory. Here is the action of a scalar gravity model, with $z^{\mu}$ a worldline parameterized by $\lambda$:$$\begin{align*}I_{sg} =& - m\int d \lambda \sqrt{-\left(e^\Phi \frac{d z_0}{d \lambda}, e^\Phi \frac{d z_1}{d \lambda}, e^\Phi \frac{d z_2}{d \lambda}, e^\Phi \frac{d z_3}{d \lambda}\right)\left(e^\Phi \frac{d z_0}{d \lambda}, e^\Phi \frac{d z_1}{d \lambda}, e^\Phi \frac{d z_2}{d \lambda}, e^\Phi \frac{d z_3}{d \lambda}\right)\eta} \\ =& - m\int d \lambda \sqrt{e^{2\Phi} \left( \frac{d z_0}{d \lambda}\right )^2 - e^{2\Phi} \left( \frac{d z_1}{d \lambda}\right )^2 - e^{2\Phi} \left(\frac{d z_2}{d \lambda}\right )^2 - e^{2\Phi} \left(\frac{d z_3}{d \lambda}\right )^2} \end{align*}$$ The terms under the square root look just like a metric (completely flat in this case). Physicists knew the $g_{00}$ term from Newton's law. Physicists completely guessed at the $g_{uu}$ terms [see footnote], and that guess was wrong. When a path is varied in the presence of a gravity field, the changes in $g_{uu}$ should be approximately the inverse of $g_{00}$.
I am exploring an action (scalar gravity coupling) that at the very least is consistent with experimental tests unknown to physicists prior to 1919:$$\begin{align*}I_{sgc} =& - m\int d \lambda \sqrt{-\left(\frac{1}{e^\Phi} \frac{d z_0}{d \lambda}, e^\Phi \frac{d z_1}{d \lambda}, e^\Phi \frac{d z_2}{d \lambda}, e^\Phi \frac{d z_3}{d \lambda}\right)\left(\frac{1}{e^\Phi} \frac{d z_0}{d \lambda}, e^\Phi \frac{d z_1}{d \lambda}, e^\Phi \frac{d z_2}{d \lambda}, e^\Phi \frac{d z_3}{d \lambda}\right)\eta} \\ =& - m\int d \lambda \sqrt{e^{-2\Phi} \left( \frac{d z_0}{d \lambda}\right )^2 - e^{2\Phi} \left( \frac{d z_1}{d \lambda}\right )^2 - e^{2\Phi} \left(\frac{d z_2}{d \lambda}\right )^2 - e^{2\Phi} \left(\frac{d z_3}{d \lambda}\right )^2} \end{align*}$$The terms under the square root look like the Rosen metric, which will pass all weak field tests (take the velocity equal to the square root of the ratio of $g_{00}$ to $g_{uu}$ to calculate light deflection, or get the equations of motion and use the constants of motion to get the same value). Rosen's bi-metric tensor theory fails strong tests of GR; specifically, it allows for dipole modes of gravity wave emission, which are not consistent with observations of energy loss in binary pulsars. In the scalar gravity coupling model, the $g_{00}$ must be exactly the inverse of $g_{uu}$. It predicts slightly more bending than GR to second order PPN accuracy (11.7 versus 11.0 $\mu$arcseconds). New proposals for gravity are usually pretty easy to shoot down. Do you see a flaw in this one? It looks like the Rosen metric and thus curves spacetime for weak fields (good), and it is simpler than a bi-metric theory for strong field tests (good). It sure is simpler than GR. Does GR finally have a real competitor? [footnote]: Metrics can be written in an unending variety of coordinates. I have written this action presuming a point source and Cartesian coordinates.
Had I chosen spherical coordinates, then there would be no gravity term in front of $z_2$ or $z_3$. Note added in response to an issue raised by @Trimok. This question has been put on hold. For the record, I certainly do believe "specific questions evaluating new theories in the context of established science are usually allowed". There is both an action and a metric associated with the scalar gravity coupling model. That is the canonical method for doing gravity research. The outcome is specific down to 0.7 $\mu$arcseconds (details below), something established physics like work with strings never achieves. It is a rare bird of a model, but should be a valuable one, that could respect the strong equivalence principle, meaning that gravity is only about making the interval dynamic. Given a metric for any model, the post-post-Newtonian deflection can be calculated using a formula from Epstein and Shapiro (Phys. Rev. D, 22:12, p. 2947, 1980):$$\begin{align*}d\tau^2 =& \left(1 - 2 \gamma \frac{G M}{c^2 R} + 2 \beta \left(\frac{G M}{c^2 R}\right)^2 \right) dt^2 - \left( 1 + 2 \gamma \frac{G M}{c^2 R} + \frac{3}{2} \epsilon \left(\frac{G M}{c^2 R}\right)^2 \right) dR^2/c^2 \\\theta_{_{ppN}} =& ~\pi (2 + 2 \gamma - \beta + \frac{3}{4} \epsilon) \left(\frac{G M}{c^2 R} \right)^2 \\\theta_{_{GR}} =& ~\frac{15}{16} \pi \left(\frac{G M}{c^2 R} \right)^2 \\\theta_{_{sgc}} =& ~4 \pi \left(\frac{G M}{c^2 R} \right)^2\end{align*}$$The difference between the two will be about 6%. A longer blog post on this subject is online. Feel free to contact me if you have any technical issues with the proposal, large or small. This post imported from StackExchange Physics at 2014-03-05 14:56 (UCT), posted by SE-user sweetser
Classical grand canonical ensemble Set range $ N\in\mathbb N $ definiendum $ (\ \langle \mathcal M_N, H_N,\pi_N,\pi_{N,0},{\hat\rho}_N,{\hat\rho}_{N,0} \rangle\ )_N \in \mathrm{it} $ postulate $\forall N.\ \langle \mathcal M_N, H_N,\pi_N,\pi_{N,0},{\hat\rho}_N,{\hat\rho}_{N,0} \rangle$ … classical canonical ensemble A sequence of phase spaces with their partition functions, one for each particle number. todo: the $N$th manifold is a submanifold of the $(N+1)$th. Discussion Reference Wikipedia: Grand canonical ensemble
There are $4!$ ways to place the men in such a way that men and women sit alternately. Place them and label the spots where the men have taken their seats clockwise with $a,b,c,d,e$. Now we take a look at the possible configurations for the women. Without further conditions there are $5!$ configurations for the women. Let $A$ denote the set of these configurations where the man that sits at $a$ has his wife next to him. Define $B,C,D,E$ similarly, where the capitals correspond with the labels $b,c,d,e$ respectively. The answer to the question is then $4!\left|A^{\complement}\cap B^{\complement}\cap C^{\complement}\cap D^{\complement}\cap E^{\complement}\right|=4!\left(5!-\left|A\cup B\cup C\cup D\cup E\right|\right)$ so it remains to find $\left|A\cup B\cup C\cup D\cup E\right|$. This can be done by means of inclusion/exclusion. Up to a certain level we can also make use of symmetry (e.g. notice that of course $\left|A\cap B\right|=\left|B\cap C\right|$), but here we must be careful. At first hand we find that: $$\left|A\cup B\cup C\cup D\cup E\right|=5\left|A\right|-5\left|A\cap B\right|-5\left|A\cap C\right|+5\left|A\cap B\cap C\right|+5\left|A\cap B\cap D\right|-5\left|A\cap B\cap C\cap D\right|+\left|A\cap B\cap C\cap D\cap E\right|$$ Then checking the cases one by one we find: $\left|A\right|=2\times4!=48$ $\left|A\cap B\right|=3\times3!=18$ $\left|A\cap C\right|=4\times3!=24$ $\left|A\cap B\cap C\right|=4\times2!=8$ $\left|A\cap B\cap D\right|=6\times2!=12$ $\left|A\cap B\cap C\cap D\right|=5\times1!=5$ $\left|A\cap B\cap C\cap D\cap E\right|=2\times0!=2$ So our final answer is: $$4!\left(5!-5\times48+5\times18+5\times24-5\times8-5\times12+5\times5-2\right)=24\times13=312$$ I hope that I did not make any mistakes. Check me on it.
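Since the numbers are small, the count can also be verified by brute force. The sketch below uses an assumed labelling in which the woman placed in gap $i$ sits between man $i$ and man $(i+1) \bmod 5$:

```python
from itertools import permutations

# Men 0..4 are fixed (the 4! rotations are handled separately); the
# woman in gap i sits between man i and man (i+1) % 5, so a seating is
# valid when she is neither wife i nor wife (i+1) % 5.
def count_valid():
    return sum(
        all(w != i and w != (i + 1) % 5 for i, w in enumerate(perm))
        for perm in permutations(range(5))
    )

print(count_valid(), 24 * count_valid())  # 13 312
```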
Definition:Bounded Metric Space Definition Let $M = \left({A, d}\right)$ be a metric space. Let $M' = \left({B, d_B}\right)$ be a subspace of $M$. $M'$ is bounded (in $M$) if and only if: $\exists a \in A, K \in \R: \forall x \in B: d \left({x, a}\right) \le K$ Equivalently, $M'$ is bounded if and only if: $\exists K \in \R: \forall x, y \in M': d \left({x, y}\right) \le K$ In particular, a subset $D$ of $\C$ (with the usual metric) is bounded (in $\C$) if and only if there exists $M \in \R$ such that: $\forall z \in D: \cmod z \le M$ Also known as If the context is clear, it is acceptable to use the term bounded space for bounded metric space. Also see Results about bounded metric spaces can be found here.
Volume 59, № 12, 2007 Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1587–1593 The behavior of closed polynomials, i.e., polynomials $f ∈ k[x_1,…,x_n]∖k$ such that the subalgebra $k[f]$ is integrally closed in $k[x_1,…,x_n]$, is studied under extensions of the ground field. Using some properties of closed polynomials, we prove that, after shifting by constants, every polynomial $f ∈ k[x_1,…,x_n]∖k$ can be factorized into a product of irreducible polynomials of the same degree. We consider some types of saturated subalgebras $A ⊂ k[x_1,…,x_n]$, i.e., subalgebras such that, for any $f ∈ A∖k$, a generative polynomial of $f$ is contained in $A$. Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1594–1600 Let $ψ_m^D$ be orthogonal Daubechies wavelets that have $m$ zero moments and let $$W^k_{2, p} = \left\{f \in L_2(\mathbb{R}): ||(i \omega)^k \widehat{f}(\omega)||_p \leq 1\right\}, \;k \in \mathbb{N}.$$ We prove that $$\lim_{m\rightarrow\infty}\sup\left\{\frac{|\langle\psi^D_m, f\rangle|}{||(\psi^D_m)^{\wedge}||_q}: f \in W^k_{2, p} \right\} = \frac{1}{\pi^k}\left(\frac{1 - 2^{1-pk}}{pk -1}\right)^{1/p}.$$ Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1601–1618 We study extremal problems of the geometric theory of functions of a complex variable. Sharp upper estimates are obtained for the product of inner radii of disjoint domains and open sets with respect to equiradial systems of points. Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1619–1638 For a binding neuron with threshold 2 stimulated by a Poisson stream, we determine the intensity of the output stream and the probability density for the lengths of the output interpulse intervals. For threshold 3, we determine the intensity of the output stream. Ukr. Mat. Zh. - 2007. - 59, № 12. - pp.
1639–1646 We prove that the collection $(X, Y, Z)$ is a Lebesgue triple if $X$ is a metrizable space, $Y$ is a perfectly normal space, and $Z$ is a strongly $\sigma$-metrizable topological vector space with stratification $(Z_m)^{\infty}_{m=1}$, where, for every $m \in \mathbb{N}$, $Z_m$ is a closed metrizable separable subspace of $Z$ that is arcwise connected and locally arcwise connected. On the uniform convergence of wavelet expansions of random processes from Orlicz spaces of random variables. I Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1647–1660 We establish conditions under which there exists a function $c(t) > 0$ such that $\sup\cfrac{X (t)}{c(t)} < \infty$, where $X(t)$ is a random process from an Orlicz space of random variables. We obtain estimates for the probabilities $P\left\{ \sup\cfrac{X (t)}{c(t)} > \varepsilon\right\}, \quad \varepsilon > 0$. Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1661–1673 We establish conditions for the existence and uniqueness of a solution of the mixed problem for the ultraparabolic equation $$u_t + \sum^m_{i=1}a_i(x, y, t) u_{y_i} - \sum^n_{i,j=1} \left(a_{ij}(x, y, t) u_{x_i}\right)_{x_j} + \sum^n_{i=1} b_{i}(x, y, t) u_{x_i} + b_0(x, y, t, u) =$$ $$= f_0(x, y, t, u) - \sum^n_{i=1}f_{i, x_i} (x, y, t) $$ in a domain unbounded with respect to the variables $x$. Generalized boundary values of solutions of quasilinear elliptic equations with linear principal part Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1674–1688 We establish conditions for the nonlinear part of a quasilinear elliptic equation of order $2m$ with linear principal part under which a solution regular inside a domain and belonging to a certain weighted $L_1$-space takes boundary values in the space of generalized functions. On the solution of the basic integral equation of actuarial mathematics by the method of successive approximations Ukr. Mat. Zh. - 2007. - 59, № 12. - pp.
1689–1698 We study the basic integral equation of actuarial mathematics for the probability of (non)ruin of an insurance company regarded as a function of the initial capital. We establish necessary and sufficient conditions for the existence of a solution of this equation, general sufficient conditions for its existence and uniqueness, and conditions for the uniform convergence of the method of successive approximations for finding the solution. Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1699-1700 Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1701–1706 In this paper, Chebyshev's theorem (1850) on Bertrand's conjecture is re-extended using a theorem about Sierpinski's conjecture (1958). The theorem has been extended several times before, but this extension goes far beyond the previous ones. At the beginning of the proof, a table of maximal prime gaps is used to verify the initial cases. The extended theorem contains a constant r, which can be reduced if more initial cases can be checked. Therefore, the theorem can be extended even further when the table of maximal gaps is extended. The main extension idea is not based on r, though. Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1707–1713 We consider sets of linear expansions of dynamical systems on a torus with a general alternating Lyapunov function. Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1714–1721 We establish necessary and sufficient conditions for the absolute asymptotic stability of solutions of linear parabolic differential equations with delay. Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1722-1724 Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1725-1729
Suppose $X(t)$ is a Lévy process with almost surely positive increments (for all $t_1 < t_2$, $P(X(t_1) < X(t_2)) = 1$). Define $$\nu X(t) := \sup \{\tau \in \mathbb{R_+} \mid X(\tau) < t\}$$ It is not hard to see that $\nu X$ is also a stochastic process with almost surely positive increments. My question is: is it a Lévy process too? It is not hard to see that $\nu X (0) = 0$ and that for all $t_1 < t_2$, $\nu X(t_2) - \nu X(t_1) = \mu\{\tau \in \mathbb{R} \mid X(\tau) \in (t_1; t_2]\}$, which means that conditions (1), (3) and (4) are satisfied. However, I do not know whether the increments of $\nu X(t)$ are independent or not. They are indeed uncorrelated, but uncorrelatedness $\neq$ independence. One can also notice that $\nu \nu X(t) = X(t)$ almost surely. However, it does not seem to be very helpful.
I know Fermi's Golden Rule in the form $$\Gamma_{fi} ~=~ \sum_{f}\frac{2\pi}{\hbar}\delta (E_f - E_i)|M_{fi}|^2$$ where $\Gamma_{fi}$ is the probability transition rate, $M_{fi}$ are the transition matrix elements. I'm struggling to do a derivation based on the density of states. I know that under certain circumstances it's a good approximation to replace $\sum_f$ with $\int_F \rho(E_f) \textrm{d}E_f$ to calculate the transition probability, for some energy range $F$. Doing this calculation I obtain $$\Gamma_{fi} ~=~ \int \rho(E_f) \frac{2\pi}{\hbar}\delta (E_f - E_i) |M_{fi}|^2\textrm{d}E_f.$$ Now assuming that the $M_{fi}$ are constant in the energy range under the integral we get $$\Gamma_{fi} ~=~ \rho(E_i) \frac{2\pi}{\hbar} |M_{fi}|^2.$$ Now this is absolutely not what is written anywhere else. Other sources pull the $\rho(E_f)$ out of the integral to obtain Fermi's Golden Rule of the form $$\Gamma_{fi} ~=~ \rho(E_f) \frac{2\pi}{\hbar} |M_{fi}|^2$$ for any $f$ with $E_f$ in $F$ which makes much more physical sense. But why is what I've done wrong? If anything it should be more precise, because I have actually done the integral! Where have I missed something?
As @coffeemath noted, your answer is actually the complement of the original expression. So this method is not correct. I think the easiest way is to use the identities $(x + y)' = x'y'$ and $(xy)' = x' + y'$ (known as De Morgan's laws) and $x'' = x$. We have $$AB' + A'B = ((AB')'(A'B)')' = ((A'+B)(A+B'))'=$$$$=(A'A + A'B' + AB + BB')' = (AB + A'B')' = (AB)'(A'B')' = (A'+B')(A+B).$$Hence the original expression equals $$ (AB'+A'B)C = (A'+B')(A+B)C.$$ Alternatively, if you're familiar with how to get a SOP or a POS from the truth table, then you can just construct the truth table for the function $f(A, B)$ realized by the expression $AB' + A'B$ and construct a POS directly from it. In our case the truth table looks like this:$$\begin{array}{|c|c|c|}\hline A & B & f(A, B) \\ \hline 0 & 0 & 0 \\0 & 1 & 1 \\1 & 0 & 1 \\1 & 1 & 0 \\ \hline\end{array}$$ To construct the POS representation of $f(A, B)$ you should look at pairs $(a, b)$ with $f(a, b) = 0$. With every such pair you associate a sum $A^\sigma + B^\tau$, where $A^\sigma = A$ if $a = 0$ and $A^\sigma = A'$ if $a = 1$, and same for $B^\tau$. After that you just construct the product of all these sums. In our case we have two pairs $(0, 0)$ and $(1, 1)$. Corresponding sums are $A + B$ and $A' + B'$. Finally, we have the POS $(A'+B')(A+B)$.
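Both derivations are easy to confirm mechanically with a truth-table sweep over all eight input combinations:

```python
from itertools import product

# Brute-force check that AB' + A'B equals (A'+B')(A+B) -- and hence
# (AB'+A'B)C == (A'+B')(A+B)C -- over all Boolean assignments.
def equivalent():
    for A, B, C in product([False, True], repeat=3):
        xor_form = (A and not B) or (not A and B)
        pos_form = (not A or not B) and (A or B)
        if xor_form != pos_form or (xor_form and C) != (pos_form and C):
            return False
    return True

print(equivalent())  # True
```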
The truth is you've been lied to, or at least that the usual notation makes the innate connection way more difficult to see than necessary. To see how the two are the same, let me tell you about the wedge product. The wedge product of vectors is like the cross product, in that it is anticommutative--$a \wedge b = -b \wedge a$--but it does not produce a vector. Instead, we directly interpret its result as a planar object--in fact, as the planar object that would be perpendicular to the vector from the cross product. We formalize that relationship as follows: we say that $a \times b= -i a \wedge b$, where $i = \hat x \wedge \hat y \wedge \hat z$ is the unit pseudoscalar, representing a volume. The pseudoscalar itself is an object of interest, as it converts wedges to dot products and vice versa when you move it through expressions. In fact, $$a \cdot (b \times c) = a \cdot (-i [b \wedge c]) = -i (a \wedge b \wedge c)$$ How does all this wedge stuff connect to linear algebra? Quite simply, actually. Linear operators "distribute" over wedges in an intuitive way. That is, for a linear operator $\underline T$, $$\underline T(a \wedge b) = \underline T(a) \wedge \underline T(b)$$ This is unlike the cross product, which doesn't follow such a simple law. The advantage of this is that, uniquely, $$\underline T(a \wedge b \wedge c) = \alpha a \wedge b \wedge c$$ for some scalar $\alpha$, for every $a, b, c$. We call $\alpha$ the determinant! It is the special number by which every volume object is dilated or shrunk by the linear transformation, and here it becomes geometrically clear that that is the case: $a \wedge b \wedge c$ is literally multiplied by $\alpha = \det T$ as a result of the transformation. Now, build up a linear transformation as follows.
Let $l, m, n$ be vectors, so that a transformation looks like $$\underline T(a) = (a \cdot e_1) l + (a \cdot e_2) m + (a \cdot e_3) n$$ Then $l,m,n$ are the columns of the matrix representation of $\underline T$, and $\underline T(i) = l \wedge m \wedge n$. This completes the connection between the determinant and the scalar triple product.
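Numerically, this is just the familiar fact that $\det T$ equals the scalar triple product $l \cdot (m \times n)$ of the columns. A minimal sketch:

```python
# The determinant of a 3x3 matrix as the scalar triple product
# l . (m x n) of its columns l, m, n.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def det3(l, m, n):
    return dot(l, cross(m, n))

print(det3((1, 0, 2), (0, 3, 1), (4, 1, 0)))  # -25
```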
I am trying to implement a numerical scheme for a PDE that takes as input Dirichlet boundary conditions, i.e. $u=0$ on $\partial\Omega$, as well as the Hessian of $u$ on the boundary. I am handling a non-standard boundary condition, called the second boundary condition, where we prescribe $\Omega,\Omega^*\subset\Bbb R^2$, convex, and our requirement is that $\nabla u(\Omega)=\Omega^*$. It can be shown that this is equivalent to $\nabla u(\partial\Omega)=\partial \Omega^*$. But say we can find a map $-F:\partial\Omega\to\partial\Omega^*$ in closed form, so we can state: $\nabla u(x)=-F(x)$ on $\partial\Omega$. Then taking the divergence of both sides we get: $-\Delta u=f(x)$ on $\partial\Omega$, where $f(x)=\nabla\cdot F$. So now we have Poisson's equation, but my issue is that the PDE is defined on a closed domain, namely $\partial \Omega$. Is this a pathological problem? If not, we can find $u$ (up to a free constant) and thereby obtain Dirichlet boundary data. Thank you for your help in advance.
1) From an external point P, tangents PA and PB are drawn to a circle with centre O. If \[\angle PAB=50{}^\circ \], then find \[\angle AOB.\]
2) In Fig. 1, AB is a 6 m high pole and CD is a ladder inclined at an angle of \[60{}^\circ \] to the horizontal that reaches up to a point D of the pole. If \[AD=2.54\text{ }m\], find the length of the ladder. (Use \[\sqrt{3}=1.73\])
3) Find the 9th term from the end (towards the first term) of the A.P. 5, 9, 13, ......, 185.
4) Cards marked with the numbers 3, 4, 5, ......, 50 are placed in a box and mixed thoroughly. A card is drawn at random from the box. Find the probability that the selected card bears a perfect square number.
5) If \[x=\frac{2}{3}\] and \[x=-3\] are roots of the quadratic equation \[a{{x}^{2}}+7x+b=0\], find the values of a and b.
6) Find the ratio in which the y-axis divides the line segment joining the points \[A(5,-6)\] and \[B(-1,-4)\]. Also find the coordinates of the point of division.
7) In Fig. 2, a circle is inscribed in a \[\Delta \,ABC\], such that it touches the sides AB, BC and CA at points D, E and F respectively. If the lengths of sides AB, BC and CA are 12 cm, 8 cm and 10 cm respectively, find the lengths of AD, BE and CF.
8) The x-coordinate of a point P is twice its y-coordinate. If P is equidistant from \[Q(2,-5)\] and \[R(-3,6)\], find the coordinates of P.
9) How many terms of the A.P. 18, 16, 14, ... must be taken so that their sum is zero?
10) In Fig. 3, AP and BP are tangents to a circle with centre O, such that \[AP=5\text{ }cm\] and \[\angle APB=60{}^\circ \]. Find the length of chord AB.
11) In Fig. 4, ABCD is a square of side 14 cm. Semi-circles are drawn with each side of the square as diameter.
Find the area of the shaded region, \[\left( use\,\pi =\frac{22}{7} \right)\]
12) Fig. 5 shows a decorative block made up of two solids - a cube and a hemisphere. The base of the block is a cube of side 6 cm and the hemisphere fixed on the top has a diameter of 3.5 cm. Find the total surface area of the block, \[\left( use\,\,\pi =\frac{22}{7} \right)\]
13) In Fig. 6, ABC is a triangle, the coordinates of whose vertex A are (\[0,\text{ }1\]). D and E respectively are the mid-points of the sides AB and AC and their coordinates are (1, 0) and (0, 1) respectively. If F is the mid-point of BC, find the areas of \[\Delta \text{ }ABC\] and \[\Delta \text{ }DEF\].
14) In Fig. 7 are shown two arcs PAQ and PBQ. Arc PAQ is a part of a circle with centre O and radius OP, while arc PBQ is a semi-circle drawn on PQ as diameter with centre M. If \[OP=PQ=10\text{ }cm\], show that the area of the shaded region is 25\[\left( \sqrt{3}-\frac{\pi }{6} \right)c{{m}^{2}}\].
15) If the sum of the first 7 terms of an A.P. is 49 and that of its first 17 terms is 289, find the sum of the first n terms of the A.P.
16) Solve for x: \[\frac{2x}{x-3}+\frac{1}{2x+3}+\frac{3x+9}{(x-3)(2x+3)}=0,x\ne 3,-3/2\]
17) A well of diameter 4 m is dug 21 m deep. The earth taken out of it has been spread evenly all around it in the shape of a circular ring of width 3 m to form an embankment. Find the height of the embankment.
18) The sum of the radius of the base and the height of a solid right circular cylinder is 37 cm. If the total surface area of the solid cylinder is 1628 sq. cm, find the volume of the cylinder, \[\left( use\,\pi =\frac{22}{7} \right)\]
19) The angles of depression of the top and bottom of a 50 m high building from the top of a tower are \[45{}^\circ \] and \[60{}^\circ \] respectively.
Find the height of the tower and the horizontal distance between the tower and the building. (use \[\sqrt{3}=1.73\])
20) In a single throw of a pair of different dice, what is the probability of getting (i) a prime number on each die? (ii) a total of 9 or 11?
21) A passenger, while boarding a plane, slipped from the stairs and got hurt. The pilot took the passenger to the emergency clinic at the airport for treatment. Due to this, the plane got delayed by half an hour. To reach the destination 1500 km away in time, so that the passengers could catch the connecting flight, the speed of the plane was increased by 250 km/hour over the usual speed. Find the usual speed of the plane. What value is depicted in this question?
22) Prove that the lengths of tangents drawn from an external point to a circle are equal.
23) Draw two concentric circles of radii 3 cm and 5 cm. Construct a tangent to the smaller circle from a point on the larger circle. Also measure its length.
24) In Fig. 8, O is the centre of a circle of radius 5 cm. T is a point such that \[OT=13\text{ }cm\] and OT intersects the circle at E. If AB is a tangent to the circle at E, find the length of AB, where TP and TQ are two tangents to the circle.
25) Find x in terms of a, b and c: \[\frac{a}{x-a}+\frac{b}{x-b}=\frac{2c}{x-c},x\ne a,b,c\]
26) A bird is sitting on the top of an 80 m high tree. From a point on the ground, the angle of elevation of the bird is \[45{}^\circ \]. The bird flies away horizontally in such a way that it remains at a constant height from the ground. After 2 seconds, the angle of elevation of the bird from the same point is \[30{}^\circ \]. Find the speed of flying of the bird. (Take \[\sqrt{3}=1.732\])
27) A thief runs with a uniform speed of 100 m/minute.
After one minute a policeman runs after the thief to catch him. He goes with a speed of 100 m/minute in the first minute and increases his speed by 10 m/minute every succeeding minute. After how many minutes will the policeman catch the thief?
28) Prove that the area of a triangle with vertices \[(t,t-2),(t+2,t+2)\] and \[(t+3,t)\] is independent of t.
29) A game of chance consists of spinning an arrow on a circular board, divided into 8 equal parts, which comes to rest pointing at one of the numbers 1, 2, 3,..., 8 (Fig. 9), which are equally likely outcomes. What is the probability that the arrow will point at (i) an odd number, (ii) a number greater than 3, (iii) a number less than 9?
30) An elastic belt is placed around the rim of a pulley of radius 5 cm (Fig. 10). From one point C on the belt, the elastic belt is pulled directly away from the centre O of the pulley until it is at P, 10 cm from the point O. Find the length of the belt that is still in contact with the pulley. Also find the shaded area. (use \[\pi =3.14\] and \[\sqrt{3}=1.73\])
31) A bucket open at the top is in the form of a frustum of a cone with a capacity of \[12308.8\text{ }c{{m}^{3}}\]. The radii of the top and bottom circular ends are 20 cm and 12 cm respectively. Find the height of the bucket and the area of the metal sheet used in making the bucket. (use \[\pi =3.14\])
In the theory of relativistic wave equations, we derive the Dirac equation and Klein-Gordon equation by using representation theory of the Poincare algebra. For example, in this paper the Dirac equation in momentum space (equation [52], [57] and [58]) can be derived from the 1-particle state of an irreducible unitary representation of the Poincare algebra (equation [18] and [19]). The ordinary wave function in position space is its Fourier transform (equation [53], [62] and [65]). Note that at this stage, this Dirac equation is simply a classical wave equation, i.e. its solutions are classical Dirac 4-spinors, which take values in $\Bbb{C}^{2}\oplus\Bbb{C}^{2}$. If we regard the Dirac waves $\psi(x)$ and $\bar{\psi}(x)$ as 'classical fields', then the quantized Dirac fields are obtained by promoting them into fermionic harmonic oscillators. What I do not understand is that when we are doing the path-integral quantization of Dirac fields, we are, in fact, treating $\psi$ and $\bar{\psi}$ as Grassmann numbers, which is counterintuitive to me. As far as I understand, we do the path integral by summing over all 'classical fields', while the 'classical Dirac waves $\psi(x)$' we derived in the beginning are simply 4-spinors living in $\Bbb{C}^{2}\oplus\Bbb{C}^{2}$. How can they be treated as Grassmann numbers instead? As I see it, physicists are trying to construct a 'classical analogue' of Fermions that are purely quantum objects. For instance, if we start from the quantum anti-commutators $$[\psi,\psi^{\dagger}]_{+}=i\hbar1 \quad\text{and}\quad [\psi,\psi]_{+}=[\psi^{\dagger},\psi^{\dagger}]_{+}=0, $$ then we can obtain the Grassmann numbers in the classical limit $\hbar\rightarrow0$. This is how I used to understand the Grassmann numbers.
The problem is that if the Grassmann numbers are indeed a sort of classical limit of anticommuting operators in Hilbert space, then the limit $\hbar\rightarrow0$ itself does not make sense from a physical point of view: in this limit the spin observables vanish entirely, and what we obtain is a trivial theory. Please tell me how exactly the quantum Fermions are related to Grassmann numbers.
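For what it's worth, the algebraic rules themselves can be modelled without any Hilbert space at all. The toy sketch below (representation and helper names are my own) stores elements of a Grassmann algebra as dicts from index sets to coefficients and checks $\theta^2 = 0$ and anticommutativity:

```python
# Minimal sketch of a Grassmann algebra with generators t1, t2:
# an element is a dict {frozenset of generator indices: coefficient};
# multiplication kills repeated generators and picks up a sign from
# sorting the concatenated index word.
def gmul(a, b):
    out = {}
    for s1, c1 in a.items():
        for s2, c2 in b.items():
            if s1 & s2:                       # theta_i * theta_i = 0
                continue
            word = sorted(s1) + sorted(s2)
            sign, w = 1, list(word)
            for i in range(len(w)):           # bubble sort counts transpositions
                for j in range(len(w) - 1 - i):
                    if w[j] > w[j + 1]:
                        w[j], w[j + 1] = w[j + 1], w[j]
                        sign = -sign
            key = frozenset(word)
            out[key] = out.get(key, 0) + sign * c1 * c2
    return {k: v for k, v in out.items() if v}

t1 = {frozenset([1]): 1}
t2 = {frozenset([2]): 1}
print(gmul(t1, t1))   # {}   (nilpotent)
print(gmul(t1, t2), gmul(t2, t1))  # coefficients differ by a sign
```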
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < $p_T$ < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity-odd ($v_1^{odd}$) and ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton-proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non-single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ... Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE (Springer, 2013-07) The ratios of yields of anti-baryons to baryons probe the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ... Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV (American Physical Society, 2013-01) Measurements of charge-dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
When you write "I do not see how it could be a function WITHIN first-order arithmetic," it is certainly surprising, but this is in fact the case: figuring out how to internally express the provability predicate is the bulk of Godel's argument. If you actually read the proof, you will see how it is done. Briefly, the point is that if $\varphi$ is provable from $T$, then there is a certificate for this fact: a (natural number which codes a) proof! A proof is a finite string of sentences satisfying some simple properties, corresponding to the axioms of $T$ and the inference rules of first-order logic. Via Godel coding, it turns out that all relevant statements about proofs can be expressed as first-order arithmetic statements about the codes of proofs. (Note that this is not true for truth! A certificate for $\varphi$ being true in a structure is a family of Skolem functions, which are second-order objects. There are additional complications as well, but this is the big difference I want to highlight.) EDIT: A few steps towards understanding the provability predicate: CLAIM: Peano arithmetic proves "Every finite sequence of 0s and 1s, which has exactly as many 0s as 1s, has even length." Note that this is silly - the language of arithmetic does not let us talk directly about sequences! Still, we can make sense of this statement via (a silly example of) Godel numbering: to the finite binary sequence $\alpha=a_1a_2a_3. . . a_n$, where $a_i\in\{0, 1\}$, we associate the number $Code(\alpha)=2^{a_1+1}3^{a_2+1}5^{a_3+1} . . . p_n^{a_n+1}$. (Here "$p_i$" denotes the $i$th prime number.) Now, we have: There is a formula $\varphi(x)$ in the language of arithmetic such that for every numeral $k$, we have $PA\vdash \varphi(k)$ iff $k=Code(\alpha)$ for some finite binary string $\alpha$ with exactly as many 0s as 1s. (If exponentiation isn't in the language of arithmetic, then this is actually pretty difficult! 
For that reason we often use the conservative extension $PA_{exp}$ instead of $PA$; it's not really any different, but it makes some technical steps easier.) There is a formula $\psi(x, y)$ in the language of arithmetic such that for every pair of numerals $k, l$, we have $PA\vdash \psi(k, l)$ iff $k=Code(\alpha)$ for some binary string $\alpha$ of length $l$. Finally, $PA$ proves the sentence $\forall x, y(\varphi(x)\implies \exists z(\psi(x, y)\iff y=2z))$, that is, "Every finite binary string has even length." (This of course depends on the exact choices of $\varphi$ and $\psi$, so the previous two steps really should be made more explicit.) Note that the above paragraphs take place outside of $PA$, but the formulas $\varphi$ and $\psi$ are formulas of $PA$. There's a deep philosophical issue here: is $PA$ really talking about finite binary strings, or just "simulating" them somehow? (This is closely related to the "Chinese Room" thought experiment of Searle, and many others.) While this is interesting, we're just going to sidestep it: in proving Godel's theorem, we fix a coding scheme, and prove that all the relevant facts - when interpreted by this scheme - are in $PA$. In particular, we get: A formula $Prov(x, y)$ in the language of arithmetic. A proof - in the "metatheory" - that, for any pair of numerals $m, n$, we have $PA\vdash Prov(m, n)$ iff $m$ is the code of a sentence $\sigma$ and $n$ is the code of a proof of $\sigma$ from $PA$. So there is a metatheory here, but its role is interpreting the predicate "$Prov$," not in formulating it. (And, by the way, we can ask what sort of metatheory we need, and the answer is, in a precise sense, "not very much" . . . but that's going a bit too far for now.) 
As far as "better examples" go, there are many sentences not obviously of the form "$Con(T)$" now known to be independent of the theory $ACA_0$, which is a conservative extension of $PA$ to a larger language (so there are more things that are expressible - this just makes the examples better). Some examples include: The Paris-Harrington theorem. The linear order $\epsilon_0=\omega^{\omega^{\omega^{...}}}$ is in fact well-ordered. (Due to Gerhard Gentzen, who showed that "$\epsilon_0$ is well-founded" implies $Con(PA)$ over a very weak base theory.) Every two-player perfect information game, which is guaranteed to end after finitely many moves, is determined. And many more. If you are interested in this sort of thing, you should check out Reverse Mathematics.
I've discussed a procedure for divergent summation using the matrix of Eulerian numbers occasionally in the last years (initially here, and here in MSE and MO, but not in that generality and thus(?) without conclusive answers); now I would like to formalize/make waterproof one heuristic. Consider some (divergent) series $S = K + a + b + c + d + ... $. To assign a finite sum to it I'm trying to use the matrix of the Eulerian numbers (let's call it the Eulermatrix for now) to do something in the spirit of Abel- or possibly(?) Borel-summation: I define the function $$f(x) = K + ax + bx^2 + cx^3 + dx^4 + ... $$ with the goal to arrive at a valid/meaningful result for the case that $x \to 1$. The use of the Eulermatrix (see end of the post) implies then the introduction of a transform of $f(x)$ in the form $$ g(x) = K + ax + bx^2/2! + cx^3/3! + dx^4/4! + ... $$ If I now evaluate the sequence of partial sums according to the decomposition of the Eulerian numbers in combination with the reciprocal factorials I arrive at the form$$ s_n(x) = \sum_{k=0}^{n-1} (-1)^k ((n-k)x) ^k {g^{(k)}((n-k)x) \over k!} \qquad n=1 \ldots \infty \tag1$$and if this converges to a finite value then I assume this as the (divergent) sum of the series and assume $$S = \lim _{n \to \infty} s_n(1)$$I get meaningful results for series which have growth rates like geometric series with $q \lt 1$, and I think that the formalism of conversion into a double-sum and changing the order of evaluation using the Eulermatrix also gives the range for $q$ as $- \infty \lt q \lt 1$ (because the $g(x)$ function is then entire). I can even evaluate the classical series $0! - 1! + 2! - 3! + \ldots$ to which L. Euler already assigned the value of about $0.596...$, now called the Gompertz constant. So I'm confident that my general reasoning using the Eulermatrix is valid.
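For the geometric case, formula (1) can at least be checked numerically: with $a_j = q^j$ the transform is $g(x) = e^{qx}$, so $g^{(k)}(x) = q^k e^{qx}$ in closed form. A sketch for $q = -1$ (the series $1 - 1 + 1 - \cdots$, expected value $1/(1-q) = 1/2$):

```python
import math

# Numerical check of formula (1) for a geometric series: with a_j = q^j
# the transform is g(x) = e^{qx}, hence g^{(k)}(x) = q^k e^{qx}.
q = -1.0  # the series 1 - 1 + 1 - ..., expected value 1/(1-q) = 0.5

def s_n(n, x=1.0):
    total = 0.0
    for k in range(n):
        g_k = q**k * math.exp(q * (n - k) * x)        # g^{(k)}((n-k)x)
        total += (-1)**k * ((n - k) * x)**k * g_k / math.factorial(k)
    return total

print([round(s_n(n), 4) for n in (1, 2, 4, 8, 16)])   # tends to 0.5
```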
But now: what I'm asking here is about the expression for $s_n(x)$, which reminds me of the Maclaurin expansion (which expresses a function by its derivatives at zero) but which I've not seen elsewhere yet. Is that a valid transform? And up to which growth rate can this method handle series (preferably, I think, alternating series)? The Eulermatrix with rows scaled by reciprocal factorials: $$ \small \begin{bmatrix} 1 & . & . & . & . & . \\ {1 \over 1!} & . & . & . & . & . \\ {1 \over 2!} & {1 \over 2!} & . & . & . & . \\ {1 \over 3!} & {4 \over 3!} & {1 \over 3!} & . & . & . \\ {1 \over 4!} & {11 \over 4!} & {11 \over 4!} & {1 \over 4!} & . & . \\ {1 \over 5!} & {26 \over 5!} & {66 \over 5!} & {26 \over 5!} & {1 \over 5!} & .\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$
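For readers who want to experiment, here is a short sketch (the function name is mine) that builds the rows of the Eulermatrix from the standard recurrence for the Eulerian numbers; dividing row $n$ by $n!$ reproduces the row-scaled matrix above.

```javascript
// Eulerian numbers A(n,k) via the standard recurrence
// A(n,k) = (k+1)*A(n-1,k) + (n-k)*A(n-1,k-1), with A(1,0) = 1.
// Row n of the row-scaled Eulermatrix above is this row divided by n!.
function eulerianRow(n) {
  let row = [1]; // n = 1
  for (let m = 2; m <= n; m++) {
    const prev = row;
    row = new Array(m).fill(0);
    for (let k = 0; k < m; k++) {
      row[k] = (k + 1) * (prev[k] || 0) + (m - k) * (prev[k - 1] || 0);
    }
  }
  return row;
}
// eulerianRow(5) → [1, 26, 66, 26, 1], matching the last printed row.
```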
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_T$). At central rapidity ($|y|<0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
Note that the equation you give can be trivially restated as$$\left[-\frac{\hbar^2}{2m}\frac{\mathrm{d^2}}{\mathrm{d}x^2}\right]\psi(x) = \left[E-U(x)\right]\psi(x)\text{,}$$where $\hbar = h/2\pi$. So if in this situation we interpret kinetic energy as the difference between total and potential energy, this tells us that the kinetic energy operator in the position basis must be:$$\hat{T} = -\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2}\text{.}$$We can also define a potential energy operator, which in this basis is just trivial multiplication: $\hat{U}\left[\psi(x)\right] = U(x)\psi(x)$. The first doubt that arises is the following: the book says that $E$ is the "constant total energy" of the particle. But wait, since $E = K + U$ and $U$ varies, clearly $E$ should vary. How can $E$ be constant if $U$ is not? What the book means by "constant total energy" is that the total energy has a definite value, i.e., the given wavefunction is an eigenstate of the total energy:$$\left[\hat{T} + \hat{U}\right]\psi = E\psi\text{.}$$If one measures the energy of the particle correctly, one will get this definite value $E$. This won't happen for either the kinetic or potential energies, except in some trivial cases: measuring them will give a random eigenvalue of the corresponding operators. However, given an energy eigenstate as we have here, the kinetic and potential energies are time-independent in the sense that the probability distribution of the possible measurement results does not vary in time. The same holds for any observable that's not explicitly dependent on time. Also, when we write $E-U(x)$ isn't this simply $K$, the kinetic energy? Why do we bother then writing explicitly $E-U$? Because writing $K$ instead wouldn't make sense: if we simply write $K$ as a constant in the equation you give, that would suggest that the kinetic energy takes on a definite value, and this is not true for the wavefunction given.
To have a definite value of kinetic energy, the state $\psi$ must be an eigenstate of kinetic energy, $\hat{T}\psi = K\psi$.
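To make the distinction concrete, here is a small numerical sketch (the discretization, the units $\hbar = m = \omega = 1$, and the function names are my own choices): for the harmonic-oscillator ground state $\psi(x) = e^{-x^2/2}$, the combination $\hat{T}\psi + \hat{U}\psi$ equals $E\psi$ with $E = 1/2$ at every $x$, while $\hat{T}\psi$ alone is not proportional to $\psi$.

```javascript
// Finite-difference check (assumed units: hbar = m = omega = 1).
const h = 1e-3;                                   // grid step for d^2/dx^2
const psi  = x => Math.exp(-x * x / 2);           // HO ground state
const Tpsi = x => -0.5 * (psi(x + h) - 2 * psi(x) + psi(x - h)) / (h * h);
const Upsi = x => 0.5 * x * x * psi(x);           // U(x) = x^2 / 2
const Eloc = x => (Tpsi(x) + Upsi(x)) / psi(x);   // "local energy"
// Eloc(x) ≈ 0.5 for every x, but Tpsi(x)/psi(x) = (1 - x^2)/2 varies with x,
// so psi is an eigenstate of T + U but not of T alone.
```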
I want to compare two Feynman diagrams and be able to say which one describes a process that is more likely to happen. As far as I understand, this is done by considering the order of the diagram. In the case of, for example, $e^+ e^- \rightarrow \mu^+ \mu^-$ (tree level) I have the electron-positron pair coupling to the photon - introducing a factor of $\alpha_{EM}$, the coupling constant - and the two muons coupling to the same photon, introducing another factor of $\alpha_{EM}$. At tree level, therefore, $$\Gamma_{tree} (e^+ e^- \rightarrow \mu^+ \mu^-) \propto \alpha_{EM}^2,$$ but if I were to include an electron-positron loop I'd have another $\alpha_{EM}^2$, therefore giving $$\Gamma_{1 loop} (e^+ e^- \rightarrow \mu^+ \mu^-) \propto \alpha_{EM}^4$$ and so on. In this case I understand that the power of $\alpha_{EM}$ is the order of the diagram. ANYWAY, what I really want to quantify is this kind of process: (a) I have 4 vertices of the $W$ bosons... so what, is this $\propto \alpha_{weak}^4$? (b) By the same logic, $\propto \alpha_{EM}\,\alpha_{weak}^2$? Is (a) less probable than (b) then?
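Under the naive power counting in the question, the comparison is simple arithmetic. The numeric coupling values below are my own illustrative assumptions (commonly quoted orders of magnitude), not part of the question:

```javascript
const aEM = 1 / 137; // assumed illustrative value of alpha_EM
const aW  = 1 / 30;  // assumed illustrative value of alpha_weak
const rateA = Math.pow(aW, 4); // diagram (a): four weak vertices
const rateB = aEM * aW * aW;   // diagram (b): one EM and two weak vertices
// rateA / rateB = aW^2 / aEM ≈ 0.15 under these assumed couplings,
// i.e. (a) is suppressed relative to (b).
```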
an object must have infinite mass if it is to be traveling at the speed of light No, that's not true at all. An object's mass is a fixed property that doesn't change, regardless of what speed it travels at. But its energy does change. The energy increases with increasing speed, according to the formula $$E = \frac{mc^2}{\sqrt{1 - v^2/c^2}}$$ Here $m$ is the mass, an inherent property of the object, and $E$ is the energy. What you're probably thinking about is the fact that an object that has a nonzero mass ($m > 0$) can never travel at the speed of light, because it would require an infinite amount of energy for it to do so. But zero-mass particles like the photon (the quantum of light) can travel at the speed of light without requiring infinite energy. (In fact, they can't travel at any slower speed.) Some people use the term "mass" to mean the quantity that I'm calling energy (in different units), and the term "rest mass" when they want to refer to what I call the mass. In that case, one would say that a material object traveling at the speed of light would have infinite mass. I refer you to another answer I've written for more information on the historical context of the terms. But, do light waves have any sort of measurable mass? Or in that same vein, do sound waves? No, but they do have energy. For light waves, the energy is related to the frequency $\omega$ of the wave, $$E = n\hbar\omega$$ ($n$ is the number of photons), and for sound waves, the energy is related to the amplitude (particle displacement) $\xi$ and the frequency $\omega$, $$E = A\rho \xi^2\omega^2$$ where $A$ is the cross-sectional area of the sound wave. You could calculate an "equivalent mass" as $m_\text{eq} = E/c^2$, which would tell you the amount of mass it would take to have the same energy (at rest) as a given light or sound wave. If it were possible to convert the energy of the wave into mass directly, $m_\text{eq}$ would be the amount of mass you'd get. 
But that's the only sense in which a wave has mass.
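A quick numerical sketch of the formulas above (function names are mine): the energy of a massive object grows without bound as $v \to c$, and the "equivalent mass" of a wave is just $E/c^2$.

```javascript
const c = 299792458; // speed of light, m/s
// Total energy of an object of mass m at speed v: E = m c^2 / sqrt(1 - v^2/c^2).
function energy(m, v) {
  return m * c * c / Math.sqrt(1 - (v * v) / (c * c));
}
// "Equivalent mass" of a wave carrying energy E.
const mEq = E => E / (c * c);
// energy(m, 0) is the rest energy m*c^2; at v = 0.999c the energy is already
// about 22.4 times the rest energy, and it diverges as v → c.
```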
In the context of a paper dealing with Statistical Quality Control, I am defining and using the concept of the mean shift divided by the standard deviation of a (normally distributed one-dimensional) random variable, which is $\delta\; =\; \frac{\left|\;\mu_0-\mu\;\right|}{\sigma_0} \text{,}$ where $\mu_0$ and $\sigma_0$ are the desired or initially assumed values for the population mean and standard deviation, respectively, and $\mu$ represents the real value for the population mean. I have named $\delta$ the standardized deviation of the random variable, but this name has been rejected by the paper reviewer, as it is too close to the term "standard deviation" but represents a rather different concept. So my question is: how do you name the concept of the mean shift divided by the standard deviation of a one-dimensional random variable? Is there a standard name for it? If not, what would you call it? EDIT: The parameter $\delta$ can be described as the deviation of the mean in sigma units.
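Whatever name ends up being used, the quantity itself is a one-liner; a small sketch (names are mine) just to fix ideas:

```javascript
// delta = |mu0 - mu| / sigma0: the mean shift expressed in sigma units.
const shift = (mu0, sigma0, mu) => Math.abs(mu0 - mu) / sigma0;
// e.g. a target mean of 10, target standard deviation of 2, and a true
// mean of 11.5 give a shift of 0.75 sigma.
```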
I'm wondering how one would go about calculating the static structure function with the ground state being $|\phi_0\rangle$: $S_\vec{q}=\frac{1}{N}\langle \phi_0|\hat{n}_{\vec{q}}\hat{n}_{-\vec{q}} | \phi_0\rangle$ for the case $\vec{q}\neq 0$, knowing the operator $\hat{n}_\vec{q}$ is defined as follows: $\hat{n}_\vec{q}=\sum_{\vec{k},\sigma} a^\dagger_{\vec{k}\sigma}a_{\vec{k}+\vec{q}\sigma}$ $\Rightarrow \hat{n}_\vec{q}\hat{n}_{-\vec{q}}=\sum_{\vec{k},\sigma} a^\dagger_{\vec{k}\sigma}a_{\vec{k}+\vec{q}\sigma}\sum_{\vec{k}',\sigma'} a^\dagger_{\vec{k}'\sigma'}a_{\vec{k}'-\vec{q}\sigma'}\\ =\sum_{\vec{k},\sigma}\sum_{\vec{k}',\sigma'} a^\dagger_{\vec{k}\sigma}a_{\vec{k}+\vec{q}\sigma} a^\dagger_{\vec{k}'\sigma'}a_{\vec{k}'-\vec{q}\sigma'}$ All that I understand is that this chain of operators will remove a particle with momentum $\vec{k}$, such that $|\vec{k}|\leq k_F$, where $k_F$ is the Fermi momentum, create (and later destroy) a particle with momentum $(\vec{k}-\vec{q})$, which lies outside the Fermi sphere due to the Pauli exclusion principle, and then put that particle back to its original position in the momentum sphere with radius $k_F$. For this reason, summing over the spins would just yield a factor of 2, since $\sigma\neq\sigma'$ is not allowed. And taking the continuum limit in the sum should allow one to solve the problem. However, I don't have the faintest clue as to how to solve for $S_\vec{q}$... EDIT: The $\vec{q}=0$ case just gives 1; we were asked to solve this explicitly for $\vec{q}\neq0$, but we did not cover Wick's theorem, which, unfortunately, would have helped a lot...
The likelihood of an observation $x$ under a gamma distribution is $$L(x | \alpha, \beta) \propto \beta^\alpha x^{\alpha-1} \frac {\exp(-x\beta)} {\Gamma(\alpha)}$$ Suppose I have some observations $\boldsymbol{d}$ from a gamma distribution with unknown shape and reciprocal scale parameters $\alpha$ and $\beta$. I wish to evaluate $ \int_{\forall \alpha} \int_{\forall \beta} L(x | \alpha, \beta) \times L(\alpha,\beta | \boldsymbol{d}) \ d\alpha \ d\beta $ in order to find the likelihood of a new observation $x$ from this distribution. Is this integral tractable if I use Miller's prior (http://www.leg.ufpr.br/lib/exe/fetch.php/projetos:mci:tabelasprioris.pdf) for the distribution of the parameters of the gamma distribution, e.g.: $$L(\alpha,\beta | \boldsymbol{d}) \propto \frac {p^{\alpha - 1} \exp(-\beta s)} {(\Gamma(\alpha)\beta^{-\alpha})^n} $$ where $n$ is the number of observations, $s$ is their sum and $p$ is their product? In other words, is it possible to exactly calculate $$\int_0^\infty \int_0^\infty \beta^\alpha x^{\alpha-1} \frac {\exp(-x\beta)} {\Gamma(\alpha)} \frac {p^{\alpha - 1} \exp(-\beta s)} {(\Gamma(\alpha)\beta^{-\alpha})^n} \ d\alpha \ d\beta$$ If this is not possible analytically, do I have to resort to maximum likelihood estimation, or is there another approach that will be viable? Such as an alternative prior? Or numerical techniques?
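One simplification worth noting (assuming I have collected the powers of $\beta$ correctly): for fixed $\alpha$, the inner integral over $\beta$ is a standard gamma integral, $\int_0^\infty \beta^{\alpha(n+1)} e^{-\beta(s+x)}\,d\beta = \Gamma(\alpha(n+1)+1)/(s+x)^{\alpha(n+1)+1}$, which collapses the double integral into a one-dimensional integral over $\alpha$ that can be handled numerically. A quick numerical sanity check of that identity for integer exponents (helper name is mine, crude Riemann sum):

```javascript
// Numerical check of  ∫_0^∞ β^A e^(-Bβ) dβ = Γ(A+1) / B^(A+1),
// where Γ(A+1) = A! for integer A.
function inner(A, B, upper = 60, step = 1e-3) {
  let sum = 0;
  for (let b = step; b < upper; b += step) {
    sum += Math.pow(b, A) * Math.exp(-B * b) * step;
  }
  return sum;
}
// inner(2, 1) ≈ Γ(3)/1 = 2;  inner(3, 2) ≈ Γ(4)/2^4 = 6/16 = 0.375.
```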
In a previous post, I presented a case for choosing a non-central version of Fisher’s exact test for most of bioinformatics’ uses of this test. I will now present an implementation of this test in javascript that could easily be embedded in web interfaces. Although javascript is probably the least likely language to implement statistical methods, I hope this article will fill in as many details as possible to make it trivial to port it to other languages if the need arises. At the very least, it should demystify the inner working of this extremely useful test. I’ll limit myself to the context of $2\times 2$ contingency tables. Finally, I must admit that I was pleasantly surprised by the performance of this code! These first functions compute various expressions needed by the non-central hypergeometric distribution. More specifically, they use log-probabilities to avoid manipulating small numbers that would risk producing an underflow.

function lngamm (z) {
  // Reference: "Lanczos, C. 'A precision approximation
  // of the gamma function', J. SIAM Numer. Anal., B, 1, 86-96, 1964."
  // Translation of Alan Miller's FORTRAN-implementation
  // See http://lib.stat.cmu.edu/apstat/245
  var x = 0;
  x += 0.1659470187408462e-06 / (z + 7);
  x += 0.9934937113930748e-05 / (z + 6);
  x -= 0.1385710331296526 / (z + 5);
  x += 12.50734324009056 / (z + 4);
  x -= 176.6150291498386 / (z + 3);
  x += 771.3234287757674 / (z + 2);
  x -= 1259.139216722289 / (z + 1);
  x += 676.5203681218835 / (z);
  x += 0.9999999999995183;
  return (Math.log (x) - 5.58106146679532777 - z + (z - 0.5) * Math.log (z + 6.5));
}

function lnfact (n) {
  if (n <= 1) return (0);
  return (lngamm (n + 1));
}

function lnbico (n, k) {
  return (lnfact (n) - lnfact (k) - lnfact (n - k));
}

The “lnbico” (ln of the binomial coefficient) function returns the log of the number of combinations for choosing $k$ in $n$: \[\begin{eqnarray*} \binom{n}{k} & = & \frac{n!}{k!(n-k)!}\\ \mbox{lnfact}(n) & = & \ln(n!)\\ \mbox{lnbico}(n,k) & = & \ln\binom{n}{k}\\ & = & \mbox{lnfact}(n) - (\mbox{lnfact}(k) + \mbox{lnfact}(n - k)) \end{eqnarray*}\] The probability distribution of the non-central hypergeometric is as follows (with $\omega$ being the odds ratio threshold): \[ \frac{\binom{m_1}{x} \binom{m_2}{n-x} \omega^x}{ \sum_{y=x_{\min}}^{x_{\max}} \binom{m_1}{y} \binom{m_2}{n - y} \omega^y }\] Where, \[\begin{eqnarray*} x_{\min} & = & \max(0, n - m_2)\\ x_{\max} & = & \min(n, m_1) \end{eqnarray*}\] Notation used in the code for contingency matrix and marginals: \[\begin{array}{|cc||c}\hline \bf n_{11} & \bf n_{12} & n \\ \bf n_{21} & \bf n_{22} & \\\hline\hline m_1 & m_2 & \\ \end{array}\] For computing the test’s p-value, counts must be rearranged and I’ve used the following notation: \[\begin{array}{|cc||c}\hline x & ? & n \\ ? & ? & \\\hline\hline m_1 & m_2 & \\ \end{array}\] Here, deciding on an $x$ allows one to fill in the table using the marginal sums. The p-value of Fisher’s exact test is then computed by adding the probabilities for scenarios where $x$ is greater than or equal to the observed $n_{11}$.
This is stored as “den_sum” and computed by the last “for” loop of the function. Notice that all sums are done using scaled probabilities (- max_l in log-space); this avoids summing over very small numbers and risking an underflow. Since the final p-value is the result of a ratio, the scaling factor cancels out in Math.exp (den_sum - sum_l). Finally, “w” is used to pass the required odds ratio threshold ($\omega$).

function exact_nc (n11, n12, n21, n22, w) {
  var x = n11;
  var m1 = n11 + n21;
  var m2 = n12 + n22;
  var n = n11 + n12;
  var x_min = Math.max (0, n - m2);
  var x_max = Math.min (n, m1);
  var l = [];
  for (var y = x_min; y <= x_max; y++) {
    l[y - x_min] = (lnbico (m1, y) + lnbico (m2, n - y) + y * Math.log (w));
  }
  var max_l = Math.max.apply (Math, l);
  var sum_l = l.map (function (x) { return Math.exp (x - max_l); })
               .reduce (function (a, b) { return a + b; }, 0);
  sum_l = Math.log (sum_l);
  var den_sum = 0;
  for (var y = x; y <= x_max; y++) {
    den_sum += Math.exp (l[y - x_min] - max_l);
  }
  den_sum = Math.log (den_sum);
  return Math.exp (den_sum - sum_l);
};

Please be advised that this code has only been very lightly tested. I confirmed that it returns identical results to the R implementation (fisher.test) on a limited number of non-trivial test cases. If you find any issue, I’d be happy to investigate!
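As an additional cross-check, independent of the log-space code above (the helper names here are mine): for small tables and $\omega = 1$ the one-sided upper p-value can be recomputed with exact binomial coefficients, and it should agree with exact_nc called with w = 1.

```javascript
// Exact binomial coefficient for small n (no logs needed at this scale).
function bico(n, k) {
  let r = 1;
  for (let i = 0; i < k; i++) r = r * (n - i) / (i + 1);
  return r;
}
// One-sided (upper) Fisher p-value for a 2x2 table in the central case w = 1.
function fisherUpper(n11, n12, n21, n22) {
  const m1 = n11 + n21, m2 = n12 + n22, n = n11 + n12;
  const xMin = Math.max(0, n - m2), xMax = Math.min(n, m1);
  let tot = 0, tail = 0;
  for (let y = xMin; y <= xMax; y++) {
    const p = bico(m1, y) * bico(m2, n - y); // no omega^y weight when w = 1
    tot += p;
    if (y >= n11) tail += p;
  }
  return tail / tot;
}
// fisherUpper(3, 1, 1, 3) = 17/70 ≈ 0.2429.
```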
A general model (with continuous paths) can be written $$ \frac{dS_t}{S_t} = r_t dt + \sigma_t dW_t^S$$where the short rate $r_t$ and spot volatility $\sigma_t$ are stochastic processes. In the Black-Scholes model both $r$ and $\sigma$ are deterministic functions of time (even constant in the original model). This produces a flat smile for any expiry $T$. And we have the closed form formula for option prices $$ C(t,S;T,K) = BS(S,T-t,K;\Sigma(T,K)) $$where $BS$ is the BS formula and $\Sigma(T,K) = \sqrt{\frac{1}{T-t}\int_t^T \sigma(s)^2 ds}$. This is not consistent with the smile observed on the market. In order to match market prices, one needs to use a different volatility for each expiry and strike. This is the implied volatility surface $(T,K) \mapsto \Sigma(T,K)$. In the local volatility model, rates are deterministic, instantaneous volatility is stochastic but there is only one source of randomness $$ \frac{dS_t}{S_t} = r(t) dt + \sigma_{Dup}(t,S_t) dW_t^S$$this is a special case of the general model with $$ d\sigma_t = \left(\partial_t \sigma_{Dup}(t,S_t) + r(t)S_t\partial_S\sigma_{Dup}(t,S_t) + \frac{1}{2}\sigma_{Dup}(t,S_t)^2S_t^2\partial_S^2\sigma_{Dup}(t,S_t)\right) dt + \sigma_{Dup}(t,S_t)S_t\partial_S\sigma_{Dup}(t,S_t) dW_t^S $$What is appealing with this model is that the function $\sigma_{Dup}$ can be perfectly calibrated to match all market vanilla prices (and quite easily too). The problem is that statistical studies show that, while correlated with the spot, the volatility also has its own source of randomness independent of that of the spot. Mathematically, this means the instantaneous correlation between the spot and vol is not 1, contrary to what happens in the local volatility model. This can be seen in several ways: The forward smile.
Forward implied volatility is implied from prices of forward start options: ignoring interest rates, $$ C(t,S;T\to T+\theta,K) := E^Q[(\frac{S_{T+\theta}}{S_{T}}-K)_+] =: C_{BS}(S=1,\theta,K;\Sigma(t,S;T\to T+\theta,K))$$Alternatively, it is sometimes defined as the expectation of implied volatility at a forward date. In an LV model, as the maturity $T$ increases but $\theta$ is kept constant, the forward smile gets flatter and higher. This is not what we observe in the markets, where the forward smile tends to be similar to the current smile. This is because the initial smile you calibrate the model to has decreasing skew: $$\partial_K \Sigma(0,S;T,K) \xrightarrow[T\to +\infty]{} 0$$ Smile rolling. In an LV model, the smile tends to move in the opposite direction of the spot and get higher independently of the direction of the spot. This is not consistent with what is observed on markets. See Hagan et al., Managing Smile Risk, for the derivation. This means that $\partial_S \Sigma_{LV}(t,S;T,K)$ often has the wrong sign, so your Delta will be wrong, which can lead to a higher hedging error than using BS. Barrier options. In FX markets, barrier options like Double No Touch are liquid, but an LV model calibrated to vanilla prices does not reproduce these prices. This is a consequence of the previous point. The LV model is a static model. Its whole dynamic comes from the volatility surface at time 0. But the vol surface has a dynamic that is richer than that. There are alternatives using multiple factors like SV models, LSV models (parametric local vol like SABR or fully non parametric local vol), models of the joint dynamic of the spot and vol surface etc... but the LV model remains the default model in many cases due to its simplicity, its ability to calibrate the initial smile perfectly and its numerical efficiency.
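For the deterministic-volatility case mentioned at the start of this answer, the formula $\Sigma(T,K) = \sqrt{\frac{1}{T-t}\int_t^T \sigma(s)^2\,ds}$ is straightforward to evaluate numerically; a minimal sketch (the function name and the midpoint rule are my own choices):

```javascript
// Sigma(T) = sqrt( (1/(T-t)) * ∫_t^T sigma(s)^2 ds ), via the midpoint rule.
function impliedVol(sigma, t, T, steps = 10000) {
  const dt = (T - t) / steps;
  let acc = 0;
  for (let i = 0; i < steps; i++) {
    const s = t + (i + 0.5) * dt;   // midpoint of the i-th subinterval
    const v = sigma(s);
    acc += v * v * dt;
  }
  return Math.sqrt(acc / (T - t));
}
// A constant sigma gives back that constant (the flat-smile case);
// sigma(s) = s over [0,1] gives sqrt(1/3) ≈ 0.577.
```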
The story is about a compact oriented $4$-manifold $X$. In such a case, the degree two cohomology $H^2(X,\mathbb{R})$ has a natural non-degenerate symmetric bilinear pairing, given by Poincaré duality. In terms of differential forms, it is simply $\alpha.\beta= \int_X \alpha \wedge \beta$ and in terms of homology, it is simply the intersection of 2-cycles inside $X$ (in a 4-manifold, two 2-cycles generically intersect in a bunch of points, and the number of these intersection points is the value of the pairing between these two cycles). A real non-degenerate symmetric pairing is completely determined by its signature. In the case of $H^2(X,\mathbb{R})$, this signature is denoted by $(b_+,b_-)$, where $b_+$ is the number of positive signs and $b_-$ is the number of negative signs. Of course, $b_+ + b_- = b_2(X)$, the dimension of the total space $H^2(X,\mathbb{R})$. So the assumption $b_+=1$ means that the signature is $(+,-,...,-)$ with $r=b_2(X)-1$ minus signs. In other words, the space $H^2(X,\mathbb{R})$ equipped with the pairing looks exactly like the Minkowski space of special relativity. In particular, one has a "light cone", where $\alpha.\alpha=0$; the interior of the cone is exactly $H^2(X,\mathbb{R})^+$, with two components corresponding to the "past and future cones". Dividing by $\mathbb{R}^+$ is modding out by "time translation". For each of the components, if one has in mind the image of a cone in dimension three (the case $r=2$), this is equivalent to intersecting the interior of the cone with a horizontal plane, and one obtains a disk. In higher dimensions, one obtains a ball of dimension $r$, which is exactly $\mathbb{H}_X$. One standard construction of the hyperbolic space of dimension $r$ is as one component of the hyperboloid $\alpha.\alpha=1$ in a Minkowski space of dimension $r+1$, which is exactly our setting. One obtains an identification of the hyperbolic space with $\mathbb{H}_X$ by projection of the hyperboloid on a section of the cone.
This is the standard way to go from the hyperboloid model to the disk model of hyperbolic space. In particular, say for $r=2$, $\mathbb{H}_X$ is really a disk and one should think of this disk as the disk model of the hyperbolic plane. One can think of $\mathbb{H}_X$ as a parameter space for metrics: each Riemannian metric on $X$ determines a point in $\mathbb{H}_X$, via the decomposition of 2-forms into self dual and anti self dual parts with respect to the metric. More precisely, the point in $\mathbb{H}_X$ is the unique self dual class with respect to the metric. For a Kähler metric, this is indeed the Kähler class. The walls are walls for Donaldson or, equivalently, Seiberg-Witten invariants. Donaldson theory is about the count of instantons on $X$, i.e. the count of anti self dual connections on $X$ with respect to some gauge group $G$. The notion of instanton depends on the metric of $X$, but as the "count" is given by some index of some elliptic operator, one expects these numbers to be constant under continuous deformation of the metric. This is always true if $b_+>1$ and almost true if $b_+=1$. The problem comes from the existence of abelian instantons, i.e. configurations of $U(1)$ gauge fields with anti self dual curvature. Such $U(1)$ gauge fields live on some line bundle $L$. For a metric defining a point $\omega$ in $\mathbb{H}_X$, the self-dual part of the curvature $F$ of the $U(1)$ connection is $\omega.F$. So we have an anti self dual connection exactly when $\omega.F=0$, i.e. when $\omega.c_1(L)=0$, where $c_1(L)$ is the first Chern class of $L$ (which is the class of $F$ up to an easy constant). So the walls in $\mathbb{H}_X$, i.e. the $\omega$ in $\mathbb{H}_X$ such that for the corresponding metric there exist some abelian instantons, are exactly the $\omega$ such that there exists a line bundle $L$ with $\omega.c_1(L)=0$. Remark that $c_1(L) \in H^2(X,\mathbb{Z})$ ("flux quantization").
In particular, there are at most countably many walls dividing $\mathbb{H}_X$ into chambers. In each of these chambers, the Donaldson invariants are constant, but when one crosses a wall, a $G$-instanton can absorb or emit an abelian instanton, and so the number of $G$-instantons jumps; hence the non-trivial wall-crossing. There are many places in mathematics and physics where there are "walls", "wall crossing", "stability", and the precise meaning of the terms depends on the precise story. If the last part of the question is about the relation between the wall crossing of Donaldson invariants and other kinds of wall-crossing, then one should be more precise about the latter.
Session 61 - New Views of the Magellanic Clouds. Display session, Wednesday, June 12, Great Hall. Three hydrogen recombination lines, H90$\alpha$, H92$\alpha$, and H109$\alpha$, have been observed with the Australia Telescope Compact Array toward the 30 Doradus Nebula, the giant HII region in the Large Magellanic Cloud. Emphasis is placed on the more sensitive H90$\alpha$ observations, which also include the He90$\alpha$ and H113$\beta$ lines. The distributions of radial velocity and electron temperature have been mapped across the source. The heliocentric velocity of the ionized gas varies between 260 km/s and 300 km/s with a mean value of 280 km/s. In some places the line is double, suggesting that pieces of one or more expanding shells are being seen. The mean (LTE) electron temperature is 15000 K with a range of $\pm$3500 K. The $\beta/\alpha$ ratio across most of the source is close to the expected LTE value of 0.28, but in some small regions has been determined to be as low as $0.17\pm0.04$. Where the helium abundance, $Y^+ = [n(\mathrm{He})]/[n(\mathrm{H})]$, can be determined, it varies from $0.07\pm0.01$ to $0.14\pm0.02$, compared to a typical solar value of 0.1.
I've been coming across the condition "IMFCC: having infinitely many finite conjugacy classes" often in recent times and I was wondering if there is any serious difference between having "IMFCC" and having an infinite centre. More precisely: $\mathbf{Question:}$ Is there a finitely generated group $\Gamma$ which has IMFCC and is not quasi-isometric to a group with infinite centre? In case the above might be too hard, here is a possible variant: recall that a subgroup $M<G$ is almost-malnormal if $\forall g \in G \setminus M,$ the subgroup $gMg^{-1} \cap M$ is finite. $\mathbf{Question~(variant):}$ Let $\Gamma$ be a finitely generated group which has "IMFCC". Is there a subgroup $H < \Gamma$ so that $H$ has infinite centre and, if $K$ is another subgroup such that $H < K < \Gamma$, then $K$ is not almost-malnormal? I do not claim there is any link between quasi-isometry and almost-malnormal subgroups; it's just a geometric and an algebraic condition which seem to express the fact that groups are similar.
Entry structure Meta Order of keywords 'Entry type' 'Formal definition' 'Comment on formalities' Elaboration Alternative definitions Universal property Discussion Theorems Examples Reference Context Requirements Subset of The 'Comment on formalities' is a plain text section that adds everything needed to keep the definition above fully formal. The “Elaboration” puts the definition into words. The “Discussion” is more informal, as it tries to capture the motivation for the definition itself and explains the moral behind it. Entry types Each entry starts with a title ( Entry structure here) followed by the type of article ( Meta here). At the moment there are the entry types Meta, Note, Framework, Collection, Type, Haskell, Haskell class Set, Function, Category, Functor, Natural transformation, Tuple The entries end in lists of edges to other entries. The formal content of the edge tags depends on the entry type (e.g. for two connected Set entries, the tag “Subset” means the related entry is defined as a subset, but I might overload that word for other types too, if the meaning is sufficiently close). Possible lines in a formula block See Template entry Remarks: We don't find 'rule' in Set. Meta-logical language The root of the graph contains discussions on all sorts of notation used, but here is a short overview of the primitive symbols - the ones which aren't introduced in terms of previously defined ones. (E.g. the symbol for function definition $:=$ is primitive, while the symbol for function concatenation $\circ$ is defined in the context of set- or category theory.) I use letters and words from the common alphabets ($a,b$ etc. and $\alpha,\beta$ etc.) and indices. For formal expressions, I use connectives and quantifiers from first order logic ($\land, \lor, \forall, \exists, \neg, (, ), =$) and the notation necessary for type judgements and definitions ($\frac{P}{Q}, \prod, \sum$).
The symbol $\dots$ is used either for a continuation (as in $a,b,\dots$) or to denote “is a”, as in ${\bf C}\dots\mathrm{Category}$, while the word 'in' denotes informal membership. The symbol $\in$ is introduced axiomatically. New symbols beyond this language are introduced throughout the wiki (e.g. $\circ$). The symbol $\equiv$ is used in predicate definitions. The symbol $:=$ is used for function definitions, usually in connection with argument brackets and arrows $\to$ and $\mapsto$. Entry types Every entry has one of the following key words associated with it. They can be considered part of the meta logic of the wiki. Meta: This is the type of some entries at the root of the graph (like this one), and their content is completely informal. Note: Literally just notes I take. Framework: Notes on mathematical frameworks close to foundations. Some of those are more or less formally specified within another framework, but some could also either be written down quite differently, or even taken as a starting point. Set: This is the most common type of entry. It denotes that the mathematical object defined in the entry is a set in all the common set theories, see Set theory. Function: The type Function denotes a function in the set theory, together with domain and codomain. Type: Entries with heading Type denote types as introduced in Type theory. Category: This one denotes categories as given in the typed definition in Category theory. Note that some categories will be small enough to model them as a set. (old, probably not to be used at all: Object: The type Object denotes an object of some category. These will also be sets sometimes.) Collection: This is used to enable us to speak about large mathematical objects without getting in trouble with paradoxes and without formally introducing a new type within our type theory.
Our law here is the following: an entity $X$ defined by an entry of type Collection can nowhere in the wiki be used on the right hand side of $\in$. Assigning an entry the type Collection is really just a safety mechanism and doesn't necessitate that it wouldn't be possible to give it one of the other types. todo: call those “sort” to be less confusing. Graph structure The graph contains vertices $A,B,C,\dots$ with a type (Set, Category, Theorem,…) and arrows $A\mapsto_f B, A\mapsto_h B, B\mapsto_g C, \dots $ with a type (Subset of, Context, Related,…). Each vertex $B$ corresponds to an entry in the wiki and the text contains in a distinguished section Its own name Its vertex type A list of arrow types, each followed by the names of source vertices $A_1, B_2, \dots$ The latter list gives us collections of typed arrows $A_1\mapsto B, A_2\mapsto B, \dots$. Entry structure Most entries contain definitions of mathematical objects in terms of axioms (although a few only state theorems). Specifically, they generally specify collections of objects $x$ by predicates $P(x)$ in the sense of $X_B=\{x|P_B(x,X_B)\}$ (although a few are proper classes, as well as categories or objects). Now the description of how the entries are structured: each entry $B$ starts with its name, followed by its type. Then a required context is declared (blue), which is a list of mathematical objects needed for the definition that follows. E.g. “let $G\in\mathrm{Group}$ be a group, let $n\in\mathbb N$ be a natural number and let $\log$ denote the logarithm”. Note that formalizing the context lets one “level up” certain objects. E.g. if $a\in A$ is in the context of the definition of a function $f:B\to C$, then one is led to consider whether the function $\bar f:A\times B\to C$ makes sense as well, etc.
Then the name of the defined object is stated (orange). Then the axioms are given (green, possibly preceded by temporal context declarations in blue). Then comes the informal Discussion section, and then the Parents section, which is a list of types, each containing names of other entries, see next section. Arrow types The types of arrows $A\mapsto B$ carry mathematical information about the entries. Here are the details: Subset of For this arrow, both source and target must represent a set, or more generally a category. The vertex $B$ can be obtained from $A$ by keeping $A$'s context and adding axioms $P_\mathrm{new}$ to $P_A$. For sets we find $X_B=\{x|P_A(x,X_A)\land P_\mathrm{new}(x)\}=\{x|x\in X_A\land P_\mathrm{new}(x)\}\subset X_A$. Note: As discussed in the Discussion section of this entry below, formally the above specification is executed only up to isomorphism. Of course, this implies that all (unstructured) sets contain all sets of the same or smaller cardinality, but I'm not attempting to point out all subset relations anyway, only those with purpose for doing mathematics. Moreover, some entries are structured sets $X:=\langle A,B,C,...\rangle$ (e.g. in set theory, a group $\langle G,*\rangle$ formally is a set $G$ and a map $*:G\times G\to G$), and so as not to have to introduce “Subgroup of” and “Subfield of” etc., we extend “Subset of”-edges for lists to mean that the first entries (like $\pi_1(X)=A$) have the subset relation and the structure carries over simply by restriction. Subset of for Collection: If $a$ is of type $A$, and we form the collection of $A$'s, then the pairs $\langle a,b\rangle$, $\langle a,\langle b,c\rangle\rangle$, etc. are also in it. This formalizes the notion of an object with structure. Element of Here $X_B$ is an element of $X_A$. Context Here we find the names of the entries which are used in the context. Note: As also discussed in the Discussion section of this entry below, for readability I don't necessarily always specify the full context.
Theorem entries can't be requirements for sets or other entries. This is because the specification of sets doesn't depend on the truth of other sentences; only their validity within the particular theory at hand does. Restriction of Function restriction. In the set-theoretic model of functions as sets of pairs, this is a special case of Subset of. Refinement of 150121 - this should include partially evaluated functions. E.g. if $f(x,y):=x^2+y^3$, then $g(y):=9+y^3$ is a refinement of it. Equivalent to There is a “bijection of information” to be pointed out: both entries define objects which can be directly read off from one another. (This should be the same as Refinement of in both directions.) Related This is the informal arrow, connecting to entries which are conceptually related. 'some sort of functional relation arrow' “Specification of” means elements of the context are fixed. “Structure of” $A\leftarrow \langle A,\dots\rangle$, as in $\langle\mathbb{R},+_\mathbb{R},\cdot_\mathbb{R}\rangle$ is a structure of $\mathbb{R}$. “Language” or “Vocabulary” would be a good idea (e.g. for things like $\log$, see the discussion above), in addition to “Context”, which I now treat more specifically. Entry language todo: rewrite/update this 1. The Definition section All entries start with a Definition section in which the object the entry represents is defined. The information of what and how that thing is defined is highlighted in green, with necessary complementary information in blue. Similarly, I will use yellow and orange when making predicate definitions in the Ramification section. Color scheme: context: context; definiendum: set definition; inclusion: subset; postulate: formula; range: fixated term; universal quantification: predicate; predicate: predicate definition. Below the boxes, there might also be explanatory text.
In the informal discussion I use ^ for exponentiation, as in $a^2+b^2=c^2$. What the use of the boxes does is to export information about the objects (sets) to the separate blue boxes. This way I need more lines, but the definitional expressions are clearer and free of potential ballast. Besides the general declaration of types, what is nice is that it decomposes expressions of the form $\forall x\ (P(x)\implies Q(x))$ and $\exists x\ (P(x)\land Q(x))$ into two parts, respectively. Such restricted quantification is also discussed in the entry set theory. At the same time, I use the boxes to decompose the set-builder notation $X:=\{f(x)|P(x)\}$. In the entry set theory, there is also a section dedicated to the set-builder notation. Why do I do this? I want to talk about mathematical objects and their special cases. When I started to write down the relation between differential equations, like for example “$y'(x)=a(x)y(x)+b(x)$” and “$y'(x)=\sin(x)y(x)$,” I realized that I would have to quantify over the function spaces for $a$ and $b$, and it would be madness to do this in one line. If I wanted to restrict the domain of interest, I would actually even have to write the initial equation as “$\forall x\in \mathbb R\ (y'(x)=a(x)y(x)+b(x))$.” Or, even a little more formally, “$\forall x\ [x\in \mathbb R\implies y'(x)=a(x)y(x)+b(x)]$.” So what I do is to specify the domain of all our variables! Notice, however, that a formula I write down with free variables still implies its universal closure: we might drop the universal quantifiers before everyday equations and state them with free variables, e.g. we write “$(n+1)^2=n^2+2n+1$”, and not “$\forall n\ ((n+1)^2=n^2+2n+1)$”. Now follow some examples in full formality. Firstly, a motivating example. Axiom of choice: context $X,\ \emptyset \notin X$ context $x\in X$ context $f:X \to \bigcup X$ postulate $ \forall X\ \exists f\ \forall x\ ( f(x) \in x )$ What it means in words: We consider any set $X$ not containing the empty set, whose elements are smaller sets $x$.
Now, by the axiom, we always have a function $f$ for $X$ which can reach into the small sets, i.e. for all $x$, the value $f(x)$ is a chosen element of that $x$. In standard notation without the external “blue box quantification”, this reads on Wikipedia “$\forall X\ \left[ \emptyset \notin X \implies \exists f: X \to \bigcup X : \forall A \in X \, ( f(A) \in A ) \right],$” which itself uses some abbreviations to express “$\forall X\ \left[ \emptyset \notin X \implies \exists f\ [(f: X\to\bigcup X)\land \forall A\ (A\in X \implies ( f(A) \in A ))] \right].$” Now I give a second, more abstract example: context $a$ context $b, P(b)$ context $c, Q(c)$ context $d, R(d)$ postulate $\forall b\ \exists c\ ((S(a)=d)\land T(b,c,d))$ says “$\forall b\ [P(b)\implies \exists c\ (Q(c)\land (S(a)=d)\land T(b,c,d))]$, where $a$ is any set and $d$ is any set for which $R(d)$.” We usually want to use this notation to define sets. Since I first provide input for the definition and also state the name of the thing I define, I switch between blue and green two times: Definition context $B, B\neq \emptyset$ postulate $f(x)\in Z_B$ context $a, P(a)$ context $b\in B$ postulate $\forall a\ \exists b\ R(a,x,b)$ says “Let $B$ be any set except $\emptyset$. Then $Z_B=\{f(x)|\forall a\ (P(a)\implies \exists b\ (b\in B\land R(a,x,b)))\}$.” We might also extend the definition section by introducing common alternative ways of writing mathematics. Use of headline levels ===== Entry title e.g. Group ==== Main Sections e.g.
Discussion === SubSections Coherence - explanation of why the formula makes sense Terminology - comment on the name Notation - alternative notations in use Theorems - theorems which don't deserve their own entry Predicates - … Alternative definitions - … Ad hoc introduction - motivation of how this entry can be understood, as in “axiom of choice” Derivation - explanation of the derivation from the entries in the context section todo: notation rules, and maybe links to the Glossary note: separate the axioms part from the set definitions part. The biggest set is Set, which can be viewed as the class of sets for some axiomatic framework. Functions (the sets), defined in Function, are the internal homs of Set - and/but they are generally defined not via $\equiv$ but via $:=$, which implies that these sets know their corresponding arrow in Set, and hence their dom and codom can be obtained from such an $f$ via $\mathrm{dom}(f)$ resp. $\mathrm{codom}(f)$. == Headlines e.g. names of formulas remark: if there is reason to put an external link into the main body, it must also be in the References. No requirement links to Meta entries (otherwise stuff like the unordered pair gets connected to set theory, which makes the graph - at least in graphviz mode - not nice to look at). Keywords in 'An apple pie from scratch' Elaboration - explaining the definition Motivation - motivation for the idea Idea - associated key insight
In an earlier post, I discussed the basic and most important aspects of set theory, functions and the real number system. In the same post, there was a significant discussion of the union and intersection of sets. Restating the facts: given a collection $ \mathcal{A}$ of sets, the union of the elements of $ \mathcal{A}$ is defined by $ \displaystyle{\bigcup_{A \in… Sequence of real numbers A sequence of real numbers (or a real sequence) is defined as a function $ f: \mathbb{N} \to \mathbb{R}$, where $ \mathbb{N}$ is the set of natural numbers and $ \mathbb{R}$ is the set of real numbers. Thus, $ f(n)=r_n, \ n \in \mathbb{N}, \ r_n \in \mathbb{R}$ is a function which produces a sequence… Sets In mathematics, a set is a well-defined collection of distinct objects. The theory of sets as a mathematical discipline arose with Georg Cantor, a German mathematician, while he was working on some problems in trigonometric series and series of real numbers, after he recognized the importance of certain distinct collections and intervals. Cantor defined a set as a ‘plurality… Here is an interesting mathematical puzzle-like problem involving the use of Egyptian fractions, whose solution makes substantial use of basic algebra. Problem Let $a$, $b$, $c$, $d$ and $e$ be five non-zero complex numbers such that: $ a + b + c + d + e = -1$ … (i) $ a^2+b^2+c^2+d^2+e^2=15$ … (ii) $ \dfrac{1}{a} + \dfrac{1}{b} +\dfrac{1}{c} +\dfrac{1}{d} +\dfrac{1}{e}= -1$… The greatest number theorist in the mathematical universe, Leonhard Euler, discovered some formulas and relations in number theory which were based on practice and correct only to a limited extent, but which still stun mathematicians.
The prime-generating polynomial of Euler is a very specific quadratic polynomial which yields more primes than any other relation out there in… “Irrational numbers are those real numbers which are not rational numbers!” Def. 1: Rational Number A rational number is a real number which can be expressed in the form $\frac{a}{b}$, where $ a$ and $ b$ are both integers relatively prime to each other and $ b$ is non-zero. The following two statements are equivalent to Definition 1. 1. $ x=\frac{a}{b}$… Ramanujan (1887-1920) discovered some formulas on algebraic nested radicals. This article is based on one of those formulas. The main aim of this article is to discuss and derive them intuitively. Nested radicals have many applications in number theory as well as in numerical methods. The simple binomial theorem of degree 2 can be written as: $ {(x+a)}^2=x^2+2xa+a^2 \… This is a brief list of free e-books on Algebra, Topology and Related Mathematics. I hope it will be very helpful to all students and teachers searching for high-quality content. If any link is broken, please email me at gaurav(at)gauravtiwari.org. Abstract Algebra OnLine by Prof. Beachy This site contains many of the definitions and theorems from the area of…
I don't really follow major breakthroughs, but my favorite paper this year was Raphael Zentner's Integer homology 3-spheres admit irreducible representations in $SL_2(\Bbb C)$. It has been known for quite some time, and is a corollary of the geometrization theorem, that much of the geometry and topology of a 3-manifold is hidden inside its fundamental group. In fact, as long as a (closed oriented) 3-manifold cannot be written as a connected sum of two other 3-manifolds, and is not a lens space $L(p,q)$, the fundamental group determines the 3-manifold entirely. (The first condition is not very serious - there is a canonical and computable decomposition of any 3-manifold into a connected sum of components that cannot be reduced by connected sum any further.) A very special case of this is the Poincaré conjecture, which says that a closed simply connected 3-manifold is homeomorphic to $S^3$. It became natural to ask how much you can recover not from the fundamental group itself, but from its representation varieties $\text{Hom}(\pi_1(M), G)/\sim$, where $\sim$ identifies conjugate representations. This was particularly studied for $G = SU(2)$. Here is a still-open conjecture in this area, a sort of strengthening of the Poincaré conjecture: if $M$ is not $S^3$, there is a nontrivial homomorphism $\pi_1(M) \to SU(2)$. (This is obvious when $H_1(M)$ is nonzero.) Zentner was able to resolve a weaker problem in the positive: every closed oriented 3-manifold $M$ other than $S^3$ has a nontrivial homomorphism $\pi_1(M) \to SL_2(\Bbb C)$. $SU(2)$ is a subgroup of $SL_2(\Bbb C)$, so this is not as strong. He does this in three steps. 1) Every hyperbolic manifold supports a nontrivial map $\pi_1(M) \to SL_2(\Bbb C)$; this is provided by the hyperbolic structure. 2) (This is the main part of the argument.)
If $M$ is the "splice" of two nontrivial knots in $S^3$ (delete small neighborhoods of the two knots and glue the boundary tori together in an appropriate way), then there's a nontrivial homomorphism $\pi_1(M) \to SU(2)$. 3) Every 3-manifold with the homology of $S^3$ has a map of degree 1 to either a hyperbolic manifold, a Seifert manifold (these have long been known to admit homomorphisms to $SL_2(\Bbb C)$), or the splice of two knots, and degree 1 maps are surjective on fundamental groups. The approach to (2) is to write down the representation varieties of each knot complement and understand that the representation variety of the splice corresponds to a sort of intersection of these representation varieties. So he tries to prove that they absolutely must intersect. And now things get cool: there's a relationship between these representation varieties and solutions to a certain PDE on 4-manifolds called the "ASD equation". Zentner proves that if these things don't intersect, you can find a certain perturbation of this equation that has no solutions. But Kronheimer and Mrowka had previously proved that, in the context that arises, that equation must have solutions, and so you derive your contradiction. This lies inside the field of gauge theory, where one tries to understand manifolds by studying certain PDEs on them. There's another gauge-theoretic invariant called instanton homology, which is the homology of a chain complex whose generators (roughly) correspond to representations $\pi_1(M) \to SU(2)$. (The differential counts solutions to a certain PDE, as before.) So there's another question, a strengthening of the one Zentner made partial progress towards: "If $M \neq S^3$, is $I_*(M)$ nonzero?" Who knows.
The issue here is precisely what you mean when you say "regular path $J_t$ of almost complex structures". I am not particularly expert in the theory of holomorphic curves, so I won't comment on that; I expect the situation is precisely analogous to the finite-dimensional setting I will describe below (I'm sure there are more technical issues involving the corner points). Let $M$ and $N$ be smooth manifolds, and $S \subset N$ a closed submanifold. Given a map $f: M \to N$ transverse to $S$, we have that $\mathcal M_f = f^{-1}(S)$ is a smooth manifold (with corners etc. if $S$ and $N$ have them). The condition "$f \pitchfork S$" is the precise analogue of "$J_t$ is regular" in this finite-dimensional setting. Suppose you have a path of functions $f_t$, written as a map $I \times M \to N$. Then I would say that this is a regular path of functions if the map $I \times M \to N$ is transverse to $S$, and the same for the restriction $\{0,1\} \times M \to N$. Then the "parameterized moduli space" $\mathcal M_I$ is a smooth manifold with boundary equal to $\mathcal M_0 \sqcup \mathcal M_1$. This space is equipped with a smooth map $\mathcal M_I \to I$ which is a submersion near the boundary. The point is that each individual $f_t$ is not assumed to be regular; spelling it out, it is entirely possible that for some $(t,x)$ we have $f_t(x) \in S$, and the image of $df_t$ in the normal space $N_{f_t(x)} S$ is not the entirety of the normal space, but rather of codimension 1, so that together with the additional vector $\frac{d}{dt} f_t(x) \in N_{f_t(x)} S$ it spans the whole of the normal space. The second point is that an individual $\mathcal M_t$ is regular if and only if $t \in I$ is a regular value of the projection $\mathcal M_I \to I$. So if, in addition, $f_t$ is regular for each $t$, then by definition the projection $\mathcal M_I \to I$ is a submersion; as you know well, a proper smooth submersion is a fiber bundle, and this provides a global diffeomorphism $\mathcal M_I \cong I \times \mathcal M_0$.
In almost every interesting application, given any two regular functions / almost complex structures / whatever, it will be essentially impossible to find a path through regular (blahs) between them; but it will be possible to find a "regular path" of (blahs) between them in the sense given above. A simple case in which to think about the discussion above is with $S$ a hypersurface and $M$ a point! If $S$ disconnects the codomain, then to get from one side to the other you will need to cross $S$. In this situation, $\mathcal M_I$ will be a finite set of points which project to the interior of $I$.
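To make that simple case fully explicit, here is a toy model of my own choosing (not from the discussion above): $M$ a point, $N = \mathbb R$, $S = \{0\}$, and a straight-line path of maps crossing the hypersurface once.

```latex
% M = {pt}, N = R, S = {0}; a path of maps is just a path in R:
\[
  f_t(\mathrm{pt}) = t - \tfrac12, \qquad t \in I = [0,1].
\]
% Each individual f_t is transverse to S iff f_t(pt) \neq 0, i.e. t \neq 1/2,
% so this is NOT a path through regular maps. But the path map
% F : I \times \{\mathrm{pt}\} \to \mathbb R, F(t) = t - 1/2, satisfies
% F'(1/2) = 1 \neq 0 at its unique zero, so F is transverse to S and the
% path is regular in the sense above. Hence
\[
  \mathcal M_I = F^{-1}(0) = \{\tfrac12\}, \qquad
  \mathcal M_0 = \mathcal M_1 = \varnothing,
\]
% a single point projecting to the interior of I, recording the one
% crossing of the hypersurface S.
```

The failure of each $f_t$ near $t = \tfrac12$ to be regular is exactly compensated by the $\frac{d}{dt}$ direction, which is the general mechanism described above.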
Here is a variation of a job-scheduling problem. Let $J = \{j_1,\dots,j_n\}$ be a set of jobs. For $1 \leq i \leq n$, job $j_i$ has length $|j_i|\in \mathbb{N}$, deadline $f_i \in \mathbb{N}$, profit $p_i \ge 0$ and starting time $s_i \in \mathbb{N}$. I am looking for a greedy approximation factor, given that the job lengths may only differ by a factor $k$: $$\max_i|j_i| \leq k \cdot \min_i|j_i|$$ The greedy algorithm for this problem is fairly naive: it repeatedly takes a job with the biggest profit. I created an example (3-job scheduling): Let $J = \{j_1,j_2,j_3\}$ with $|j_1| = 2$, $|j_2| = |j_3| = 1$ and $s_1 = 0$, $s_2 = 0$, $s_3 = 1$; $f_1 = 2$, $f_2 = 1$, $f_3 = 2$; $p_1 = w$, $p_2 = p_3 = w-1$. What I want to show is that greedy gives us $w$ while $2(w-1)$ is the optimal solution. My question: Is this valid for $n$-job scheduling (the general case)? Is this the worst case? I can't think of anything worse. So I figured that, since the problem is a $k$-matroid (is this a common term?), there will be an approximation factor $\frac{1}{k-\epsilon}$ for any $\epsilon > 0$. I know this is not exactly a proof yet, but am I on the right track? Thanks for your help!
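The 3-job example can be checked mechanically. Below is a minimal sketch in my own encoding (an assumption, since the post doesn't fix one): each job occupies the fixed interval $[s_i, s_i+|j_i|)$, which in the example always ends exactly at the deadline $f_i$, and greedy-by-profit is compared against a brute-force optimum.

```python
from itertools import combinations

def overlaps(a, b):
    # half-open intervals [start, end)
    return a[0] < b[1] and b[0] < a[1]

def interval(j):
    # assumed encoding: job j runs over the fixed slot [s, s + length)
    return (j["s"], j["s"] + j["len"])

def greedy_profit(jobs):
    """Repeatedly take a job with the biggest profit whose fixed
    interval does not overlap any job already taken."""
    chosen = []
    for job in sorted(jobs, key=lambda j: -j["p"]):
        if all(not overlaps(interval(job), interval(c)) for c in chosen):
            chosen.append(job)
    return sum(j["p"] for j in chosen)

def optimal_profit(jobs):
    """Brute force over all pairwise-compatible subsets (tiny instances only)."""
    best = 0
    for r in range(len(jobs) + 1):
        for sub in combinations(jobs, r):
            if all(not overlaps(interval(a), interval(b))
                   for a, b in combinations(sub, 2)):
                best = max(best, sum(j["p"] for j in sub))
    return best

w = 10
jobs = [
    {"len": 2, "s": 0, "p": w},      # j1: slot [0, 2), deadline f1 = 2
    {"len": 1, "s": 0, "p": w - 1},  # j2: slot [0, 1), deadline f2 = 1
    {"len": 1, "s": 1, "p": w - 1},  # j3: slot [1, 2), deadline f3 = 2
]
print(greedy_profit(jobs), optimal_profit(jobs))  # 10 18
```

Greedy picks $j_1$ (profit $w$), which blocks both unit jobs, while the optimum takes $j_2$ and $j_3$ for $2(w-1)$, matching the claimed gap.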
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02) The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ... Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2013-11) We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ... Beauty production in pp collisions at $\sqrt{s}$ = 2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y| < 0.8 and transverse momentum 1 < pT < 10 GeV/c, in pp ...