Women's Health Survey: One-Sample Hotelling's T-Square

In 1985, the USDA commissioned a study of women's nutrition. Nutrient intake was measured for a random sample of 737 women aged 25-50 years. Five nutritional components were measured: calcium, iron, protein, vitamin A, and vitamin C. In previous analyses of these data, the sample mean vector was calculated. The table below shows the recommended daily intake and the sample means for all the variables:

| Variable | Recommended Intake \((\mu_{0})\) | Mean |
|---|---|---|
| Calcium | 1000 mg | 624.0 mg |
| Iron | 15 mg | 11.1 mg |
| Protein | 60 g | 65.8 g |
| Vitamin A | 800 μg | 839.6 μg |
| Vitamin C | 75 mg | 78.9 mg |

One of the questions of interest is whether women meet the federal nutritional intake guidelines. If they fail to meet the guidelines, then we might ask for which nutrients they fail to do so. The hypothesis of interest is that women meet the nutritional standards for all nutritional components. This null hypothesis would be rejected if women fail to meet the standards on any one or more of these nutritional variables. In mathematical notation, the null hypothesis is that the population mean vector \(\boldsymbol{\mu}\) equals the hypothesized mean vector \(\boldsymbol{\mu}_{0}\), as shown below:

\(H_{0}\colon \boldsymbol{\mu} = \boldsymbol{\mu}_{0}\)

Let us first compare the univariate case with the analogous multivariate case in the following tables.

Focus of Analysis

Univariate Case: Measuring only a single nutritional component (e.g., calcium). Data: scalar quantities \(X_{1}, X_{2}, \ldots, X_{n}\)

Multivariate Case: Measuring multiple (say \(p\)) nutritional components (e.g., calcium, iron, etc.).
Data: \(p \times 1\) random vectors \(\mathbf{X}_{1}, \mathbf{X}_{2}, \ldots, \mathbf{X}_{n}\)

Assumptions Made in Each Case

Common mean

Univariate Case: The data all have a common mean \(\mu\); mathematically, \(E\left(X_{i}\right) = \mu;\ i = 1, 2, \ldots, n\). This implies that there is a single population of subjects and no sub-populations with different means.

Multivariate Case: The data have a common mean vector \(\boldsymbol{\mu}\); i.e., \(E\left(\mathbf{X}_{i}\right) = \boldsymbol{\mu};\ i = 1, 2, \ldots, n\). This also implies that there are no sub-populations with different mean vectors.

Homoskedasticity

Univariate Case: The data have common variance \(\sigma^{2}\); mathematically, \(\operatorname{var}\left(X_{i}\right) = \sigma^{2};\ i = 1, 2, \ldots, n\).

Multivariate Case: The data for all subjects have a common variance-covariance matrix \(\Sigma\); i.e., \(\operatorname{var}\left(\mathbf{X}_{i}\right) = \Sigma;\ i = 1, 2, \ldots, n\).

Independence

Univariate Case: The subjects are independently sampled.

Multivariate Case: The subjects are independently sampled.

Normality

Univariate Case: The subjects are sampled from a normal distribution.

Multivariate Case: The subjects are sampled from a multivariate normal distribution.

Hypothesis Testing in Each Case

Univariate Case: Consider testing \(H_{0}\colon \mu = \mu_{0}\) against the alternative \(H_{a}\colon \mu \neq \mu_{0}\).

Multivariate Case: Consider testing \(H_{0}\colon \boldsymbol{\mu} = \boldsymbol{\mu}_{0}\) against \(H_{a}\colon \boldsymbol{\mu} \neq \boldsymbol{\mu}_{0}\). Here our null hypothesis is that the mean vector \(\boldsymbol{\mu}\) is equal to some specified vector \(\boldsymbol{\mu}_{0}\). The alternative is that these two vectors are not equal.
We can also write this expression as shown below:

\(H_0\colon \left(\begin{array}{c}\mu_1\\\mu_2\\\vdots \\ \mu_p\end{array}\right) = \left(\begin{array}{c}\mu^0_1\\\mu^0_2\\\vdots \\ \mu^0_p\end{array}\right)\)

The alternative, again, is that these two vectors are not equal:

\(H_a\colon \left(\begin{array}{c}\mu_1\\\mu_2\\\vdots \\ \mu_p\end{array}\right) \ne \left(\begin{array}{c}\mu^0_1\\\mu^0_2\\\vdots \\ \mu^0_p\end{array}\right)\)

Another way of writing this null hypothesis is shown below:

\(H_0\colon \mu_1 = \mu^0_1\) and \(\mu_2 = \mu^0_2\) and \(\dots\) and \(\mu_p = \mu^0_p\)

The alternative is that \(\mu_j\) is not equal to \(\mu^0_j\) for at least one \(j\):

\(H_a\colon \mu_j \ne \mu^0_j \) for at least one \(j \in \{1,2, \dots, p\}\)

Univariate Statistics: \(t\)-test

In your introductory statistics course, you learned to test this null hypothesis with a t-statistic as shown in the expression below:

\(t = \dfrac{\bar{x}-\mu_0}{\sqrt{s^2/n}} \sim t_{n-1}\)

Under \(H_{0}\), this t-statistic has a t-distribution with \(n-1\) degrees of freedom. We reject \(H_{0}\) at level \(\alpha\) if the absolute value of the test statistic \(t\) is greater than the critical value from the t-table, evaluated at \(\alpha/2\), as shown below:

\(|t| > t_{n-1, \alpha/2}\)
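As a concrete illustration of this univariate test, here is a minimal sketch that computes the t-statistic for a handful of calcium intakes against the recommended 1000 mg; note that the sample values below are invented for illustration and are not taken from the survey data.

```python
import math

def one_sample_t(sample, mu_0):
    """t = (xbar - mu_0) / sqrt(s^2 / n), as in the formula above."""
    n = len(sample)
    x_bar = sum(sample) / n
    s2 = sum((x - x_bar) ** 2 for x in sample) / (n - 1)  # sample variance
    return (x_bar - mu_0) / math.sqrt(s2 / n)

# Hypothetical calcium intakes (mg); mu_0 = 1000 mg is the recommended intake
calcium = [620.0, 710.0, 540.0, 660.0, 590.0, 605.0]
t = one_sample_t(calcium, mu_0=1000.0)
print(t)   # strongly negative: these intakes fall well short of the guideline
```

For a sample whose mean equals \(\mu_0\) exactly, the statistic is zero; large \(|t|\) is evidence against \(H_0\).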
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ...

@Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation")

@Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter, although performing a proper merge is still probably preferable.

Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags.

@Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag.

@glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :)

@Nelimee if you think about it, it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitian matrices have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$.
Although I'm not sure whether there could be exceptions for non-diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work).

This is an elementary question, but a little subtle, so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_i I$, and we have the Jordan canonical form: $$ J = \begin...

@Nelimee no! Unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension.

@Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute, it's not straightforward that this doesn't give you an identity.

I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and Hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head.

@Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write.

@Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. I've actually recently asked some questions on math.SE on related topics.

@Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices.
If a matrix is only generally diagonalizable (so it's not normal), then it's not true.

Also probably even more generally without $i$ factors.

So, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that my argument passing through the spectra also works (though one has to show that $A$ is ensured to be normal).

Now what we need to look for is: 1) the exact set of conditions under which the matrix exponential $e^A$ of a complex matrix $A$ is unitary; 2) the exact set of conditions under which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary.

@Blue fair enough - as with @Semiclassical, I was thinking about it with the $t$ parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that give unitary evolution for a specific $t$. Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check.

If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal. Then $$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$ Now observe that $e^U$ is upper ...

There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself? That's not fair: it's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
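The claims in the matrix-exponential discussion above are easy to probe numerically. Here is a small sketch using SciPy's `expm` (the specific matrices are arbitrary illustrative choices): for a Hermitian \(A\), \(e^{iA}\) comes out unitary, while a nilpotent (hence non-normal, non-Hermitian) \(B\) gives a non-unitary \(e^{iB}\).

```python
import numpy as np
from scipy.linalg import expm

def is_unitary(U, tol=1e-10):
    """Check U @ U^dagger == I up to tolerance."""
    return np.allclose(U @ U.conj().T, np.eye(U.shape[0]), atol=tol)

# Hermitian A: e^{iA} should be unitary
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, -1.0]])
assert np.allclose(A, A.conj().T)   # A is Hermitian
U = expm(1j * A)

# Nilpotent B (non-normal, hence non-Hermitian): e^{iB} = I + iB is not unitary
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])
V = expm(1j * B)

print(is_unitary(U), is_unitary(V))   # True False
```

Of course a numerical check on two matrices decides nothing in general; it only illustrates the two directions being debated.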
The bounty period lasts 7 days. Bounties must have a minimum duration of at least 1 day. After the bounty ends, there is a grace period of 24 hours to manually award the bounty. Simply click the bounty award icon next to each answer to permanently award your bounty to the answerer. (You cannot award a bounty to your own answer.)

@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and no reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.

@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.

It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the $p-1$ method, which works well when there is a factor $p$ such that $p-1$ is smooth (has only small prime factors).

Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: pairs of numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: $(4,5)$, $(5,11$...

$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.

Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using a geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ zeros for some $n\in(4k,\infty)$. Since $2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ each have at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$, where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$, as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$

We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction, which is better).

@TheSimpliFire Hey! With $4\pmod {10}$ and $0\pmod 4$, this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have $5m_1=2(m_2-m_1)$, which means $m_1$ is even.
We get $4\pmod {20}$ now :P

Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. It is of anticipation that there will be much fewer solutions for incr...
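The quoted conjecture is at least easy to check by brute force over a small range; here is a sketch (the bound of 40 is an arbitrary choice, and a finite search of course proves nothing in general):

```python
# Brute-force search: among distinct positive integers a, b <= 40,
# which pairs satisfy a^b - b^a = a + b?
solutions = [(a, b)
             for a in range(1, 41)
             for b in range(1, 41)
             if a != b and a**b - b**a == a + b]
print(solutions)   # [(2, 5)]
```

Within this range, $(2,5)$ is indeed the only solution: $2^5 - 5^2 = 7 = 2 + 5$.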
This is a hard problem for me to word in the title, so I'll try to do better now. Consider the following "game": You are sitting in a room beside a table. In the middle of the table there is a box containing a very large sum of money. The box will only open if you've waited long enough (the time to wait is a random value between $0$ and $MAX$ seconds inclusive). So, there is a chance that the box will be unlocked immediately, or it very well might not unlock until $MAX$ seconds have elapsed.

Here is the tricky part. The act of CHECKING the box to see if it's open resets the timer back to zero. So say we have $MAX = 100$ seconds, $MIN = 0$ seconds, and the ACTUAL timer on the box is $25$ seconds. If we wait at least $25$ seconds, the box will be open. However, if we check at any time before 25 seconds, we have to wait at least $25$ seconds from THEN before the box will be open.

So you might ask, "Who cares if I have to wait MAX seconds? I'm guaranteed to be able to open it after MAX seconds." Well, what if we add the twist that each elapsed second since you sat down decreases the value inside the box by $\frac{3}{4 \cdot MAX}$ of the original value? In that case, if you wait $MAX$ seconds, you get a quarter of the money: $money \cdot \left(1 - \frac{3 \cdot MAX}{4 \cdot MAX}\right) = 0.25 \cdot money$. If you open it at $25$ seconds (max seconds is $100$, remember), then you get $money \cdot \left(1 - \frac{3 \cdot 25}{4 \cdot 100}\right) = \frac{325}{400} \cdot money$, or $81.25\%$ of the money. This is the BEST you can do; you just don't know that.

So, what is the optimal strategy to get the most money out of the box? If the actual timer was $0$ seconds, you could immediately take 100%. However, if it's $10$ seconds and you first check at $1s$, then $2s$, then $3s$..., you're quickly losing value, since the penalty is based on TOTAL ELAPSED TIME. So checking at 1, 2, 3 would be 6 elapsed seconds. Another key piece of information is that the actual timer value is EQUALLY LIKELY to be any time between 0 and MAX seconds.
I think that this fact and the penalty equation ($\frac{3}{4 \cdot MAX}$ for example) are the main keys to solving this puzzle. Obviously you can guarantee yourself $25\%$ of the money by just waiting MAX seconds, but we are greedy and want the most we can possibly get (on AVERAGE). What is our strategy?

As for my thoughts... I think the answer is that you check every $k$ seconds... maybe $\sqrt{MAX}$ or something. It would be something that I could easily play with and simulate in a programming sandbox, trying all sorts of different stepping schemes for a bunch of random actual values between 0 and some max. Eventually I may be able to pull out some sort of general solution. It would be more interesting to me, though, if anyone recognizes this problem and knows an analytical optimal solution based on the penalty amount and the fact that the chances are uniform. Any ideas are welcome, this is just for fun!

PS. I thought of this because I was recently playing a game and got banned for joining and leaving too many games too quickly. You are banned for a time between MIN and MAX seconds (based on offense though, not equally likely), and every time you try to log in your penalty timer resets. So you don't know what the penalty timer is; you just know the time you waited since your last try wasn't long enough. The goal, of course, is to play again in the quickest amount of time. But this should be pretty much identical to what I've described above.

EDIT: SOLUTION. Thanks to the derivation of the optimization function by Bey, I was able to solve this problem. I noticed with a few tests that $\sum_{i=1}^{N-2}t_i > \frac{1}{M}\sum_{i=1}^{N-2}t_i t_{i+1}$. Consider $N = 3$, which gives us: $t_1 > \dfrac {1}{M} (t_1 t_2)$, which is the same as: $t_1\left(1 - \dfrac{t_2}{M}\right) > 0$. This is always true since $t_2 \leq M$, $t_1 \geq 0$. Or, in general...
$t_1 + t_2 +\dots+t_{N-2} > \dfrac {t_1t_2 + t_2t_3 +\dots+ t_{N-2}t_{N-1}} {M}$ because: $t_1\left(1 - \dfrac{t_2}{M}\right) + t_2\left(1 - \dfrac{t_3}{M}\right) +\dots+t_{N-2}\left(1 - \dfrac{t_{N-1}}{M}\right) > 0$.

Thus, our best solution is to simply use $N = 1$ guess, where $t_1 = t_N = M$, because the cost function is zero there. So unfortunately not an exciting result, but it seems that in such circumstances you cannot do better on average than making one check at the maximum time. If you don't know the maximum time? That's a different problem!
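Bey's optimization function is not reproduced here, but the conclusion can be sanity-checked with a small discretization of the game itself. This is a sketch under my reading of the rules: the hidden timer $t$ is uniform on $[0, M]$, a strategy is a list of gaps between successive checks, the box opens at the first check whose gap is at least $t$, and the payoff keeps the linear penalty $1 - 3T/(4M)$ from the derivation (so it may go negative, as in the cost function above).

```python
# Expected payoff of a checking schedule, by midpoint discretization of t.
def expected_payoff(gaps, M=100.0, grid=100_000):
    total = 0.0
    for i in range(grid):
        t = (i + 0.5) * M / grid      # midpoint sample of Uniform(0, M)
        elapsed = 0.0
        for gap in gaps:
            elapsed += gap
            if gap >= t:              # this check finds the box open
                break
        total += 1.0 - 3.0 * elapsed / (4.0 * M)
    return total / grid

M = 100.0
e_single = expected_payoff([M], M)                 # one check at time M
e_two    = expected_payoff([M / 2, M], M)          # check at M/2, then wait M more
e_three  = expected_payoff([M / 4, M / 2, M], M)
print(e_single, e_two, e_three)
```

Under these assumptions the single check at $M$ yields exactly $0.25$ of the money, and the finer schedules tried here do no better, which is consistent with the conclusion above.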
Let $p_k$ be the $k^{th}$ prime number. Find the least $n$ for which $(p_1^2+1)(p_2^2+1) \cdots (p_n^2+1)$ is divisible by $10^6$. I have no idea where to start on this problem. Any help would be appreciated.

For each prime above $2$, $p^2+1$ is even. For $p=2$, $p^2+1=5$, so we only need to look for the first $5$ odd primes such that $5\mid p^2+1$. Now $p^2\equiv-1\pmod{5}\iff p\equiv\pm2\pmod{5}$. Scanning the primes: $3,7,13,17,23$ are the first $5$ such that $p\equiv\pm2\pmod{5}$. Therefore, the product $$ (2^2+1)(3^2+1)(5^2+1)\dots(23^2+1) $$ will be divisible by $10^6$. If some $p^2+1$ is divisible by $25$, you may need fewer terms. Indeed, $7^2+1=50$, so we only need $$ (2^2+1)(3^2+1)(5^2+1)\dots(17^2+1) $$

Let's evaluate each $(p^2 + 1)$ expression, counting up the factors of $2$ and $5$ and stopping when both are at least $6$:

- $n = 1$: $p = 2$, $(p^2 + 1) = 5$. 2s so far: $0$; 5s so far: $1$.
- $n = 2$: $p = 3$, $(p^2 + 1) = 10$. 2s: $1$; 5s: $2$.
- $n = 3$: $p = 5$, $(p^2 + 1) = 26$. 2s: $2$; 5s: $2$.
- $n = 4$: $p = 7$, $(p^2 + 1) = 50$. 2s: $3$; 5s: $4$.
- $n = 5$: $p = 11$, $(p^2 + 1) = 122$. 2s: $4$; 5s: $4$.
- $n = 6$: $p = 13$, $(p^2 + 1) = 170$. 2s: $5$; 5s: $5$.
- $n = 7$: $p = 17$, $(p^2 + 1) = 290$. 2s: $6$; 5s: $6$.

We now have six $2$s and six $5$s, enough for the product to be divisible by $1000000$, so $n = 7$. (The actual product here is $390{,}949{,}000{,}000$.)
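The counting argument above can be confirmed by direct multiplication; a minimal sketch:

```python
# Multiply (p^2 + 1) over successive primes and stop as soon as the running
# product is divisible by 10^6.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]   # more than enough for this search
product, least_n = 1, None
for n, p in enumerate(primes, start=1):
    product *= p * p + 1
    if product % 10**6 == 0:
        least_n = n
        break

print(least_n, product)   # 7 390949000000
```

This agrees with the hand count: the product first becomes divisible by $10^6$ at $n = 7$.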
I think: find primes such that $p_k^2+1\equiv 0 \pmod 5$, because $p^2+1$ for every prime (except $2$) is odd $+\,1=$ even, so $2 \mid p^2+1$. Now find the primes such that $5\mid p_k^2+1$; for example $p=3$ gives $3^2+1=10$ (ok), $p=7$ gives $7^2+1=50$ (ok), ... If you want to solve $p_k^2+1\equiv 0\pmod 5$: $p_k^2+1-5\equiv 0\pmod 5$, so $p_k^2-4\equiv 0\pmod 5$, i.e. $p_k^2-4=5q$. Find the primes such that $(p_k-2)(p_k+2)=5q$, so $p_k=5k+2$ or $p_k=5k-2$.
I am interested in the following constrained optimization problem: $$\text{Minimize }J_p(F, G) = \int_0^1 \left(\frac{F}{f} + \frac{1-G}{g}\right)^{-1} dx.$$ over a pair of cumulative distribution functions $F$ and $G$, with $f := F'$ and $g := G'$, subject to the following constraints: Their Jensen-Shannon divergence is given by a fixed constant, that is, $$\text{JSD}(F, G) := -\frac{1}{2}\int_0^1 f \log \frac{2f}{f+g} + g \log \frac{2g}{f+g} dx = C.$$ The Radon-Nikodym derivative is non-decreasing $$\left(\frac{g}{f}\right)' \ge 0.$$ $F, G$ are indeed cumulative distribution functions $$f, g \geq 0; \qquad 0 \le F, G \le 1.$$ This seems like a pretty standard calculus of variations problem, and I indeed made some progress deriving its associated Euler-Lagrange equation, which is second order due to the inequality constraint 2. First I let $H = 1-G$; then the objective function becomes $J_p(F, H) = \int_0^1 \left(\frac{F}{f} - \frac{H}{h}\right)^{-1} dx$. The constraints also change accordingly: $\frac{1}{2} \int_0^1 \left(f \log \frac{2f}{f-h} - h \log\frac{2h}{h-f} \right) dx =C$. $\left(\frac{h}{f}\right)' \le 0$. $f, -h \ge 0; \qquad 0 \le F, -H \le 1$. One can then take the functional derivative (with respect to $F$ and $H$ respectively) of the Lagrangian $$ \mathcal{L} := \int_0^1 [\left(\frac{F(x)}{f(x)} - \frac{H(x)}{h(x)}\right)^{-1} + \frac{\lambda_1}{2} \left(f(x) \log \frac{2f(x)}{f(x)-h(x)} - h(x) \log\frac{2h(x)}{h(x)-f(x)} \right) - C] dx \\ - \frac{\lambda_2(x) (h'(x) f(x) - h(x)f'(x))}{f(x)^2} + \lambda_3(x)f(x) - \lambda_4(x) h(x) + \lambda_5(x) F(x) + \lambda_6(x) (1 - F(x)) + \lambda_7(x) H(x) + \lambda_8(x) (1 - H(x)).$$ Let $\mathcal{D}_F := \frac{\partial}{\partial F} - \frac{d}{dx} \frac{\partial}{\partial f} + \frac{d^2}{dx^2} \frac{\partial}{\partial f'}$ be the Euler-Lagrange operator for $F$, and similarly define $\mathcal{D}_H$.
[Update: the computation below is flawed; see my self-answer below, which relies on Mathematica more exclusively, without any hand derivation. The result makes a lot of sense when plotted.] I was able to simplify the resulting system of 2 ODEs significantly, thanks to Mathematica, as follows: $$ \mathcal{D}_F \mathcal{L} = [(f+h) B - A] U + [(h' \lambda_2)' + (h \lambda_2)''] + \lambda_3' + \lambda_5' - \lambda_6' = 0 \\ \mathcal{D}_H \mathcal{L} = [(f-h) B + A] V - [(f' \lambda_2)' + (f \lambda_2)''] - \lambda_4' + \lambda_7' - \lambda_8' = 0,$$ where $A := \lambda_1 (fh' - hf')(Hf - Fh)^3$, $B := 4fh[(Hf - Fh)fh + FH(fh' - hf')]$, and $U, V$ are some rational functions of $F, H, f, h$. Here I took the liberty of rewriting constraint 2 as $h' f - h f' \le 0$, since the denominator is assumed to be nonnegative; I am not completely sure if this is correct when $f(x) = 0$, but including the denominator doesn't seem to make much difference. Now with the complementary slackness condition on constraint 2, we know that whenever $h'f - hf' \neq 0$, $\lambda_2$ has to be $0$, and vice versa. Similarly, $f \lambda_3 \equiv 0 \equiv h \lambda_4$. One trivial critical solution of the original problem is $F = G$, which corresponds to $h'f - hf' \equiv 0$. This gives the global maximum, and satisfies only the JSD constraint with $C = 0$. I am interested in the nontrivial minimum solutions. Now here is my trouble. As soon as I look at a point $x \in [0, 1]$ where $f, h, F, H, (1-F), (1-H) \neq 0$ (which implies $\lambda_i' = 0$, for $3 \le i \le 8$, if we assume they are $C^1$ at $x$), and $h'f - h f' \neq 0$, I am forced to have $Hf - Fh = 0$, which in turn forces me to have $h'f - hf' = 0$! So somehow I cannot escape from the condition that leads to the trivial solution. Where did I mess up? Does this problem require digging deeper into solutions with isolated points of non-differentiability?
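For readers unfamiliar with operators like \(\mathcal{D}_F\), here is a toy illustration in SymPy of the Euler-Lagrange machinery on a simple one-dimensional density; this is not the constrained functional above, just the hypothetical density \(L = F'^2/2 + F^2/2\), whose Euler-Lagrange equation is \(F - F'' = 0\).

```python
from sympy import Function, simplify, symbols
from sympy.calculus.euler import euler_equations

x = symbols('x')
F = Function('F')

# Toy Lagrangian density: L = F'(x)^2/2 + F(x)^2/2.
# The operator d/dF - d/dx (d/dF') should give F - F'' = 0.
L = F(x).diff(x)**2 / 2 + F(x)**2 / 2
eqs = euler_equations(L, [F(x)], [x])
print(eqs)
```

The same operator applied by hand to each term of the Lagrangian in the question (plus the \(d^2/dx^2\,\partial/\partial f'\) term for the second-order constraint) produces the system of ODEs quoted above.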
In the univariate case, the data can often be arranged in a table as shown below:

| Subject | Group 1 | Group 2 | \(\dots\) | Group \(g\) |
|---|---|---|---|---|
| 1 | \(Y_{11}\) | \(Y_{21}\) | \(\dots\) | \(Y_{g1}\) |
| 2 | \(Y_{12}\) | \(Y_{22}\) | \(\dots\) | \(Y_{g2}\) |
| \(\vdots\) | \(\vdots\) | \(\vdots\) | | \(\vdots\) |
| \(n_i\) | \(Y_{1n_1}\) | \(Y_{2n_2}\) | \(\dots\) | \(Y_{gn_g}\) |

The columns correspond to the responses to \(g\) different treatments or from \(g\) different populations, and the rows correspond to the subjects in each of these treatments or populations.

Notation:

\(Y_{ij}\) = Observation from subject \(j\) in group \(i\)

\(n_{i}\) = Number of subjects in group \(i\)

\(N = n_{1} + n_{2} + \dots + n_{g}\) = Total sample size

The assumptions for the Analysis of Variance are the same as for a two-sample t-test, except that there are more than two groups:

The data from group \(i\) have common mean \(\mu_{i}\); i.e., \(E\left(Y_{ij}\right) = \mu_{i}\). This means that there are no sub-populations with different means.

Homoskedasticity: The data from all groups have common variance \(\sigma^2\); i.e., \(\operatorname{var}(Y_{ij}) = \sigma^{2}\). That is, the variability in the data does not depend on group membership.

Independence: The subjects are independently sampled.

Normality: The data are normally distributed.

The hypothesis of interest is that all of the means are equal to one another. Mathematically we write this as:

\(H_0\colon \mu_1 = \mu_2 = \dots = \mu_g\)

The alternative is expressed as:

\(H_a\colon \mu_i \ne \mu_j \) for at least one \(i \ne j\),

i.e., there is a difference between at least one pair of group population means. The following notation should be considered:

\(\bar{y}_{i.} = \frac{1}{n_i}\sum_{j=1}^{n_i}Y_{ij}\) = Sample mean for group \(i\). This involves taking the average of all the observations for \(j = 1\) to \(n_{i}\) belonging to the \(i\)th group. The dot in the second subscript means that the average involves summing over the second subscript of \(y\).

\(\bar{y}_{..} = \frac{1}{N}\sum_{i=1}^{g}\sum_{j=1}^{n_i}Y_{ij}\) = Grand mean. This involves taking the average of all the observations within each group and over the groups and dividing by the total sample size. The double dots indicate that we are summing over both subscripts of \(y\).

ANOVA

The Analysis of Variance involves the partitioning of the total sum of squares, which is defined as in the expression below:

\(SS_{total} = \sum\limits_{i=1}^{g}\sum\limits_{j=1}^{n_i}(Y_{ij}-\bar{y}_{..})^2\)

Here we are looking at the squared differences between each observation and the grand mean. Note that if the observations tend to be far away from the grand mean, then this will take a large value; conversely, if all of the observations tend to be close to the grand mean, it will take a small value. Thus, the total sum of squares measures the variation of the data about the grand mean.

An Analysis of Variance (ANOVA) is a partitioning of the total sum of squares. In the second line of the expression below, we add and subtract the sample mean for the \(i\)th group. In the third line, we split this into two terms: the first term involves the differences between the observations and the group means, \(\bar{y}_{i.}\), while the second term involves the differences between the group means and the grand mean.

\(\begin{array}{lll} SS_{total} & = & \sum_{i=1}^{g}\sum_{j=1}^{n_i}\left(Y_{ij}-\bar{y}_{..}\right)^2 \\ & = & \sum_{i=1}^{g}\sum_{j=1}^{n_i}\left((Y_{ij}-\bar{y}_{i.})+(\bar{y}_{i.}-\bar{y}_{..})\right)^2 \\ & = &\underset{SS_{error}}{\underbrace{\sum_{i=1}^{g}\sum_{j=1}^{n_i}(Y_{ij}-\bar{y}_{i.})^2}}+\underset{SS_{treat}}{\underbrace{\sum_{i=1}^{g}n_i(\bar{y}_{i.}-\bar{y}_{..})^2}} \end{array}\)

The first term is called the error sum of squares and measures the variation in the data about their group means. Note that if the observations tend to be close to their group means, then this value will tend to be small.
On the other hand, if the observations tend to be far away from their group means, then the value will be larger. The second term is called the treatment sum of squares and involves the differences between the group means and the grand mean. Here, if the group means are close to the grand mean, then this value will be small, while if the group means tend to be far away from the grand mean, it will take a large value. The treatment sum of squares thus measures the variation of the group means about the grand mean.

The Analysis of Variance results are summarized in the analysis of variance table below:

| Source | d.f. | SS | MS | F |
|---|---|---|---|---|
| Treatments | \(g-1\) | \(\sum_{i=1}^{g} n_{i}\left(\bar{y}_{i.}-\bar{y}_{..}\right)^{2}\) | \(\dfrac{SS_{treat}}{g-1}\) | \(\dfrac{MS_{treat}}{MS_{error}}\) |
| Error | \(N-g\) | \(\sum_{i=1}^{g}\sum_{j=1}^{n_{i}}\left(Y_{ij}-\bar{y}_{i.}\right)^{2}\) | \(\dfrac{SS_{error}}{N-g}\) | |
| Total | \(N-1\) | \(\sum_{i=1}^{g}\sum_{j=1}^{n_{i}}\left(Y_{ij}-\bar{y}_{..}\right)^{2}\) | | |

The ANOVA table contains columns for Source, Degrees of Freedom, Sum of Squares, Mean Square, and F. Sources include Treatment and Error, which together add up to Total. The degrees of freedom for treatment in the first row of the table are calculated by taking the number of groups or treatments minus 1. The total degrees of freedom are the total sample size minus 1. The error degrees of freedom are obtained by subtracting the treatment degrees of freedom from the total degrees of freedom to obtain \(N-g\). The formulas for the sums of squares are given in the SS column. The Mean Square terms are obtained by taking the Sums of Squares terms and dividing by the corresponding degrees of freedom.
The final column contains the F statistic which is obtained by taking the MS for treatment and dividing by the MS for Error. Under the null hypothesis that the treatment effect is equal across group means, that is \(H_{0} \colon \mu_{1} = \mu_{2} = \dots = \mu_{g} \), this F statistic is F-distributed with g - 1 and N - g degrees of freedom: \(F \sim F_{g-1, N-g}\) The numerator degrees of freedom g - 1 comes from the degrees of freedom for treatments in the ANOVA table. This is referred to as the numerator degrees of freedom since the formula for the F-statistic involves the Mean Square for Treatment in the numerator. The denominator degrees of freedom N - g is equal to the degrees of freedom for error in the ANOVA table. This is referred to as the denominator degrees of freedom because the formula for the F-statistic involves the Mean Square Error in the denominator. We reject \(H_{0}\) at level \(\alpha\) if the F statistic is greater than the critical value of the F-table, with g - 1 and N - g degrees of freedom and evaluated at level \(\alpha\). \(F > F_{g-1, N-g, \alpha}\)
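The whole table can be reproduced numerically. Here is a sketch with NumPy/SciPy on a small made-up data set (three groups with invented values), which checks both the sum-of-squares partition and the F statistic against `scipy.stats.f_oneway`:

```python
import numpy as np
from scipy import stats

# Hypothetical observations for g = 3 unbalanced groups
groups = [np.array([10.0, 12.0, 11.0]),
          np.array([14.0, 15.0, 13.0, 14.0]),
          np.array([9.0, 8.0, 10.0])]

g = len(groups)
N = sum(len(x) for x in groups)
grand_mean = sum(x.sum() for x in groups) / N

ss_total = sum(((x - grand_mean) ** 2).sum() for x in groups)
ss_error = sum(((x - x.mean()) ** 2).sum() for x in groups)
ss_treat = sum(len(x) * (x.mean() - grand_mean) ** 2 for x in groups)
assert np.isclose(ss_total, ss_error + ss_treat)   # the ANOVA partition

F = (ss_treat / (g - 1)) / (ss_error / (N - g))    # MS_treat / MS_error
f_crit = stats.f.ppf(0.95, g - 1, N - g)           # critical value, alpha = 0.05

F_scipy, p_value = stats.f_oneway(*groups)         # cross-check against SciPy
print(F, f_crit, F_scipy)
```

For these made-up data the F statistic exceeds the critical value \(F_{g-1, N-g, \alpha}\), so \(H_0\) would be rejected at the 5% level.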
Multi-output Matrix T Process

Summary

The multi-output matrix T regression model was first described by Conti and O'Hagan in their paper on Bayesian emulation of multi-output computer codes. It has been available in the dynaml.models.stp package of the dynaml-core module since v1.4.2.

Formulation

The model starts from the multi-output Gaussian process framework. The quantity of interest is some unknown function \mathbf{f}: \mathcal{X} \rightarrow \mathbb{R}^q, which maps inputs in \mathcal{X} (an arbitrary input space) to q-dimensional output vectors. The input x is transformed through \varphi(.): \mathcal{X} \rightarrow \mathbb{R}^m, a deterministic feature mapping, which computes the inputs for a linear mean function \mathbf{m}(.). The parameters of this linear trend are contained in the matrix B \in \mathbb{R}^{m \times q}, and \theta contains all the covariance function hyper-parameters.

The prior distribution of the multi-output function is represented as a matrix normal distribution, with c(.,.) representing the covariance between two input points and the entries of \Sigma being the covariances between the output dimensions.

The predictive distribution when the output data D \in \mathbb{R}^{n\times q} is observed is calculated by first computing the conditional predictive distribution of \mathbf{f}(.) | D, \Sigma, B, \theta and then integrating this distribution with respect to the posterior distributions \Sigma|D and B|D. The resulting predictive distribution \mathbf{f}(.)| \theta, D is a matrix variate T distribution, described by:

- Mean \mathbf{m}^{**}(x)
- Covariance between rows c^{**}(x_{1}, x_{2})
- Covariance function between output columns \Sigma_{GLS}
- Degrees of freedom n-m
The matrices

B_{GLS} = (\varphi(X)^{\intercal}C^{-1}\varphi(X))^{-1}\varphi(X)^{\intercal}C^{-1}D

and

\Sigma_{GLS} = (n-m)^{-1}(D - \varphi(X)B_{GLS})^{\intercal}C^{-1}(D - \varphi(X)B_{GLS})

are the generalized least squares estimators for the matrices B and \Sigma, which we saw in the formulation above.

Multi-output Regression

An implementation of the multi-output matrix T model is available via the class MVStudentsTModel. Instantiating the model is very similar to other stochastic process models in DynaML, i.e. by specifying the covariance structures on signal and noise, the training data, etc.

```scala
//Obtain the data, some generic type
val trainingData: DataType = _
val num_data_points: Int = _
val num_outputs: Int = _

val kernel: LocalScalarKernel[I] = _
val noiseKernel: LocalScalarKernel[I] = _
val feature_map: DataPipe[I, Double] = _

//Define how the data is converted to a compatible type
implicit val transform: DataPipe[DataType, Seq[(I, Double)]] = _

val model = MVStudentsTModel(
  kernel, noiseKernel, feature_map)(
  trainingData, num_data_points, num_outputs)
```
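As a language-neutral illustration of the GLS estimators above (this NumPy sketch is not part of DynaML, and the dimensions and data in it are invented for the example), one can compute B_{GLS} and \Sigma_{GLS} directly:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, q = 50, 3, 2             # samples, features, output dimensions
Phi = rng.normal(size=(n, m))  # feature matrix, rows are phi(x_i)
C = np.eye(n)                  # covariance between inputs (identity for simplicity)
B_true = np.array([[1.0, -2.0],
                   [0.5, 0.0],
                   [3.0, 1.5]])
D = Phi @ B_true               # noiseless outputs generated from the linear trend

Cinv = np.linalg.inv(C)

# B_GLS = (phi(X)' C^-1 phi(X))^-1 phi(X)' C^-1 D
B_gls = np.linalg.solve(Phi.T @ Cinv @ Phi, Phi.T @ Cinv @ D)

# Sigma_GLS = (n - m)^-1 (D - phi(X) B_GLS)' C^-1 (D - phi(X) B_GLS)
resid = D - Phi @ B_gls
Sigma_gls = resid.T @ Cinv @ resid / (n - m)
```

With noiseless data generated exactly from the linear trend, B_GLS recovers the true coefficient matrix and the residual covariance estimate is essentially zero.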
Learning Objectives

By the end of this section, you will be able to:

Describe how quasars were discovered
Explain how astronomers determined that quasars are at the distances implied by their redshifts
Justify the statement that the enormous amount of energy produced by quasars is generated in a very small volume of space

The name “quasars” started out as short for “quasi-stellar radio sources” (here “quasi-stellar” means “sort of like stars”). The discovery of radio sources that appeared point-like, just like stars, came with the use of surplus World War II radar equipment in the 1950s. Although few astronomers would have predicted it, the sky turned out to be full of strong sources of radio waves. As they improved the images that their new radio telescopes could make, scientists discovered that some radio sources were in the same location as faint blue “stars.” No known type of star in our Galaxy emits such powerful radio radiation. What then were these “quasi-stellar radio sources”?

Redshifts: The Key to Quasars

The answer came when astronomers obtained visible-light spectra of two of those faint “blue stars” that were strong sources of radio waves (Figure 1). Spectra of these radio “stars” only deepened the mystery: they had emission lines, but astronomers at first could not identify them with any known substance. By the 1960s, astronomers had a century of experience in identifying elements and compounds in the spectra of stars. Elaborate tables had been published showing the lines that each element would produce under a wide range of conditions. A “star” with unidentifiable lines in the ordinary visible light spectrum had to be something completely new. In 1963 at Caltech’s Palomar Observatory, Maarten Schmidt (Figure 2) was puzzling over the spectrum of one of the radio stars, which was named 3C 273 because it was the 273rd entry in the third Cambridge catalog of radio sources (part (b) of Figure 2).
There were strong emission lines in the spectrum, and Schmidt recognized that they had the same spacing between them as the Balmer lines of hydrogen (see Radiation and Spectra). But the lines in 3C 273 were shifted far to the red of the wavelengths at which the Balmer lines are normally located. Indeed, these lines were at such long wavelengths that if the redshifts were attributed to the Doppler effect, 3C 273 was receding from us at a speed of 45,000 kilometers per second, or about 15% the speed of light! Since stars don’t show Doppler shifts this large, no one had thought of considering high redshifts to be the cause of the strange spectra. The puzzling emission lines in other star-like radio sources were then reexamined to see if they, too, might be well-known lines with large redshifts. This proved to be the case, but the other objects were found to be receding from us at even greater speeds. Their astounding speeds showed that the radio “stars” could not possibly be stars in our own Galaxy. Any true star moving at more than a few hundred kilometers per second would be able to overcome the gravitational pull of the Galaxy and completely escape from it. (As we shall see later in this chapter, astronomers eventually discovered that there was also more to these “stars” than just a point of light.) It turns out that these high-velocity objects only look like stars because they are compact and very far away. Later, astronomers discovered objects with large redshifts that appear star-like but have no radio emission. Observations also showed that quasars were bright in the infrared and X-ray bands too, and not all these X-ray or infrared-bright quasars could be seen in either the radio or the visible-light bands of the spectrum. Today, all these objects are referred to as quasi-stellar objects (QSOs), or, as they are more popularly known, quasars. (The name was also soon appropriated by a manufacturer of home electronics.)
Over a million quasars have now been discovered, and spectra are available for over a hundred thousand. All these spectra show redshifts, none show blueshifts, and their redshifts can be very large. Yet in a photo they look just like stars (Figure 3). In the record-holding quasars, the first Lyman series line of hydrogen, with a laboratory wavelength of 121.5 nanometers in the ultraviolet portion of the spectrum, is shifted all the way through the visible region to the infrared. At such high redshifts, the simple formula for converting a Doppler shift to speed (Radiation and Spectra) must be modified to take into account the effects of the theory of relativity. If we apply the relativistic form of the Doppler shift formula, we find that these redshifts correspond to velocities of about 96% of the speed of light. Recession Speed of a Quasar The formula for the Doppler shift, which astronomers denote by the letter z, is [latex]z=\frac{\Delta {\lambda}}{{\lambda}}=\frac{v}{c}[/latex] where λ is the wavelength emitted by a source of radiation that is not moving, Δλ is the difference between that wavelength and the wavelength we measure, v is the speed with which the source moves away, and c (as usual) is the speed of light. A line in the spectrum of a galaxy is at a wavelength of 393 nanometers (nm, or [latex]10^{-9}[/latex] m) when the source is at rest. Let’s say the line is measured to be longer than this value (redshifted) by 7.86 nm. Then its redshift [latex]z=\frac{7.86\text{nm}}{393\text{nm}}=0.02[/latex] , so its speed away from us is 2% of the speed of light [latex]\left(\frac{v}{c}=0.02\right)[/latex]. This formula is fine for galaxies that are relatively nearby and are moving away from us slowly in the expansion of the universe. But the quasars and distant galaxies we discuss in this chapter are moving away at speeds close to the speed of light.
In that case, converting a Doppler shift (redshift) to a distance must include the effects of the special theory of relativity, which explains how measurements of space and time change when we see things moving at high speeds. The details of how this is done are way beyond the level of this text, but we can share with you the relativistic formula for the Doppler shift: [latex]\frac{v}{c}=\frac{{\left(z+1\right)}^{2}-1}{{\left(z+1\right)}^{2}+1}[/latex] Let’s do an example. Suppose a distant quasar has a redshift of 5. At what fraction of the speed of light is the quasar moving away? Check Your Learning Several lines of hydrogen absorption in the visible spectrum have rest wavelengths of 410 nm, 434 nm, 486 nm, and 656 nm. In a spectrum of a distant galaxy, these same lines are observed to have wavelengths of 492 nm, 521 nm, 583 nm, and 787 nm respectively. What is the redshift of this galaxy? What is the recession speed of this galaxy? Quasars Obey the Hubble Law The first question astronomers asked was whether quasars obeyed the Hubble law and were really at the large distances implied by their redshifts. If they did not obey the rule that large redshift means large distance, then they could be much closer, and their luminosity could be a lot less. One straightforward way to show that quasars had to obey the Hubble law was to demonstrate that they were actually part of galaxies, and that their redshift was the same as the galaxy that hosted them. Since ordinary galaxies do obey the Hubble law, anything within them would be subject to the same rules. Observations with the Hubble Space Telescope provided the strongest evidence showing that quasars are located at the centers of galaxies. Hints that this is true had been obtained with ground-based telescopes, but space observations were required to make a convincing case. The reason is that quasars can outshine their entire galaxies by factors of 10 to 100 or even more. 
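For the example above, substituting z = 5 into the relativistic formula gives

[latex]\frac{v}{c}=\frac{{\left(5+1\right)}^{2}-1}{{\left(5+1\right)}^{2}+1}=\frac{36-1}{36+1}=\frac{35}{37}\approx 0.946[/latex]

so the quasar is receding from us at about 95% of the speed of light. (In the Check Your Learning problem, each observed wavelength is 1.2 times its rest wavelength, so z = 0.2, and the same formula gives v/c ≈ 0.18.)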
When this light passes through Earth’s atmosphere, it is blurred by turbulence and drowns out the faint light from the surrounding galaxy—much as the bright headlights from an oncoming car at night make it difficult to see anything close by. The Hubble Space Telescope, however, is not affected by atmospheric turbulence and can detect the faint glow from some of the galaxies that host quasars (Figure 4). Quasars have been found in the cores of both spiral and elliptical galaxies, and each quasar has the same redshift as its host galaxy. A wide range of studies with the Hubble Space Telescope now clearly demonstrate that quasars are indeed far away. If so, they must be producing a truly impressive amount of energy to be detectable as points of light that are much brighter than their galaxy. Interestingly, many quasar host galaxies are found to be involved in a collision with a second galaxy, providing, as we shall see, an important clue to the source of their prodigious energy output. The Size of the Energy Source Given their large distances, quasars have to be extremely luminous to be visible to us at all—far brighter than any normal galaxy. In visible light alone, most are far more energetic than the brightest elliptical galaxies. But, as we saw, quasars also emit energy at X-ray and ultraviolet wavelengths, and some are radio sources as well. When all their radiation is added together, some QSOs have total luminosities as large as a hundred trillion Suns ([latex]10^{14}\,L_{\text{Sun}}[/latex]), which is 10 to 100 times the brightness of luminous elliptical galaxies. Finding a mechanism to produce the large amount of energy emitted by a quasar would be difficult under any circumstances. But there is an additional problem. When astronomers began monitoring quasars carefully, they found that some vary in luminosity on time scales of months, weeks, or even, in some cases, days.
This variation is irregular and can change the brightness of a quasar by a few tens of percent in both its visible light and radio output. Think about what such a change in luminosity means. A quasar at its dimmest is still more brilliant than any normal galaxy. Now imagine that the brightness increases by 30% in a few weeks. Whatever mechanism is responsible must be able to release new energy at rates that stagger our imaginations. The most dramatic changes in quasar brightness are equivalent to the energy released by 100,000 billion Suns. To produce this much energy we would have to convert the total mass of about ten Earths into energy every minute. Moreover, because the fluctuations occur in such short times, the part of a quasar that is varying must be smaller than the distance light travels in the time it takes the variation to occur—typically a few months. To see why this must be so, let’s consider a cluster of stars 10 light-years in diameter at a very large distance from Earth (see Figure 5, in which Earth is off to the right). Suppose every star in this cluster somehow brightens simultaneously and remains bright. When the light from this event arrives at Earth, we would first see the brighter light from stars on the near side; 5 years later we would see increased light from stars at the center. Ten years would pass before we detected more light from stars on the far side. Even though all stars in the cluster brightened at the same time, the fact that the cluster is 10 light-years wide means that 10 years must elapse before the increased light from every part of the cluster reaches us. From Earth we would see the cluster get brighter and brighter, as light from more and more stars began to reach us. Not until 10 years after the brightening began would we see the cluster reach maximum brightness.
In other words, if an extended object suddenly flares up, it will seem to brighten over a period of time equal to the time it takes light to travel across the object from its far side. We can apply this idea to brightness changes in quasars to estimate their diameters. Because quasars typically vary (get brighter and dimmer) over periods of a few months, the region where the energy is generated can be no larger than a few light-months across. If it were larger, it would take longer than a few months for the light from the far side to reach us. How large is a region of a few light-months? Pluto, usually the outermost (dwarf) planet in our solar system, is about 5.5 light-hours from us, while the nearest star is 4 light-years away. Clearly a region a few light months across is tiny relative to the size of the entire Galaxy. And some quasars vary even more rapidly, which means their energy is generated in an even smaller region. Whatever mechanism powers the quasars must be able to generate more energy than that produced by an entire galaxy in a volume of space that, in some cases, is not much larger than our solar system. Earlier Evidence Even before the discovery of quasars, there had been hints that something very strange was going on in the centers of at least some galaxies. Back in 1918, American astronomer Heber Curtis used the large Lick Observatory telescope to photograph the galaxy Messier 87 in the constellation Virgo. On that photograph, he saw what we now call a jet coming from the center, or nucleus, of the galaxy (Figure 6). This jet literally and figuratively pointed to some strange activity going on in that galaxy nucleus. But he had no idea what it was. No one else knew what to do with this space oddity either. 
The random factoid that such a central jet existed lay around for a quarter century, until Carl Seyfert, a young astronomer at Mount Wilson Observatory, also in California, found half a dozen galaxies with extremely bright nuclei that were almost stellar, rather than fuzzy in appearance like most galaxy nuclei. Using spectroscopy, he found that these nuclei contain gas moving at up to two percent the speed of light. That may not sound like much, but it is more than 13 million miles per hour, and more than 10 times faster than the typical motions of stars in galaxies. After decades of study, astronomers identified many other strange objects beyond our Milky Way Galaxy; they populate a whole “zoo” of what are now called active galaxies or active galactic nuclei (AGN). Astronomers first called them by many different names, depending on what sorts of observations discovered each category, but now we know that we are always looking at the same basic mechanism. What all these galaxies have in common is some activity in their nuclei that produces an enormous amount of energy in a very small volume of space. In the next section, we describe a model that explains all these galaxies with strong central activity—both the AGNs and the QSOs. Key Concepts and Summary The first quasars discovered looked like stars but had strong radio emission. Their visible-light spectra at first seemed confusing, but then astronomers realized that they had much larger redshifts than stars. The quasar spectra obtained so far show redshifts ranging from 15% to more than 96% the speed of light. Observations with the Hubble Space Telescope show that quasars lie at the centers of galaxies and that both spirals and ellipticals can harbor quasars. The redshifts of the underlying galaxies match the redshifts of the quasars embedded in their centers, thereby proving that quasars obey the Hubble law and are at the great distances implied by their redshifts.
To be noticeable at such great distances, quasars must have 10 to 100 times the luminosity of the brighter normal galaxies. Their variations show that this tremendous energy output is generated in a small volume—in some cases, in a region not much larger than our own solar system. A number of galaxies closer to us also show strong activity at their centers—activity now known to be caused by the same mechanism as the quasars. Glossary quasar: an object of very high redshift that looks like a star but is extragalactic and highly luminous; also called a quasi-stellar object, or QSO active galactic nuclei (AGN): galaxies that are almost as luminous as quasars and share many of their properties, although to a less spectacular degree; abnormal amounts of energy are produced in their centers active galaxies: galaxies that house active galactic nuclei
Limit on num_vars?

I have a strange issue. I have added four new parameters to cosmomc, putting them in parameter positions 14, 15, 16 and 17. This has moved \Omega_\Lambda, Age/Gyr, \Omega_m, \sigma_8, z_{re} and H_0 to parameter positions 18, 19, 20, 21, 22, and 24. The problem is when I run getdist, H_0 is missing in the outputs: test.margstats only goes up to z_{re}. The value of num_vars is 11, meaning H_0 is not given in the output (num_vars should be 12). I have edited distparams.ini so that lab24 = H_0 and plotparams_num = 0. I've checked the chains, and they do contain H_0. Hence I was wondering if there is some limit on num_vars set elsewhere? What have I missed?
Example 7-7: Spouse Data (Question 1) Section

Question 1: Do the husbands respond to the questions in the same way as their wives?

Before considering the multivariate case, let's review the univariate approach to answering this question. In this case, we will compare the responses to a single question.

Univariate Paired t-test Case: Consider comparing the responses to a single question. The notation is as follows:

\(X _ { 1 i }\) = response of husband i, the first member of pair i
\(X _ { 2 i }\) = response of wife i, the second member of pair i
\(\mu _ { 1 }\) = population mean for the husbands, the first population
\(\mu _ { 2 }\) = population mean for the wives, the second population

Note! It is completely arbitrary which population is considered the first population and which is considered the second population. It is just necessary to keep track of how they were labeled, so that we are consistent with our choice.

Our objective is to test the null hypothesis that the population means are equal against the alternative hypothesis that the means are not equal, as described in the expression below:

\(H_0\colon \mu_1 =\mu_2 \) against \(H_a\colon \mu_1 \ne \mu_2\)

In the univariate course you learned that this null hypothesis is tested as follows. First we define \(Y _ { i }\) to be the difference in responses for the \(i^{th}\) pair of observations; in this case, the difference between husband i and wife i. Likewise, we define \(\mu _ { Y }\) to be the population mean of these differences, which is the same as the difference between the population means \(\mu _ { 1 }\) and \(\mu _ { 2 }\), both as noted below:

\(Y_i = X_{1i}-X_{2i}\) and \(\mu_Y = \mu_1-\mu_2\)

Testing the null hypothesis of equal population means is then equivalent to testing the null hypothesis that \(\mu _ { Y }\) is equal to 0 against the general alternative that \(\mu _ { Y }\) is not equal to 0.
\(H_0\colon \mu_Y =0 \) against \(H_a\colon \mu_Y \ne 0\)

This hypothesis is tested using the paired t-test. We define \(\bar{y}\) to be the sample mean of the \(Y _ { i }\)'s:

\(\bar{y} = \dfrac{1}{n}\sum_{i=1}^{n}Y_i\)

We also define \(s^2_Y\) to be the sample variance of the \(Y _ { i }\)'s:

\(s^2_Y = \dfrac{\sum_{i=1}^{n}Y^2_i - (\sum_{i=1}^{n}Y_i)^2/n}{n-1}\)

We make the usual four assumptions in doing this:

The \(Y _ { i }\)'s have common mean \(\mu _ { Y }\).
Homoskedasticity: The \(Y _ { i }\)'s have common variance \(\sigma^2_Y\).
Independence: The \(Y _ { i }\)'s are independently sampled.
Normality: The \(Y _ { i }\)'s are normally distributed.

The test statistic is a t-statistic which is, in this case, equal to the sample mean divided by its standard error, as shown below:

\[t = \frac{\bar{y}}{\sqrt{s^2_Y/n}} \sim t_{n-1}\]

Under the null hypothesis \(H _ { o }\), this test statistic is t-distributed with n - 1 degrees of freedom, and we reject \(H _ { o }\) at level \(\alpha\) if the absolute value of the t-statistic exceeds the critical value from the t-distribution with n - 1 degrees of freedom evaluated at \(\alpha/2\):

\(|t| > t_{n-1, \alpha/2}\)
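As a concrete sketch (with invented response data for n = 10 couples — these numbers are illustrative, not from the actual spouse dataset), the paired t-test above can be computed by hand in Python and checked against `scipy.stats.ttest_rel`:

```python
import numpy as np
from scipy import stats

# Hypothetical husband/wife responses for n = 10 couples (illustrative only)
husband = np.array([4, 5, 3, 4, 5, 4, 2, 5, 4, 3], dtype=float)
wife    = np.array([4, 4, 4, 5, 5, 5, 3, 5, 5, 4], dtype=float)

y = husband - wife                 # Y_i = X_1i - X_2i
n = len(y)
ybar = y.mean()
s2_y = y.var(ddof=1)               # sample variance of the differences

t = ybar / np.sqrt(s2_y / n)       # paired t statistic, df = n - 1
p = 2 * stats.t.sf(abs(t), n - 1)  # two-sided p-value

# Sanity check against scipy's built-in paired test
t_scipy, p_scipy = stats.ttest_rel(husband, wife)
```

We reject \(H_o\) at level \(\alpha\) when \(|t|\) exceeds the \(t_{n-1, \alpha/2}\) critical value, or equivalently when the two-sided p-value falls below \(\alpha\).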
Example 8-1: Pottery Data (MANOVA) Section

Before carrying out a MANOVA, first check the model assumptions:

The data from group i have common mean vector \(\boldsymbol{\mu}_{i}\).
The data from all groups have common variance-covariance matrix \(\Sigma\).
Independence: The subjects are independently sampled.
Normality: The data are multivariate normally distributed.

Assumptions

Assumption 1: The data from group i have common mean vector \(\boldsymbol{\mu}_{i}\). This assumption says that there are no subpopulations with different mean vectors. Here, this assumption might be violated if the pottery collected from a single site were not homogeneous, so that sub-populations with different mean vectors existed within a site.

Assumption 3: Independence: The subjects are independently sampled. This assumption is satisfied if the assayed pottery are obtained by randomly sampling the pottery collected from each site. It would be violated if, for example, pottery samples were collected in clusters. In other applications, this assumption may be violated if the data were collected over time or space.

Assumption 4: Normality: The data are multivariate normally distributed.

Note! For large samples, the Central Limit Theorem says that the sample mean vectors are approximately multivariate normally distributed, even if the individual observations are not. For the pottery data, however, we have a total of only N = 26 observations, including only two samples from Caldicot. With such a small N, we cannot rely on the Central Limit Theorem.

Diagnostic procedures are based on the residuals, computed by taking the differences between the individual observations and the group means for each variable:

\(\hat{\epsilon}_{ijk} = Y_{ijk}-\bar{Y}_{i.k}\)

Thus, for each subject (or pottery sample in this case), residuals are defined for each of the p variables. Then, to assess normality, we apply the following graphical procedures: Plot the histograms of the residuals for each variable. Look for a symmetric distribution. Plot a matrix of scatter plots.
Look for elliptical distributions and outliers. Plot three-dimensional scatter plots. Look for elliptical distributions and outliers. If the histograms are not symmetric or the scatter plots are not elliptical, this would be evidence that the data are not sampled from a multivariate normal distribution, in violation of Assumption 4. In this case, a normalizing transformation should be considered.

Download the text file containing the data here: pottery.txt

Using SAS
The SAS program below will help us check this assumption. Download the SAS program here: potterya.sas. View the video explanation of the SAS code.

Using Minitab
Minitab procedures are not shown separately. These can be handled using procedures already known.

Histograms suggest that, except for sodium, the distributions are relatively symmetric. However, the histogram for sodium suggests that there are two outliers in the data. Both of these outliers are in Llanedyrn. The two outliers can also be identified from the matrix of scatter plots. Removal of the two outliers results in a more symmetric distribution for sodium.

The results of MANOVA can be sensitive to the presence of outliers. One approach to assessing this is to analyze the data twice, once with the outliers and once without them, and then compare the results for consistency. The following analyses use all of the data, including the two outliers.

Assumption 2: The data from all groups have common variance-covariance matrix \(\Sigma\). This assumption can be checked using Bartlett's test for homogeneity of variance-covariance matrices. To set up Bartlett's test, let \(\Sigma_{i}\) denote the population variance-covariance matrix for group i. Consider testing:

\(H_0\colon \Sigma_1 = \Sigma_2 = \dots = \Sigma_g\) against \(H_a\colon \Sigma_i \ne \Sigma_j\) for at least one \(i \ne j\)

Under the alternative hypothesis, at least two of the variance-covariance matrices differ in at least one of their elements.
Let

\(\mathbf{S}_i = \dfrac{1}{n_i-1}\sum\limits_{j=1}^{n_i}\mathbf{(Y_{ij}-\bar{y}_{i.})(Y_{ij}-\bar{y}_{i.})'}\)

denote the sample variance-covariance matrix for group i, and compute the pooled variance-covariance matrix

\(\mathbf{S}_p = \dfrac{\sum_{i=1}^{g}(n_i-1)\mathbf{S}_i}{\sum_{i=1}^{g}(n_i-1)}= \dfrac{\mathbf{E}}{N-g}\)

Bartlett's test is based on the following test statistic:

\(L' = c\left\{(N-g)\log |\mathbf{S}_p| - \sum_{i=1}^{g}(n_i-1)\log|\mathbf{S}_i|\right\}\)

where the correction factor is

\(c = 1-\dfrac{2p^2+3p-1}{6(p+1)(g-1)}\left\{\sum\limits_{i=1}^{g}\dfrac{1}{n_i-1}-\dfrac{1}{N-g}\right\}\)

The version of Bartlett's test considered in the lesson on the two-sample Hotelling's T-square is the special case where g = 2. Under the null hypothesis of homogeneous variance-covariance matrices, L' is approximately chi-square distributed with \(\dfrac{1}{2}p(p+1)(g-1)\) degrees of freedom. Reject \(H_0\) at level \(\alpha\) if

\(L' > \chi^2_{\frac{1}{2}p(p+1)(g-1),\alpha}\)

Example 8-2: Pottery Data Section

Using SAS
Here we will use the Pottery SAS program. Download the SAS program here: pottery2.sas. View the video explanation of the SAS code.

Using Minitab
Minitab procedures are not shown separately. These can be handled using procedures already known.

Analysis
We find no statistically significant evidence against the null hypothesis that the variance-covariance matrices are homogeneous (L' = 27.58; d.f. = 45; p = 0.98).

Notes
If we were to reject the null hypothesis of homogeneity of variance-covariance matrices, then we would conclude that assumption 2 is violated. MANOVA is not robust to violations of the assumption of homogeneous variance-covariance matrices. If the variance-covariance matrices are determined to be unequal, then the solution is to find a variance-stabilizing transformation. Note that the assumptions of homogeneous variance-covariance matrices and multivariate normality are often violated together.
Therefore, a normalizing transformation may also be a variance-stabilizing transformation.
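The Bartlett (Box's M) statistic above can be sketched in Python with NumPy/SciPy; the function name and the simulated data below are invented for illustration, and for the pottery data the text reports L' = 27.58 with 45 d.f. and p = 0.98:

```python
import numpy as np
from scipy import stats

def bartlett_box_m(groups):
    """Bartlett's (Box's M) test for homogeneity of variance-covariance
    matrices. `groups` is a list of (n_i x p) data arrays, one per group.
    Returns the corrected statistic L', its degrees of freedom, and the
    chi-square p-value."""
    g = len(groups)
    p = groups[0].shape[1]
    ns = np.array([len(x) for x in groups])
    N = ns.sum()

    # Group sample covariance matrices S_i and the pooled matrix S_p = E/(N-g)
    S = [np.cov(x, rowvar=False) for x in groups]
    Sp = sum((n - 1) * s for n, s in zip(ns, S)) / (N - g)

    # log-determinants via slogdet for numerical stability
    logdet = lambda m: np.linalg.slogdet(m)[1]
    L = (N - g) * logdet(Sp) - sum((n - 1) * logdet(s) for n, s in zip(ns, S))

    # Correction factor c from the text
    c = 1 - (2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (g - 1)) * (
        np.sum(1.0 / (ns - 1)) - 1.0 / (N - g))

    L_prime = c * L
    df = p * (p + 1) * (g - 1) // 2
    return L_prime, df, stats.chi2.sf(L_prime, df)

# Simulated example: g = 2 groups, p = 3 variables, common covariance by design
rng = np.random.default_rng(42)
L_prime, df, pval = bartlett_box_m([rng.normal(size=(30, 3)),
                                    rng.normal(size=(30, 3))])
```

Since the simulated groups share a common covariance matrix by construction, the test should typically not reject homogeneity here; the sketch simply exercises the formulas.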
Example 8-3: Pottery Data (MANOVA) Section

After we have assessed the assumptions, our next step is to proceed with the MANOVA.

Using SAS
This may be carried out using the Pottery SAS program below. Download the SAS program here: pottery.sas. View the video explanation of the SAS code.

Using Minitab
View the video below to see how to perform a MANOVA analysis on the pottery data using the Minitab statistical software application.

Analysis
The concentrations of the chemical elements depend on the site where the pottery sample was obtained \(\left( \Lambda ^ { \star } = 0.0123 ; F = 13.09 ; \mathrm { d.f. } = 15, 50 ; p < 0.0001 \right)\). It was found, therefore, that there are differences in the concentrations of at least one element between at least one pair of sites.

Question: How do the chemical constituents differ among sites?

Using SAS
A profile plot may be used to explore how the chemical constituents differ among the four sites. In a profile plot, the group means are plotted on the Y-axis against the variable names on the X-axis, connecting the dots for all means within each group. A profile plot for the pottery data is obtained using the SAS program below. Download the SAS program here: pottery1.sas. View the video explanation of the SAS code.

Using Minitab
Not supported in Minitab.

Analysis
Results from the profile plots are summarized as follows:

The sample sites appear to be paired: Ashley Rails with Isle Thorns, and Caldicot with Llanedyrn.
Ashley Rails and Isle Thorns appear to have higher aluminum concentrations than Caldicot and Llanedyrn.
Caldicot and Llanedyrn appear to have higher iron and magnesium concentrations than Ashley Rails and Isle Thorns.
Calcium and sodium concentrations do not appear to vary much among the sites.

Note: These results are not backed up by appropriate hypothesis tests. Hypotheses need to be formed to answer specific questions about the data.
These should be considered only if significant differences among group mean vectors are detected in the MANOVA.

Specific Questions

Which chemical elements vary significantly across sites?
How do the sites differ? Is the mean chemical constituency of pottery from Ashley Rails and Isle Thorns different from that of Llanedyrn and Caldicot?
Is the mean chemical constituency of pottery from Ashley Rails equal to that of Isle Thorns?
Is the mean chemical constituency of pottery from Llanedyrn equal to that of Caldicot?

Analysis of Individual Chemical Elements

A naive approach to assessing the significance of individual variables (chemical elements) would be to carry out individual ANOVAs to test:

\(H_0\colon \mu_{1k} = \mu_{2k} = \dots = \mu_{gk}\)

for chemical k. Reject \(H_0 \) at level \(\alpha\) if \(F > F_{g-1, N-g, \alpha}\)

Problem: If we're going to repeat this analysis for each of the p variables, this does not control the experiment-wise error rate. Just as we can apply a Bonferroni correction to obtain confidence intervals, we can also apply a Bonferroni correction to assess the effects of group membership on the population means of the individual variables.

Bonferroni Correction: Reject \(H_0 \) at level \(\alpha\) if

\(F > F_{g-1, N-g, \alpha/p}\)

or, equivalently, if the p-value is less than \(α/p\).

Example 8-4: Pottery Data (ANOVA) Section

Using SAS
The results for the individual ANOVAs are output with the SAS program below. Download the SAS program here: pottery.sas. View the video explanation of the SAS code.

Using Minitab
Not supported in Minitab.

Analysis
Here, p = 5 variables, g = 4 groups, and a total of N = 26 observations. So, for an α = 0.05 level test, we reject \(H_0\colon \mu_{1k} = \mu_{2k} = \dots = \mu_{gk}\) if \(F > F_{3,22,0.01} = 4.82\), or equivalently, if the p-value reported by SAS is less than 0.05/5 = 0.01. The results of the individual ANOVAs are summarized in the following table.
All tests are carried out with 3 and 22 degrees of freedom (the d.f. should always be noted when reporting these results).

Element   F       SAS p-value
Al        26.67   < 0.0001
Fe        89.88   < 0.0001
Mg        49.12   < 0.0001
Ca        29.16   < 0.0001
Na         9.50   0.0003

Because all of the F-statistics exceed the critical value of 4.82, or equivalently, because the SAS p-values all fall below 0.01, we can see that all tests are significant at the 0.05 level under the Bonferroni correction. Conclusion: The means for all chemical elements differ significantly among the sites. For each element, the means for that element differ for at least one pair of sites.
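The critical value quoted above is easy to reproduce outside the F-tables; a minimal check with SciPy (assuming SciPy is available; the course itself uses SAS):

```python
# Reproduce the Bonferroni-adjusted ANOVA critical value quoted above.
from scipy.stats import f

p, g, N, alpha = 5, 4, 26, 0.05            # variables, groups, observations
# Bonferroni: test each of the p ANOVAs at level alpha / p = 0.01
crit = f.ppf(1 - alpha / p, g - 1, N - g)  # upper 0.01 point of F_{3, 22}
print(round(crit, 2))  # 4.82
```

Any F-statistic in the table above can then be compared directly against `crit`.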
While attempting to evaluate the integral $\int_{0}^{\frac{\pi}{2}}\sinh^{-1}{\left(\sqrt{\sin{x}}\right)}\,\mathrm{d}x$, I stumbled upon the following representation for a related integral in terms of hypergeometric functions: $$\small{\int_{0}^{1}\frac{x\sinh^{-1}{x}}{\sqrt{1-x^4}}\,\mathrm{d}x\stackrel{?}{=}\frac{\Gamma{\left(\frac34\right)}^2}{\sqrt{2\pi}}\,{_4F_3}{\left(\frac14,\frac14,\frac34,\frac34;\frac12,\frac54,\frac54;1\right)}-\frac{\Gamma{\left(\frac14\right)}^2}{72\sqrt{2\pi}}{_4F_3}{\left(\frac34,\frac34,\frac54,\frac54;\frac32,\frac74,\frac74;1\right)}}.$$ I'm having some trouble wading through the algebraic muckity-muck, so I'd like help confirming the above conjectured identity. More importantly, can these hypergeometrics be simplified in any significant way? The "niceness" of the parameters really makes me suspect it can be... Any thoughts or suggestions would be appreciated. Cheers!
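Pending an algebraic confirmation, the conjecture can at least be stress-tested numerically. The sketch below assumes mpmath is available; `hyp4f3` is my own helper (not a library function) that sums the two convergent series at unit argument with mpmath's extrapolating `nsum`, and compares against direct quadrature of the integral:

```python
# Numerical sanity check of the conjectured identity: evidence, not a proof.
from mpmath import mp, mpf, quad, asinh, sqrt, gamma, pi, rf, fac, nsum, inf

mp.dps = 25  # working precision

def hyp4f3(a, b):
    # direct series for 4F3(a; b; 1); the terms decay only polynomially
    # here, so let nsum's extrapolation accelerate the tail
    t = lambda n: (rf(a[0], n) * rf(a[1], n) * rf(a[2], n) * rf(a[3], n)
                   / (rf(b[0], n) * rf(b[1], n) * rf(b[2], n) * fac(n)))
    return nsum(t, [0, inf])

# left-hand side: the integral (integrable singularity at x = 1)
lhs = quad(lambda x: x * asinh(x) / sqrt(1 - x**4), [0, 1])

# right-hand side: the two hypergeometric terms from the conjecture
q = lambda k: mpf(k) / 4  # shorthand for quarter-integer parameters
rhs = (gamma(q(3))**2 / sqrt(2*pi)
           * hyp4f3([q(1), q(1), q(3), q(3)], [mpf(1)/2, q(5), q(5)])
       - gamma(q(1))**2 / (72 * sqrt(2*pi))
           * hyp4f3([q(3), q(3), q(5), q(5)], [mpf(3)/2, q(7), q(7)]))

print(lhs - rhs)  # should be ~0 if the conjecture holds
```

Agreement to many digits would not prove the identity, but a mismatch would refute it immediately.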
I was trying to prove this trigonometric identity, it looks like using the elementary relations should be enough, but I still can't find how: $$\frac{1}{2}\sin^2 a \ \sin^2 b + \cos^2a \ \cos^2 b = \frac{1}{3} + \frac{2}{3} \biggl( \frac{3}{2}\cos^2 a - \frac{1}{2} \biggr)\biggl( \frac{3}{2}\cos^2 b - \frac{1}{2}\biggr)$$ Thank you! (taken from Celestial Mechanics)
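One quick machine check (no substitute for the hand proof the question asks for, but reassuring) is to eliminate the sines via \(\sin^2 = 1 - \cos^2\) and let a CAS expand the difference of the two sides:

```python
# CAS check that the two sides agree identically.
import sympy as sp

a, b = sp.symbols('a b', real=True)
half = sp.Rational(1, 2)
lhs = half*sp.sin(a)**2*sp.sin(b)**2 + sp.cos(a)**2*sp.cos(b)**2
rhs = sp.Rational(1, 3) + sp.Rational(2, 3) \
    * (sp.Rational(3, 2)*sp.cos(a)**2 - half) \
    * (sp.Rational(3, 2)*sp.cos(b)**2 - half)

# eliminate sin^2 via sin^2 = 1 - cos^2, leaving a polynomial in cos^2
diff = (lhs - rhs).subs({sp.sin(a)**2: 1 - sp.cos(a)**2,
                         sp.sin(b)**2: 1 - sp.cos(b)**2})
print(sp.expand(diff))  # 0
```

The substitution also hints at the elementary proof: writing both sides in terms of \(\cos^2 a\) and \(\cos^2 b\) reduces each to \(\tfrac12 - \tfrac12\cos^2 a - \tfrac12\cos^2 b + \tfrac32\cos^2 a\,\cos^2 b\).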
I have a dream. I want my maths writing to magically be made into a .tex file so that I can edit it. I want to write my papers, my exams, my lecture notes, everything, by hand, then wave a magic wand to convert them to something pretty (and editable!). For example, could I scan in my writing and run some program which will do this for me? Or would I be able to use the Galaxy Note or Surface Pro (both of which come with usable styluses (styli?), and the Note can even read my handwriting!). Or is my dream not going to be realised just yet... Just to be precise, I want something to do the following: when I write the following (on paper, on a tablet - I don't care!)... Equations: $2+3x-5=6$ Blackboard-bold: $\mathbb{R}$ for the reals Maybe \mathcal also works $\mathcal{C}$? Maths on its own line is a must! $$e^{\pi i}=-1$$ And noticing the stuff below! Subscripts and superscripts, sets and other common things. $$S=\{x_i: x_i^2\in\mathbb{Q}\}$$ Maybe I am dreaming a bit much with this one, how about rendering the "G" in "$G$ is a group"? Works with align: $$\begin{align*} 3&=1+2\\ &=2+1\\ &=1+1+1 \end{align*}$$ Matrices: $$\left( \begin{array}{ccc} 1&0&0\\ 0&1&0\\ 0&0&1 \end{array} \right)$$ Finally, I want to prove theorems so it'd better recognise what is coming next... Theorem 1 (A. Theorem). This is a theorem. Proof. This is a proof. ∎ ...then the magic wand will yield a .tex document with the following code (with suitable environments defined etc.)
\begin{enumerate}
\item Equations: $2+3x-5=6$
\item Blackboard-bold: $\mathbb{R}$ for the reals
\item Maybe \mathcal also works $\mathcal{C}$?
\item Maths on its own line is a must!
\[e^{\pi i}=-1\]
And noticing the stuff below!
\item Subscripts and superscripts, sets and other common things.
\[S=\{x_i: x_i^2\in\mathbb{Q}\}\]
\item Maybe I am dreaming a bit much with this one, how about rendering the "G" in "$G$ is a group"?
\item Works with align:
\begin{align*}
3&=1+2\\
&=2+1\\
&=1+1+1
\end{align*}
\item Matrices:
\[\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&1
\end{array}\right)\]
\item Finally, I want to prove theorems so it'd better recognise what is coming next...
\end{enumerate}
\begin{theorem}[A. Theorem]
This is a theorem.
\end{theorem}
\begin{proof}
This is a proof.
\end{proof}
Note that the following two questions are relevant, but do not answer the above question. They are both rather outdated.
Statistics Theory
New submissions for Mon, 23 Sep 19
[1] arXiv:1909.09302 [pdf, ps, other]
Title: Robust Estimation and Shrinkage in Ultrahigh Dimensional Expectile Regression with Heavy Tails and Variance Heterogeneity
Subjects: Statistics Theory (math.ST); Methodology (stat.ME)
High-dimensional data subject to heavy-tailed phenomena and heterogeneity are commonly encountered in various scientific fields and bring new challenges to classical statistical methods. In this paper, we combine the asymmetric squared loss with a Huber-type robust technique to develop robust expectile regression for ultrahigh dimensional heavy-tailed heterogeneous data. Unlike the classical Huber method, we introduce two different tuning parameters, one on each side, to account for possible asymmetry, and we allow them to diverge in order to reduce the bias induced by the robust approximation. In the regularized framework, we adopt a folded concave penalty function such as the SCAD or MCP penalty for bias reduction. We investigate the finite-sample properties of the corresponding estimator and show how our method trades off estimation accuracy against heavy-tailed distributions. Also, noting that the robust asymmetric loss function is everywhere differentiable, we build on our theoretical study to propose an efficient first-order optimization algorithm after a local linear approximation of the non-convex problem. Simulation studies under various distributions demonstrate the satisfactory performance of our method in coefficient estimation, model selection, and heterogeneity detection.
[2] arXiv:1909.09336 [pdf, ps, other]
Title: Applications of Generalized Maximum Likelihood Estimators to stratified sampling and post-stratification with many unobserved strata
Subjects: Statistics Theory (math.ST)
Consider the problem of estimating a weighted average of the means of $n$ strata, based on a random sample with realized $K_i$ observations from stratum $i, \; i=1,...,n$. This task is non-trivial in cases where for a significant portion of the strata the corresponding $K_i=0$. Such a situation may happen in post-stratification, when very fine stratification is desired. A fine stratification could be desired so that assumptions, or approximations, like Missing At Random conditional on strata, will be appealing. Fine stratification could also be desired in observational studies, when the goal is to estimate the average treatment effect by averaging the effects in small and homogeneous strata. Our approach is based on applying Generalized Maximum Likelihood Estimators (GMLE), and ideas related to Non-Parametric Empirical Bayes, in order to estimate the means of strata $i$ with corresponding $K_i=0$. There are no assumptions about a relation between the means of the unobserved strata (i.e., with $K_i=0$) and those of the observed strata. The performance of our approach is demonstrated both in simulations and on a real data set. Some consistency and asymptotic results are also provided.
[3] arXiv:1909.09438 [pdf, other]
Title: On the structure of exchangeable extreme-value copulas
Subjects: Statistics Theory (math.ST); Probability (math.PR); Methodology (stat.ME)
We show that the set of $d$-variate symmetric stable tail dependence functions, uniquely associated with exchangeable $d$-dimensional extreme-value copulas, is a simplex and determine its extremal boundary. The subset of elements which arises as $d$-margins of the set of $(d+k)$-variate symmetric stable tail dependence functions is shown to be proper for arbitrary $k \geq 1$.
Finally, we derive an intuitive and useful necessary condition for a bivariate extreme-value copula to arise as a bi-margin of an exchangeable extreme-value copula of arbitrarily large dimension, and thus to be conditionally iid.
[4] arXiv:1909.09517 [pdf, other]
Title: Multi-level Bayes and MAP monotonicity testing
Subjects: Statistics Theory (math.ST)
In this paper, we develop Bayes and maximum a posteriori probability (MAP) approaches to monotonicity testing. In order to simplify this problem, we consider a simple white Gaussian noise model, and with the help of the Haar transform we reduce it to the equivalent problem of testing positivity of the Haar coefficients. This approach makes it possible, in particular, to understand links between monotonicity testing and sparse vector detection, to construct new tests, and to prove their optimality without supplementary assumptions. The main idea in our construction of multi-level tests is based on some invariance properties of specific probability distributions. Along with Bayes and MAP tests, we also construct adaptive multi-level tests that are free of prior information about the sizes of non-monotonicity segments of the function.
Cross-lists for Mon, 23 Sep 19
[5] arXiv:1909.09261 (cross-list from stat.AP) [pdf, other]
Title: Posterior Contraction Rate of Sparse Latent Feature Models with Application to Proteomics
Subjects: Applications (stat.AP); Statistics Theory (math.ST); Methodology (stat.ME)
The Indian buffet process (IBP) and phylogenetic Indian buffet process (pIBP) can be used as prior models to infer latent features in a data set. The theoretical properties of these models are under-explored, however, especially in high-dimensional settings. In this paper, we show that under a mild sparsity condition, the posterior distribution of the latent feature matrix, generated via IBP or pIBP priors, converges to the true latent feature matrix asymptotically.
We derive the posterior convergence rate, referred to as the contraction rate. We show that the convergence holds even when the dimensionality of the latent feature matrix increases with the sample size, thereby making the posterior inference valid in high-dimensional settings. We demonstrate the theoretical results using computer simulation, in which the parallel-tempering Markov chain Monte Carlo method is applied to overcome computational hurdles. The practical utility of the derived properties is demonstrated by inferring the latent features in a reverse phase protein arrays (RPPA) dataset under the IBP prior model. Software and dataset reported in the manuscript are provided at this http URL
[6] arXiv:1909.09345 (cross-list from stat.ML) [pdf, other]
Title: Does SLOPE outperform bridge regression?
Comments: 51 pages, 18 figures
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Statistics Theory (math.ST)
A recently proposed SLOPE estimator (arXiv:1407.3824) has been shown to adaptively achieve the minimax $\ell_2$ estimation rate under high-dimensional sparse linear regression models (arXiv:1503.08393). Such minimax optimality holds in the regime where the sparsity level $k$, sample size $n$, and dimension $p$ satisfy $k/p \rightarrow 0$, $k\log p/n \rightarrow 0$. In this paper, we characterize the estimation error of SLOPE under the complementary regime where both $k$ and $n$ scale linearly with $p$, and provide new insights into the performance of SLOPE estimators. We first derive a concentration inequality for the finite sample mean square error (MSE) of SLOPE. The quantity that the MSE concentrates around takes a complicated and implicit form. With a delicate analysis of this quantity, we prove that among all SLOPE estimators, LASSO is optimal for estimating $k$-sparse parameter vectors that do not have tied non-zero components in the low noise scenario.
On the other hand, in the large noise scenario, the family of SLOPE estimators is sub-optimal compared with bridge regression such as the Ridge estimator.
[7] arXiv:1909.09528 (cross-list from math.OC) [pdf, other]
Title: Nonparametric learning for impulse control problems
Subjects: Optimization and Control (math.OC); Probability (math.PR); Statistics Theory (math.ST); Machine Learning (stat.ML)
One of the fundamental assumptions in stochastic control of continuous time processes is that the dynamics of the underlying (diffusion) process is known. In practice, however, this assumption is usually not fulfilled. On the other hand, over the last decades, a rich theory for nonparametric estimation of the drift (and volatility) of continuous time processes has been developed. The aim of this paper is to bring together techniques from stochastic control and methods from statistics for stochastic processes, in order to both learn the dynamics of the underlying process and control it in a reasonable way at the same time. More precisely, we study a long-term average impulse control problem, a stochastic version of the classical Faustmann timber harvesting problem. One of the problems that immediately arises is an exploration vs. exploitation trade-off, as is well known for problems in machine learning. We propose a way to deal with this issue by combining exploration and exploitation periods in a suitable way. Our main finding is that this construction can be based on the rates of convergence of estimators for the invariant density. Using this, we obtain that the average cumulated regret is of uniform order $O(T^{-1/3})$.
Replacements for Mon, 23 Sep 19
[8] arXiv:1710.09735 (replaced) [pdf, other]
[9] arXiv:1903.04306 (replaced) [pdf, ps, other]
[10] arXiv:1812.06282 (replaced) [pdf, ps, other]
Title: A Generalization of Hierarchical Exchangeability on Trees to Directed Acyclic Graphs
Comments: 34 pages, 10 figures.
Many presentation changes in this version, including more examples and reorganized proofs
Subjects: Probability (math.PR); Logic (math.LO); Statistics Theory (math.ST)
[11] arXiv:1905.09195 (replaced) [pdf, other]
[12] arXiv:1908.03442 (replaced) [pdf, other]
[13] arXiv:1908.06852 (replaced) [pdf, other]
To prove that some decision problem $P$ is NP-complete, my understanding is that it suffices to show that the problem is in NP [...] and to demonstrate that some known NP-complete problem $Q$ can be reduced to $P$ [...] by some kind of appropriate reduction in polynomial time. We can then say that we can solve $Q$ in polynomial time with an oracle for $P$. This is correct. So sure, I understand that with an oracle for 3-coloring, one could solve arbitrary instances of 3SAT in polynomial time. But why would I assume the converse holds, that I can solve arbitrary instances of 3-coloring provided an oracle for 3SAT? Yes, the converse holds, because 3SAT is also NP-complete. But that is a separate fact, which requires a separate proof. Of course, that proof exists: it's typically proven by taking an arbitrary NP Turing machine and writing a Boolean formula that says "That machine accepts its input." Indeed, something has to be proven NP-complete by this kind of method. We couldn't claim that all problems in NP can be reduced to complete problems just by reducing complete problems between themselves: at some point, you do have to prove that all NP problems do reduce to some specific complete problem. To see why we don't get the converse for free, consider reductions outside NP. Because NP$\,\subset\,$NEXP (nondeterministic exponential time), any problem in NP can be reduced to any NEXP-complete problem. However, we know that the converse reduction, from a NEXP-complete problem to an NP problem, cannot exist: that would imply that NP=NEXP, but we know that those two classes are different, by the time hierarchy theorem. The 3SAT $\leq_p$ 3-coloring reduction, of course, requires the use of special and restricted graphs where a 3-coloring exists iff a satisfying assignment for the 3SAT problem exists. Doesn't this conflict with statements (at least I thought I heard in CS classes) saying that an oracle for 3SAT would prove P = NP?
Might it not be the case that such an algorithm would be worthless for solving families of 3-coloring problems? What you (should have!) heard in class is that having a deterministic polynomial time algorithm for 3SAT would prove that P=NP. But remember what an oracle is. An oracle is an add-on to your Turing machine with the following property: if you write a 3SAT instance onto the oracle's tape, you can "push a button" and, in one time step, the oracle will tell you whether that formula is satisfiable or not. So, the fact that 3SAT is NP-complete means that NP is equal to the class of problems that can be solved in polynomial time by a deterministic Turing machine with an oracle for 3SAT. However, that is not P: P is the class of problems that can be solved in polynomial time by a deterministic Turing machine with no oracle at all. In symbols, you've proved that NP = $\mathrm{P}^{3\mathrm{SAT}}$, not that NP = P. However, if you can prove that 3SAT is in P, i.e., that there is a deterministic polynomial time algorithm (without using oracles) for 3SAT, then you have proven that P=NP. The reason is that, now, you can go back to your proof that NP = $\mathrm{P}^{3\mathrm{SAT}}$ and replace all the oracle calls with a "subroutine" that just solves the 3SAT instance you wrote to the oracle's tape. Doesn't this imply some kind of hierarchy of NP-complete problems? Not really, no. However, there is a hierarchy of problems within NP. I already alluded to that above by using the time hierarchy theorem to separate NP and NEXP. But the time hierarchy theorem is a little more powerful than that: it also tells us that NTIME($n^k$)$\neq$NTIME($n^{k+1}$). That is, for all $k$, there are things that you can do nondeterministically in $O(n^{k+1})$ steps that you can't do in $O(n^{k})$ steps (the same is true for deterministic machines). That gives you a hierarchy right away. In practice, you don't hear about this hierarchy very often.
I think the reason is that, by the time you've said that a problem is NP-complete, that already means that it's hard enough that you probably don't care about its exact complexity. For deterministic algorithms, we do care about the hierarchy: $\Theta(n^{10})$ is impractical, $\Theta(n^3)$ is OK on smallish datasets, $\Theta(n\log n)$ is great unless you're Google and $n$ is the whole web, etc. But for NP-complete problems, they're almost all at the "impractical" end so we normally don't bother to distinguish. (Footnote for the experts: yes, we often distinguish between NP-complete problems in terms of, say, fixed-parameter tractability and the W-hierarchy but I think that's not really the direction this question is going.)
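To make the "3-coloring is in NP" half of the argument above concrete, here is a hypothetical polynomial-time certificate checker (the function name and graph encoding are my own illustration, not from the answer): given a graph and a claimed coloring as the certificate, it verifies the claim in time linear in the number of edges, which is exactly what NP membership requires.

```python
# A certificate verifier for 3-coloring: runs in O(V + E) time.
def is_valid_3_coloring(edges, coloring):
    """edges: list of (u, v) pairs; coloring: dict vertex -> color in {0,1,2}."""
    if any(c not in (0, 1, 2) for c in coloring.values()):
        return False
    # a coloring is valid iff no edge joins two same-colored vertices
    return all(coloring[u] != coloring[v] for u, v in edges)

triangle = [(0, 1), (1, 2), (2, 0)]
print(is_valid_3_coloring(triangle, {0: 0, 1: 1, 2: 2}))  # True
print(is_valid_3_coloring(triangle, {0: 0, 1: 0, 2: 2}))  # False
```

Finding a valid coloring is the hard (NP-complete) direction; checking a proposed one, as above, is easy.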
The following shows two examples of constructing orthogonal contrasts. In each example, we consider balanced data; that is, there are equal numbers of observations in each group. Example 8-6: Section In some cases, it is possible to draw a tree diagram illustrating the hypothesized relationships among the treatments. In the following tree, we wish to compare 5 different populations of subjects. Prior to collecting the data, we may have reason to believe that populations 2 and 3 are most closely related. Populations 4 and 5 are also closely related, but not as close as populations 2 and 3. Population 1 is closer to populations 2 and 3 than to populations 4 and 5. Each branch (denoted by the letters A, B, C, and D) corresponds to a hypothesis we may wish to test. This yields the contrast coefficients shown in each row of the following table:

Contrasts   1   2   3   4   5
A   \(\dfrac{1}{3}\)   \(\dfrac{1}{3}\)   \(\dfrac{1}{3}\)   \(-\dfrac{1}{2}\)   \(-\dfrac{1}{2}\)
B   1   \(-\dfrac{1}{2}\)   \(-\dfrac{1}{2}\)   0   0
C   0   0   0   1   -1
D   0   1   -1   0   0

Consider Contrast A. Here, we are comparing the mean of all subjects in populations 1, 2, and 3 to the mean of all subjects in populations 4 and 5. Note! The first group of populations (1, 2, and 3) has contrast coefficients with positive signs, while the second group (4 and 5) has negative signs. Because there are 3 populations in the first group, each population gets a coefficient of +1/3. Because there are 2 populations in the second group, each population gets a coefficient of -1/2. For Contrast B, we compare population 1 (receiving a coefficient of +1) with the mean of populations 2 and 3 (each receiving a coefficient of -1/2). Multiplying the corresponding coefficients of contrasts A and B, we obtain: (1/3) × 1 + (1/3) × (-1/2) + (1/3) × (-1/2) + (-1/2) × 0 + (-1/2) × 0 = 1/3 - 1/6 - 1/6 + 0 + 0 = 0 So contrasts A and B are orthogonal.
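The orthogonality checks for all six pairs can be done mechanically; a small sketch (the arrays simply transcribe the contrast table for this example):

```python
# Verify that the four tree-based contrasts are valid and mutually
# orthogonal (balanced data, so plain dot products suffice).
from itertools import combinations
import numpy as np

contrasts = {
    "A": np.array([1/3, 1/3, 1/3, -1/2, -1/2]),
    "B": np.array([1, -1/2, -1/2, 0, 0]),
    "C": np.array([0, 0, 0, 1, -1]),
    "D": np.array([0, 1, -1, 0, 0]),
}

# each row sums to zero (a valid contrast) ...
for c in contrasts.values():
    assert abs(c.sum()) < 1e-12
# ... and every pair has zero dot product (orthogonal)
for (name1, c1), (name2, c2) in combinations(contrasts.items(), 2):
    print(name1, name2, round(float(c1 @ c2), 12))
```

Each printed dot product is 0, matching the hand computation shown above for the pair A, B.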
Similar computations can be carried out to confirm that all remaining pairs of contrasts are orthogonal to one another. Example 8-7: Section Consider the factorial arrangement of drug type and drug dose treatments:

Drug   Low   High
A      1     2
B      3     4

Here, treatment 1 is equivalent to a low dose of drug A, treatment 2 is equivalent to a high dose of drug A, etc. For this factorial arrangement of drug type and drug dose treatments, we can form the orthogonal contrasts:

Contrasts     A, Low   A, High   B, Low   B, High
Drug          \(-\dfrac{1}{2}\)   \(-\dfrac{1}{2}\)   \(\dfrac{1}{2}\)   \(\dfrac{1}{2}\)
Dose          \(-\dfrac{1}{2}\)   \(\dfrac{1}{2}\)   \(-\dfrac{1}{2}\)   \(\dfrac{1}{2}\)
Interaction   \(\dfrac{1}{2}\)   \(-\dfrac{1}{2}\)   \(-\dfrac{1}{2}\)   \(\dfrac{1}{2}\)

To test for the effects of drug type, we give coefficients with a negative sign for drug A, and positive signs for drug B. Because there are two doses within each drug type, the coefficients take values of plus or minus 1/2. Similarly, to test for the effects of drug dose, we give coefficients with negative signs for the low dose, and positive signs for the high dose. Because there are two drugs for each dose, the coefficients take values of plus or minus 1/2. The final test considers the null hypothesis that the effect of the drug does not depend on dose, or conversely, that the effect of the dose does not depend on the drug. In either case, we are testing the null hypothesis that there is no interaction between drug and dose. The coefficients for this interaction are obtained by multiplying the signs of the coefficients for drug and dose. Thus, for drug A at the low dose, we multiply "-" (for the drug effect) times "-" (for the dose effect) to obtain "+" (for the interaction). Similarly, for drug A at the high dose, we multiply "-" (for the drug effect) times "+" (for the dose effect) to obtain "-" (for the interaction). The remaining coefficients are obtained similarly.
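The sign-multiplication rule for the interaction row lends itself to a one-line check (the arrays transcribe the factorial contrast table above):

```python
# Interaction coefficients = product of drug and dose signs, magnitude 1/2.
import numpy as np

drug = np.array([-0.5, -0.5, 0.5, 0.5])  # order: A,Low  A,High  B,Low  B,High
dose = np.array([-0.5, 0.5, -0.5, 0.5])

interaction = np.sign(drug) * np.sign(dose) * 0.5
print(interaction)  # [ 0.5 -0.5 -0.5  0.5]

# the three contrasts are mutually orthogonal
assert drug @ dose == 0 and drug @ interaction == 0 and dose @ interaction == 0
```

This reproduces the Interaction row of the table and confirms that all three rows are pairwise orthogonal.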
Example 8-8: Pottery Data Section Recall the specific questions: Does the mean chemical content of pottery from Ashley Rails and Isle Thorns equal that of pottery from Caldicot and Llanedyrn? Does the mean chemical content of pottery from Ashley Rails equal that of pottery from Isle Thorns? Does the mean chemical content of pottery from Caldicot equal that of pottery from Llanedyrn? These questions correspond to the following theoretical relationships among the sites: The relationships among sites suggested in the above figure suggest the following contrasts:

Contrasts   Ashley Rails   Caldicot   Isle Thorns   Llanedyrn
1   \(\dfrac{1}{2}\)   \(-\dfrac{1}{2}\)   \(\dfrac{1}{2}\)   \(-\dfrac{1}{2}\)
2   1   0   -1   0
3   0   1   0   -1
\(n_i\)   5   2   5   14

Notes Contrasts 1 and 2 are orthogonal: \[\sum_{i=1}^{g} \frac{c_id_i}{n_i} = \frac{0.5 \times 1}{5} + \frac{(-0.5)\times 0}{2}+\frac{0.5 \times (-1)}{5} +\frac{(-0.5)\times 0}{14} = 0\] However, contrasts 1 and 3 are not orthogonal: \[\sum_{i=1}^{g} \frac{c_id_i}{n_i} = \frac{0.5 \times 0}{5} + \frac{(-0.5)\times 1}{2}+\frac{0.5 \times 0}{5} +\frac{(-0.5)\times (-1) }{14} = -\frac{6}{28}\] Solution: Instead of estimating the mean of pottery collected from Caldicot and Llanedyrn by \[\frac{\mathbf{\bar{y}_2+\bar{y}_4}}{2}\] we can weight by sample size: \[\frac{n_2\mathbf{\bar{y}_2}+n_4\mathbf{\bar{y}_4}}{n_2+n_4} = \frac{2\mathbf{\bar{y}}_2+14\bar{\mathbf{y}}_4}{16}\] Similarly, the mean of pottery collected from Ashley Rails and Isle Thorns may be estimated by \[\frac{n_1\mathbf{\bar{y}_1}+n_3\mathbf{\bar{y}_3}}{n_1+n_3} = \frac{5\mathbf{\bar{y}}_1+5\bar{\mathbf{y}}_3}{10} = \frac{8\mathbf{\bar{y}}_1+8\bar{\mathbf{y}}_3}{16}\] This yields the Orthogonal Contrast Coefficients:

Contrasts   Ashley Rails   Caldicot   Isle Thorns   Llanedyrn
1   \(\dfrac{8}{16}\)   \(-\dfrac{2}{16}\)   \(\dfrac{8}{16}\)   \(-\dfrac{14}{16}\)
2   1   0   -1   0
3   0   1   0   -1
\(n_i\)   5   2   5   14

Using SAS The inspect button below will walk through how these contrasts are implemented in
the SAS program. Download the SAS program here: pottery.sas View the video explanation of the SAS code. Using Minitab Orthogonal contrasts for MANOVA are not available in Minitab at this time. Analysis The following table of estimated contrasts is obtained:

Element   \(\widehat { \Psi } _ { 1 }\)   \(\widehat { \Psi } _ { 2 }\)   \(\widehat { \Psi } _ { 3 }\)
Al         5.29   -0.86   -0.86
Fe        -4.64   -0.20   -0.96
Mg        -4.06   -0.07   -0.97
Ca        -0.17    0.03    0.09
Na        -0.17   -0.01   -0.20

These results suggest: Pottery from Ashley Rails and Isle Thorns has higher aluminum and lower iron, magnesium, calcium, and sodium concentrations than pottery from Caldicot and Llanedyrn. Pottery from Ashley Rails has higher calcium and lower aluminum, iron, magnesium, and sodium concentrations than pottery from Isle Thorns. Pottery from Caldicot has higher calcium and lower aluminum, iron, magnesium, and sodium concentrations than pottery from Llanedyrn. Note! These suggestions have yet to be backed up by appropriate hypothesis tests.
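The unequal-sample-size orthogonality criterion used in this example is quick to restate numerically (coefficients and group sizes transcribed from the tables in this example):

```python
# Orthogonality under unequal group sizes: sum of c_i * d_i / n_i.
import numpy as np

n = np.array([5, 2, 5, 14])  # Ashley Rails, Caldicot, Isle Thorns, Llanedyrn

def weighted_ip(c, d):
    """Weighted inner product; two contrasts are orthogonal iff this is 0."""
    return float(np.sum(c * d / n))

c1 = np.array([0.5, -0.5, 0.5, -0.5])  # naive contrast 1
c2 = np.array([1, 0, -1, 0])
c3 = np.array([0, 1, 0, -1])
print(weighted_ip(c1, c2))  # 0.0: orthogonal
print(weighted_ip(c1, c3))  # -6/28, about -0.214: not orthogonal

c1w = np.array([8, -2, 8, -14]) / 16   # sample-size-weighted contrast 1
print(weighted_ip(c1w, c2), weighted_ip(c1w, c3))  # both 0.0
```

Replacing the naive half-half weights by the sample-size weights is exactly what restores orthogonality against contrasts 2 and 3.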
I'm trying to understand how people actually measure decay constants that are discussed in meson decays. As a concrete example let's consider the pion decay constant. The amplitude for $\pi ^-$ decay is given by, \begin{equation} \big\langle 0 | T \exp \left[ i \int \,d^4x {\cal H} \right] | \pi ^- ( p _\pi ) \big\rangle \end{equation} To lowest order this is given by, \begin{equation} i \int \,d^4x \left\langle 0 | T W _\mu J ^\mu | \pi ^- ( p _\pi ) \right\rangle \end{equation} If we square this quantity and integrate over phase space then we will get the decay rate. On the other hand, the pion decay constant is defined through, \begin{equation} \left\langle 0 | J ^\mu | \pi ^- \right\rangle = - i f _\pi p _\pi ^\mu \end{equation} This is clearly related to the above, but it seems to me there are a couple of subtleties. How do we get rid of the time-ordering symbol? Since we don't have a value for $ W _\mu $ how can we go ahead and extract $f _\pi $ ?
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover consisting of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact. I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure. The above workings are basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure; that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure. What I hope from such a more direct computation is to get deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set. Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set. Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$ which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why by doing what we have done we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$? $\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check it is convergent, and then compute its value The above workings are basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure What I hope from such a more direct computation is to get deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set Alessandro: and typo for the third $\Bbb{I}$ in the quote, which should be $\Bbb{Q}$ (cont.) We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series Next, we observed the terms in the alternating series are monotonically increasing and bounded from above and below by b and a respectively Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals, and together, let the differences be $c_i = q_{n(i)} - q_{m(i)}$. These form a series that is bounded from above and below Hence (also typo in the subscript just above): $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$ Consider the partial sums of the above series. Note every partial sum is telescoping since in a finite series, addition associates and thus we are free to cancel out. By the construction of the cover $C$ every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum, by moving through the stages of the constructions of $C$ i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable sequence is also telescoping and: @AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing but re-expressed differently. If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$ you can't conclude anything about the topologies, if however the function is continuous, then you can say stuff about the topologies @Overflow2341313 Could you send a picture or a screenshot of the problem? nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals.
This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals; thus they are empty and do not contribute to the sum So there are only countably many disjoint intervals in the cover $C$ @Perturbative Okay similar problem if you don't mind guiding me in the right direction. If a function f exists, with the same setup (X, t) -> (Y,S), that is 1-1, open, and continuous but not onto construct a topological space which is homeomorphic to the space (X, t). Simply restrict the codomain so that it is onto? Making it bijective and hence invertible. hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird... In a schematic, we have the following, I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set: @Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology t on R such that f: R, U -> R, t defined by f(x) = x^2 is an open map where U is the "usual" topology defined by U = {x in U | x in U implies that x in (a,b) \subseteq U}. To do this... the smallest t can be is the trivial topology on R - {\emptyset, R} But, we required that everything in U be in t under f?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse I'm not sure if adding the additional condition that $f$ is an open map will make any difference For those who are not very familiar with this interest of mine: besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships An element of a proof space is a proof, which consists of steps forming a path in this space For that I have a postulate: given two paths A and B in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or that $\phi$ is unprovable via $B$ under the current formal system Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
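The asymptotic claim above is easy to probe numerically before attempting a proof; here is a small sketch (function names are mine, not from the discussion). By Cauchy-Schwarz the left sum never exceeds the right one, so the ratio sits below 1 and should creep toward 1 as K grows:

```python
from math import comb

def lhs(K, c, z):
    """sum_{n=0}^{K-c} C(K,n) C(K,n+c) z^(n + c/2), via a stable term recurrence."""
    term = comb(K, c) * z ** (c / 2)   # n = 0 term
    total = term
    for n in range(K - c):
        # term ratio: [C(K,n+1)/C(K,n)] * [C(K,n+c+1)/C(K,n+c)] * z
        term *= (K - n) / (n + 1) * (K - n - c) / (n + 1 + c) * z
        total += term
    return total

def rhs(K, z):
    """sum_{n=0}^{K} C(K,n)^2 z^n, using the same recurrence trick."""
    term, total = 1.0, 1.0             # n = 0 term
    for n in range(K):
        term *= ((K - n) / (n + 1)) ** 2 * z
        total += term
    return total

# for c = 0 the two sums agree exactly; for fixed c > 0 and z_K = K^(-alpha)
# the ratio lhs/rhs is below 1 (Cauchy-Schwarz) and approaches 1 as K grows
ratio = lambda K, c, alpha: lhs(K, c, K ** -alpha) / rhs(K, K ** -alpha)
```

The recurrence avoids forming huge binomial coefficients, which would overflow a float conversion for large K.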
Lesson 4: Confidence Intervals Objectives Construct and interpret sampling distributions using StatKey Explain the general form of a confidence interval Interpret a confidence interval Explain the process of bootstrapping Construct bootstrap confidence intervals using the standard error method Construct bootstrap confidence intervals using the percentile method in StatKey Construct bootstrap confidence intervals using Minitab Express Describe how sample size impacts a confidence interval This lesson corresponds to Chapter 3 in the Lock 5 textbook. In Lesson 2 and Lesson 3 you learned about descriptive statistics. Lesson 4 begins our coverage of inferential statistics, which use data from a sample to make an inference about a population. Confidence intervals use data collected from a sample to estimate a population parameter. Confidence Interval A range computed using sample statistics to estimate an unknown population parameter with a stated level of confidence In this lesson we will be working with the following statistics and parameters: Population Parameter Sample Statistic Mean \(\mu\) \(\overline x\) Difference in two means \(\mu_1 - \mu_2\) \(\overline x_1 - \overline x_2\) Proportion \(p\) \(\widehat p\) Difference in two proportions \(p_1 - p_2\) \(\widehat p_1 - \widehat p_2\) Correlation \(\rho\) \(r\) Slope (simple linear regression) \(\beta\) \(b\) Before we begin, let's review population parameters and sample statistics. Population parameters are fixed values. We rarely know the parameter values because it is often difficult to obtain measures from the entire population. Sample statistics are known values, but they are random variables because they vary from sample to sample. Example: Campus Commuters Section A survey is carried out at a university to estimate the proportion of undergraduate students who drive to campus to attend classes. One thousand students are randomly selected and asked whether or not they drive to campus to attend classes.
The population is all of the undergraduates at that university. The sample is the group of 1000 undergraduate students surveyed. The parameter is the true proportion of all undergraduate students at that university who drive to campus to attend classes. The statistic is the proportion of the 1000 sampled undergraduates who drive to campus to attend classes. Example: Annual Income in California Section A study is conducted to estimate the true mean annual income of all adult residents of California. The study randomly selects 2000 adult residents of California. The population consists of all adult residents of California. The sample is the 2000 residents in the study. The parameter is the true mean annual income of all adult residents of California. The statistic is the mean of the 2000 residents in this sample. Ultimately, we measure sample statistics and use them to draw conclusions about unknown population parameters. This is statistical inference.
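The percentile method named in the objectives can be sketched in plain Python (StatKey and Minitab Express do this for you; the function and data below are an illustrative sketch, not from the lesson):

```python
import random

def bootstrap_percentile_ci(sample, stat, n_boot=10000, conf=0.95, seed=0):
    """Percentile-method bootstrap confidence interval for any sample statistic."""
    rng = random.Random(seed)
    n = len(sample)
    # resample with replacement, compute the statistic each time, then sort
    boots = sorted(stat([rng.choice(sample) for _ in range(n)])
                   for _ in range(n_boot))
    lo = boots[int((1 - conf) / 2 * n_boot)]
    hi = boots[int((1 + conf) / 2 * n_boot) - 1]
    return lo, hi

# illustrative data: 650 of 1000 sampled students drive to campus (p-hat = 0.65)
drives = [1] * 650 + [0] * 350
p_hat = lambda xs: sum(xs) / len(xs)
ci = bootstrap_percentile_ci(drives, p_hat, n_boot=2000)
```

With a sample proportion of 0.65 and n = 1000, the interval comes out near (0.62, 0.68), in line with the standard-error method's estimate of roughly plus-or-minus two standard errors.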
Generalized Least Squares Summary The Generalized Least Squares model is a regression formulation which does not assume that the model errors/residuals are independent of each other. Rather, it borrows from the Gaussian Process paradigm and assigns a covariance structure to the model residuals. Warning The nomenclature Generalized Least Squares (GLS) and Generalized Linear Models (GLM) can cause much confusion. It is important to remember the context of both. GLS refers to relaxing the independence-of-residuals assumption, while GLM refers to Ordinary Least Squares (OLS) based models which are extended to model regression, counts, or classification tasks. Formulation Let \(\mathbf{X} \in \mathbb{R}^{n\times m}\) be a matrix containing data attributes. The GLS model builds a linear predictor of the target quantity of the form \(\hat{y} = \varphi(\mathbf{x})^{\intercal}\mathbf{\beta}\), where \(\varphi(.): \mathbb{R}^m \rightarrow \mathbb{R}^d\) is a feature mapping, \(\mathbf{y} \in \mathbb{R}^n\) is the vector of output values found in the training data set and \(\mathbf{\beta} \in \mathbb{R}^d\) is a set of regression parameters. In the GLS framework, it is assumed that the model errors \(\varepsilon \in \mathbb{R}^n\) follow a multivariate Gaussian distribution given by \(\mathbb{E}[\varepsilon|\mathbf{X}] = 0\) and \(\operatorname{Var}[\varepsilon|\mathbf{X}] = \mathbf{\Omega}\), where \(\mathbf{\Omega}\) is a symmetric positive semi-definite covariance matrix. In order to calculate the model parameters \(\mathbf{\beta}\), the log-likelihood of the training data outputs must be maximized with respect to the parameters \(\mathbf{\beta}\), which is equivalent to minimizing \((\mathbf{y} - \mathbf{\Phi}\mathbf{\beta})^{\intercal}\mathbf{\Omega}^{-1}(\mathbf{y} - \mathbf{\Phi}\mathbf{\beta})\), with \(\mathbf{\Phi}\) the matrix of mapped features. For the GLS problem the analytical solution of the above optimization problem can be calculated: \(\hat{\mathbf{\beta}} = (\mathbf{\Phi}^{\intercal}\mathbf{\Omega}^{-1}\mathbf{\Phi})^{-1}\mathbf{\Phi}^{\intercal}\mathbf{\Omega}^{-1}\mathbf{y}\). GLS Models You can create a GLS model using the GeneralizedLeastSquaresModel class.
//Get the training data
val data: Stream[(DenseVector[Double], Double)] = _

//Define a feature mapping.
//If it is not defined the GLS model
//will assume an identity feature map.
val feature_map: DenseVector[Double] => DenseVector[Double] = _

//Initialize a kernel function.
val kernel: LocalScalarKernel[DenseVector[Double]] = _

//Construct the covariance matrix for model errors.
val covmat = kernel.buildBlockedKernelMatrix(data, data.length)

val gls_model = new GeneralizedLeastSquaresModel(
  data, covmat, feature_map)

//Train the model
gls_model.learn()
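The analytical solution in the formulation section can also be checked directly; here is a minimal NumPy sketch, independent of the Scala API above (function and variable names are mine):

```python
import numpy as np

def gls_fit(Phi, y, Omega):
    """Closed-form GLS estimate: beta = (Phi^T Omega^-1 Phi)^-1 Phi^T Omega^-1 y."""
    Oi = np.linalg.inv(Omega)
    return np.linalg.solve(Phi.T @ Oi @ Phi, Phi.T @ Oi @ y)

# sanity check on noiseless data: any SPD error covariance recovers beta exactly
rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = Phi @ beta_true

A = rng.normal(size=(50, 50))
Omega = A @ A.T + 50.0 * np.eye(50)   # symmetric positive definite
beta_hat = gls_fit(Phi, y, Omega)
```

With Omega equal to the identity this reduces to ordinary least squares, which is one quick way to validate an implementation.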
Recently I asked a question regarding how to mix a glossy and diffuse shader in my path tracer: Mix shader looks wrong on my path tracer. I thought it was incorrect, but a comparison between mine and Blender's seems correct. However, now I am lost as to how I would approach making the surface of a mirror look rough, as this example shows: I have done some research and discovered that there are different types of microfacet reflectance models, such as Beckmann and GGX. I've found some explanations of how to find the BRDF, such as this site: Specular BRDF, but I can't find any explanation of how to do the reflection ray with explicit lighting. As shown in the pseudo code of my path tracer (below), for every object I shoot a ray towards a light and use the surface transport rendering equation to incorporate the BRDF. I am assuming this is where the GGX/Beckmann BRDF would be plugged in. (I'm guessing it's not quite that simple though, and some probability must be involved.) What really gets me though is the reflection ray. For diffuse it's easy, because I just send off a random ray anywhere in the hemisphere of the surface normal. However, for specular there's a much sharper bump in the BRDF. How would I translate that into a reflected ray? If I just jittered the ideal reflection ray a little, that wouldn't relate to how the microfacets are modeled in the GGX/Beckmann reflection.
Explicit lighting equation from Peter Shirley's Realistic Ray Tracing: $$\large L_S(\mathbf x,\mathbf k_o)\approx\frac{\rho(\mathbf k_i,\mathbf k_o)L_e(\mathbf x',\mathbf x-\mathbf x')v(\mathbf x,\mathbf x')\cos\theta_i\cos\theta'}{p(\mathbf x')\left\lVert \mathbf x-\mathbf x'\right\rVert^2}$$ Where $p(\mathbf x)$ is the density function of the light triangle $1/\text{Area}$ $\mathbf x'$ is a random point on the light triangle $\mathbf x$ is the hit point on the object The cosines are the angles between the light's normal and the light ray, and the object's normal and the light ray $\rho$ is the BRDF ($1/\pi$ for ideal diffuse) And $v$ is either $1$ or $0$ depending on if it's in shadow Pseudo code:

rayColor(ray r, depth, int E=1){
    if(r doesn't hit triangle) return 0
    if(r hit is a light) {
        if(E) return light_emission
        else return 0
    }
    vector x = r.origin + r.direction*t // x is point where r hit tri
    vector n = normal where ray hit triangle
    n.normalize()
    vector nl = n.dot(r.d) < 0 ? n : n*(-1) // properly orient normal
    if(++depth > 5) return 0 // max bounces

    //----DIRECT LIGHTING-----
    float triangle_area = area of emitting triangle
    vector x_light_random = random point on emitting triangle
    vector light_normal = normal of emitting triangle
    vector d = x_light_random - x // unnormalized direction to the light sample
    if(light_normal.dot(d) > 0) light_normal *= -1; // make it emit both sides
    light_normal.normalize();
    BRDF = 1/PI // perfect diffuse
    light_emitted = 1 // emission of 1
    vector light_out = 0
    if(ray starting at x towards d hits light (i.e. not in shadow)) {
        // dividing by |d|^4 normalizes both dot products and
        // supplies the 1/|x - x'|^2 falloff from the equation above
        light_out = BRDF*light_emitted*(nl.dot(d))*
                    (-1*light_normal.dot(d)*triangle_area)/
                    (d.length*d.length*d.length*d.length)
    }
    vector direct_light = color_of_object_triangle*light_out;

    //----SPECULAR-----
    vector d2 = r.dir - n*2*n.dot(r.dir); // ideal mirror reflection
    vector light_color = 1 // white, since dielectrics don't tint the specular
    vector specular = light_color*rayColor(createRay(x, d2), depth)

    float P = 0.5; // 50/50 chance of mirror/diffuse
    if(erand(Xi) < P) return direct_light/P
    else return specular/(1-P)
}
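Not a full answer to the question, but for reference: the usual approach is to importance-sample a half-vector from the microfacet distribution and mirror the incoming ray about it, rather than jittering the ideal reflection. A sketch of the standard GGX half-vector sampling in a local frame with the surface normal at +z (helper names are mine):

```python
import math
import random

def sample_ggx_half_vector(alpha, rng=random):
    """Sample a microfacet half-vector h from the GGX normal distribution.

    alpha is the roughness; as alpha -> 0 the sampled h collapses onto the
    surface normal (0, 0, 1), recovering a perfect mirror."""
    u1, u2 = rng.random(), rng.random()
    theta = math.atan(alpha * math.sqrt(u1 / (1.0 - u1)))  # GGX CDF inversion
    phi = 2.0 * math.pi * u2
    st = math.sin(theta)
    return (st * math.cos(phi), st * math.sin(phi), math.cos(theta))

def reflect(d, h):
    """Mirror the incoming direction d about the half-vector h."""
    dot = sum(a * b for a, b in zip(d, h))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, h))
```

The sampled direction then has a known pdf, which must divide the BRDF term in the Monte Carlo estimator, exactly as the light-area pdf does in the explicit-lighting term above.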
I have a question from the paper "More Abelian Dualities in 2+1 Dimensions". https://arxiv.org/pdf/1609.04012.pdf On page 3, it says that we flow towards the IR by sending $\alpha\rightarrow\infty$ and tuning the mass to zero. $$S[\phi ;A]=\int d^{3}x |(\partial_{\mu}-iA_{\mu})\phi|^{2}-\alpha|\phi|^{4}.$$ My question is: why does this limit take the theory to the IR?
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only). Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon, but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$, right? For four proper fractions $a, b, c, d$, X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b, c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$. Can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$, does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be the Fitting subgroup of $G$.
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/\Phi(F)$, so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$: in this case $\Phi(F)=1$ and $\operatorname{Aut}(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write $G$ using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities for $F$ of order $pr$ and write $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how do we write $G$ using notations? It is also mentioned there that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$, and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group, one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check if it has some $u_i$ as a subword, in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straight-line homotopy, with the straight lines being hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk; it's all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
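The greedy rewriting procedure for a Dehn presentation described earlier in this exchange is short enough to sketch (the rules below are a toy free-reduction example of mine, not a hyperbolic-group presentation):

```python
def dehn_reduce(word, rules):
    """Repeatedly replace any occurrence of u_i by the shorter v_i.

    For a genuine Dehn presentation, the input represents the trivial element
    iff this loop terminates at the empty word; termination is guaranteed
    because every rewrite strictly shortens the word."""
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            if u in word:
                word = word.replace(u, v, 1)
                changed = True
                break
    return word

# toy rules: free reduction on one generator, writing A for a^-1
rules = [("aA", ""), ("Aa", "")]
```

Each pass either shortens the word or leaves it fixed, so the run time is bounded by the square of the word length times the number of rules.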
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
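The kernel claim in that last exchange can also be verified mechanically. A sketch representing polynomials as coefficient lists, where coeffs[k] is the coefficient of x^k (all helper names are mine):

```python
def diff(coeffs):
    """Formal derivative of a polynomial given as a coefficient list."""
    return [k * coeffs[k] for k in range(1, len(coeffs))]

def add(p, q):
    """Coefficient-wise sum of two polynomials."""
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0)
            for k in range(n)]

def mul(p, q):
    """Polynomial product by convolution of coefficient lists."""
    out = [0] * max(len(p) + len(q) - 1, 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def F(P):
    """F(P) = x P'' + (x + 1) P''' on R_3[x], as coefficient lists."""
    return add(mul([0, 1], diff(diff(P))), mul([1, 1], diff(diff(diff(P)))))
```

For P = a x^3 + b x^2 + c x + d, i.e. the list [d, c, b, a], F(P) works out to 6a x^2 + (6a + 2b) x + 6a, which vanishes exactly when a = b = 0 — confirming ker F = {cx + d}.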
A cloud has relative humidity of 100% or more. Is it correct to say that the mixing ratio of a cloud is undefined? My reasoning is that dry air in a cloud is negligible, therefore $w = \frac{x\ \mathrm{g}}{0\ \mathrm{kg}}$ No, it is not undefined. A relative humidity of 100% refers to cases in which the atmosphere holds as much water as the temperature allows. The term "dry air" denotes the fraction of the air not containing any water vapor (or explicitly not any water molecules). Dry air is therefore never negligible in clouds. The saturation vapor pressure over a plane water surface can be calculated by the Magnus formula (an empirical approximation to the Clausius-Clapeyron equation, valid in a temperature range from -45°C to 60°C): $E_{H_2O}(\vartheta)=6.112\ \mathrm{hPa} \cdot \exp\left(\frac{17.62 \cdot \vartheta}{243.12\,^{\circ}\mathrm{C} + \vartheta}\right)$, where $\vartheta$ is the temperature in °C. The water vapor mixing ratio $r$ itself is calculated by: $r=\frac{0.622e}{p-e}$, where $p$ is the air pressure and $e$ is the water vapor pressure. In the case of a cloud you equate $e$ and $E_{H_2O}(\vartheta)$, since the relative humidity is defined as $r.H.=\frac{e}{E_{H_2O}(\vartheta)}$. As a side note: supersaturation in clouds may occur frequently but is usually smaller than 1%, so for back-of-the-envelope calculations this should be sufficient. Please note that different textbooks use different symbols for $e, E, \dots$
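For completeness, the two formulas from the answer in code form (a sketch; I use the commonly cited Magnus constants 6.112 hPa, 17.62, and 243.12 °C):

```python
import math

def saturation_vapor_pressure(theta_c):
    """Magnus formula: saturation vapor pressure in hPa over a plane water
    surface, for temperature theta_c in deg C (empirically valid from
    roughly -45 C to 60 C)."""
    return 6.112 * math.exp(17.62 * theta_c / (243.12 + theta_c))

def mixing_ratio(e_hpa, p_hpa):
    """Water vapor mixing ratio r = 0.622 e / (p - e), kg per kg of dry air."""
    return 0.622 * e_hpa / (p_hpa - e_hpa)

# a saturated cloud parcel at 20 C and standard surface pressure:
e_s = saturation_vapor_pressure(20.0)   # about 23.3 hPa
r = mixing_ratio(e_s, 1013.25)          # about 0.0147 kg/kg - well defined
```

Since the vapor pressure is a few hPa against roughly 1000 hPa of total pressure, the dry-air denominator p - e never gets anywhere near zero, which is the quantitative version of the answer's point.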
Q1 What are the values of x for which \[\frac {x^3 + 3x^2 + 2x} { x^2 + 5x +6} = 0 \] Q2 Let x be the required Arithmetic Mean; then 8, x, 16 form three successive terms in the Arithmetic Progression. Find x. Q3 The sum of the first and third terms of a Geometric Progression is \[6\frac{1}{2}\] and the sum of the second and fourth terms is \[9\frac{3}{4}\]. Find the first term. Q4 The coordinates of the centre and the radius of the circle \[y^2 + x^2 - 14x - 8y + 56 = 0\] are __________ Q5 The sum of five numbers in an Arithmetic Progression is 25 and the sum of their squares is 165. Find the common difference. Q6 Let x be the required Geometric Mean (GM) between a and b. Then a, x, b are the successive terms in the Geometric Progression. Find the GM Q7 Let \[Z_1 = 12 + 5i,~~ Z_2 = 14 - 7i\]; express \[Z_1 Z_2\] in standard form Q8 The limiting value of \[\frac{n^{3}+5n^{2}+2}{2n^{3} + 9} \] as \[n\rightarrow \infty\] is __________ Q9 Suppose A = (1, 4), B = (4, 5), C = (5, 7); then \[ (A \times B) \cap (A \times C)\] is ____________ Q10 Let U = {1, 2, ..., 8, 9}, A = {1, 2, 3, 4}, B = {2, 4, 6, 8}, C = {3, 4, 5, 6}; then \[(A\cup B)^c\] Q11 A company sets up a sinking fund and invests N10,000 each year for 5 years at 9% compound interest; the worth of the fund after 5 years is __________ Q12 The smallest number of terms of the geometric series 8 + 24 + 72 + ... that will give a total greater than 6,000,000 is _______ Q13 The common ratio in a geometric series having first term 7, the last term 448, and the sum 889 is _________________ Q14 The number of terms in an A.P. whose first term is 5, common difference 3, and sum 55 is _____ Q15 Suppose S = a + (a + d) + (a + 2d) + ... + [a + (n - 1)d]; the sum of n terms is __________________ Q16 ________ is the value of n given that 77 is the nth term of the A.P \[3\frac{1}{2}, ~~7,~~ 10\frac{1}{2}, \ldots\] Q17 ________ is the limiting value of \[3x^3 + 5x^2 -
6\] as \[x\rightarrow -2\] Q18 The 23rd term of the A.P. -7, -3, +1, ... gives _______________ Q19 The 383rd term of the series 5 + 8 + 11 + … is ______________ Q20 The sum of five numbers in an A.P is 25 and the sum of their squares is 165. Find the numbers Q21 The sum of the first n terms of a series is \[2n^2 - n\]. Find the nth term of the series. Q22 Find the number of terms in an A.P. whose first term is 5, common difference 3, and sum 55. Q23 Let \[ Z_{1} = 5 + 12i, ~~ Z_{2} =3 + 4i\]. Find \[( Z_{1})^{2} - (Z_{2})^{2}\] in standard form Q24 Determine the value(s) of x for which \[\frac{x^{7}+5x^{5}+6x^{3}}{x^{5}-5x^{4}+6 x^{3}}\] is undefined Q25 Suppose \[Z_{1} = 5 + 12i \] and \[Z_{2} =3 + 4i\]; express \[\frac{Z_{1}}{Z_{2}}\] in polar form Q26 Find the equation of a circle having centre (5, -4) and radius 7 Q27 Express 12 + 5i in polar form (i.e. in the form \[z=r(\cos\theta + i\sin\theta)\]) Q28 If \[ Z_{1} = 7 + 12i\] and \[Z_{2} = 4 + 3i\], find the distance between \[Z_{1}~~\text{and}~~Z_{2}\] Q29 Find the solution set for \[\frac{x-5}{x+10}\leq 1\] Q30 Solve for x in \[ |x^2 - 4|\leq 4\] Q31 Let M = {a, b}; then find the power set \[2^{M}\]. Q32 If U = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, A = {1, 4, 7, 10}, B = {2, 5, 8}, find \[A' \cap B'\] Q33 If A = {1, 2, 3, 4}, B = {3, 4, 5, 6}, C = {4, 5, 6, 7}, find \[ (A \cap B) - (B \cup C)\] Q34 Determine the value(s) of x for which \[\frac{x^{7}+5x^{5}+6x^{3}}{x^{5}-5x^{4}+6 x^{3}}\] is undefined Q35 Find the value(s) of x for which \[\frac{x^{3}-7x^{2}+6x}{x^{5}+4x^{4}-3}=0\] Q36 A survey in a class shows that 15 of the pupils play cricket, 11 play tennis and 6 play both cricket and tennis. How many pupils are there in the class? (Hint: Assume that everyone plays at least one of these games) Q37 In a survey of 100 students, the numbers studying various languages were found to be: Spanish 28; German 30; French 42; Spanish and French 10; Spanish and German 8; German and French 5; all three languages 3.
How many students were studying no language? Q38 A market survey was conducted to determine consumer acceptance for a group of products. The survey revealed the following about three products A, B, & C: of 155 people interviewed, it was discovered that 74 like product A, 81 like product B, 60 like product C, 27 like products A & B, 25 like products A & C, 35 like products B & C, and 12 did not like any of the three products. Determine the number of people out of those interviewed that liked all three products. Q39 Determine the limiting value of \[\frac{3n^{2}+1}{4(n^{2}-2)}\] as \[n\rightarrow \infty\] Q40 Sums to n terms of three A.P.s are \[S_{1}, ~~S_{2} ~~and~~S_{3}\]. The first term of each of them is 1 and the common differences are 1, 2, and 3 respectively. Find the nth term to show that the above \[S_{1}, ~~S_{2}~~ and~~S_{3}\] are A.P.s. Q72 Find the limiting value of \[\frac { 7n + 5} { 2n - 3}\] as \[n \rightarrow \infty\] Q73 How many read Science Today if, and only if, they read Caravan? Q74 How many read Caravan as their only magazine? Q75 In a survey of 100 families, the numbers that read the most recent issues of various magazines were found to be as follows: Readers Digest = 28, Readers Digest and Science Today = 8, Science Today = 30, Readers Digest and Caravan = 10, Caravan = 42, Science Today and Caravan = 5, all three magazines = 3. THE ABOVE IS FOR QUESTIONS 6 - 8. How many read none of the three magazines? Q76 In a recent survey of 400 students in Palm Ville High College, 100 were listed as smokers and 150 as chewers of gum; 75 were listed as both smokers and chewers of gum. Find how many students are neither smokers nor gum chewers Q77 In a geometric series, the first term is 7, the last term is 448, and the sum is 889. Find the common ratio, r Q78 Let x be the required Geometric Mean (GM) between a and b.
Then a, x, b are the successive terms in the Geometric Progression. Find the GM Q79 The sum of five numbers in an Arithmetic Progression is 25 and the sum of their squares is 165. Find the common difference. Q80 The sum of the first n terms of a series is \[2n^2 - n\]. Find the nth term
One reason I call this site a “blag” and not a “blog” is that I’m always late. For example, I’m finally typing up the group-theory homework assignment which Ben gave last Monday (and which will be due next Monday). During our seminar over in BU territory, we discussed the relations among the Lie groups SU(2) and SO(3) and the manifolds S^3 and RP^3. Problems will be given below the fold. Also, Eric will be discussing statistical physics this afternoon at NECSI. Problem 1. To understand the relation between SU(2) and SO(3), we need to identify the action of a 2×2 unitary matrix with a rotation in 3D Euclidean space. We do this by pulling out the Pauli matrices. Define a vector space V as [tex]V = \hbox{span}\left\{\left(\begin{array}{cc} i & 0 \\ 0 & -i\end{array}\right), \left(\begin{array}{cc} 0 & 1 \\ -1 & 0\end{array}\right), \left(\begin{array}{cc} 0 & i \\ i & 0\end{array}\right)\right\}.[/tex] Next, define an action for A ∈ V, P ∈ SU(2), [tex]P \cdot A \equiv P A P^\dag.[/tex] Prove that P acting on A always falls within V (closure), and that the action is linear, i.e., [tex] P\cdot(A + kB) = P\cdot A + kP\cdot B[/tex] for A and B in V, k a real number. Assuming the results of homework 1, a map P from V to itself can be represented as a 3×3 matrix, which we denote as ψ(P): [tex]\psi(P)A = P\cdot A.[/tex] Problem 2. Prove that [tex]\psi:\ SU(2) \rightarrow GL(3,R)[/tex] is a homomorphism. That is, show [tex]\psi(PQ) = \psi(P)\psi(Q)[/tex] and that (this is slightly trickier) ψ(P) is invertible. Problem 3. Test the relation [tex]PAP^\dag = A[/tex] on the basis of V given above, and show that the matrix P must be plus-or-minus the identity. This implies that the preimage of the identity in SO(3) is two points in SU(2).
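Not part of the assignment, but a quick numerical sanity check of Problems 1-2 is easy to run (the inner-product normalization and all names below are my choices):

```python
import numpy as np

# the basis of V from Problem 1 (i times the Pauli matrices)
E = [np.array([[1j, 0], [0, -1j]]),
     np.array([[0, 1], [-1, 0]], dtype=complex),
     np.array([[0, 1j], [1j, 0]])]

def psi(P):
    """3x3 real matrix of A -> P A P^dagger in the basis E, using the
    inner product <A, B> = Re tr(A^dag B)/2, under which E is orthonormal."""
    inner = lambda A, B: (np.trace(A.conj().T @ B) / 2.0).real
    return np.array([[inner(E[i], P @ E[j] @ P.conj().T) for j in range(3)]
                     for i in range(3)])

# an element of SU(2): exp(i (theta/2) sigma_x)
theta = 0.7
P = np.array([[np.cos(theta / 2), 1j * np.sin(theta / 2)],
              [1j * np.sin(theta / 2), np.cos(theta / 2)]])
R = psi(P)
# R should be a rotation by theta: orthogonal, det 1, trace 1 + 2 cos(theta)
```

The trace check makes the double cover visible: P and -P give the same R, and the rotation angle of psi(exp(i(theta/2)sigma)) is theta, not theta/2.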
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting Author Message zedler Joined: 03 Mar 2006 Posts: 15 Posted: Mon Mar 27, 2006 11:53 am Post subject: spacing of mathrm Hello, \documentclass{book} \usepackage{times,mtpro2} \begin{document} $(\mathrm j$ \end{document} gives touching glyphs. Michael Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Tue Mar 28, 2006 12:29 pm Post subject: Re: spacing of mathrm zedler wrote: Hello, \documentclass{book} \usepackage{times,mtpro2} \begin{document} $(\mathrm j$ \end{document} gives touching glyphs. Michael There's not much I can do about that---if you are using Times as the text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap]. I'm wondering how this arose. Assuming that you didn't really want \mathrm{(j ... I would guess that you are using roman letters as a set of variables, either in addition to, or in place of, the MTPro2 italic letters. In that case, you really would want a special font for this purpose, in the same way that MTPro's \mathbf font has different spacing than the Times-bold, so that subscripts and superscripts will work better. zedler Joined: 03 Mar 2006 Posts: 15 Posted: Wed Mar 29, 2006 5:23 am Post subject: Re: spacing of mathrm Quote: There's not much I can do about that---if you are using Times as the text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap]. I'm wondering how this arose. Assuming that you didn't really want \mathrm{(j ... I would guess that you are using roman letters as a set of variables, either in addition to, or in place of, the MTPro2 italic letters. In that case, you really would want a special font for this purpose, in the same way that MTPro's \mathbf font has different spacing than the Times-bold, so that subscripts and superscripts will work better. 
Yes, I really want to typeset $\exp(\mathrm j\omega\tau)=$ ;-) I suppose this can only be corrected by increasing the bracket side bearings, but your approach was to have very tight bracket side bearings and adjust/increase the spacing using kerns. This of course fails for \mathrm... The tight bracket side bearings were also an issue in my previous example, $\[\]_{xy}$. CM, Fourier, Lucida and MnSymbol don't have this problem... Michael Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Wed Mar 29, 2006 6:39 am Post subject: Re: spacing of mathrm zedler wrote: Quote: There's not much I can do about that---if you are using Times as the text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap]. I'm wondering how this arose. Assuming that you didn't really want \mathrm{(j ... I would guess that you are using roman letters as a set of variables, either in addition to, or in place of, the MTPro2 italic letters. In that case, you really would want a special font for this purpose, in the same way that MTPro's \mathbf font has different spacing than the Times-bold, so that subscripts and superscripts will work better. Yes, I really want to typeset $\exp(\mathrm j\omega\tau)=$ ;-) I suppose this can only be corrected by increasing the bracket side bearings, but your approach was to have very tight bracket side bearings and adjust/increase the spacing using kerns. This of course fails for \mathrm... The tight bracket side bearings were also an issue in my previous example, $\[\]_{xy}$. CM, Fourier, Lucida and MnSymbol don't have this problem... Michael Actually, I didn't, and one can't, adjust the spacing after a left parenthesis or before a right parenthesis using kerns [I mentioned this on some posting somewhere once before]; even if you put kerns into the tfm file, they are ignored because the left parenthesis is an "opening", which determines its own spacing, and similarly the right parenthesis is a "closing".
I chose side bearings for the parenthesis that worked well with the italic letters on the math italic font. Even if that were not the case, the real problem is that in the expression \exp(\mathrm j the ( comes from the math italic font, while the j is coming from an entirely different font, the Times-Roman font, and TeX has no way of kerning characters in different fonts. If you were to use some other roman font as your text font, then the problem might very well be less or much more---it would depend entirely on the left side bearing of j in that particular font. I suspect that j is being used here as some special character (perhaps in electrical engineering, although I thought they preferred bold j); in that case, I would just define something like \myj to give a small kern followed by j---in fact, it's easier to type \myj than to type \mathrm j. Sorry that [] doesn't work out for you, but I've never seen something like that in any mathematics paper, and since I like the way brackets work with the math italic characters in general, I wouldn't want to change the side bearings just for this special case (once again, changes couldn't be overridden with kerns). zedler Joined: 03 Mar 2006 Posts: 15 Posted: Wed Mar 29, 2006 10:00 am Post subject: Re: spacing of mathrm Quote: Sorry that [] doesn't work out for you, but I've never seen something like that in any mathematics paper, and since I like the way brackets work with the math italic characters in general, I wouldn't want to change the side bearings just for this special case (once again, changes couldn't be overridden with kerns). I can apply manual spacings, the "\mathrm j" is stored in a macro anyway and the empty brackets I need only once.
Perhaps you're interested: I've put together a collection showing how different math font setups behave in the above-mentioned cases: http://www.hft.ei.tum.de/mz/mtpro2_sidebearings.pdf Michael

Michael Spivak | Posted: Wed Mar 29, 2006 1:24 pm | Post subject: Re: spacing of mathrm

Interesting. I'd say that CM looks the worst (especially the \omega and \tau, as well as being so thin). Lucida is somewhat "klunky", though definitely easy to read! (Is this Lucida or Lucida-Bright?) Someone mentioned that section headings are sometimes printed in sans-serif, so that a sans-serif math might be nice to have; I suspect that the Lucida greek letters would work well for that. If \mathrm j is in a macro, then probably there should also be some space on the right; certainly needed for CM, not really needed for Lucida or Minion, useful for Fourier and MTPro2. By the way, what is []_{\langle6\times6\rangle} ?

zedler | Posted: Wed Mar 29, 2006 3:25 pm | Post subject: Re: spacing of mathrm

Quote: (Is this Lucida or Lucida-Bright?)

PCTeX's Lucida fonts.

Quote: By the way, what is []_{\langle6\times6\rangle} ?

Excerpt from a paper (deadline tomorrow, Mar 30, Hawaii time ;-)):

Code:
\begin{document}\let\mathbf\mbf ...
Next, the impedance matrix of the outer 12-port is obtained by inverting $\mathbf{Y}^{\langle 16\times16\rangle}$ and taking the upper left $\langle 12\times12\rangle$ submatrix
\begin{equation}
\mathbf{Z}^{\langle 12\times12\rangle}=\left[{\mathbf{Y}^{\langle 16\times16\rangle}}^{-1}\right]_{\langle 12\times12\rangle}
\end{equation}
where the operator $[\,]_{\langle 12\times12\rangle}$ denotes taking the submatrix. The $\mathbf Z^{\langle 6\times 6\rangle}=\mathbf Z$ matrix of the outer six-port is obtained by

Perhaps not the best notation; do you have a better idea? BTW, quite funny that both you and my boss are aficionados of differential forms ;-) Wish you wedge and hodge, Michael

Michael Spivak | Posted: Wed Mar 29, 2006 3:39 pm | Post subject: Re: spacing of mathrm

Not really, but I would probably have used something like UL_{\langle 12\times12\rangle}(...) with U and L roman (or perhaps bold). And I probably would actually have used something like UL_{[12]}, with the idea that for square matrices [12] would mean \langle12\times12\rangle.
Yeah, this software cannot be too easy to install. My installer is very professional-looking, currently not tied into that code, but it directs the user how to search for their MiKTeX and/or install it, and does a test LaTeX rendering. Somebody like Zeta (on Code Review) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a revision of code. He is usually in the 2nd-monitor chat room. There are a lot of people in those chat rooms who help each other with projects. I'm not sure how many of them are adept at category theory, though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on tangents. Your project is probably too large for an actual question on Code Review, but there is a lot of GitHub activity in the chat rooms. gl.

In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^\infty(\cos\ldots$

Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval

@AkivaWeinberger are you familiar with the theory behind Fourier series? anyway, here's food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) \,(\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) is $(-)^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?

@AkivaWeinberger You need to use the definition of $F$ as the cumulative distribution function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now, so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.

I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...

Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) writes questions.

@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (the centroid is on the incircle) is preserved by similarity transformations. Hence you're free to rescale the sides, and therefore the (semi)perimeter as well, so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality. That makes a lot of the formulas simpler, e.g. the inradius is identical to the area.

It is asking how many terms of the Euler–Maclaurin formula we need in order to compute the Riemann zeta function in the complex plane. $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
By default, the eye and the projection reference point PRP are at (0,0,0). I can change the eye position with gluLookAt(), but how can I change the PRP (i.e., the convergence position for a set of parallel lines) for a perspective projection in OpenGL?

Eye position, projection reference point PRP, camera center, and projection center are all just different words for the same thing. gluLookAt() applied to the GL_MODELVIEW matrix works for both parallel and perspective projection. You can choose between parallel and perspective projection by setting up the GL_PROJECTION matrix. For perspective projection this can be done using the gluPerspective() function.

Edit: (after revised question) If the PRP is not at the eye position, you cannot use gluPerspective(). Instead you have to create your own GL_PROJECTION matrix. Following the formula that you have now provided, this results in the 4 x 4 matrix: $$\mathtt{A} =\begin{bmatrix} z_c-z_{pp} & 0 & -x_c & x_c \cdot z_{pp}\\ 0 & z_c-z_{pp} & -y_c & y_c \cdot z_{pp} \\ 0 & 0 & \frac{\mathrm{far}+\mathrm{near}}{\mathrm{near}-\mathrm{far}} & \frac{2 \cdot \mathrm{far} \cdot \mathrm{near}}{\mathrm{near}-\mathrm{far}} \\ 0 & 0 & -1 & z_c \end{bmatrix} $$ I have done a quick OpenGL/WebGL rendering test with the GSN Composer. Follow the link and move the mouse over the render output to change $x_c$ and $y_c$ of the projection reference point $(x_c, y_c, z_c)$.

If you are talking about an oblique perspective projection, in which the line joining the eye and the center of the projection plane is not perpendicular to the plane, like here (upper left), this can be generated by the glFrustum() call. Normally, when describing perspective projection, one sets left = -right and bottom = -top so that the frustum is centered around the Z-axis. In order to generate an oblique projection you can use the above-mentioned method, which takes as input the coordinates of the left, right, top, bottom, near and far planes.
The other way around is to build your own custom projection matrix and load it directly. The matrix should be stored as a 16-element array in column-major order:

GLfloat mat[16] = {...};
glLoadMatrixf(mat);
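As an illustration of the custom-matrix route (my own sketch, in Python for brevity; the function and variable names are mine, following the answer's formula), the matrix $\mathtt{A}$ above can be built row-major and then flattened column-major, since glLoadMatrixf expects column-major order:

```python
def projection_matrix(xc, yc, zc, zpp, near, far):
    """Row-major 4x4 matrix A from the answer above.

    (xc, yc, zc) is the projection reference point, zpp the z-coordinate of
    the projection plane, near/far the clip planes (names follow the formula).
    """
    return [
        [zc - zpp, 0.0,      -xc, xc * zpp],
        [0.0,      zc - zpp, -yc, yc * zpp],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, zc],
    ]

def to_gl_array(m):
    """Flatten a row-major 4x4 matrix to the column-major 16-float
    layout that glLoadMatrixf expects."""
    return [m[row][col] for col in range(4) for row in range(4)]
```

In a real C program the flattened values would go into the `GLfloat mat[16]` array shown above.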
Decomposing the language $L$ into a union or intersection of two simpler languages is useful to prove that $L$ is regular, but it will not help you (in general) to prove that $L$ is not regular. The union or intersection of two non-regular languages can be regular.

However, you can use these closure properties differently, to eliminate $x$. Consider the language $R=x^*y^*$. This language is regular. Its intersection with $L$ is $L'=L\cap R=\{x^iy^j \mid i \le 2j\}$, keeping only the words where $z$ has exponent $0$. This is because $L_2\cap R=\{\epsilon\}$. So $L'=L\cap R=(L_1\cup L_2)\cap R=(L_1\cap R)\cup(L_2\cap R)=(L_1\cap R)\cup\{\epsilon\}=L_1\cap R$ because $\epsilon\in L_1\cap R$.

If $L$ were regular, then $L'$ would also be regular, because $R$ is regular and the intersection of regular languages is regular. Thus if you can prove that $L'=\{x^iy^j \mid i\le 2j\}$ is not regular, you can infer that $L$ is not regular. And you should be able to prove it for $L'$.

There are other ways of using closure properties to simplify the language. For example, you could use an erasing homomorphism to replace all $z$ by the empty word, which would lead you to the same language $L'$. And regular languages are closed under arbitrary homomorphisms.

Closure properties can be very friendly, if you learn to use them. They can considerably simplify some problems. See How to prove that a language is not regular?

With thanks to Hendrik Jan for catching a shameful bug.
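The step left to the reader, proving that $L'=\{x^iy^j \mid i\le 2j\}$ is not regular, can be completed with the pumping lemma; here is a sketch (my addition, not part of the original answer):

```latex
Suppose $L'$ were regular with pumping length $p$, and take
$w = x^{2p}y^{p} \in L'$ (here $i = 2p \le 2j = 2p$).
Any decomposition $w = uvz$ with $|uv| \le p$ and $|v| \ge 1$
forces $v = x^{k}$ for some $k \ge 1$. Pumping up once gives
$uv^{2}z = x^{2p+k}y^{p}$ with $2p+k > 2p$, so $uv^{2}z \notin L'$,
contradicting the pumping lemma. Hence $L'$, and therefore $L$,
is not regular.
```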
The underlying philosophy of constructivism in mathematics is that in order to prove that something exists, we need to "find" or "construct" it.

In NBG (von Neumann–Bernays–Gödel axiomatic set theory) one can quantify only over sets, as in ZFC (where classes can also be treated informally by specifying formulas; for example, instead of $\alpha \in \mathrm{Ord}$ we simply say that $\alpha$ satisfies the formula which asserts a set to be an ordinal). However, what if we want, for example, to state a theorem that asserts the existence of a certain class? For example, take Transfinite Recursion:

Transfinite Recursion. Let $G\colon V \to V$ be a class function ($V$ being the class of all sets). Then there is a unique class function $F\colon \mathrm{Ord} \to V$ such that $$\forall \alpha \in \mathrm{Ord},\ F(\alpha) = G(F\restriction_{\alpha}).$$

I've been thinking about how we can state this theorem in the language of ZFC and NBG. In ZFC, a "class function" from a "class" defined by a formula $\phi(x)$ to a class defined by a formula $\psi(y)$ is simply a formula $\upsilon(x,y)$ such that $\forall x,\ \phi(x) \Rightarrow \exists! y\, (\psi(y) \wedge \upsilon(x,y))$. The problem is the same: we can't quantify over formulas in ZFC, nor over classes in NBG.

In transfinite recursion, we can circumvent the issue of not being able to write "for all class functions $G\colon V \to V$" by treating it not as one theorem, but as infinitely many theorems, one for each formula representing a class function. However, we still can't simply state that some class or some class function "exists". But, apparently, we don't have to. The proof of the aforementioned theorem I know of involves the explicit construction of a class function $F\colon \mathrm{Ord} \to V$ by defining a formula. So, to prove that some class (or a class function, which is still a class in NBG) exists, we have to explicitly construct it?
So, in a way, the theory of classes is "constructive" in NBG (and in ZFC, even if there is no theory of classes per se)? That is, instead of saying that there exists a class function, we simply state a formula (I will not write it here, as it is long and requires auxiliary definitions) and prove that it does indeed define a class function?

But if this is the case, what about uniqueness? To say that there is no set $y$ other than $x$ satisfying $\phi(y)$ is precisely to say that $\forall y,\ \phi(y) \Rightarrow y = x$. But we can't even say "$\forall y$" if $y$ is a class function.

I understand many here will want a precise question (or questions) rather than a wall of text, so I'll sum my questions up:

$(1)$ Is giving an explicit construction of a class or a class function the only way of saying that it exists in NBG or ZFC?

$(2)$ If so, how do we say that there is no other class with that property?

P.S. This question is not only about proving something involving unquantifiable objects in a theory, but also about stating theorems about them.
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."

Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum, just as a light wave does, but we generally think of a gravitational field as a static object, like an electrostatic field.

"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment."

So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that holds 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravitational acceleration? so: mass of bottle * gravity = volume of water displaced * density of water * gravity?

@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_C\,dC = 1\,?$$
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room. An altern...

You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-)

@JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.

Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P

I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else; I'm not sure.

Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts in the fields he tried to comment on. I personally do not know much about postmodernist philosophy, so I shall not comment on it myself. I do have strong affirmative opinions on textual interpretation made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
This question is in reference to an answer I posted here yesterday. In it I derived the partition function for a harmonic oscillator as follows: $$q = \sum_{j}e^{-\frac{\epsilon_j}{kT}}$$ For the harmonic oscillator, $\epsilon_j = \left(\frac{1}{2}+j\right)\hbar \omega$ for $j \in \{ 0,1,2,\ldots \}$. Note that $\epsilon_0 \neq 0$: there is a zero-point energy. Let's write out a few terms: $$q = e^{-\frac{\hbar \omega/2}{kT}} + e^{-\frac{3\hbar \omega/2}{kT}} + e^{-\frac{5\hbar \omega/2}{kT}} +\cdots $$ Factoring out $e^{-\frac{\hbar \omega/2}{kT}}$: $$q = e^{-\frac{\hbar \omega/2}{kT}} \left( 1+ e^{-\frac{\hbar \omega}{kT}} + e^{-\frac{2\hbar \omega}{kT}} +\cdots\right) $$ The sum in the bracket takes the form of a geometric series whose sum converges as shown below: $$1+x+x^2+\cdots = \frac{1}{1-x} $$ where $ x \equiv e^{-\frac{\hbar\omega}{kT}} $. Putting all of this together: $$q = \frac{e^{-\frac{\hbar \omega/2}{kT}}}{1-e^{-\frac{\hbar\omega}{kT}}}$$

I happened to check Atkins' Physical Chemistry (10th edition) for the same derivation earlier today. On page 620, the vibrational partition function using the harmonic oscillator approximation is given as $$q = \frac{1}{1-e^{-\beta h c \nu'}}$$ where $\beta$ is $\frac{1}{kT}$ and $\nu'$ is the wavenumber. This result was derived in brief illustration 15B.1 on page 613 using a uniform ladder. However, in that illustration the uniform ladder starts at 0, whereas the harmonic oscillator has a zero-point energy (which I accounted for in my derivation). I discussed this with my instructor and he pointed out that as $T \rightarrow 0$, $q \rightarrow 1$ (since only one state is thermally accessible). The result derived in Atkins does indeed do that, but mine gives $q \rightarrow 0$. Now, in the formalism developed in Atkins they do set the ground-state energies to zero (on page 605), and basically add non-zero ground-state energies to a calculated $\langle \epsilon \rangle$, which is fine.
Anyway, I would love it if someone could weigh in on this and help me resolve this conceptual mess in my head.
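As a quick numerical illustration of the two conventions in the question (my own code, not from Atkins; `x` stands for $\hbar\omega/kT$): the two expressions differ exactly by the zero-point factor $e^{-x/2}$, which is why one tends to 1 and the other to 0 as $T \to 0$.

```python
import math

def q_with_zpe(x):
    """Partition function including the zero-point energy; x = hbar*omega/(k*T)."""
    return math.exp(-x / 2) / (1 - math.exp(-x))

def q_atkins(x):
    """Atkins' convention: the ground-state energy is set to zero."""
    return 1 / (1 - math.exp(-x))

x = 3.0  # an arbitrary value of hbar*omega/(k*T)
# The two conventions differ exactly by the zero-point Boltzmann factor:
assert math.isclose(q_with_zpe(x), q_atkins(x) * math.exp(-x / 2))

# As T -> 0 (x -> infinity), Atkins' q -> 1 while the other q -> 0:
print(q_atkins(50.0))     # ~1.0
print(q_with_zpe(50.0))   # ~1.4e-11
```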
I want to find the increasing and decreasing intervals of a quadratic equation algebraically, without calculus. The truth is I'm teaching a middle school student and I don't want to use the drawing of the graph to solve this question.

This is if you do not want to use the fact that you know how the graph looks: You can explain to him how to get from $f(x)=ax^2+bx+c$ to $f(x)=a\left(x+ \dfrac{b}{2a}\right)^2 + \left(c- \dfrac{b^2}{4a}\right)$. For $x= - \dfrac {b}{2a} + d$ and $x= - \dfrac {b}{2a} - d$ you have the same function value $f(x)$ for whatever $d$ you choose. By this symmetry of $f$, if the function increases when going from the left of $x=- \dfrac {b}{2a}$ towards $x=- \dfrac {b}{2a}$, then it will decrease when going from $x=- \dfrac {b}{2a}$ towards the right of $x=- \dfrac {b}{2a}$, and it is either maximal or minimal at $x=- \dfrac {b}{2a}$ (which of the two depends on whether $a>0$ or $a<0$). Suppose $a>0$; then, if $x_1>x_2\geq - \dfrac {b}{2a}$, you obtain $f(x_1)-f(x_2)=a\left(x_1 + \dfrac{b}{2a}\right)^2 -a\left(x_2 + \dfrac{b}{2a}\right)^2 =a(x_1-x_2)\left(x_1+x_2+\dfrac{b}{a}\right)>0$, so $f(x_1)>f(x_2)$. Now use the symmetry of the function around $- \dfrac {b}{2a}$ to deduce the reverse inequality on the other side. Similarly for $a<0$.

You could take your quadratic function, call it $f$, and consider the quantity $f(y)-f(x)$ with $y>x$. For example, if $f(x)=x^2$ then this would yield $$y^2-x^2=(y+x)(y-x).$$ This quantity is positive provided $y+x>0$, or $y>-x$. This means $y>|x|$ and in particular $y>0$.
As someone mentioned in the comments, the standard way to do this is the trick of completing the square (also often used to derive the quadratic formula). You just write $$ ax^2+bx+c = a\left(x+\frac{b}{2a}\right)^2+c-\frac{b^2}{4a}.$$ To see this formula is true, just multiply out the square on the right hand side carefully. You don't need to memorize the trick exactly... just remember vaguely what it looks like and choose the term inside of the parentheses on the right-hand-side to make the $bx$ term come out correctly. Then the $-b^2/4a$ just cancels out the third term from the squaring. Then use graphing sense to think about the right hand side. It is a parabola of the form $y = ax^2$ (which has vertex at $x=0$) that is shifted to the left by $b/2a$ and up by $c-b^2/4a.$ So the vertex is at $x=-b/2a.$ If $a>0$ it opens upward, otherwise down. To prove algebraically that $x^2$ is increasing for $x>0$ and decreasing for $x<0$ we can use the fact that $y^2>x^2$ if and only if $|y|>|x|.$ For the function to be increasing on an interval we need $|y|>|x|$ whenever $y>x$ for all $x$ and $y$ in the interval. This is clearly true if both $x$ and $y$ are positive so we must show that the function is not increasing outside $(0,\infty)$. Let $x<0.$ There is no interval around $x$ on which the function is increasing since if we choose some $y$ with $x<y<0$ then $|y|<|x|.$ Exact same logic to show decreasing on $(-\infty,0)$. Other parabolas are just reflections/shifts of this one as completing the square shows.
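A small numeric sanity check of the completed-square identity and the vertex behaviour (my own illustration, not part of either answer):

```python
def f(x, a, b, c):
    """The quadratic in standard form."""
    return a * x * x + b * x + c

def completed_square(x, a, b, c):
    """The same quadratic after completing the square."""
    return a * (x + b / (2 * a)) ** 2 + c - b * b / (4 * a)

a, b, c = 2.0, -8.0, 1.0          # an arbitrary example with a > 0
vertex = -b / (2 * a)             # x = -b/2a = 2.0

# The identity holds for any x:
for x in (-3.0, 0.0, 2.0, 5.5):
    assert abs(f(x, a, b, c) - completed_square(x, a, b, c)) < 1e-9

# f decreases to the left of the vertex and increases to the right of it:
assert f(vertex - 1, a, b, c) > f(vertex, a, b, c) < f(vertex + 1, a, b, c)
```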
The discovery of decimal arithmetic

The discovery of decimal arithmetic in ancient India, together with the well-known schemes for long multiplication and long division, surely must rank as one of the most important discoveries in the history of science. The date of this discovery, by an unknown Indian mathematician or group of mathematicians, was recently pushed back to the third century CE, based on the recent dating of the Bakhshali manuscript, but it probably happened earlier, perhaps around 0 CE.

Arithmetic on modern computers

Computers, of course, do not use decimal arithmetic in computation (except for input and output intended for human eyes). Instead they use binary arithmetic, either in an integer format, where the bits in a computer word represent a binary integer (with perhaps one bit as a sign), or else in floating-point format, where part of the computer word represents the data and part represents an exponent — the binary equivalent of writing a value in scientific notation, such as $1.2345 \times 10^{67}$. A widely-used 64-bit integer format can represent integers up to roughly $4.5 \times 10^{18}$, whereas a widely-used 64-bit floating-point format can represent numbers from about $10^{-308}$ to $10^{308}$, with nearly 16-significant-digit accuracy. Arithmetic on such numbers is typically done in computer hardware using binary variations of the basic schemes. For additional details on these formats, see this Wikipedia article. For some scientific and engineering applications, however, even 16-digit accuracy is not sufficient, and so such applications rely on "multiprecision" arithmetic — software extensions to the standard hardware arithmetic operations. Cryptography, which is typically performed numerous times each day in one's smartphone or laptop, requires computations to be done on integers as large as several thousand digits.
Some computations in pure mathematics and mathematical physics research require extremely high-precision floating-point arithmetic — tens of thousands of digits in some cases (see for example this paper). Researchers exploring properties of the binary and decimal expansions of numbers such as $\pi$ have computed with millions, billions or even trillions of digits — the most recent record computation of $\pi$ was to 31 trillion digits (actually 31,415,926,535,897 decimal digits, which, as a point of interest, happens to be the first 14 digits in the decimal expansion of $\pi$).

Karatsuba's algorithm for multiplication

It was widely believed, even as late as the 1950s, that there was no fundamentally faster way of multiplying large numbers than the ancient scheme we learned in school, or a minor variation such as performing the operations in base $2^{32}$ rather than base ten. Then in 1960, the young Russian mathematician Anatoly Karatsuba found a clever technique for performing very high precision multiplication. By dividing each of the inputs into two substrings, he could find the product, at least to the precision often needed, with only three substring multiplications instead of four. By continuing recursively to break down the input numbers in this fashion, he obtained a scheme whose cost grows only as approximately $n^{1.58}$ instead of the $n^2$ of the ordinary scheme, where $n$ is the number of computer words. But this was soon overshadowed by a scheme first formalized by Schönhage and Strassen, which we now describe.
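Karatsuba's split-and-recombine trick can be sketched directly on Python integers (my own toy version, splitting by bits rather than by computer words):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with 3 recursive products instead of 4."""
    if x < 2**32 or y < 2**32:           # small enough: use the hardware multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)  # split x = xh * 2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)
    hi = karatsuba(xh, yh)
    lo = karatsuba(xl, yl)
    # (xh+xl)(yh+yl) - hi - lo = xh*yl + xl*yh: the third multiplication
    mid = karatsuba(xh + xl, yh + yl) - hi - lo
    return (hi << (2 * m)) + (mid << m) + lo
```

Each level replaces four half-size products with three, giving the $n^{\log_2 3} \approx n^{1.58}$ cost mentioned above.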
The fast Fourier transform is merely a clever computer algorithm to rapidly calculate the frequency spectrum of a string of data interpreted as a signal. In effect, one's ear performs a Fourier transform (in analog, not in digital) when it distinguishes the pitch contour of a musical note or a person's voice.

The idea for FFT-based multiplication is, first of all, to represent a very high precision number as a string of computer words, each containing, say, 32 successive bits of its binary expansion (i.e., each entry is an integer between $0$ and $2^{32} - 1$). Then two such multiprecision numbers can be multiplied by observing that their product computed the old-fashioned way (in base $2^{32}$ instead of base ten) is nothing more than the "linear convolution" of the two strings of computer words, which convolution can be performed efficiently using an FFT. In particular, the FFT-based scheme for multiplying two high-precision numbers $A = (a_0, a_1, a_2, \cdots, a_{n-1})$ and $B = (b_0, b_1, b_2, \cdots, b_{n-1})$ is the following:

1. Extend each of the $n$-long input strings $A$ and $B$ to length $2n$ by padding with zeroes.
2. Perform an FFT on each of the extended $A$ and $B$ strings to produce $2n$-long complex strings denoted $F(A)$ and $F(B)$.
3. Multiply the strings $F(A)$ and $F(B)$ together, term-by-term, as complex numbers.
4. Perform an inverse FFT on the resulting $2n$-long string to yield a $2n$-long string $C$, i.e., $C = F^{-1}[F(A) \cdot F(B)]$.
5. Round each entry of $C$ to the nearest integer.
6. Starting at the end of $C$, release carries in each term, i.e., leave only the final 32 bits in each word, with the higher-order bits added to the previous word or words as necessary.

The result $D$ is the product of $A$ and $B$, represented as a $2n$-long string of 32-bit integers. Note, however, that several important details were omitted here.
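Setting those omitted details aside for a moment, the steps above can be sketched as a toy in Python (my own illustration, using base-10 digits and NumPy's floating-point FFT rather than 32-bit words):

```python
import numpy as np

def fft_multiply(a_digits, b_digits, base=10):
    """Multiply two numbers given as little-endian digit lists, via FFT.

    Toy version of the scheme above: small base-10 digits and a float FFT;
    the article's scheme uses 32-bit words and careful roundoff control.
    """
    n = len(a_digits) + len(b_digits)      # padded length holds the full product
    fa = np.fft.rfft(a_digits, n)          # FFT of zero-padded inputs
    fb = np.fft.rfft(b_digits, n)
    conv = np.fft.irfft(fa * fb, n)        # inverse FFT gives the linear convolution
    digits = [int(round(v)) for v in conv] # round each entry to the nearest integer
    carry = 0                              # release carries, low digit first
    for i, d in enumerate(digits):
        carry, digits[i] = divmod(d + carry, base)
    while carry:
        carry, r = divmod(carry, base)
        digits.append(r)
    return digits                          # little-endian digits of the product
```

For example, `fft_multiply([9, 9, 9], [9, 9])` returns the digits of $999 \times 99 = 98901$ in little-endian order.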
For instance, the product of two 32-bit numbers can be computed exactly using a 64-bit hardware multiply operation, but numerous such 64-bit values must be added together when computing an FFT, requiring somewhat more than 64-bit precision. Also, for optimal performance it is important to take advantage of the fact that both input strings $A$ and $B$ are real numbers rather than complex, and that the final inverse FFT returns real numbers in $C$. Finally, the outline above omitted mention of how to manage roundoff error when performing these operations. One remedy is to divide the data into even smaller chunks, say only 16 or 24 bits per word; another remedy is to perform these computations not in the field of complex numbers using floating-point arithmetic, but instead in the field of integers modulo a prime number, so that roundoff error is not a factor. For additional details, see this paper.

With a fast scheme such as FFT-based multiplication in hand, division can be performed by Newton iterations, with a level of precision that approximately doubles with each iteration. This scheme reduces the cost of high-precision division to only somewhat more than twice that of high-precision multiplication. A similar scheme can be used to compute high-precision square roots. See this paper for details.

Schönhage and Strassen, in carefully analyzing a certain variation of the FFT-based multiplication scheme, found that for large $n$ (where $n$ is the number of computer words) the total computational cost, in terms of hardware operations, scales as $n \cdot \log(n) \cdot \log(\log(n))$. In practice, the FFT-based scheme is extremely fast for high-precision multiplication, since one often can utilize highly tuned library routines to perform the FFT operations.

An n log(n) algorithm for multiplication

Ever since the Schönhage–Strassen scheme was discovered and formalized, researchers have wondered whether this was truly the end of the road.
For example, can we remove the final $\log(\log(n))$ factor and achieve just $n \cdot \log(n)$ asymptotic complexity? Well, that day has now arrived. In a brilliant new paper, David Harvey of the University of New South Wales, Australia, and Joris van der Hoeven of the French National Centre for Scientific Research have defined a new algorithm with exactly that property. The Harvey–van der Hoeven paper approaches the problem by splitting it into smaller problems, applying the FFT multiple times, and replacing more multiplications with additions and subtractions, all in a very clever scheme that eliminates the nagging $\log(\log(n))$ factor.

The authors note that their result does not prove that there is no algorithm even faster than theirs. So maybe an even faster one will be found. Or not. Either way, don't put away your multiplication tables quite yet: the Harvey–van der Hoeven scheme is superior to FFT-based multiplication only at exceedingly high precision. But it is an important theoretical result with wide-ranging implications in the field of computational complexity. For some additional details, see this well-written Quanta article by Kevin Hartnett.
I am really confused about whether $\alpha_1=(e^{\pi/2},1)$ and $\alpha_2=(\sqrt[3]{110},1)$ are linearly independent or linearly dependent in $\mathbb{R}^2$. (Hoffman and Kunze, page 48.)

Consider $c_1\alpha_1+c_2\alpha_2=0$. Then $c_1(e^{\pi/2},1)+c_2(\sqrt[3]{110},1)=(0,0)$. This gives two equations: \begin{align*} c_1e^{\pi/2}+c_2\sqrt[3]{110}&=0\\ c_1+c_2&=0. \end{align*} Then we get $c_1=-c_2$ and $c_2(\sqrt[3]{110}-e^{\pi/2})=0$. Now, $e^{\pi/2}$ and $\sqrt[3]{110}$ are numerically very close. So for $0<c_2<1$, the smaller $c_2$ becomes, the closer the latter equation is to $0$. Can we conclude that $\alpha_1$ and $\alpha_2$ are linearly dependent on the basis of this?

But then, if two vectors are linearly dependent, one of them is a scalar multiple of the other, and I don't see how $e^{\pi/2}$ and $\sqrt[3]{110}$ are scalar multiples of each other. (I am only discussing the first components of each tuple because $1$ is clearly a scalar multiple of $1$.) Any idea whether they are linearly independent or linearly dependent? Also, what about the linear independence/dependence of the set $\{e^{\pi/2},\sqrt[3]{110},1\}$ in $\mathbb{R}$?

This might be very simple and I am complicating things unnecessarily; nevertheless, any explanation to clear my confusion would be appreciated. Thanks!

Edit: Thank you. I found each of your answers helpful (except where I mentioned below). I wish I could accept more than one answer :)
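A direct numeric check (my addition, not part of the question) shows that the two first components are close but not equal, so $c_2(\sqrt[3]{110}-e^{\pi/2})=0$ forces $c_2=0$, and the vectors are linearly independent:

```python
import math

a = math.exp(math.pi / 2)   # first component of alpha_1, ~4.8105
b = 110 ** (1 / 3)          # first component of alpha_2, ~4.7914

det = a - b                 # determinant of the matrix with rows alpha_1, alpha_2
# Close, but nonzero, so alpha_1 and alpha_2 are linearly independent:
assert det != 0
print(det)                  # a small positive number, roughly 0.019
```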
The answer is no. The key point is that the Coulomb force, the main force binding the atoms together in the lattice of a solid (the long pole in condensed matter), takes a retarded time to transmit force/information between the atoms. The Coulomb force is part of the electromagnetic (E&M) force, as others have already emphasized. All E&M forces become consistent between different reference frames only if we take into account the constant speed of light and the retardation of the E&M potential/force. You can easily read off the retarded electromagnetic potentials $(\varphi,\mathbf A)$: $$\varphi (\mathbf r , t) = \frac{1}{4\pi\epsilon_0}\int \frac{\rho (\mathbf r' , t_r)}{|\mathbf r - \mathbf r'|}\, \mathrm{d}^3\mathbf r'$$ $$\mathbf A (\mathbf r , t) = \frac{\mu_0}{4\pi}\int \frac{\mathbf J (\mathbf r' , t_r)}{|\mathbf r - \mathbf r'|}\, \mathrm{d}^3\mathbf r'\,,$$ where $\mathbf r$ is a position vector in space and $t$ is time. The retarded time is $$t_r = t-\frac{|\mathbf r - \mathbf r'|}{c}$$ There are also gravity and the strong force binding the nucleus, but they are at a much weaker energy scale compared to E&M as far as binding atoms in the lattice is concerned. Also, all forces and all massless particles (photons/gluons/gravitons) may have the same speed of propagation, the speed of light.
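For a rough sense of scale, here is a minimal sketch of the retardation delay between two neighboring lattice atoms, assuming an illustrative spacing of 2.5 angstroms (a typical value, not taken from the text above):

```python
# Retardation delay t - t_r = |r - r'| / c between two neighboring atoms.
c = 2.998e8          # speed of light, m/s
spacing = 2.5e-10    # typical lattice spacing, m (assumed illustrative value)

delay = spacing / c  # seconds
print(delay)         # ~8.3e-19 s: tiny, but strictly nonzero
```

The delay is far below anything measurable in ordinary solid-state experiments, which is why the retardation is usually ignored, but it is never exactly zero.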
L^2 Curvature Bounds on Manifolds with Bounded Ricci Curvature Speaker(s): Aaron Naber (Northwestern University) Location: MSRI: Simons Auditorium Consider a Riemannian manifold with bounded Ricci curvature $|\mathrm{Ric}|\leq n-1$ and the noncollapsing lower volume bound $\mathrm{Vol}(B_1(p))>v>0$. The first main result of this paper is to prove the previously conjectured $L^2$ curvature bound $\fint_{B_1}|\mathrm{Rm}|^2 < C(n,v)$. In order to prove this, we will need to first show the following structural result for limits. Namely, if $(M^n_j,d_j,p_j) \to (X,d,p)$ is a GH-limit of noncollapsed manifolds with bounded Ricci curvature, then the singular set $S(X)$ is $n-4$ rectifiable with the uniform Hausdorff measure estimate $H^{n-4}(S(X)\cap B_1)<C(n,v)$, which in particular proves the $n-4$-finiteness conjecture of Cheeger-Colding. We will see as a consequence of the proof that for $n-4$ a.e. $x\in S(X)$ the tangent cone of $X$ at $x$ is unique and isometric to $\mathbb{R}^{n-4}\times C(S^3/G_x)$ for some $G_x\subseteq O(4)$ which acts freely away from the origin. The proofs involve several new estimates on spaces with bounded Ricci curvature. This is joint work with Wenshuai Jiang.
Quadratic Equations Practise solving quadratic equations algebraically with this self-marking exercise. This is level 4; Three terms where the squared term has a coefficient other than one and the expression factorises. You can earn a trophy if you get at least 7 correct. Instructions Try your best to answer the questions above. Type your answers into the boxes provided leaving no spaces. As you work through the exercise regularly click the "check" button. If you have any wrong answers, do your best to do corrections but if there is anything you don't understand, please ask your teacher for help. When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file. Transum.org This web site contains over a thousand free mathematical activities for teachers and pupils. Click here to go to the main page which links to all of the resources available. Please contact me if you have any suggestions or questions. Mathematicians are not the people who find Maths easy; they are the people who enjoy how mystifying, puzzling and hard it is. Are you a mathematician? Comment recorded on the 26 March 'Starter of the Day' page by Julie Reakes, The English College, Dubai: "It's great to have a starter that's timed and focuses the attention of everyone fully. I told them in advance I would do 10 then record their percentages." Comment recorded on the 9 May 'Starter of the Day' page by Liz, Kuwait: "I would like to thank you for the excellent resources which I used every day. My students would often turn up early to tackle the starter of the day as there were stamps for the first 5 finishers. We also had a lot of fun with the fun maths. All in all your resources provoked discussion and the students had a lot of fun." 
Answers There are answers to this exercise but they are available in this space to teachers, tutors and parents who have logged in to their Transum subscription on this computer. A Transum subscription unlocks the answers to the online exercises, quizzes and puzzles. It also provides the teacher with access to quality external links on each of the Transum Topic pages and the facility to add to the collection themselves. Subscribers can manage class lists, lesson plans and assessment data in the Class Admin application and have access to reports of the Transum Trophies earned by class members. If you would like to enjoy ad-free access to the thousands of Transum resources, receive our monthly newsletter, unlock the printable worksheets and see our Maths Lesson Finishers then sign up for a subscription now: Subscribe Go Maths Learning and understanding Mathematics, at every level, requires learner engagement. Mathematics is not a spectator sport. Sometimes traditional teaching fails to actively involve students. One way to address the problem is through the use of interactive activities and this web site provides many of those. The Go Maths page is an alphabetical list of free activities designed for students in Secondary/High school. Maths Map Are you looking for something specific? An exercise to supplement the topic you are studying at school at the moment perhaps. Navigate using our Maths Map to find exercises, puzzles and Maths lesson starters grouped by topic. Teachers If you found this activity useful don't forget to record it in your scheme of work or learning management system. The short URL, ready to be copied and pasted, is as follows: Do you have any comments? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your comments. Close Factorising - Factorise algebraic expressions in this structured online self-marking exercise.
Level 1 - A quadratic equation presented in a factorised form. Level 2 - Two terms where the unknown is a factor of both. The roots are integers. Level 3 - Three terms where the squared term has a coefficient of one. The roots are integers. Level 4 - Three terms where the squared term has a coefficient other than one and the expression factorises. Level 5 - The difference between two squares. Level 6 - Three terms and the roots are not necessarily integers. Level 7 - Mixed questions on solving quadratic equations. Exam Style Questions - A collection of problems in the style of GCSE or IB/A-level exam paper questions (worked solutions are available for Transum subscribers). More Algebra including lesson Starters, visual aids, investigations and self-marking exercises. See the National Curriculum page for links to related online activities and resources. The video above is from the wonderful Mr Hegarty. Here is the formula for solving the equation \(ax^2 + bx + c = 0\).$$ x = \frac{ - b \pm \sqrt {b^2 - 4ac} }{2a} $$ Did you know that there is another formula for finding the roots of quadratic equations? It is called the 'citardauq' (the word quadratic backwards) formula and you can read more about it here but you will never need it for school Maths. Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly. You can double-click the 'Check' button to make it float at the bottom of your screen.
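The quadratic formula above can be sketched as a short function (a minimal sketch assuming real coefficients and a non-negative discriminant):

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    """Roots of ax^2 + bx + c = 0 via the quadratic formula (real roots only)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real roots")
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# A Level 4-style example: 2x^2 - 7x + 3 = (2x - 1)(x - 3)
print(solve_quadratic(2, -7, 3))   # (3.0, 0.5)
```

Useful for checking answers after factorising by hand.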
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. " so I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83g. I've also got a jug of water that has 500g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? so: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is that I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else, I'm not sure Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts in the fields he tried to comment on I personally do not know much about postmodernist philosophy, so I shall not comment on it myself I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
Anyone who has used a microwave oven knows there is energy in electromagnetic waves. Sometimes this energy is obvious, such as in the warmth of the summer Sun. Other times, it is subtle, such as the unfelt energy of gamma rays, which can destroy living cells. Electromagnetic waves bring energy into a system by virtue of their electric and magnetic fields. These fields can exert forces and move charges in the system and, thus, do work on them. However, there is energy in an electromagnetic wave itself, whether it is absorbed or not. Once created, the fields carry energy away from a source. If some energy is later absorbed, the field strengths are diminished and anything left travels on. Clearly, the larger the strength of the electric and magnetic fields, the more work they can do and the greater the energy the electromagnetic wave carries. In electromagnetic waves, the amplitude is the maximum field strength of the electric and magnetic fields (Figure \(\PageIndex{1}\)). The wave energy is determined by the wave amplitude. Figure \(\PageIndex{1}\): Energy carried by a wave depends on its amplitude. With electromagnetic waves, doubling the E fields and B fields quadruples the energy density u and the energy flux uc. For a plane wave traveling in the direction of the positive x-axis with the phase of the wave chosen so that the wave maximum is at the origin at \(t = 0\), the electric and magnetic fields obey the equations \[E_y (x,t) = E_0 \, \cos \, (kx - \omega t)\] \[B_z (x,t) = B_0 \, \cos \, (kx - \omega t).\] The energy in any part of the electromagnetic wave is the sum of the energies of the electric and magnetic fields. This energy per unit volume, or energy density u, is the sum of the energy density from the electric field and the energy density from the magnetic field. Expressions for both field energy densities were discussed earlier (\(u_E\) in Capacitance and \(u_B\) in Inductance).
Combining these contributions, we obtain \[u (x,t) = u_E + u_B = \frac{1}{2}\epsilon_0 E^2 + \frac{1}{2\mu_0} B^2.\] The expression \(E = cB = \frac{1}{\sqrt{\epsilon_0\mu_0}}B\) then shows that the magnetic energy density \(u_B\) and electric energy density \(u_E\) are equal, despite the fact that changing electric fields generally produce only small magnetic fields. The equality of the electric and magnetic energy densities leads to \[u(x,t) = \epsilon_0 E^2 = \frac{B^2}{\mu_0}.\] The energy density moves with the electric and magnetic fields in a similar manner to the waves themselves. We can find the rate of transport of energy by considering a small time interval \(\Delta t\). As shown in Figure \(\PageIndex{2}\), the energy contained in a cylinder of length \(c\Delta t\) and cross-sectional area A passes through the cross-sectional plane in the interval \(\Delta t\). Figure \(\PageIndex{2}\): The energy \(uAc\Delta t\) contained in the electric and magnetic fields of the electromagnetic wave in the volume \(Ac\Delta t\) passes through the area A in time \(\Delta t\). The energy passing through area A in time \(\Delta t\) is \[u \times volume = uAc\Delta t.\] The energy per unit area per unit time passing through a plane perpendicular to the wave, called the energy flux and denoted by S, can be calculated by dividing the energy by the area A and the time interval \(\Delta t\). \[S = \frac{\text{Energy passing area } A \text{ in time } \Delta t}{A \Delta t} = uc = \epsilon_0cE^2 = \frac{1}{\mu_0} EB.\] More generally, the flux of energy through any surface also depends on the orientation of the surface. To take the direction into account, we introduce a vector \(\vec{S}\), called the Poynting vector, with the following definition: \[\vec{S} = \frac{1}{\mu_0} \vec{E} \times \vec{B}.\] The cross-product of \(\vec{E}\) and \(\vec{B}\) points in the direction perpendicular to both vectors.
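The equality \(u_E = u_B\) for a wave with \(E = cB\) can be checked numerically; the field value below is an arbitrary example, not from the text:

```python
import math

# Check that u_E = (1/2) eps0 E^2 equals u_B = B^2 / (2 mu0) when E = cB.
eps0 = 8.854e-12            # F/m
mu0 = 4e-7 * math.pi        # T*m/A
c = 1 / math.sqrt(eps0 * mu0)   # ~3.00e8 m/s

E = 100.0                   # V/m, arbitrary example amplitude (assumed)
B = E / c                   # the matching magnetic field of the wave

u_E = 0.5 * eps0 * E**2
u_B = B**2 / (2 * mu0)
print(u_E, u_B)             # equal, up to floating-point rounding
```

This is exactly the algebraic identity in the text: substituting \(B = E/c\) with \(c = 1/\sqrt{\epsilon_0\mu_0}\) turns \(B^2/2\mu_0\) into \(\epsilon_0 E^2/2\).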
To confirm that the direction of \(\vec{S}\) is that of wave propagation, and not its negative, return to [link]. Note that Lenz’s and Faraday’s laws imply that when the magnetic field shown is increasing in time, the electric field is greater at x than at \(x + \Delta x\). The electric field is decreasing with increasing x at the given time and location. The proportionality between electric and magnetic fields requires the electric field to increase in time along with the magnetic field. This is possible only if the wave is propagating to the right in the diagram, in which case, the relative orientations show that \(\vec{S} = \frac{1}{\mu_0} \vec{E} \times \vec{B}\) is specifically in the direction of propagation of the electromagnetic wave. Substituting the plane-wave fields from above into \(S = \epsilon_0 c E^2\) gives the energy flux at any position and time: \[S(x, t) = c\epsilon_0 E_0^2 \cos^2 \, (kx - \omega t).\] Because the frequency of visible light is very high, of the order of \(10^{14} \, Hz\), the energy flux for visible light through any area is an extremely rapidly varying quantity. Most measuring devices, including our eyes, detect only an average over many cycles. The time average of the energy flux is the intensity I of the electromagnetic wave and is the power per unit area. It can be expressed by averaging the cosine function in the equation above over one complete cycle, which is the same as time-averaging over many cycles (here, T is one period): \[I = S_{avg} = c\epsilon_0E_0^2 \frac{1}{T} \int_0^T \cos^2 \, \left(2\pi \frac{t}{T}\right) dt.\] We can either evaluate the integral, or else note that because the sine and cosine differ merely in phase, the average over a complete cycle for \(\cos^2 \, (\xi)\) is the same as for \(\sin^2 \, (\xi)\), to obtain \[\langle \cos^2 \xi \rangle = \frac{1}{2} [\langle \cos^2 \xi \rangle + \langle \sin^2 \xi \rangle ] = \frac{1}{2} \langle 1 \rangle = \frac{1}{2},\] where the angle brackets \(\langle . . . \rangle \) stand for the time-averaging operation.
The intensity of light moving at speed c in vacuum is then found to be \[I = S_{avg} = \frac{1}{2}c\epsilon_0 E_0^2\] in terms of the maximum electric field strength \(E_0\), which is also the electric field amplitude. Algebraic manipulation produces the relationship \[I = \frac{cB_0^2}{2\mu_0}\] where \(B_0\) is the magnetic field amplitude, which is the same as the maximum magnetic field strength. One more expression for \(I_{avg}\) in terms of both electric and magnetic field strengths is useful. Substituting the fact that \(cB_0 = E_0\), the previous expression becomes \[I = \frac{E_0B_0}{2\mu_0}.\] We can use whichever of the three preceding equations is most convenient, because the three equations are really just different versions of the same result: The energy in a wave is related to amplitude squared. Furthermore, because these equations are based on the assumption that the electromagnetic waves are sinusoidal, the peak intensity is twice the average intensity; that is, \(I_0 = 2I\). Example \(\PageIndex{1}\): A Laser Beam The beam from a small laboratory laser typically has an intensity of about \(1.0 \times 10^{-3} W/m^2\). Assuming that the beam is composed of plane waves, calculate the amplitudes of the electric and magnetic fields in the beam. Strategy Use the equation expressing intensity in terms of electric field to calculate the electric field from the intensity. Solution From the first equation above, the intensity of the laser beam is \[I = \frac{1}{2}c\epsilon_0 E_0^2. \nonumber\] The amplitude of the electric field is therefore \[ \begin{align*} E_0 &= \sqrt{\frac{2}{c\epsilon_0}I} \\[4pt] &= \sqrt{\frac{2}{(3.00 \times 10^8 m/s)(8.85 \times 10^{-12} F/m)}\left(1.0 \times 10^{-3} W/m^2 \right)} \\[4pt] &= 0.87 \, V/m. \end{align*}\] The amplitude of the magnetic field can be obtained from [link]: \[B_0 = \frac{E_0}{c} = 2.9 \times 10^{-9} \, T. \nonumber\] Example \(\PageIndex{2}\): Light Bulb Fields A light bulb emits 5.00 W of power as visible light.
What are the average electric and magnetic fields from the light at a distance of 3.0 m? Strategy Assume the bulb’s power output P is distributed uniformly over a sphere of radius 3.0 m to calculate the intensity, and from it, the electric field. Solution The intensity of the visible light at that distance is then \(I = \frac{P}{4\pi r^2} = \frac{c\epsilon_0 E_0^2}{2},\) \(E_0 = \sqrt{2\frac{P}{4\pi r^2 c\epsilon_0}} = \sqrt{2\frac{5.00 \, W}{4\pi (3.0 \, m)^2 (3.00 \times 10^8 \, m/s)(8.85 \times 10^{-12} C^2/N \cdot m^2)}} = 5.77 \, N/C,\) \(B_0 = E_0/c = 1.92 \times 10^{-8} \, T\). Significance The intensity I falls off as the distance squared if the radiation is dispersed uniformly in all directions. Example \(\PageIndex{3}\): Radio Range A 60-kW radio transmitter on Earth sends its signal to a satellite 100 km away (Figure \(\PageIndex{3}\)). At what distance in the same direction would the signal have the same maximum field strength if the transmitter’s output power were increased to 90 kW? Figure \(\PageIndex{3}\): In three dimensions, a signal spreads over a solid angle as it travels outward from its source. Strategy The area over which the power in a particular direction is dispersed increases as distance squared, as illustrated in Figure \(\PageIndex{3}\). Change the power output P by a factor of (90 kW/60 kW) and change the area by the same factor to keep \(I = \frac{P}{A} = \frac{c\epsilon_0 E_0^2}{2}\) the same. Then use the proportion of area A in the diagram to distance squared to find the distance that produces the calculated change in area. Solution Using the proportionality of the areas to the squares of the distances, and solving, we obtain from the diagram \[ \begin{align*} \frac{r_2^2}{r_1^2} &= \frac{A_2}{A_1} = \frac{90 \, kW}{60 \, kW}, \\[4pt] r_2 &= \sqrt{\frac{90}{60}}(100 \, km) \\[4pt] &= 122 \, km. \end{align*}\] Significance The range of a radio signal is the maximum distance between the transmitter and receiver that allows for normal operation.
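All three worked examples reduce to \(I = \frac{1}{2}c\epsilon_0 E_0^2\), plus inverse-square spreading for the isotropic sources; a minimal Python check of the quoted numbers:

```python
import math

c, eps0 = 3.00e8, 8.85e-12   # m/s and F/m, as used in the examples

def amplitudes(I: float) -> tuple[float, float]:
    """Field amplitudes (E0, B0) of a plane wave with intensity I = c*eps0*E0^2/2."""
    E0 = math.sqrt(2 * I / (c * eps0))
    return E0, E0 / c

# Laser beam: I = 1.0e-3 W/m^2
E0_laser, B0_laser = amplitudes(1.0e-3)     # ~0.87 V/m, ~2.9e-9 T

# Light bulb: P = 5.00 W spread over a sphere of radius 3.0 m
I_bulb = 5.00 / (4 * math.pi * 3.0**2)
E0_bulb, B0_bulb = amplitudes(I_bulb)       # ~5.77 V/m, ~1.92e-8 T

# Radio range: same field strength at r2 when power rises from 60 kW to 90 kW
r2 = math.sqrt(90e3 / 60e3) * 100           # ~122 km

print(E0_laser, E0_bulb, r2)
```

All three values agree with the worked examples above.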
In the absence of complications such as reflections from obstacles, the intensity follows an inverse square law, and doubling the range would require multiplying the power by four. Contributors Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
Young's modulus is defined as $Y = \frac{\sigma}{\epsilon}$, where $\sigma$ is the stress, defined as $\sigma = F/A$, and $\epsilon$ is the strain, defined as $\epsilon = \frac{\Delta L}{L_0}$. In this case we require the stress, so rearranging the first equation gives $\sigma = \epsilon Y$. After plugging in the definition of $\epsilon$ we arrive at $\sigma = \frac{\Delta L}{L_0}Y$. Now, as you already worked out, $\Delta L = L_0\alpha\Delta T$. Therefore, after plugging this in, we arrive at the required result: $\sigma = \frac{L_0\alpha\Delta T}{L_0}Y = \alpha\Delta T Y$
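As a numerical sketch of $\sigma = \alpha\Delta T\, Y$, with assumed illustrative values typical of steel (not given in the text above):

```python
# Thermal stress sigma = alpha * dT * Y. The material values below are
# typical figures for steel, assumed purely for illustration.
alpha = 1.2e-5   # 1/K, linear expansion coefficient (assumption)
dT = 50.0        # K, temperature rise (assumption)
Y = 200e9        # Pa, Young's modulus (assumption)

sigma = alpha * dT * Y
print(sigma)     # ~1.2e8 Pa, i.e. about 120 MPa
```

Note that the rest length $L_0$ cancels, so the stress in a fully constrained bar depends only on the material and the temperature change, not on its size.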
Quadratic Equations Practise solving quadratic equations algebraically with this self-marking exercise. This is level 5; The difference between two squares. You can earn a trophy if you get at least 7 correct. Comment recorded on the 1 August 'Starter of the Day' page by Peter Wright, St Joseph's College: "Love using the Starter of the Day activities to get the students into Maths mode at the beginning of a lesson. Lots of interesting discussions and questions have arisen out of the activities." Comment recorded on the 3 October 'Starter of the Day' page by S Mirza, Park High School, Colne: "Very good starters, help pupils settle very well in maths classroom."
Applied mathematics is the mathematics used in day-to-day life, in solving problems or for business purposes. Let me write an example: George had some money. He gave 14 dollars to Matthew. Now he has 27 dollars. How much money did he have? If you are familiar with day-to-day calculations, you must say that George had 41 dollars: since he had 41 and gave 14 to Matthew, he was left with 27 dollars. That's right? Of course! This is a general (layman) approach. 'How will we achieve it mathematically?' We shall restate the above problem as another statement (meaning the same): George had some money, $ x$ dollars. He gave 14 dollars to Matthew. Now he has 27 dollars. How much money did he have? Find the value of $ x$ . This is equivalent to the problem asked above. I have just replaced 'some money' by '$ x$ dollars'. As 'some' denotes an unknown quantity, $ x$ does the same. Now all we need is to get the value of $ x$ . When solving for $ x$ , we should have a plan like this: George had $ x$ dollars. He gave 14 dollars to Matthew. Now he must have $ x-14$ dollars. But the problem says that he has 27 dollars left. This implies that $ x-14$ dollars are equal to 27 dollars, i.e., $ x-14=27$ . The statement $ x-14=27$ contains a letter x which we assumed to be unknown, and which can have any certain value. Statements (like $ x-14=27$ ) containing unknown quantities and an equality are called Equations. The unknown quantities used in equations are called variables, usually represented by letters from the end of the English alphabet (e.g., $ x,y,z$ ). Letters from the beginning of the alphabet ($ a,b,c,d$ ...) are usually used to represent constants (ones whose values are known, but not shown). Now let us concentrate on the problem again. We have the equation $ x-14=27$ . Adding 14 to both sides of the equals sign: $$ x-14 +14 =27 +14$$ or, $ x-0 = 41 $ as (-14+14=0) or, $ x= 41$ . So, $ x$ is 41. This means George had 41 dollars. And this answer is equal to the answer we found practically.
Solving problems practically is not always possible, especially when complicated problems are encountered; then we use the theory of equations. We could also deal with the above problem this way: $ x-14= 27$ or, $ x= 27+14 =41$ . Here $ -14$ transfers to the other side, which changes the sign of the value, i.e., it becomes $ +14$ . When we move a number from the left side to the right of the equals sign, the sign of the number changes, and vice-versa. As $ -14$ here converts into $ +14$ , so $ +18$ converts into $ -18$ in the example below: $ x+18 =32$ or, $ x=32 -18 =14$ . Please note, any number not having a sign before its value is deemed to be positive, e.g., 179 and +179 are the same, for the sake of the theory of equations. Before we proceed, why not take another example? Mary had seven sheep. Mary's uncle gifted her some more sheep. She has eighteen sheep now. How many sheep did her uncle gift? First of all, how would you state it as an equation? $ 7 + x = 18$ or, $ +7 +x =18$ (just to illustrate that 7=+7) or, $ x= 18-7 =9$ . So, Mary's uncle gifted her 9 sheep. /// Now tackle this problem: Monty had some cricket balls. Graham had double the number of balls compared to Monty. Adam had 6 cricket balls. They all collected their balls and found that the total number of cricket balls was 27. How many balls did Monty and Graham have? As usual, our first step in solving this problem must be to restate it as an equation. We do it like this: Monty had (let) x balls. Then Graham must have had $ x \times 2=2x$ balls. Adam had 6 balls. The total sum = $ x+2x+6=3x+6$ . But that is 27 according to our question. Hence, $ 3x+6=27$ or, $ 3x=27-6 =21$ or, $ x=21 /3 =7$ . Here the multiplication sign converts into a division sign when transferred. Since $ x=7$ , we can say that Monty had 7 balls (instead of x balls) and Graham had 14 (instead of $ 2x$ ). Types of Equations There are many types of algebraic equations (we say 'algebraic' because they involve variables, which are part of algebra), depending on their properties.
In general we classify them into two main parts: Equations with one variable (univariable algebraic equations, or just Univariables) Equations with more than one variable (multivariable algebraic equations, or just Multivariables) Univariable Equations Equations consisting of only one variable are called univariable equations. All of the equations we solved above are univariables since they contain only one variable (x). Other examples are: $ 3x+2=5x-3$ ; $ x^2+5x +3=0$ ; $ e^x =x^e$ (here, e is a constant). Univariables are further divided into many categories depending upon the degree of the variable. The most common are: Linear Univariables: Equations in which the maximum power (degree) of the variable is 1. $ ax+b=c$ is a general example of linear equations in one variable, where a, b and c are arbitrary constants. Quadratic Equations: Also known as Square Equations, these are ones in which the maximum power of the variable is 2. $ ax^2+bx+c=0$ is a general example of quadratic equations, where a, b, c are constants. Cubic Equations: Equations of third degree (maximum power 3) are called Cubic. A cubic equation is of the type $ ax^3+bx^2+cx+d=0$ , where a, b, c, d are constants. Quartic Equations: Equations of fourth degree are Quartic. A quartic equation is of the type $ ax^4+bx^3+cx^2+dx+e=0$ . Similarly, an equation of n-th degree can be defined if the variable of the equation has maximum power n. Multivariable Equations Some equations have more than one variable, as $ ax^2+2hxy+by^2=0$ etc. Such equations are termed Multivariable Equations. Depending on the number of variables present in the equations, multivariable equations can be classified as: 1. Bi-variable Equations: Equations having exactly two variables are called bi-variables. $ x+y=5$ ; $ x^2+y^2=4$ ; $ r^2+\theta^2=k^2$ , where k is constant; etc. are equations with two variables. Bivariable equations can also be divided into many categories, just as univariables were. A.
Linear Bivariable Equations: Power of a variable or sum of powers of product of two variables does not exceed 1. For example: $ ax+by=c$ is a linear but $ axy=b$ is not. B. Second Order Bivariable Equations:Power of a variable or sum of powers of product of two variables does not exceed 2. For example: $ axy=b$ , $ ax^2+by^2+cxy+dx+ey+f=0$ are of second order. Similarly you can easily define n-th order Bivariable equations. 2. Tri-variable Equations: Equations having exactly three variables are called tri-variable equations. $ x+y+z=5$ ; $ x^2+y^2-z^2=4$ ; $ r^3+\theta^3+\phi^3=k^3$ , where k is constant; etc are trivariables. (Further classification of Trivariables are not necessary, but I hope that you can divide them into more categories as we did above.) similarly, you can easily define any n-variable equationas an equation in which the number of variables is n. Out of these equations, we have discussed only Linear Univariable Equations here. We have already discussed them above, for particular examples. Here we’ll discuss them for general cases. As told earlier, a general example of linear univariable equation is $ ax+b=c$ . We can adjust it by transferring constants to one side and keeping variable to other. $ ax+b = c$ or, $ ax = c-b$ or, $ x = \frac{c-b}{a}$ this is the required solution. Example: Solve $ 3x+5=0$ . We have, $ 3x+5=0$ or, $ 3x = 0-5 =-5$ or, $ x = \frac{-5}{3}$ Feel free to ask questions, send feedback and even point out mistakes. Great conversations start with just a single word. How to write better comments?
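As a quick numerical check of the general solution $x=\frac{c-b}{a}$, here is a short script; the helper name `solve_linear` is mine, not from the text:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c by transposing: x = (c - b) / a."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return (c - b) / a

# The worked examples from the text:
print(solve_linear(1, -14, 27))  # x - 14 = 27  ->  x = 41
print(solve_linear(3, 6, 27))    # 3x + 6 = 27  ->  x = 7 (Monty's balls)
print(solve_linear(3, 5, 0))     # 3x + 5 = 0   ->  x = -5/3
```

Each call simply performs the transposition rule described above: constants move across the equal sign with their signs flipped, and the coefficient divides through at the end.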
This module shows how the HP 10bII+ financial calculator can be used to price a call option with the Black-Scholes (1973) model. A key feature of this calculator is the ability to return the normal lower tail probability for a value z; the alternative is often a probability table. The HP 10bII+ financial calculator is approved for use in the GARP FRM exams, but not the CFA exams. According to the Black-Scholes (1973) model, the theoretical price \(C\) of a European call option on a non-dividend-paying stock is $$\begin{equation} C=S_0 N(d_1)-Xe^{-rT}N(d_2) \end{equation}$$ where$$d_1=\frac {\log \left( \frac{S_0}{X} \right) + \left( r+ \frac {\sigma^2} {2} \right )T}{\sigma \sqrt{T}} $$ $$d_2=\frac {\log \left( \frac{S_0}{X} \right) + \left( r - \frac {\sigma^2} {2} \right )T}{\sigma \sqrt{T}} = d_1 - \sigma \sqrt{T}$$ In equation 1, \(S_0\) is the stock price at time 0, \(X\) is the exercise price of the option, \(r\) is the risk-free interest rate, \(\sigma\) is the annual volatility of the underlying asset, and \(T\) is the time to expiration of the option. Further discussion and examples in Excel can be found at: Black-Scholes option pricing.

The Black-Scholes model in the HP 10bII+

Example: The stock price at time 0, six months before the expiration date of the option, is $42.00; the option exercise price is $40.00; the rate of interest on a government bond with 6 months to expiration is 5%; and the annual volatility of the underlying stock is 20%. Calculation of the call price can be completed as a 5-step process: 1. d1; 2. N(d1); 3. d2; 4. N(d2); and 5. C. To point the way, the call price from equation 1 is $4.08. We need some planning first, because intermediate results are assigned to memory registers in the calculator. The order of calculation is influenced by my exposure to the HP 12C (reverse Polish notation) calculator.
It uses a memory stack, so the use of numbered registers can be reduced, but it does not have a normal lower tail probability function. Replacing the variable names with their values will help in performing the calculation. With the values from the example, \(d_1=\frac {\log \left( \frac{42.00}{40.00} \right) + \left( 0.05+ \frac {0.2^2} {2} \right )0.5}{0.2 \sqrt{0.5}}\).

The numbered sections below refer to the major stages of the calculation:
1a. Calculates the numerator of d1
1b. Calculates the denominator of d1 (stored in register 3 at step 12)
1c. Calculates the value of d1 (stored in register 0 at step 15)
2. N(d1) (stored in register 1 at step 17)
3. d2
4. N(d2) (stored in register 2 at step 22)
5. C (intermediate result stored in register 4 at step 26)

Calculator mode: Chain. Display digits: 4.

1a. d1 numerator
1. 0.2 [Orange SHIFT down] [x²] : sigma squared [0.0400]
2. [÷] 2 : divide by 2
3. [+] 0.05 : add the rate
4. [x] 0.5 [=] : multiply by time and display the result [0.0350]
5. [Orange SHIFT down] [STO] 0 : store the displayed value in register 0
6. 42 [÷] 40 [=] : divide the stock price by the exercise price and display the result [1.0500]
7. [Orange SHIFT down] [LN] : take the natural log of the value in the display [0.04879]
8. [+] [RCL] 0 [=] : add the recalled value from register 0 to the displayed value [0.08379]
9. [Orange SHIFT down] [STO] 0 : store the displayed value in register 0 [0.08379]

1b. d1 denominator
10. 0.5 [Orange SHIFT down] [√x] : take the square root of time [0.7071]
11. [x] 0.2 [=] : multiply by 0.2 and display the result [0.1414]
12. [Orange SHIFT down] [STO] 3 : store the displayed value in register 3 (for use later in d2) [0.1414]

1c. d1
13. [Orange SHIFT down] [1/x] : take the reciprocal of the value in the display [7.0711]
14. [x] [RCL] 0 [=] : multiply by the recalled value from register 0 [0.5925]
15. [Orange SHIFT down] [STO] 0 : store the displayed value d1 in register 0 (for use later in d2) [0.5925]

2. N(d1)
16. [Blue SHIFT up] [Z⇆P] : calculate the cumulative normal probability for the value in the display [0.7232]
17. [Orange SHIFT down] [STO] 1 : store the displayed value N(d1) in register 1 (for use later in C) [0.7232]

3. d2
18. [RCL] 0 : recall d1 from register 0 [0.5925]
19. [-] [RCL] 3 : subtract the value recalled from register 3 [0.1414]
20. [=] : display the value of d2 [0.4511]

4. N(d2)
21. [Blue SHIFT up] [Z⇆P] : calculate the cumulative normal probability for the value in the display [0.6740]
22. [Orange SHIFT down] [STO] 2 : store the displayed value N(d2) in register 2 (for use later in C) [0.6740]

5. C
23. 0.05 [+/-] [x] 0.5 [=] : calculate the exponent: multiply the negative rate by time [-0.02500]
24. [Orange SHIFT down] [e^x] : raise e to negative rate times time [0.9753]
25. [x] [RCL] 2 [x] 40 [=] : multiply the displayed value by the recalled value from register 2, then by the exercise price, and display the result [26.2955]
26. [Orange SHIFT down] [STO] 4 : store the intermediate result in register 4 (for use in step 28) [26.2955]
27. 42 [x] [RCL] 1 [=] : multiply the stock price by the value recalled from register 1 [30.3760]
28. [-] [RCL] 4 [=] : subtract the recalled value from register 4 from the displayed value [4.0805]
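As a cross-check of the keystroke sequence, here is a short Python sketch of the same Black-Scholes calculation; the normal CDF is built from `math.erf` (playing the role of the [Z⇆P] key), and the variable names are mine:

```python
from math import log, sqrt, exp, erf

def norm_cdf(z):
    # Lower tail probability N(z), as returned by the [Z<->P] key
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

S0, X, r, sigma, T = 42.0, 40.0, 0.05, 0.20, 0.5

d1 = (log(S0 / X) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
C = S0 * norm_cdf(d1) - X * exp(-r * T) * norm_cdf(d2)

print(round(d1, 4), round(d2, 4), round(C, 2))  # 0.5925 0.4511 4.08
```

The intermediate values match the calculator displays at steps 15 ([0.5925]), 20 ([0.4511]) and 28 (about 4.08).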
A hypothetical diatomic molecule has a bond length of $0.8860\ \mathrm{nm}$. When the molecule makes a rotational transition from $l = 2$ to the next lower energy level, a photon is released with $\lambda_r = 1403\ \mu m$. In a vibrational transition to a lower energy state, a photon is released with $\lambda_v = 4.844\ \mu m$. Determine the spring constant $k$. What I've done is: 1) Calculate the moment of inertia of the molecule by equating Planck's relation to the rotational transition energy: $$E = \frac{hc}{\lambda} = \frac{2 \hbar^2}{I}$$ Thus solving for $I$: $$I = 1.562 \times 10^{-46}\ \mathrm{kg\,m^2}$$ 2) Knowing that the moment of inertia of a diatomic molecule rotating about its CM can be expressed as $I = \rho r^2$, solve for $\rho$ (where $\rho$ is the reduced mass and $r$ is the distance from one of the atoms to the axis of rotation, so $r$ is half the bond length. EDIT: $r$ is the bond length and NOT half of it. Curiously, if you use the half value you get a more reasonable $k$: around $3\ \mathrm{N/m}$): $$\rho = \frac{I}{r^2} = 1.99 \times 10^{-28}\ \mathrm{kg}$$ 3) Solve for $k$ from the vibration frequency equation: $$f = \frac{1}{2\pi}\sqrt{\frac{k}{\rho}}$$ Knowing: $$\omega = 2\pi f$$ We get: $$k = \omega^2 \rho = (c/\lambda_v)^2 \rho = 0.763\ \mathrm{N/m}$$ where $\lambda_v = 4.844 \times 10^{-6}\ \mathrm{m}$. The problem I see is that the method seems to be OK but the result does not convince me. We know that the stiffness constant $k$ is a measure of the resistance offered by a body to deformation. The unknown diatomic molecule we're dealing with seems to be much more elastic than $H_2$ (which has $k = 550\ \mathrm{N/m}$).
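Purely as a numerical check of the steps above (variable names mine, standard CODATA constants): the script reproduces the post's arithmetic exactly as written, and the last line additionally evaluates what the result would be if the $2\pi$ in $\omega = 2\pi f = 2\pi c/\lambda_v$ were kept instead of using $\omega = c/\lambda_v$ as in step 3. I flag that only as an observation, not as the intended reading of the problem.

```python
from math import pi

hbar = 1.054571817e-34  # J s
h = 6.62607015e-34      # J s
c = 2.99792458e8        # m/s

lam_r = 1403e-6   # m, rotational-transition photon
lam_v = 4.844e-6  # m, vibrational-transition photon
r = 0.8860e-9     # m, bond length (per the EDIT)

# Step 1: hc/lam_r = 2*hbar^2/I  =>  I
I = 2 * hbar**2 * lam_r / (h * c)

# Step 2: reduced mass from I = rho * r^2
rho = I / r**2

# Step 3 exactly as in the post: k = (c/lam_v)^2 * rho
k_post = (c / lam_v)**2 * rho

# Same step, but keeping the 2*pi in omega = 2*pi*c/lam_v
k_2pi = (2 * pi * c / lam_v)**2 * rho

print(I, rho, k_post, k_2pi)
```

The first three printed values land near the post's $1.56\times10^{-46}$, $1.99\times10^{-28}$ and $0.76$; the $2\pi$ variant comes out near $30\ \mathrm{N/m}$.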
It's a little unclear exactly what you mean by "contains information about Halting of some turing machines", but under every reasonable interpretation of this I can think of the answer is "yes." For instance, consider the following: Definition. A set $X$ has halting information if there is some computable set $C$ of (indices for) Turing machines such that: $(i)$ the set $\{i\in C: \Phi_i(i)\downarrow\}$ is not computable, but $(ii)$ the set $\{i\in C: \Phi_i(i)\downarrow\}$ is computable from $X$. Here I use "$\downarrow$" to denote "halts." Then we have: Fact. There are noncomputable sets which do not have halting information. Moreover, there are such sets which are arithmetic - that is, definable in first-order arithmetic. Proof: Note that $\{i\in C: \Phi_i(i)\downarrow\}=K\cap C$, where $K$ is the Halting Problem. So it's enough to build an $X$ such that for every computable $C$, either $K\cap C$ is computable or $K\cap C\not\le_TX$. Specifically, we have countably many sets $A_i$ (the sets of the form $K\cap C$ which are non-computable) and we want a noncomputable $X$ which does not compute any $A_i$. Such an $X$ can be built by diagonalization. It's now an easy exercise to show that this construction can be performed computably in, say, $0''''$ (actually much lower, but no need to go that far) - show that $0''''$ can compute a listing of the $A_i$s, and moreover can compute the necessary diagonalization process. But every set computable in $0''''$ is definable in first-order arithmetic (specifically, $\Delta^0_5$), so we're done.
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for @JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default? @JosephWright anyway, if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font. @DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma). @egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge. @barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually) @barbarabeeton but I have another question, maybe better suited for you: if a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee, or is there a better wording? @barbarabeeton the \overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us. @DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.) @barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct; I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow) if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.) @egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended. @barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really @DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts. @DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ... @DavidCarlisle I see no real way out.
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts. MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located.The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers... has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the Word file editable? I'm not familiar with Word, so I'm not sure if there are things there that would just get goofed up or something. @baxx never use Word (have a copy just because, but I don't use it ;-) but have helped enough people with things over the years, these days I'd probably convert to html (latexml or tex4ht) then import the html into Word and see what comes out. You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!... @baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes; the thing is you just never know, you may see a simple article class document that uses no hard-looking packages, then get halfway through and find \makeatletter and several hundred lines of tricky tex macros copied from this site that are over-writing latex format internals.
The probability density function (PDF) is the function that represents the density of a continuous random variable lying within a certain range of values. The probability that the variable falls in an interval $(a,b)$ is given by \(P(a<x<b)=\int_{a}^{b}f(x)dx\). Questions on the probability density function Question 1: The pdf of a distribution is given as \(f(x)= \left\{\begin{matrix}x;\; for\ 0< x< 1 \\ 2-x;\; for \ 1< x< 2 \\ 0;\; for\ x> 2 \end{matrix}\right \}\) Calculate the probability within the interval \((0.5< x< 1.5)\). Solution: \(P(0.5< x< 1.5)=\int_{0.5}^{1.5}f(x)dx\) \(=\int_{0.5}^{1}f(x)dx+\int_{1}^{1.5}f(x)dx\) \(=\int_{0.5}^{1}xdx+\int_{1}^{1.5}(2-x)dx\) \(=\left ( \frac{x^{2}}{2} \right )_{0.5}^{1}+\left ( 2x-\frac{x^{2}}{2} \right )_{1}^{1.5}\) \(=\frac{3}{8}+\frac{3}{8}\) = 3/4
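The two integrals in the solution can be verified with a few lines of Python using the exact antiderivatives (names mine):

```python
def F1(x):  # antiderivative of f(x) = x on (0, 1)
    return x**2 / 2

def F2(x):  # antiderivative of f(x) = 2 - x on (1, 2)
    return 2*x - x**2 / 2

p = (F1(1.0) - F1(0.5)) + (F2(1.5) - F2(1.0))
print(p)  # 0.75
```

Each piece contributes 3/8 = 0.375, confirming the total of 3/4.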
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
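For the kinematics question at the top of this exchange: since $v(t)=3t^2-12t+9=3(t-1)(t-3)$, the particle moves left exactly where $v(t)<0$, i.e. on $1<t<3$. A throwaway numeric sanity check (not part of the original discussion):

```python
def v(t):
    """Velocity from x(t) = t^3 - 6t^2 + 9t + 11."""
    return 3*t**2 - 12*t + 9  # factors as 3*(t - 1)*(t - 3)

# zero at the roots, negative strictly between them, positive outside
print(v(0.5), v(1), v(2), v(3), v(4))  # 3.75 0 -3 0 9
```

Sampling one point in each region confirms the sign pattern + 0 - 0 +, so the particle moves left only for $1 < t < 3$.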
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there a theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to it is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1). For any transitive G-action on a set X with stabilizer H, G/H $\cong$ X set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
Repeated Measures Design

Repeated measures analysis of variance (rANOVA) is one of the most commonly used statistical approaches to repeated measures designs.

Learning Objectives

Evaluate the significance of repeated measures design given its advantages and disadvantages

Key Takeaways

Key Points

Repeated measures design, also known as within-subjects design, uses the same subjects with every condition of the research, including the control.
Repeated measures design can be used to conduct an experiment when few participants are available, to conduct an experiment more efficiently, or to study changes in participants' behavior over time.
The primary strength of the repeated measures design is that it makes an experiment more efficient and helps keep the variability low.
A disadvantage of the repeated measures design is that it may not be possible for each participant to be in all conditions of the experiment (due to time constraints, location of the experiment, etc.).
One of the greatest advantages of using the rANOVA, as is the case with repeated measures designs in general, is that it allows you to partition out variability due to individual differences.
The rANOVA is still highly vulnerable to effects from missing values, imputation, nonequivalent time points between subjects, and violations of sphericity, factors which can lead to sampling bias and inflated levels of type I error.

Key Terms

longitudinal study: A correlational research study that involves repeated observations of the same variables over long periods of time.
sphericity: A statistical assumption requiring that the variances for each set of difference scores are equal.
order effect: An effect that occurs when a participant in an experiment is able to perform a task and then perform it again at some later time.

Repeated measures design (also known as "within-subjects design") uses the same subjects with every condition of the research, including the control.
For instance, repeated measures are collected in a longitudinal study in which change over time is assessed. Other studies compare the same measure under two or more different conditions. For instance, to test the effects of caffeine on cognitive function, a subject's math ability might be tested once after they consume caffeine and another time when they consume a placebo.

Repeated measures design can be used to:

Conduct an experiment when few participants are available: The repeated measures design reduces the variance of estimates of treatment effects, allowing statistical inference to be made with fewer subjects.
Conduct an experiment more efficiently: Repeated measures designs allow many experiments to be completed more quickly, as only a few groups need to be trained to complete an entire experiment.
Study changes in participants' behavior over time: Repeated measures designs allow researchers to monitor how participants change over the passage of time, both in the case of long-term situations like longitudinal studies and in the much shorter-term case of order effects.

Advantages and Disadvantages

The primary strength of the repeated measures design is that it makes an experiment more efficient and helps keep the variability low. This helps to keep the validity of the results higher, while still allowing for smaller than usual subject groups. A disadvantage of the repeated measures design is that it may not be possible for each participant to be in all conditions of the experiment (due to time constraints, location of the experiment, etc.). There are also several threats to the internal validity of this design, namely a regression threat (when subjects are tested several times, their scores tend to regress towards the mean), a maturation threat (subjects may change during the course of the experiment) and a history threat (events outside the experiment may change the response of subjects between the repeated measures).
Repeated Measures ANOVA

Repeated measures analysis of variance (rANOVA) is one of the most commonly used statistical approaches to repeated measures designs.

Partitioning of Error

One of the greatest advantages of using the rANOVA, as is the case with repeated measures designs in general, is that you are able to partition out variability due to individual differences. Consider the general structure of the [latex]\text{F}[/latex]-statistic: [latex]\text{F} = \dfrac{\text{MS}_{\text{treatment}}}{\text{MS}_{\text{error}}} = \dfrac{\text{SS}_{\text{treatment}} / \text{df}_{\text{treatment}}}{\text{SS}_{\text{error}} / \text{df}_{\text{error}}}[/latex] In a between-subjects design there is an element of variance due to individual differences that is combined with the treatment and error terms: [latex]\text{SS}_{\text{total}} = \text{SS}_{\text{treatment}} + \text{SS}_{\text{error}}[/latex] [latex]\text{df}_{\text{total}} = \text{n}-1[/latex] In a repeated measures design it is possible to account for these differences and partition them out from the treatment and error terms. In such a case, the variability can be broken down into between-treatments variability (or within-subjects effects, excluding individual differences) and within-treatments variability. The within-treatments variability can be further partitioned into between-subjects variability (individual differences) and error (excluding the individual differences).
[latex]\text{SS}_{\text{total}} = \text{SS}_{\text{treatment}} + \text{SS}_{\text{subjects}} + \text{SS}_{\text{error}}[/latex] [latex]\begin{align} \text{df}_{\text{total}} &= \text{df}_{\text{treatment}} + \text{df}_{\text{between subjects}} + \text{df}_{\text{error}}\\ &= (\text{k}-1) + (\text{n}-1) + ((\text{n}-\text{k})-(\text{n}-1)) \end{align}[/latex] In reference to the general structure of the [latex]\text{F}[/latex]-statistic, it is clear that by partitioning out the between-subjects variability, the [latex]\text{F}[/latex]-value will increase because the sum of squares error term will be smaller, resulting in a smaller [latex]\text{MS}_{\text{error}}[/latex]. It is noteworthy that partitioning variability pulls degrees of freedom out of the [latex]\text{F}[/latex]-test; therefore the between-subjects variability must be significant enough to offset the loss in degrees of freedom. If between-subjects variability is small, this process may actually reduce the [latex]\text{F}[/latex]-value.

Assumptions

As with all statistical analyses, there are a number of assumptions that should be met to justify the use of this test. Violations of these assumptions can moderately to severely affect results, and often lead to an inflation of type I error. Univariate assumptions include:

Normality: For each level of the within-subjects factor, the dependent variable must have a normal distribution.
Sphericity: Difference scores computed between two levels of a within-subjects factor must have the same variance for the comparison of any two levels.
Randomness: Cases should be derived from a random sample, and the scores between participants should be independent from each other.

The rANOVA also requires that certain multivariate assumptions be met, because a multivariate test is conducted on difference scores. These include:

Multivariate normality: The difference scores are multivariately normally distributed in the population.
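The partitioning described above can be made concrete with a small numeric sketch, using made-up data for 4 subjects measured under 3 treatments (all numbers invented for illustration):

```python
# rows = subjects, columns = treatments (toy data)
data = [[5, 6, 7],
        [4, 5, 6],
        [6, 8, 9],
        [5, 6, 8]]
s, k = len(data), len(data[0])  # subjects, treatments
N = s * k
grand = sum(sum(row) for row in data) / N

treat_means = [sum(row[j] for row in data) / s for j in range(k)]
subj_means = [sum(row) / k for row in data]

ss_total = sum((x - grand) ** 2 for row in data for x in row)
ss_treat = s * sum((m - grand) ** 2 for m in treat_means)
ss_subj = k * sum((m - grand) ** 2 for m in subj_means)   # partitioned out
ss_error = ss_total - ss_treat - ss_subj

df_treat, df_error = k - 1, (k - 1) * (s - 1)
F = (ss_treat / df_treat) / (ss_error / df_error)
print(round(ss_treat, 3), round(ss_subj, 3), round(ss_error, 3), round(F, 1))
```

Pulling ss_subj out of the error term is exactly what makes the error mean square small here; if the same numbers were analyzed between-subjects, the subject variability would stay in SS_error and the F-value would be far smaller.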
Randomness: Individual cases should be derived from a random sample, and the difference scores for each participant are independent from those of another participant.

[latex]\text{F}[/latex]-Test

Depending on the number of within-subjects factors and assumption violations, it is necessary to select the most appropriate of three tests:

Standard Univariate ANOVA [latex]\text{F}[/latex]-test: This test is commonly used when there are only two levels of the within-subjects factor. It is not recommended when there are more than 2 levels of the within-subjects factor, because the assumption of sphericity is commonly violated in such cases.
Alternative Univariate test: These tests account for violations of the assumption of sphericity, and can be used when the within-subjects factor exceeds 2 levels. The [latex]\text{F}[/latex] statistic will be the same as in the Standard Univariate ANOVA [latex]\text{F}[/latex]-test, but is associated with a more accurate [latex]\text{p}[/latex]-value. This correction is done by adjusting the [latex]\text{df}[/latex] downward for determining the critical [latex]\text{F}[/latex] value.
Multivariate Test: This test does not assume sphericity, but is also highly conservative.

While there are many advantages to repeated-measures design, the repeated measures ANOVA is not always the best statistical analysis to conduct. The rANOVA is still highly vulnerable to effects from missing values, imputation, nonequivalent time points between subjects, and violations of sphericity. These issues can result in sampling bias and inflated rates of type I error.

Further Discussion of ANOVA

Due to the iterative nature of experimentation, preparatory and follow-up analyses are often necessary in ANOVA.
Learning Objectives

Contrast preparatory and follow-up analysis in constructing an experiment

Key Takeaways

Key Points

Experimentation is often sequential, with early experiments often designed to provide a mean-unbiased estimate of treatment effects and of experimental error, and later experiments designed to test a hypothesis that a treatment effect has an important magnitude.
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis given a certain ANOVA design, effect size in the population, sample size and significance level.
Effect size estimates facilitate the comparison of findings across studies and disciplines.
A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests, in order to assess which groups differ from which other groups or to test various other focused hypotheses.

Key Terms

iterative: Of a procedure that involves repetition of steps (iteration) to achieve the desired outcome.
homoscedasticity: A property of a set of random variables where each variable has the same finite variance.

Some analysis is required in support of the design of the experiment, while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.

Preparatory Analysis

The Number of Experimental Units

In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Most often, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.
Experimentation is often sequential, with early experiments often being designed to provide a mean-unbiased estimate of treatment effects and of experimental error, and later experiments often being designed to test a hypothesis that a treatment effect has an important magnitude. Less formal methods for selecting the number of experimental units include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals), and methods based on achieving a desired confidence interval.

Power Analysis

Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.

Effect Size

Effect size estimates facilitate the comparison of findings in studies and across disciplines. Therefore, several standardized measures of effect gauge the strength of the association between a predictor (or set of predictors) and the dependent variable. Eta-squared ([latex]\eta^2[/latex]) describes the ratio of variance explained in the dependent variable by a predictor, while controlling for other predictors. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample). On average, it overestimates the variance explained in the population.
As the sample size gets larger the amount of bias gets smaller:

[latex]\eta^2 = \dfrac{\text{SS}_{\text{treatment}}}{\text{SS}_{\text{total}}}[/latex]

Jacob Cohen, an American statistician and psychologist, suggested effect sizes for various indexes, including [latex]\text{f}[/latex] (where [latex]0.1[/latex] is a small effect, [latex]0.25[/latex] is a medium effect and [latex]0.4[/latex] is a large effect). He also offers a conversion table for eta-squared ([latex]\eta^2[/latex]) where [latex]0.0099[/latex] constitutes a small effect, [latex]0.0588[/latex] a medium effect and [latex]0.1379[/latex] a large effect.

Follow-Up Analysis

Model Confirmation

It is prudent to verify that the assumptions of ANOVA have been met. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything, including time and modeled data values. Trends hint at interactions among factors or among observations. One rule of thumb is: if the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations, and our results will still be approximately correct.

Follow-Up Tests

A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be performed in order to assess which groups are different from which other groups, or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc. Planned tests are determined before looking at the data, and post hoc tests are performed after looking at the data. Post hoc tests, such as Tukey's range test, most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for type I errors.
Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of group means where one set has two or more groups (e.g., compare average group means of groups [latex]\text{A}[/latex], [latex]\text{B}[/latex], and [latex]\text{C}[/latex] with that of group [latex]\text{D}[/latex]). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels.
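The eta-squared formula above (SS for treatment divided by total SS) is straightforward to compute by hand. A minimal pure-Python sketch for a one-way layout; the three groups and their values are made up for illustration:

```python
def eta_squared(groups):
    """Eta-squared = SS_treatment / SS_total for a one-way layout."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    ss_total = sum((x - grand_mean) ** 2 for x in all_obs)
    ss_treatment = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                       for g in groups)
    return ss_treatment / ss_total

# Three made-up treatment groups with clearly separated means:
groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]
eta2 = eta_squared(groups)
print(round(eta2, 3))    # → 0.9
print(eta2 > 0.1379)     # → True (a "large" effect on Cohen's scale)
```

Remember the caveat from the text: this sample eta-squared is a biased (upward) estimate of the variance explained in the population.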
Firstly, we must find the surface gravity of such a planet, since surface gravity depends not only on mass but also on distance from the center, and hence on the planet's radius, because $$g=G\frac{M}{R^2}$$ $g$ = surface gravity, $G$ = gravitational constant ($\approx 6.67408 \times 10^{-11}\,m^3\,kg^{-1}\,s^{-2}$), $M$ = mass of object, $R$ = radius of object. Utilizing NASA's Exoplanet Archive and filtering for planets with a mass of 2.5-3.5 Earth masses (no planet will have exactly 3 Earth masses), we get 20 planets. That is a low figure. So we add more data from exoplanets.org to get to a grand total of 289 planets (9 excluded as they were discovered by microlensing and hence we don't know their radii). The average mass is $2.996925319\,M_\oplus$ (Earth masses), basically 3, and the average radius is $1.395256824\,R_\oplus$ (Earth radii), OR $1.71541\times10^{25}\,kg$ and $8457.431881\,km$. This gives us a final surface gravity of $$(6.67408 \times 10^{-11}\,m^3\,kg^{-1}\,s^{-2})\times\frac{1.71541\times10^{25}\,kg}{(8457431.881\,m)^2}=16.00542\,m\,s^{-2}$$ which is around $1.632\,g$ (the Halo rings from Halo have a stated acceleration of $1.55\,g$). And frankly, as discussions of hypergravity go, this is pretty low, comparable to a reentry acceleration of 1.8 g. This planet would, on a list, come between Jupiter's $24.92\,m\,s^{-2}$ and Neptune's $11.15\,m\,s^{-2}$. The effects on humans in the long term will be more than noticeable, but not critically dangerous or too awesome a spectacle. A person with the Earth-average mass of 69.2 kg would now weigh the equivalent of about 113 kg, so not very comfortable, but survivable for a healthy 20-year-old adult. Falling would be a huge problem now, as you will hit the ground harder, while the extra bone, muscle and/or fat that would normally push the weighing scales that high is not there to absorb the impact force.
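The surface-gravity arithmetic above is easy to double-check in a few lines (constants as quoted in the text; Earth values thrown in as a sanity check):

```python
G = 6.67408e-11           # gravitational constant, m^3 kg^-1 s^-2
G_EARTH = 9.80665         # standard gravity, m s^-2
M_EARTH, R_EARTH = 5.9722e24, 6.371e6   # Earth's mass (kg) and mean radius (m)

def surface_gravity(mass_kg, radius_m):
    """g = G*M/R^2, the same formula as in the text."""
    return G * mass_kg / radius_m ** 2

# Sanity check against Earth first:
print(round(surface_gravity(M_EARTH, R_EARTH), 2))   # ≈ 9.82

# The averaged exoplanet from the text: ~3 Earth masses, ~1.395 Earth radii.
g = surface_gravity(1.71541e25, 8457431.881)
print(round(g, 2), round(g / G_EARTH, 2))            # ≈ 16.01 m/s^2, ≈ 1.63 g
```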
Hunting would be much more difficult, and so would avoiding being hunted (assuming there is local fauna and flora), due to the increased stress on the heart while running. Sitting down would also be very tiring and would be forgotten in favor of lying down, as then there is no vertical displacement of blood required. People will want to lose weight and get down to a much more tolerable 75-80 kg range, but their actual mass would then be between about 46 and 49 kg, meaning a lot of their body would need to be pure muscle. Fasting, due to lack of food or just exhaustion, would be difficult as there would be little to no fat in the body. Though there is some hope, as long as there are no blood-thirsty man-eaters out there: the new citizens can just lie down continuously for a day or two with minimal movement, acclimating to the new environment as the heart and skeletal muscles increase in size and strength. No place for flab here. Except when they are starving and only the flabby survive. Long term survival: Reproduction wouldn't be impossible, and pregnancy wouldn't kill (at least it doesn't in mice). But the babies would be lighter, and much better suited to the environment they were born into. The original survivors would also be much stronger and more comfortable than before, but leagues behind their children and grandchildren. This is all assuming the original survivors didn't die out, had a guide book of some kind, built some sort of shelter (which may crush them in an earthquake), and had a food source with decent availability. Sooo, yeah. Have the sudden stress and environment kill some 40% of the initial 100, maybe feed the corpses to the survivors to sustain them. Get used to the new environment. Get food. Get defenses. Get shelter. Get it on. Then humanity will do what it does, conquering lands and messing up habitats. In between, you may accidentally end up ripping off some survival show.
I know a bunch of different versions of the mean value theorem for integrals, and yet none of them is able to solve my problem, but it sure as heck looks like one of them should. I have attempted the following versions: $1)$ Let $f$ be a continuous function on $[a,b]$. Then there is $c \in [a,b]$ such that $$\int_a^b f(x)\, dx = f(c)(b-a)\,.$$ $2)$ Let $f$ be a continuous function and $\alpha$ a function of bounded variation. Then there is $c \in [a,b]$ such that $$\int_a^b f(x)\, d \alpha (x) = f(a)\int_a^c d\alpha (x) + f(b)\int_c^b d \alpha(x)\,.$$ $3)$ The two theorems stated in this question here. Yet, unless I'm crazy, none of these theorems is able to prove the following fact, which certainly feels like some kind of mean value theorem problem. Problem: Suppose $f$ is bounded and continuous on $[0,\infty)$. Show that there is some $c \in (0, \infty)$ such that $$\int_0^\infty f(x) e^{-x}\, dx = f(c)\,.$$ Do I need a different version? Something else altogether? Did I just miss something obvious? This was an old test problem, and I need someone to put me out of my misery over this question.
Now showing items 1-10 of 27

Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...

Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...

Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...

Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...

Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
I'm currently learning the mathematical concepts of distributions and the way to use them in a ray tracer with the book "Physically Based Rendering". Let's start by uniformly sampling a hemisphere: As you probably know, a way to generate the uniformly distributed direction is to use the inversion method. Let us denote by $p$ our uniform probability density function: $p(\omega) = \cfrac{1}{2\pi}$ and so $p(\theta, \phi) = \sin(\theta)p(\omega)$. Then you compute $p(\theta)$ and $p(\phi | \theta)$, you integrate to get the cumulative distribution function, and you invert that function. My questions are: What does $p(\theta, \phi)$ really mean? What is the transformation between $p(\omega)$ and $p(\theta, \phi)$? In the book, to find $p(\theta,\phi)$ they state that $p(\theta, \phi)\, d\theta\, d\phi = p(\omega)\,d\omega$, but why? I know that for $p(\omega)$, our random variable is a given $\omega$ (a direction), so the function represents a relative probability for this direction (relative to a solid angle, i.e. a delta area around this direction). But for $p(\theta,\phi)$, our random variable is now the couple $(\theta,\phi)$. To what extent is it different from a direction?
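For what it's worth, here is what the inversion-method recipe described above works out to in code, a sketch assuming the usual convention ($\theta$ measured from the $z$-axis, $z$ pointing "up" out of the hemisphere); the function name is my own:

```python
import math, random

def uniform_sample_hemisphere(u1, u2):
    """Inversion method for p(omega) = 1/(2*pi) on the hemisphere:
    p(theta) = sin(theta)      =>  theta = arccos(u1)
    p(phi | theta) = 1/(2*pi)  =>  phi   = 2*pi*u2"""
    theta = math.acos(u1)
    phi = 2.0 * math.pi * u2
    sin_t = math.sin(theta)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))

random.seed(0)
samples = [uniform_sample_hemisphere(random.random(), random.random())
           for _ in range(10_000)]
# Every sample lies on the unit sphere with z >= 0 ...
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 and z >= 0.0 for x, y, z in samples)
# ... and E[z] approaches the analytic value 1/2 for a uniform hemisphere.
print(round(sum(z for _, _, z in samples) / len(samples), 1))   # → 0.5
```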
First of all, the metric tensor is one additional piece of structure one inserts on a smooth manifold to measure lengths and angles. The metric is indeed not present in all applications of Differential Geometry to Physics (see e.g. Lagrangian Mechanics). In that case, it is important to know also how to deal with manifolds without metric tensors. Now, about the coordinate systems: the point is that indeed manifolds usually require more than one to be covered. Take the two-sphere $S^2$ for example; you need at least two stereographic projections to cover it all. The idea, however, is not to piece two coordinate systems together to get a global one. The idea is that given a point, around it there is some coordinate system that works, and if you have any overlapping one, you can be sure that the results and definitions don't depend on which coordinate system you use. A more readily seen example is cartesian and spherical coordinates in $\mathbb{R}^3$: you can use either of them. If $(x,U)$ and $(y,V)$ are two coordinate systems, then on the overlap $U\cap V$, if you make sure results do not depend on the coordinate system, you can think of them as intrinsic to $M$ and yet use coordinates to carry out calculations. You can't assume that there is just one coordinate system, because if you look at examples you find objects which you certainly want to consider as manifolds but which cannot be covered by just one coordinate system. To clarify those points I recommend you take a look at these two books: Modern Differential Geometry for Physicists - C. J. Isham; A Comprehensive Introduction to Differential Geometry Vol. 1 - Michael Spivak. Book 2 is more technical and is aimed at mathematicians, but it's very good. I recommend you first look at 1 and then look at some things in 2 to see some more detailed constructions. Edit: One counterexample might help you out, so I decided to give one.
If $M$ is a smooth manifold and if $(x,U)$ is a coordinate system, then $x : U\subset M\to \mathbb{R}^n$ is a homeomorphism. If there exists one global coordinate system $(x,M)$, then $M$ is homeomorphic to $\mathbb{R}^n$ itself. This is problematic because many manifolds one encounters (not just in Math but in Physics as well) have a more complicated topology. Let's show that the sphere $S^2$ cannot have a global coordinate system. $S^2$ is compact: in truth, $S^2$ is endowed with the subspace topology and because of that, it suffices to show $S^2$ regarded as a subset of $\mathbb{R}^3$ is closed and bounded. Boundedness is easy: if $p\in S^2$ then $|p|=1$, hence $|p|< 2$, so that $S^2\subset B(0, 2)$ where $B(a,r)$ is the ball centered at $a$ with radius $r$. Closedness is also easy: $S^2 = \{(a,b,c)\in \mathbb{R}^3 : a^2+b^2+c^2 = 1\}$, so that if we set $f : \mathbb{R}^3\to \mathbb{R}$ by $f(a,b,c)=a^2+b^2+c^2$, then $S^2 = f^{-1}(1)$; but $f$ is continuous and $\{1\}$ is a closed set, so that $S^2$ is closed. Since $S^2$ is closed and bounded, $S^2$ is compact. Suppose now that $S^2$ has a global coordinate system $(x,S^2)$; then $x: S^2\to \mathbb{R}^2$ is a homeomorphism, but since $S^2$ is compact, this would make $\mathbb{R}^2$ compact, which is obviously wrong. So we are forced to conclude $S^2$ has no global coordinate system. So your procedure gives one $n$-tuple of numbers for each point of the manifold, but it doesn't respect the topological structure. If we suppose a global coordinate system for the sphere, we get an absurdity, unless you accept coordinates which are not continuous, and those are not really interesting. Now regarding the metric tensor: I'll say it again, the metric is not something you deduce from coordinates. The metric is something you postulate. In GR in particular it is the solution to Einstein's Equations.
The formula you talk about is just a way to relate coordinate representations of the metric tensor in different coordinate systems, not a way to deduce it from coordinates. For example, if $M = \{(a,b)\in \mathbb{R}^2 : b > 0\}$ and if you use cartesian coordinates $(x,y)$, you can define $g$ in coordinates by $$g = \dfrac{dx\otimes dx + dy\otimes dy}{y^2}$$ This is the coordinate representation of $g$ in this coordinate system. If you choose any other coordinates, $g$ will transform its representation according to the formula you gave. Also, see that I postulated the metric, instead of deducing it. This post imported from StackExchange Mathematics at 2015-01-20 11:58 (UTC), posted by SE-user user1620696
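To make the "transform according to the formula" point concrete, here is a small numerical check (pure Python, my own illustrative example): the rule $g'_{ab}=\sum_{i,j}\frac{\partial x^i}{\partial x'^a}\frac{\partial x^j}{\partial x'^b}\,g_{ij}$ turns the cartesian representation of the metric above into its polar representation.

```python
import math

def pullback(jac, g):
    """g'_{ab} = sum_{ij} J^i_a J^j_b g_{ij}: the transformation rule
    for a (0,2)-tensor under a change of coordinates."""
    n = len(g)
    return [[sum(jac[i][a] * jac[j][b] * g[i][j]
                 for i in range(n) for j in range(n))
             for b in range(n)] for a in range(n)]

# Hyperbolic metric on the upper half-plane in cartesian coordinates (x, y):
# g = (dx^2 + dy^2) / y^2.  Try polar coordinates x = r cos(phi), y = r sin(phi).
r, phi = 2.0, 1.0                      # an arbitrary test point, phi in (0, pi)
y = r * math.sin(phi)
g_cart = [[1.0 / y**2, 0.0], [0.0, 1.0 / y**2]]
jac = [[math.cos(phi), -r * math.sin(phi)],    # dx/dr, dx/dphi
       [math.sin(phi),  r * math.cos(phi)]]    # dy/dr, dy/dphi
g_polar = pullback(jac, g_cart)

# Expected closed form: g'_rr = 1/(r sin phi)^2, g'_phiphi = 1/sin(phi)^2,
# and a vanishing off-diagonal term.
assert abs(g_polar[0][0] - 1.0 / (r * math.sin(phi))**2) < 1e-12
assert abs(g_polar[1][1] - 1.0 / math.sin(phi)**2) < 1e-12
assert abs(g_polar[0][1]) < 1e-12
print("transformation rule verified")
```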
Are negative incomes (perhaps even negative prices) also allowed? Unless they are, non-horizontal or non-vertical lines are impossible as Engel curves, because they would have to intersect the negative quadrant of the coordinate system. I will come back to this later. Without additional restrictions (perhaps a lot of additional restrictions) on the preference relations, for any non-negatively sloped line I can fabricate a utility function such that the line will be its Engel curve. Let the line $L$ consist of the points $\{(x,y)\in\mathbb{R}^2 \mid c = a \cdot x + b \cdot y\}$ where $a,b,c \in \mathbb{R}$. I will denote this set of points by $L$, the line. It is assumed that we are talking about an actual line, so $a,b$ are not simultaneously equal to $0$. Let $$U(x,y) = \left\{\begin{array}{lc}\arctan (b \cdot x + a \cdot y) & \text{ if } (x,y) \in L \\-\pi & \text{ if } (x,y) \notin L. \end{array}\right.$$ Claim: The utility function will always be maximized by a point on $L$. This is easy, because $\arctan$ maps to $(-\frac{\pi}{2},\frac{\pi}{2})$, thus its points always yield greater utility than other points. We specified that the slope of $L$ is non-negative, so for any positive price pair the budget line will cross it, enabling us to choose a point on $L$. We now come back to the question of whether income $I$ can be negative. If it can be, then every point of $L$ will be a solution of the consumer's utility maximization problem for a certain $I,p_x,p_y$. The reason for this is that $b \cdot x + a \cdot y$ creates an ordering of the points of $L$. (Lines perpendicular to $L$ have a slope of $-b/a$; these are the level curves of $b \cdot x + a \cdot y$.) Select a point $(x_0,y_0) \in L$. By setting $I$ to $$I = p_x \cdot x_0 + p_y \cdot y_0,$$ the consumer has barely enough money to purchase this basket. If $L$ is non-negatively sloped and prices are positive, then no points with higher utility are attainable. The construction I gave is not unique.
One could also use the continuous utility function $$\hat{U}(x,y) = \min\left(a \cdot \left(x - \frac{c}{a} \right); - b \cdot y\right)$$ to achieve the same result. (If $a = 0$ one may use $$\hat{U}(x,y) = \min\left(a \cdot x; - b \cdot \left(y - \frac{c}{b} \right)\right)$$ instead.)
I was in the middle of writing the same old geographic distance calculation using the Haversine formula when it occurred to me: shouldn't there be a simpler way to do this? Haversine is of course closely related to the spherical Law of Cosines. But in thinking about this problem, I ran across Napier's Rules for right-angled spherical triangles. It seems like Napier's Rules should apply: after all, latitude small circles and longitude great circles always intersect at right angles, so you should be able to draw a right-angled spherical triangle where the hypotenuse connects any two points (unless the case devolves into a line or a single point). So if I'm applying Napier's Rules correctly, if our delta latitude and delta longitude in radians are $a$ and $b$ respectively, the angle $c$ should trace the arc between the two points. So Napier says: $\sin(\pi-c) = \cos(a) \cos(b)$ Which should then simplify to: $c = \arcsin(\cos(a) \cos(b))$ But when I try to verify this with a couple of test points, the result doesn't match the Haversine formula. Is there a mistake in my (admittedly rusty) algebra, or is my mistake in assuming I can apply Napier's Rules to this problem?
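A quick numerical experiment (my own sketch) suggests there are two separate issues: Napier's rule for the hypotenuse gives $\cos c = \cos a \cos b$, so it is $\arccos$, not $\arcsin$, that is needed; and even then the construction only forms a genuine spherical right triangle when the latitude leg is itself a great-circle arc, i.e. when one vertex lies on the equator.

```python
import math

def haversine_angle(lat1, lon1, lat2, lon2):
    """Central angle between two points (radians in, radians out)."""
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat/2)**2 + math.cos(lat1)*math.cos(lat2)*math.sin(dlon/2)**2
    return 2 * math.asin(math.sqrt(h))

# Napier's rule for the hypotenuse of a right spherical triangle:
# cos(c) = cos(a)cos(b), i.e. arccos rather than arcsin.
a, b = math.radians(30), math.radians(40)   # legs: delta-lat, delta-lon
napier = math.acos(math.cos(a) * math.cos(b))

# With one vertex on the equator, the meridian leg and the equator leg really
# do meet at a right angle along great circles, and the two results agree:
hav = haversine_angle(0.0, 0.0, a, b)
assert abs(napier - hav) < 1e-9

# Away from the equator the latitude "leg" is not a great-circle arc, so the
# construction is no longer a spherical right triangle and the values differ:
hav2 = haversine_angle(math.radians(20), 0.0, math.radians(50), b)
print(round(math.degrees(napier - hav2), 1), "degrees of disagreement")
```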
In order to prove (the non-trivial part of) Chang-Los-Suszko's theorem [1], I'm struggling with the following lemma: Lemma. Let $T$ be a $\mathcal L$-theory and $T_{\forall\exists}$ the set of $\forall\exists$-sentences that are consequences of $T$. Then for every model $\mathfrak M \models T_{\forall\exists}$, there exists $\mathfrak N \models T$ such that $\mathfrak M \subseteq \mathfrak N$ and the inclusion is essentially closed. I recall here that an inclusion $\mathfrak M \subseteq \mathfrak N$ is essentially closed if for every quantifier-free formula $\varphi(\bar x, \bar y)$ and for every tuple $\bar m$ of elements of $M$, one has $$ \mathfrak N \models \exists \bar x \varphi(\bar x,\bar m) \quad \text{implies} \quad\mathfrak M \models \exists \bar x \varphi(\bar x,\bar m) .$$ My thought was to get $\mathfrak N$ by showing that the $\mathcal L_M$-theory $$ T \cup \Delta(\mathfrak M) \cup \{ (\exists \bar x\varphi(\bar x,\bar m)) \to \varphi(\bar a_{\varphi(\bar x, \bar m)},\bar m) : \varphi \ \text{quantifier-free}, \bar m \in M \} $$ is consistent, where $\Delta(\mathfrak M)$ is the simple diagram of $\mathfrak M$, and where $\bar a_{\varphi(\bar x, \bar m)}$ is a choice of a tuple of elements of $M$ such that $\mathfrak M \models \varphi(\bar a_{\varphi(\bar x,\bar m)},\bar m)$ if one exists, and is arbitrary otherwise. And to do so, of course, I would use compactness and only show that the theories $$ T \cup \{ \psi(\bar m) \wedge \bigwedge_{i=1}^n\left((\exists \bar x\varphi_i(\bar x,\bar m_i)) \to \varphi_i(\bar a_{\varphi_i(\bar x, \bar m_i)},\bar m_i) \right)\} $$ are consistent (with $\psi$ and the $\varphi_i$ quantifier-free).
Here I'm kind of stuck… I want to express the latter formula (between the braces) as some $\exists \bar z \forall \bar t \chi(\bar z,\bar t)$ with $\chi$ quantifier-free: then $\mathfrak M$ will satisfy it and so prevent $T$ from proving its negation, yielding a model $\mathfrak N$ of $T$ satisfying $\exists \bar z \forall \bar t \chi(\bar z,\bar t)$. QED. It does not seem far from reach, but I just can't compute that $\chi$. I hope someone can give me a hint (or tell me that I'm completely off). [1] For those who wonder, Chang-Los-Suszko's theorem states that a theory $T$ is inductive (i.e. its class of models is closed under taking increasing unions) if and only if $T$ admits a $\forall\exists$-axiomatization.
Let $a$, $b$ and $c$ be positive numbers such that $a+b+c=2$. Prove: $$\frac{ab}{\sqrt{2c+a+b}}+\frac{bc}{\sqrt{2a+b+c}}+\frac{ca}{\sqrt{2b+c+a}}\le\sqrt\frac{2}{3}$$ Additional info: I'm looking for solutions and hints using Cauchy-Schwarz, Hölder and AM-GM, because I have background in them. Things I have tried: I was thinking of making the denominators smaller using AM-GM, but I was not successful. My other idea was to rewrite the LHS in this form, something like my idea on this question: $$A-\frac{ab}{\sqrt{2c+a+b}}+B-\frac{bc}{\sqrt{2a+b+c}}+C-\frac{ca}{\sqrt{2b+c+a}}$$ But I was not able to observe anything good. I don't know if this will lead to something useful, but here is my other idea: let $x=2c+a+b$, $y=2a+b+c$, $z=2b+c+a$. Rewriting the LHS: $$\sum\limits_{cyc}\frac{(3y-(x+z))(3z-(y+x))}{16\sqrt x} \le \sqrt\frac{2}{3}$$ where $\sum\limits_{cyc}$ denotes sums over cyclic permutations of the symbols $x,y,z$. Another thing I observed is that $(x-y-z)^2-4(y-z)^2 = (3y-(x+z))(3z-(y+x))$. I looked at related problems and I think this and (Prove $\frac{a}{ab+2c}+\frac{b}{bc+2a}+\frac{c}{ca+2b} \ge \frac 98$) may share some idea in their proofs with my inequality. Well, it seems like someone posted it a little later on AoPS; right now there is a solution there by $uvw$ and one by Cauchy-Schwarz. I post the starting part of the Cauchy solution (credits to arqady of AoPS). By Cauchy-Schwarz: $$\left(\sum_{cyc}\frac{ab}{\sqrt{2c+a+b}}\right)^2\leq(ab+ac+bc)\sum_{cyc}\frac{ab}{2c+a+b}$$ Hence, it remains to prove that: $$(ab+ac+bc)\sum_{cyc}\frac{ab}{2c+a+b}\leq\frac{(a+b+c)^3}{12}$$ I am stuck here.
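Not a proof, of course, but a quick numerical sanity check (my own code) is consistent with the bound, and with equality at $a=b=c=\frac{2}{3}$:

```python
import math, random

def lhs(a, b, c):
    """Left-hand side of the inequality, assuming a + b + c = 2."""
    return (a*b / math.sqrt(2*c + a + b)
            + b*c / math.sqrt(2*a + b + c)
            + c*a / math.sqrt(2*b + c + a))

bound = math.sqrt(2.0 / 3.0)

# Equality case a = b = c = 2/3 (so that a + b + c = 2):
assert abs(lhs(2/3, 2/3, 2/3) - bound) < 1e-12

# Random positive triples with a + b + c = 2 never exceed the bound:
random.seed(1)
for _ in range(50_000):
    u, v = sorted(random.random() for _ in range(2))
    a, b, c = 2*u, 2*(v - u), 2*(1 - v)
    if min(a, b, c) > 0:
        assert lhs(a, b, c) <= bound + 1e-9
print("no counterexample found")
```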
Is there a standard for how to align three columns of math I could follow? And what is the LaTeX for that? If there is no standard, how can I nicely format the columns in 3-column math? For example

-1 \leq \sin(t) \leq 1
\int_0^x (-1) dt \leq \int_0^x \cos(t) dt \leq \int_0^x (1) dt
-x \leq \cos(x)-1 \leq x

I want the columns to line up over the inequalities. The left column flushed right and the right column flush left. The center column centered. Note that I've tried 'alignat', but can't quite get the spacing working right since it insists on aligning everything right/left/right/left..., and the columns are too far apart. Also tried 'tabular', but the columns are too wide and it doesn't indent according to 'fleqn,reqno' in my documentclass statement.
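For reference, one common workaround looks like this (a sketch only: alignat gives right/left pairs, so the middle column below ends up right-aligned rather than truly centered, and the empty groups around \leq restore the binary-relation spacing at the alignment points):

```latex
\begin{alignat*}{3}
  -1 &{}\leq{}& \sin(t) &{}\leq{}& 1 \\
  \int_0^x (-1)\,dt &{}\leq{}& \int_0^x \cos(t)\,dt &{}\leq{}& \int_0^x (1)\,dt \\
  -x &{}\leq{}& \cos(x)-1 &{}\leq{}& x
\end{alignat*}
```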
Factors

Factors refer to systematic risks that explain, at least partially, movements in asset prices. Ross (1976) [1] introduced the first multifactor model -- the famous arbitrage pricing theory (APT). The APT states that asset returns are simply linear combinations of factors plus a small idiosyncratic component. Returns of asset $i$ may then simply be expressed in the following way: $ r_i=\sum_{k=1}^K \beta_{k,i}f_k +\epsilon_i \tag{1}$ Unambiguously, all factors appearing in equation (1) are important in explaining return variation (covariance) and, from a practical perspective, particularly relevant for risk management. However, not all factors are relevant for proxying the state of the economy (the asset pricing wording) or just simply bad times. Examples of such factors are: sectors, countries or commodities. On the other hand, factors representing (or correlated with) these bad times are crucial in explaining differences in average returns, and are important in the asset pricing literature, characterizing the 'stochastic discount factor'. Differences in average returns are by this logic simply a result of risk compensation; hence, the factor theory gives rise to differences in expected returns. Classically, these systematic factors (also often called styles) are known as market, value, momentum, size, quality, carry, volatility, etc. and are widely used in the asset management industry to achieve higher returns. Exposures of assets to these factors are typically measured in the form of betas ($\beta$). This explains the meaning of the term "smart beta" in the investment industry as a representation of factor based investing. ("Smart" because of the positive past performance of these factors.) Why is factor investing important from investors' point of view? First, several factors provide positive risk premia, which might be attractive if the risks involved can be tolerated.
Second, many (expensive) active funds sell simple beta strategies as alpha ($\alpha$) - for example, a value fund being benchmarked against a market index, rather than against a simple value index. Investors should understand what exactly they are paying for. The question whether the positive (market neutral) returns these factors have delivered in the past are indeed due to positive risk premia associated with these (risk) factors, or whether they are due to behavioral biases, will not be answered here. We follow a similar strategy as the Nobel prize committee in 2013 and simply point to both sides of the literature. This section provides an introduction to widely used factors with references to selected literature. We also add a section about low volatility and low beta investing, which have recently gained increased attention. For a general and more practitioner-oriented overview, we can highly recommend Andrew Ang's recent book [2], which provides a great and detailed overview of factor based asset management. The more academically interested reader is referred to John Cochrane's classic Asset Pricing book [3].

Market

Market is the most popular factor, both among practitioners and researchers. It accounts for the systematic risk of investing into the whole stock universe. It is typically measured by an index, a highly diversified portfolio of stocks. Examples of such indices are the S&P 500 or the Russell 1000 for the US; for the world, the most commonly used index is the MSCI World Developed. These indices are purely calculation based and not directly available as investable assets; the closest investable proxies for these stock market indices are ETFs, where the providers charge a small fee (institutional investors would also consider the futures market as a good proxy).
The market is also the most empirically studied factor, with solid theoretical foundations --- the famous Capital Asset Pricing Model (CAPM), which links returns of individual assets to the market return, is the cornerstone of modern finance and factor theory. The model results in a simple yet powerful equation: $\underbrace{E[R_i]-r_f}_{\text{Excess Return}} = \underbrace{\beta_i(E[R_M] - r_f)}_{\textbf{beta} \times \text{Market Risk Premium}} \tag{2}$ The CAPM predicts that the return on an asset in excess of the risk-free rate $E[R_i]-r_f$ is determined by: (i) the market risk premium $E[R_M] - r_f$ (market return in excess of the risk-free rate); (ii) how sensitive this asset's return is to the market risk premium. The sensitivity parameter, also known as beta, measures exposure to market risk: high-beta securities depend more on market movements and offer higher expected returns in order to compensate investors for losses during market downturns. Low-beta assets, on the other hand, have low risk premia - such assets are attractive for investors, who buy them as an 'insurance' against losses in a distressed market, pushing their prices up, or equivalently, decreasing their expected returns. The key point here is that exactly the same reasoning applies to other factors such as value or size - returns are just combinations of different risk factors with different sensitivities (or exposures) to each of these factors. For more details, see Market.

Value

Value is buying relatively discounted stocks. Obviously, we cannot simply use prices and sort them from low to high to determine whether a stock is relatively cheap. Hence, we first scale prices by some measure of stock or company value. Classically, we scale market prices by book value (the value of all balance sheet assets minus the outstanding debt) to receive a ratio called book-to-market. However, any reasonable indication of value would return a similar result - for example, the price-to-earnings ratio.
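To make the beta in equation (2) concrete: in practice it is typically estimated from historical return series as $\beta_i=\mathrm{cov}(R_i,R_M)/\mathrm{var}(R_M)$. A toy sketch with made-up return series (the numbers are illustrative only):

```python
def beta(asset_returns, market_returns):
    """CAPM/OLS beta: cov(r_i, r_m) / var(r_m)."""
    n = len(asset_returns)
    ma = sum(asset_returns) / n
    mm = sum(market_returns) / n
    cov = sum((a - ma) * (m - mm)
              for a, m in zip(asset_returns, market_returns)) / n
    var = sum((m - mm) ** 2 for m in market_returns) / n
    return cov / var

# Toy data: an asset that moves twice as much as the market.
market = [0.01, -0.02, 0.03, 0.00, -0.01]
asset = [2 * r for r in market]
print(round(beta(asset, market), 6))   # → 2.0
```

A beta of 2 means the asset is twice as sensitive to the market risk premium as the market itself, and equation (2) prices it accordingly.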
An investor buying a portfolio of relatively cheap stocks (i.e. those with high book-to-market ratios) and selling relatively expensive stocks (those with low book-to-market ratios) can earn a so-called value premium. Empirically, the average return of such a strategy was positive over almost 100 years of history. Risk-based or rational explanations can justify the existence of a positive value premium as compensation for some systematic risk factor. Similar to the market factor, all value stocks share a common factor which cannot be diversified away when investing in value. The premium can also be explained by behavioral theories. For more details, see Value.

Quality

Quality of an asset refers to a set of characteristics attractive to investors, so they are willing to buy such an asset at a premium. The main point here is that quality, in contrast to many other factors, can be defined across several dimensions. First of all, it is an aspect of value investing - indeed, buying high quality assets at a fair price is similar in spirit to buying average quality assets cheaply. Second, in the case of equity markets, high quality stocks are issued by firms which (relative to other firms): (i) are more profitable; (ii) grow their profits faster; (iii) are safer in terms of default probability, leverage, volatility of stock returns and returns on equity; (iv) redistribute a greater fraction of their profits to shareholders. In practice, an asset manager can evaluate the characteristics above and then aggregate them into some score. This score can be used further either as a screening tool - 'invest only in assets that score above some predefined threshold' - or as an allocation tool - 'buy the highest quality assets and sell the lowest quality assets' - or both. Empirical evidence shows that both long-only high-quality and long-short high-minus-low quality strategies generate positive and significant risk-adjusted returns.
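The "aggregate into some score" step described above can be sketched as cross-sectional z-scoring followed by averaging; everything in this example (the metric names, the firm values, the zero threshold) is hypothetical:

```python
import math

def zscores(values):
    """Cross-sectional z-scores of one quality metric across firms."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]

def quality_scores(characteristics):
    """characteristics: dict metric -> per-firm values (higher = better).
    Aggregate by averaging each firm's z-scores over all metrics."""
    per_metric = [zscores(vals) for vals in characteristics.values()]
    n_firms = len(per_metric[0])
    return [sum(col[i] for col in per_metric) / len(per_metric)
            for i in range(n_firms)]

# Hypothetical universe of four firms scored on three quality dimensions:
chars = {
    "profitability": [0.15, 0.05, 0.10, 0.02],
    "profit_growth": [0.08, 0.01, 0.06, 0.00],
    "payout_ratio":  [0.50, 0.20, 0.40, 0.10],
}
scores = quality_scores(chars)
screen = [i for i, s in enumerate(scores) if s > 0]   # screening-tool use
print(screen)   # → [0, 2]
```

The same scores could drive the allocation-tool use instead: go long the top-ranked firms and short the bottom-ranked ones.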
Moreover, taking quality into account helps to dramatically improve the risk profile of pure value investing, even in the universe of large-cap stocks. Note that almost all quality measures are unrelated to a stock's price information, in contrast to value and momentum. This is desirable from a diversification point of view, as a quality signal can be constructed independently of the information used in value and momentum strategies. For more details, see Quality.
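The score-then-screen procedure described above (evaluate several characteristics, aggregate into one score, pick the top names) might be sketched as follows; the firms, metrics and values are entirely hypothetical:

```python
# Sketch: aggregating hypothetical quality metrics into a single z-score.
# Higher profitability/payout is better; lower leverage is better (flipped).
from statistics import mean, stdev

firms = {
    "AAA": {"profitability": 0.22, "leverage": 0.30, "payout": 0.50},
    "BBB": {"profitability": 0.10, "leverage": 0.70, "payout": 0.10},
    "CCC": {"profitability": 0.15, "leverage": 0.45, "payout": 0.30},
}

def zscores(metric, flip=False):
    """Cross-sectional z-scores for one metric; flip=True inverts the sign."""
    vals = [f[metric] for f in firms.values()]
    m, s = mean(vals), stdev(vals)
    sign = -1.0 if flip else 1.0
    return {k: sign * (v[metric] - m) / s for k, v in firms.items()}

z = {k: 0.0 for k in firms}
for metric, flip in [("profitability", False), ("leverage", True), ("payout", False)]:
    for k, v in zscores(metric, flip).items():
        z[k] += v

best = max(z, key=z.get)   # allocation: buy highest quality, sell lowest
print(best)
```

Here the metrics are equally weighted; a real scorer would choose weights and a much richer metric set.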
Yeah, this software cannot be too easy to install. My installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and/or install it, and does a test LaTeX rendering. Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. He is usually in the 2nd monitor chat room. There are a lot of people in those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on tangents. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^\infty(\cos...$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval. @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
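The probabilistic definition quoted above, $F(x) = P\big(\sum_{n\ge 1} 2^{-n}\zeta_n < x\big)$ with $\zeta_n$ uniform on $[0,1]$, lends itself to a quick Monte Carlo check; the truncation depth and sample size below are arbitrary choices:

```python
# Monte Carlo sketch of F(x) = P( sum_{n>=1} 2^{-n} * zeta_n < x ),
# zeta_n uniform on [0,1]. The series is truncated at 30 terms.
import random

random.seed(1)

def sample(terms=30):
    return sum(random.random() / 2**n for n in range(1, terms + 1))

draws = [sample() for _ in range(20000)]
# The sum is symmetric about its mean 1/2, so F(1/2) should be about 1/2
f_half = sum(d < 0.5 for d in draws) / len(draws)
print(round(f_half, 2))
```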
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions in. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane? $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
Journal of Differential Geometry J. Differential Geom. Volume 102, Number 1 (2016), 67-126. Analytic differential equations and spherical real hypersurfaces Abstract We establish an injective correspondence $M \to \mathcal{E}(M)$ between real-analytic nonminimal hypersurfaces $M \subset \mathbb{C}^2$, spherical at a generic point, and a class of second order complex ODEs with a meromorphic singularity. We apply this result to the proof of the bound $\dim \mathfrak{hol}(M,p) \leq 5$ for the infinitesimal automorphism algebra of an arbitrary germ $(M,p) \not \sim (S^3, p')$ of a real-analytic Levi nonflat hypersurface $M \subset \mathbb{C}^2$ (the Dimension Conjecture). This bound gives the proof of the dimension gap $\dim \mathfrak{hol}(M,p) \in \{8, 5, 4, 3, 2, 1, 0 \}$ for the dimension of the automorphism algebra of a real-analytic Levi nonflat hypersurface. As another application we obtain a new regularity condition for CR-mappings of nonminimal hypersurfaces, which we call Fuchsian type, and prove its optimality for the extension of CR-mappings to nonminimal points. We also obtain an existence theorem for solutions of a class of singular complex ODEs. Article information Source J. Differential Geom., Volume 102, Number 1 (2016), 67-126. Dates First available in Project Euclid: 5 January 2016 Permanent link to this document https://projecteuclid.org/euclid.jdg/1452002878 Digital Object Identifier doi:10.4310/jdg/1452002878 Mathematical Reviews number (MathSciNet) MR3447087 Zentralblatt MATH identifier 1342.53079 Citation Kossovskiy, Ilya; Shafikov, Rasul. Analytic differential equations and spherical real hypersurfaces. J. Differential Geom. 102 (2016), no. 1, 67--126. doi:10.4310/jdg/1452002878. https://projecteuclid.org/euclid.jdg/1452002878
I think CompEcon covered most of the points that I was going to mention. Just a few last thoughts: 1) Why are Epstein-Zin preferences important? The preferences are important because they allow you to separate two of the dimensions along which people care about their allocations; namely, risk aversion and intertemporal substitution. Additionally, one ... This is only a quick answer, unfortunately. The key intuitive insight for Epstein-Zin is that they separate two distinct properties of preferences: risk aversion ("I'd prefer less uncertainty to more uncertainty*") and intertemporal substitution ("I may want to shift consumption forward or backwards in time**"). In the very popular Constant Relative Risk ... We can say more generally that lexical preferences are not representable using a continuous utility function. Lexical preferences are not continuous. Note the definition of a continuous preference relation. The preference relation $\succeq$ is continuous if for any sequences of consumption bundles $(x_{i})_{i \in \mathbb{N}}$ and $(y_{i})_{i \in \mathbb{N}}$... Defn: A function $h:\mathbb{R}^2\rightarrow \mathbb{R}$ is homogeneous of degree $k$ if for every nonzero $\alpha$, $h(\alpha x, \alpha y)=\alpha^k h(x,y)$. Defn: A function is homothetic if it is a monotonic transformation of a homogeneous function. Lemma: If $f$ is homothetic, i.e. $f=g\circ u$ for some strictly increasing $g$ and homogeneous $u$, then $$\... As often with models embodying some form of "irrationality" (whatever that means), HD does a great job at matching a whole lot of behaviors, but leaves room for rather annoying Dutch Book situations (also known as "money pump" situations). These suggest that HD might generate some inaccurate predictions, and induce undesirable behaviors when included in ...
Are homothetic preferences strictly monotonically increasing? Homotheticity requires that $$ \alpha^\gamma U(x,y) = U(\alpha x, \alpha y) $$ This says nothing about the "increasing" part of strictly monotonically increasing. Indeed, you can have decreasing preferences $U(x,y) = -x -y$ for which homotheticity holds. Are homothetic preferences weakly ... It's called a Principal-Agent Conflict. The RIAA/MPAA act as agents on behalf of the people who actually produce content (and consequently end-consumer value). To maintain relevance to their principals, the RIAA/MPAA must signal value to them (i.e. claim loudly and repeatedly that they do something good for them [regardless of the validity of that claim]) ... As alluded to in the comments, the distinction is roughly: Indifference: The decision maker knows she will receive the same utility from the consumption of $x$ or $y$. Incompleteness: The DM does not know her preference between $x$ and $y$. (Note, this could stem from a lack of information, or because no preference exists.) So, from a conceptual vantage, the ... The translog function can be used not only in preferences but also in production and cost functions. I am not very familiar with its implications in consumer theory, but from the production point of view, I've seen it widely used. The translog function doesn't impose additivity and homogeneity, and hence Constant Elasticity of Substitution. This is ... A good cannot be inferior over the entire income range. The paper A Convenient Utility Function with Giffen Behaviour shows that for a person with utility of the form: $$U(x,y) = \alpha_1 \ln(x-\gamma_x)- \alpha_2 \ln(\gamma_y - y) $$ X is inferior if $\gamma_x$ and $\gamma_y$ are positive, $0<\alpha_1<\alpha_2$, and in the domain $x>\gamma_x$ ... Exogenous variables are believed to have some value given by nature. They are not caused by your theory's variables of interest.
This is why they are said to be outside the model. Endogenous variables have values dependent on your theory's variables of interest. They both cause, and are caused by, your topic. Example: In the study of wages, some ... The indifference curves are constructed by viewing the utility function as an equation (for a fixed utility index value per curve). So from $$U = U(x_1,x_2)$$ where the left side is just a symbol, we move to $$\bar U = U(x_1,x_2)$$ where now the left side is a specific number. Take the total differential on both sides to obtain $$0 = U_1dx_1 + U_2dx_2 ... Yes it is: If direction: $$x \succ y \Rightarrow x \not \precsim y \Rightarrow u(x) > u(y).$$ Only if direction: For all $x, y \in X$, $$x \succsim y \iff u(x) \geq u(y)$$ implies $$x \sim y \iff u(x) = u(y).$$ Also $$u(x) > u(y) \Rightarrow u(x) \geq u(y) \Rightarrow x \succsim y ,$$ $$u(x) > u(y) \Rightarrow u(x) \not = u(y) \... Recall the definition: The function $u: X \rightarrow \mathbb{R}$ represents $\succeq$ on $X$ if for any $x,y \in X$, $x \succeq y \iff u(x) \geq u(y)$. We can show that if $u: X \rightarrow \mathbb{R}$ represents $\succeq$ on $X$, then for any strictly increasing function $f: \mathbb{R} \rightarrow \mathbb{R}$, the function $v(x) = f(u(x))$ also ... Stigler and Becker's argument is methodological, not philosophical. They do not try to convince us that preferences are indeed identical across individuals and invariant across time as a matter of reality (the "Rocky Mountains" metaphor is an "as if" approach). Their point is that any outcome can be rationalized by assuming that "it was preferences that ... The problem is that there are no indifference "curves" but indifference "areas". Consider the following graph: For a reference bundle $A$ (equivalent to $\{2,3\}$), the gray regions indicate the areas of indifference, based on your definition of preferences (the black lines are part of the indifference areas). Thus, by selecting any bundle, you can find ...
The easiest way to prove it is using the 'old' definition of continuity. $\succ$ is continuous iff whenever $x\succ y$, there exist neighborhoods of $x$ and $y$, $B_x, B_y$, such that for all $z\in B_x$ and $z'\in B_y$, $z\succ z'$. Suppose $x\succ y$. Because $u$ represents $\succ$, $u(x)>u(y)$. Let $2\epsilon=u(x)-u(y)$. Because $u$ is continuous, ... First of all, in order to provide a counterexample, you need to construct a utility function that is homogeneous of degree two, but is not homothetic. Therefore, the counterexample you gave in your solution doesn't work. To prove the statement directly, let $u(x)$ be a utility representation that is homogeneous of degree two. That is, $u(\alpha x)=\alpha^2 ... That's interesting: the flavor of the frequentist approach to probability used for a socio-political fairness criterion: if my measure as a population group is $0<p<1$, and known, then my opinion should be accepted by the whole at the same measure, as the number of issues goes to infinity. In other words, the current observed acceptance rate should be a ... This won't get at individual choice, but how about evolutionary approaches? Perhaps this isn't what you are looking for, but one way to model decisions is to wander from the rational paradigm entirely. All changes in behavior are driven by natural selection, and so an equilibrium is based on stability. In a symmetric normal form game, an evolutionarily... (I cannot say if my answer will respond to your questions, which, indeed, are a bit unclear). If one browses through many, many economic papers, one will get the impression that "representative" just means identical. Indeed in large chunks of literature this is the case, for historical reasons. The drive behind the adoption of the "representative ...
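The homogeneity and homotheticity claims in the answers above can be illustrated numerically. Using the example utility $u(x,y)=xy$ (homogeneous of degree 2, hence homothetic, being the identity transform of a homogeneous function), one can check both $u(\alpha x,\alpha y)=\alpha^2 u(x,y)$ and that the marginal rate of substitution depends only on the ratio $y/x$:

```python
# Numeric sketch: u(x, y) = x * y is homogeneous of degree 2, and its
# MRS depends only on the ratio y/x (the hallmark of homotheticity).
def u(x, y):
    return x * y

def mrs(x, y, h=1e-6):
    """Marginal rate of substitution u_x / u_y via central differences."""
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return ux / uy

alpha, x, y = 3.0, 2.0, 5.0
print(u(alpha * x, alpha * y), alpha**2 * u(x, y))   # equal: degree 2
print(mrs(2.0, 5.0), mrs(4.0, 10.0))                 # equal: same ratio y/x
```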
I'm somewhat surprised that no one has linked to this paper: Backus, Routledge, and Zin (2004) Exotic Preferences for Macroeconomists (this version has some fixed typos, vs the NBER print). Their abstract is concise and extremely descriptive: We provide a user's guide to 'exotic' preferences: nonlinear time aggregators, departures from expected utility, ... The simple answer is that they don't think they would make as much money. In many countries illegally downloading music or movies is getting harder and harder. The recording industry has achieved this by persuading governments to instruct the ISPs to block torrent sites, torrent proxy sites and sites that list proxy sites completely so no one can access ... Consider the locations (1) $(0.000001,1)$ and (2) $(0.0000005,10)$. $U\left(x_1,y_1\right) = 1.1$. $U\left(x_2,y_2\right) = 10.05$. However, $x_2 < x_1$, so this is not a lexicographic ordering. It is only with the additional constraint that the values of $x$ and $y$ be integers $\in[0,1000]$ that the function you propose has this attribute. Because real ... The assumptions are different. The first one states that if a bundle is better than the optimal one, the consumer cannot afford it. The second one states that if a bundle is as good as (not necessarily better than) the optimal one, it has to cost at least as much, not less. Consider a space with just one type of good, $x$, and a utility function $U(x) = 0$. Let the ... Is there any (economic) rationale for the first-order expansion of the RHS? And for its different neighborhood evaluation? As for your first question: This is a purely mathematical tactic in order to obtain an (approximate) equation for $R$. The expansion of first order on the RHS is motivated by this fact, i.e. to bring $R$ alone "to the surface". The ... You implicitly assume that the utility of $n$ units of $Y$ equals $n$ times the utility of 1 unit of $Y$, and there is no reason for that.
For instance, if $Y$ is a fridge, the gain in utility from having 1 fridge compared to 0 is certainly larger than the gain in utility from having 2 fridges compared to 1. The formula $U("nY")=n*U("Y")$ that you use does ... Looking more closely at your question, I think things should not be overly complicated. From Mas-Colell et al., Definition 3.C.1: The preference relation $\succsim$ on $X$ is continuous if it is preserved under limits. That is, for any sequence of pairs $\{(x^n, y^n)\}^\infty_{n=1}$ with $x^n \succsim y^n$ for all $n$, $x = \lim_{n \rightarrow \infty} x^n$, ... To begin with, I think the question is wrongly stated. For if the definition of a thin indifference curve is such that continuity of a consumer's preferences implies thin indifference curves, then, surely, continuity implies thin indifference curves... This answers your question. However, if we are to make a suitable definition of a thin indifference ... I don't think continuity alone is enough to guarantee thin indifference curves. Consider preferences such that, for any $x$ and $y$ in the choice set, the consumer is indifferent between $x$ and $y$. This seems like it must fit any definition of a thick indifference curve because the whole choice set lies on a single indifference curve! But these ...
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment." So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83g. I've also got a jug of water that has 500g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? So: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
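For the salt-bottle question above, the proposed balance is right: $m_{bottle}\,g = \rho_{water}\,V_{displaced}\,g$, so $g$ cancels and only the bottle's volume matters. That volume is not given in the chat, so the value below is an assumed placeholder, purely to show the arithmetic:

```python
# Sketch of the buoyancy balance: the bottle neither sinks nor rises
# when m_bottle * g = rho_water * V_displaced * g (g cancels).
RHO_WATER = 1.0          # g/cm^3
m_bottle = 83.0          # g (bottle + salt, from the problem)
V_bottle = 60.0          # cm^3 -- ASSUMED value, not given in the problem

m_target = RHO_WATER * V_bottle       # mass at which weight = buoyancy
salt_to_remove = m_bottle - m_target  # grams of salt to take out
print(salt_to_remove)
```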
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else, I'm not sure. Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts on the fields he tried to comment on. I personally do not know much about postmodernist philosophy, so I shall not comment on it myself. I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
Quadratic equation An algebraic equation of the second degree. The general form of a quadratic equation is \begin{equation}\label{eq:1} ax^2+bx+c=0,\quad a\ne0. \end{equation} In the field of complex numbers a quadratic equation has two solutions, expressed by radicals in the coefficients of the equation: \begin{equation}\label{eq:2} x_{1,2} = \frac{-b \pm\sqrt{b^2-4ac}}{2a}. \end{equation} When $b^2>4ac$ both solutions are real and distinct, when $b^2<4ac$ they are complex (complex-conjugate) numbers, and when $b^2=4ac$ the equation has the double root $x_1=x_2=-b/2a$. For the reduced quadratic equation \begin{equation} x^2+px+q=0 \end{equation} formula \eqref{eq:2} takes the form \begin{equation} x_{1,2}=-\frac{p}{2}\pm\sqrt{\frac{p^2}{4}-q}. \end{equation} The roots and coefficients of a quadratic equation are related by (cf. Viète theorem): \begin{equation} x_1+x_2=-\frac{b}{a},\quad x_1x_2=\frac{c}{a}. \end{equation} The expression $b^2-4ac$ is called the discriminant of the equation. It is easily proved that $b^2-4ac=a^2(x_1-x_2)^2$, in accordance with the fact mentioned above that the equation has a double root if and only if $b^2=4ac$. Formula \eqref{eq:2} also holds if the coefficients belong to a field with characteristic different from $2$. Formula \eqref{eq:2} follows from writing the left-hand side of the equation as $a(x+b/2a)^2+(c-b^2/4a)$ (completing the square). References [a1] K. Rektorys (ed.), Applicable mathematics, Iliffe (1969) pp. Sect. 1.20 How to Cite This Entry: Quadratic equation. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Quadratic_equation&oldid=29172
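A small solver mirroring formula (2) of the entry above, returning complex-conjugate roots when the discriminant is negative:

```python
# Minimal quadratic solver for real a, b, c with a != 0, following
# x_{1,2} = (-b ± sqrt(b^2 - 4ac)) / (2a). cmath.sqrt keeps the
# negative-discriminant case well-defined (complex-conjugate roots).
import cmath

def solve_quadratic(a, b, c):
    assert a != 0, "not a quadratic equation"
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(solve_quadratic(1, -3, 2))   # b^2 > 4ac: distinct real roots 2 and 1
print(solve_quadratic(1, 0, 1))    # b^2 < 4ac: complex-conjugate roots ±i
```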
If a function is a combination of other functions whose derivatives are known via composition, addition, etc., the derivative can be calculated using the chain rule and the like. But even the product of integrals can't be expressed in general in terms of the integral of the products, and forget about composition! Why is this? Here is an extremely generic answer. Differentiation is a "local" operation: to compute the derivative of a function at a point you only have to know how it behaves in a neighborhood of that point. But integration is a "global" operation: to compute the definite integral of a function in an interval you have to know how it behaves on the entire interval (and to compute the indefinite integral you have to know how it behaves on all intervals). That is a lot of information to summarize. Generally, local things are much easier than global things. On the other hand, if you can do the global things, they tend to be useful because of how much information goes into them. That's why theorems like the fundamental theorem of calculus, the full form of Stokes' theorem, and the main theorems of complex analysis are so powerful: they let us calculate global things in terms of slightly less global things. The family of functions you generally consider (e.g., elementary functions) is closed under differentiation, that is, the derivative of such a function is still in the family. However, the family is not in general closed under integration. For instance, even the family of rational functions is not closed under integration because $\int 1/x \, dx = \log x$. Answering an old question just because I saw it on the main page. From Roger Penrose (The Road to Reality): ... there is a striking contrast between the operations of differentiation and integration, in this calculus, with regard to which is the 'easy' one and which is the 'difficult' one.
When it is a matter of applying the operations to explicit formulae involving known functions, it is differentiation which is 'easy' and integration 'difficult', and in many cases the latter may not be possible to carry out at all in an explicit way. On the other hand, when functions are not given in terms of formulae, but are provided in the form of tabulated lists of numerical data, then it is integration which is 'easy' and differentiation 'difficult', and the latter may not, strictly speaking, be possible at all in the ordinary way. Numerical techniques are generally concerned with approximations, but there is also a close analogue of this aspect of things in the exact theory, and again it is integration which can be performed in circumstances where differentiation cannot. I guess the OP asks about symbolic integration. Other answers have already dealt with the numeric case, where integration is easy and differentiation is hard. If you recall the definition of the derivative, you can see it's just a subtraction and division by a constant. Even if you can't make any algebraic changes, it won't get any more complex than that. But usually you can do many simplifications due to the zero limit, as many terms fall out as being too small. From this definition it can be shown that if you know the derivatives of $f(x)$ and $g(x)$, then you can use these derivatives to express the derivative of $f(x) \pm g(x)$, $f(x)g(x)$ and $f(g(x))$. This makes symbolic differentiation easy, as you just need to apply the rules recursively. Now about integration. Integration is basically an infinite sum of small quantities. So if you see an $\int f(x) \, dx$, you can imagine it as an infinite sum $(f_1 + f_2 + \dots) \, dx$ where $f_i$ are consecutive values of the function. This means if you need to calculate the integral $\int (a f(x) + b g(x)) \, dx$, then you can imagine the sum $((af_1 + bg_1) + (af_2 + bg_2) + \dots) \, dx$.
Using associativity and distributivity, you can transform this into $a(f_1 + f_2 + \dots)\, dx + b(g_1 + g_2 + \dots)\, dx$. So this means $\int (a f(x) + b g(x)) \, dx = a \int f(x) \, dx + b \int g(x) \, dx$. But if you have $\int f(x) g(x) \, dx$, you have the sum $(f_1 g_1 + f_2 g_2 + \dots) \, dx$, from which you cannot factor out the sums of the $f$s and $g$s. This means there is no recursive rule for multiplication. The same goes for $\int f(g(x)) \, dx$: you cannot extract anything from the sum $(f(g_1) + f(g_2) + \dots) \, dx$ in general. So far, linearity is the only useful property. What about the analogues of the differentiation rules? We have the product rule: $$\frac{d\,[f(x)g(x)]}{dx} = f(x) \frac{d g(x)}{dx} + g(x) \frac{d f(x)}{dx}.$$ Integrating both sides and rearranging the terms, we get the well-known integration by parts formula: $$\int f(x) \frac{d g(x)}{dx} \, dx = f(x)g(x) - \int g(x) \frac{d f(x)}{dx} \, dx.$$ But this formula is only useful if $\frac{d f(x)}{dx} \int g(x) \, dx$ or $\frac{d g(x)}{dx} \int f(x) \, dx$ is easier to integrate than $f(x)g(x)$. And it's often hard to see when this rule is useful. For example, when you try to integrate $\ln(x)$, it's not obvious that you should view it as $1 \cdot \ln(x)$. The integral of $1$ is $x$ and the derivative of $\ln(x)$ is $\frac{1}{x}$, which leads to the very simple integrand $x \cdot \frac{1}{x} = 1$, whose integral is again $x$. Another well-known differentiation rule is the chain rule $$\frac{d f(g(x))}{dx} = \frac{d f(g(x))}{d g(x)} \frac{d g(x)}{dx}.$$ Integrating both sides, you get the reverse chain rule: $$f(g(x)) = \int \frac{d f(g(x))}{d g(x)} \frac{d g(x)}{dx} \, dx.$$ But again it's hard to see when it is useful. For example, what about the integration of $\frac{x}{\sqrt{x^2 + c}}$? Is it obvious to you that $\frac{x}{\sqrt{x^2 + c}} = 2x \cdot \frac{1}{2\sqrt{x^2 + c}}$, and that this is the derivative of $\sqrt{x^2 + c}$?
I guess not, unless someone showed you the trick. For differentiation, you can mechanically apply the rules. For integration, you need to recognize patterns and even need to introduce cancellations to bring the expression into the desired form, and this requires a lot of practice and intuition. For example, how would you integrate $\sqrt{x^2 + 1}$? First you turn it into a fraction: $$\frac{x^2 + 1}{\sqrt{x^2+1}}$$ Then multiply and divide by 2: $$\frac{2x^2 + 2}{2\sqrt{x^2+1}}$$ Separate the terms like this: $$\frac{1}{2}\left(\frac{1}{\sqrt{x^2+1}}+\frac{x^2+1}{\sqrt{x^2+1}}+\frac{x^2}{\sqrt{x^2+1}} \right)$$ Play with the 2nd and 3rd terms: $$\frac{1}{2} \left( \frac{1}{\sqrt{x^2+1}}+ 1 \cdot \sqrt{x^2+1}+ x \cdot 2x \cdot \frac{1}{2\sqrt{x^2+1}} \right)$$ Now you can see the first bracketed term is the derivative of $\operatorname{arsinh}(x)$. The second and third terms are the derivative of $x\sqrt{x^2+1}$. Thus the integral will be: $$\frac{\operatorname{arsinh}(x)}{2} + \frac{x\sqrt{x^2+1}}{2} + C$$ Were these transformations obvious to you? Probably not. That's why differentiation is mechanical while integration is an art. In the MIT lecture 6.001 "Structure and Interpretation of Computer Programs" by Sussman and Abelson this contrast is briefly discussed in terms of pattern matching. See the lecture video (at 3:56) or alternatively the transcript (p. 2 or see the quote below). The book used in the lecture does not provide further details. Edit: Apparently, they discuss the Risch algorithm. It might be worthwhile to have a look at the same question on mathoverflow.SE: Why is differentiating mechanics and integration art? And you know from calculus that it's easy to produce derivatives of arbitrary expressions. You also know from your elementary calculus that it's hard to produce integrals. Yet integrals and derivatives are opposites of each other. They're inverse operations. And they have the same rules.
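The antiderivative derived above is easy to verify numerically, which is itself an instance of the asymmetry being discussed: checking by differentiation is mechanical.

```python
# Checking that d/dx [ arsinh(x)/2 + x*sqrt(x^2+1)/2 ] = sqrt(x^2+1)
# via a central finite difference.
import math

def F(x):
    return math.asinh(x) / 2 + x * math.sqrt(x * x + 1) / 2

def dF(x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2 * h)

for x in [-2.0, 0.0, 0.5, 3.0]:
    assert abs(dF(x) - math.sqrt(x * x + 1)) < 1e-6
print("antiderivative checks out")
```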
What is special about these rules that makes it possible to produce derivatives easily, while it's so hard to produce integrals? Let's think about that very simply. Look at these rules. In every one of these rules, when used in the direction for taking derivatives, which is the direction of this arrow, the left side is matched against your expression, and the right side is the thing which is the derivative of that expression. The arrow is going that way. In each of these rules, the expressions on the right-hand side of the rule that are contained within derivatives are subexpressions, are proper subexpressions, of the expression on the left-hand side. So here we see the derivative of the sum, which is the expression on the left-hand side, is the sum of the derivatives of the pieces. So the rules, moving to the right, are reduction rules. The problem becomes easier. I turn a big complicated problem into lots of smaller problems and then combine the results - a perfect place for recursion to work. If I'm going in the other direction, if I'm trying to produce integrals, well, there are several problems you see here. First of all, if I try to integrate an expression like a sum, more than one rule matches. Here's one that matches. Here's one that matches. I don't know which one to take. And they may be different. I may get to explore different things. Also, the expressions become larger in that direction. And when the expressions become larger, then there's no guarantee that any particular path I choose will terminate, because we will only terminate by accidental cancellation. So that's why integrals are complicated searches and hard to do. I will try to bring this to you in another way. Let us start by thinking in terms of something as simple as a straight line. If I give you the equation of a line y = mx + c, its slope can be easily determined, which in this case is nothing but m.
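The lecture's "reduction rules" point can be made concrete with a toy recursive differentiator (a minimal sketch; the tuple encoding of expressions is an arbitrary choice): each rule rewrites the derivative of an expression into derivatives of proper subexpressions, so the recursion always terminates.

```python
# Tiny symbolic differentiator. Expressions are nested tuples:
# ("+", a, b), ("*", a, b), a variable name (str), or a number.
def deriv(e, x):
    if isinstance(e, (int, float)):       # constant rule
        return 0
    if isinstance(e, str):                # variable rule
        return 1 if e == x else 0
    op, a, b = e
    if op == "+":                         # sum rule: reduce to subexpressions
        return ("+", deriv(a, x), deriv(b, x))
    if op == "*":                         # product rule: likewise
        return ("+", ("*", deriv(a, x), b), ("*", a, deriv(b, x)))
    raise ValueError(op)

# d/dx (x*x + 3) -> (1*x + x*1) + 0, unsimplified
print(deriv(("+", ("*", "x", "x"), 3), "x"))
```

Running the rules in reverse (integration) would face exactly the problems the lecture describes: several rules match, and the expressions grow rather than shrink.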
Now let me make the question a bit trickier. Let me say that the line given above intersects the x and y axes at some points. I ask you to give me the area between the line, the abscissa and the ordinate. This is obviously not as easy as finding the slope. You will have to find the intersections of the line with the axes, get the two points of intersection, and then, taking the origin as a third point, find the area. This is not the only method of finding the area, as we know there are loads of formulas for finding the area of a triangle. Let us now view this in terms of curves. If the simple process of finding the slope in the case of a line is translated to curves, we get differential calculus, which is a bit more complicated than the method of finding slopes of straight lines. Add finding the area under the curve to that and you get integral calculus, which by our experience with straight lines we know should be much harder than finding the slope, i.e. differentiation. Also, there is no one fixed method for finding the area of a figure; hence the many methods of integration.
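The straight-line analogy above, done literally: the "derivative" is read off as the slope, while the "integral" (the area of the triangle cut off by the axes) takes a little geometry. For $y = mx + c$ with $m \neq 0$ and $c \neq 0$, the line meets the axes at $(-c/m, 0)$ and $(0, c)$, giving area $c^2/(2|m|)$.

```python
# Slope vs. axis-triangle area for the line y = m*x + c.
def slope(m, c):
    return m                     # the 'differentiation' analogue: trivial

def axis_triangle_area(m, c):
    x_int = -c / m               # intersection with the x-axis
    y_int = c                    # intersection with the y-axis
    return abs(x_int) * abs(y_int) / 2

print(slope(2, 4), axis_triangle_area(2, 4))
```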
Rotational Impulse-Momentum Theorem By now we have a very good sense of how to develop the formalism for rotational motion in parallel with what we already know about linear motion. We turn now to momentum. Replacing the mass with rotational inertia and the linear velocity with angular velocity, we get: \[ \overrightarrow L = I \overrightarrow \omega \] The vector \(\overrightarrow L\) is called angular momentum, and it has units of: \( \left[ L \right] = \dfrac{kg \cdot m^2}{s} = J \cdot s \) Continuing the parallel with the linear case, the momentum relates to the force through the impulse-momentum theorem, which is: \[ \int \overrightarrow \tau_{net} \; dt = \Delta \overrightarrow L \] While there is no need to append "cm" to the angular momentum as we do with the linear momentum, we do have to keep in mind that all of the quantities in the rotational case must be referenced to the same point. That is, the net torque requires a reference point, and the angular momentum contains a rotational inertia, which also requires a reference point. Recall that the impulse-momentum theorem is just a repackaging of Newton's second law, and so it is with the rotational case, though there is a twist, as we will see shortly: \[ \overrightarrow F_{net} = \dfrac{d\overrightarrow p_{cm} }{dt} = \dfrac{d\left(m\overrightarrow v_{cm}\right) }{dt} = m\overrightarrow a_{cm} \;\;\; \iff \;\;\; \overrightarrow \tau_{net} = \dfrac{d \overrightarrow L}{dt} = \dfrac{d\left(I\overrightarrow \omega\right) }{dt} = I\overrightarrow \alpha \] Link Between Angular and Linear Momentum When there are several particles in a system, we find the momentum of the system by adding the momenta of the particles: \[ \overrightarrow p_{cm} = \overrightarrow {p_1 }+ \overrightarrow {p_2} + \dots \] We have a definition for the angular momentum of a rigid object, but can we define the angular momentum of a single particle, and then add up all of the angular momenta of the particles to get the angular momentum of the system, in the same way that we do it for linear momentum?
The answer is yes, but we have to be careful about our reference point. That is, to add the angular momentum of every particle together to get a total angular momentum, the individual angular momenta must be measured around the same reference. So how do we define the angular momentum of an individual particle around a certain reference point? Let's look at a picture of the situation. The particle has a mass \(m\), a velocity \(\overrightarrow v\), and is located at a position \(\overrightarrow r\) with the tail of that position vector at the reference point.
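For a single particle, the angular momentum about a reference point is the cross product of the position vector (measured from that point) with the particle's momentum. A small numeric sketch — the masses, positions, and velocities are illustrative, not from the text:

```python
import numpy as np

# Angular momentum of a particle about a reference point: L = r x p,
# where r runs from the reference point to the particle.
def angular_momentum(r, v, m):
    return np.cross(r, m * np.asarray(v))

# A 2 kg particle at (3, 0, 0) m moving at (0, 4, 0) m/s, about the origin:
L = angular_momentum([3.0, 0.0, 0.0], [0.0, 4.0, 0.0], 2.0)
print(L)  # [ 0.  0. 24.]  (kg m^2 / s)

# Total angular momentum of a system: sum the particles' L about the SAME point.
particles = [([3.0, 0.0, 0.0], [0.0, 4.0, 0.0], 2.0),
             ([0.0, 1.0, 0.0], [1.0, 0.0, 0.0], 1.0)]
L_total = sum(angular_momentum(r, v, m) for r, v, m in particles)
print(L_total)  # [ 0.  0. 23.]
```

Note that the second particle's contribution is negative about the z-axis (it circulates the other way), which is why the total is less than the first particle's alone.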
The Landau free energy, also called the Landau-Ginzburg Hamiltonian, is treated in an ad hoc and rather confusing manner in a lot of textbooks. But in the modern view, it has a simple interpretation as an effective Hamiltonian attained by integrating out degrees of freedom. Suppose we have a spin system, such as an Ising magnet. We can describe the state of the system by a magnetization field $\phi(x)$, noting that this field doesn't make sense if we examine length scales smaller than the lattice spacing $a$. We can write a sum over all spin states as an integral over field configurations, as long as the integral is cut off at the distance scale $a$. If the Hamiltonian is $H[\phi]$, then the thermodynamic free energy $F$ obeys$$Z = e^{-\beta F} = \int_{\Delta x\, >\, a} \mathcal{D}\phi \, e^{-\beta H[\phi]}$$which is just a rewording of the standard identity $F = - k_B T \log Z$. In the Wilsonian view, the thermodynamic free energy is acquired by integrating out all microscopic degrees of freedom. The result only depends on macroscopic quantities like temperature, pressure, and the external field. This is useful because the entire point of thermodynamics is to ignore the microscopic details and focus on macroscopic quantities that are easy to measure. For example, using just the function $F$, we can determine the equilibrium magnetization by minimizing it. Now, the Landau free energy $H_L$ satisfies$$Z = \int_{\Delta x \, > \, b} \mathcal{D}\phi \, e^{-\beta H_L[\phi]}$$ where $b$ is a mesoscopic distance scale, larger than $a$ but still much smaller than a macroscopic length. In the Wilsonian view, the Landau free energy is the effective Hamiltonian acquired by integrating out degrees of freedom on length scales $a < x < b$. The point of the Landau free energy is that it represents a compromise between the completely microscopic $H$, which has too much detail to be useful, and the completely macroscopic $F$, which tells us nothing about, e.g., position dependence.
Like $H$, $H_L$ is a functional, but it's a functional of "fewer variables". The above explains why $H_L$ can be called a Hamiltonian, but why is it also called a free energy? Usually, the starting point for applying Landau theory is the saddle point approximation, which states that typical equilibrium field configurations minimize $H_L$. Since we're minimizing $H_L$, we're treating it like we would a free energy, which is why it's sometimes called the Landau free energy. But why is this valid? You definitely can't get the right answer to any thermodynamic question by minimizing $H$, because it doesn't take into account thermal effects; you instead have to minimize $F$. Minimizing $H_L$ gives the right answer precisely when thermal effects are negligible on distance scales greater than $b$. This is true when $b$ is much greater than the system's correlation length $\xi$, which is why Landau theory does such a good job, and usually not true at a critical point where $\xi$ diverges, which is why Landau theory fails to describe continuous phase transitions.
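To make the "minimize the Landau free energy" step concrete, here is a sketch of my own (not from the text): minimizing a uniform $\phi^4$ Landau free energy density $f(\phi) = r\phi^2 + u\phi^4 - h\phi$ on a grid, where $r$ changes sign at the critical temperature and $u > 0$ stabilizes the minimum.

```python
import numpy as np

# Equilibrium magnetization from a uniform phi^4 Landau free energy density.
# All parameter values are illustrative.
def equilibrium_phi(r, u=1.0, h=0.0):
    phis = np.linspace(-2.0, 2.0, 40001)
    f = r * phis**2 + u * phis**4 - h * phis
    return phis[np.argmin(f)]           # grid minimizer of the free energy

print(equilibrium_phi(r=+0.5))  # disordered phase: phi = 0
print(equilibrium_phi(r=-0.5))  # ordered phase: |phi| = sqrt(-r/(2u)) = 0.5
```

For $r < 0$ the minimum is degenerate at $\pm\sqrt{-r/2u}$; a small field $h$ breaks the tie, which is the usual symmetry-breaking story.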
SHOGUN 6.1.3 class CQuadraticTimeMMD This class implements the quadratic time Maximum Mean Discrepancy statistic as described in [1]. The MMD is the distance of two probability distributions \(p\) and \(q\) in an RKHS, which we denote by \[ \hat{\eta_k}=\text{MMD}[\mathcal{F},p,q]^2=\textbf{E}_{x,x'} \left[ k(x,x')\right]-2\textbf{E}_{x,y}\left[ k(x,y)\right] +\textbf{E}_{y,y'}\left[ k(y,y')\right]=||\mu_p - \mu_q||^2_\mathcal{F} \]
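The expectation formula above can be estimated in quadratic time with a few lines of numpy. This is a hedged sketch using a Gaussian RBF kernel and the biased V-statistic form (replacing each expectation with a mean over all sample pairs); it is not Shogun's implementation.

```python
import numpy as np

# Biased quadratic-time estimate of MMD^2 with a Gaussian RBF kernel.
def rbf(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')], each estimated by a mean over pairs
    return rbf(X, X, sigma).mean() - 2 * rbf(X, Y, sigma).mean() + rbf(Y, Y, sigma).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
print(same, diff)  # same distribution -> near 0; shifted distribution -> clearly larger
```

The biased estimate equals the squared norm of the difference of empirical mean embeddings, so it is nonnegative and shrinks toward 0 as the sample size grows when $p = q$.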
"A not so simple approach". I may have messed around a little too much with grouping the terms; do forgive my elementary math skills, it's a side effect of using tools like Wolfram and Mathematica too much. Without loss of generality we can assume that the near plane is given as $z=1$ (that is, having normal $(0,0,1)$ and a point on it $(0,0,1)$) and the camera center (the reflected pinhole camera aperture point) is at $(0,0,0)$. Let $p,q \in \mathbb{R}^3$ be two points on a line (non-degenerate, meaning $p\ne q$; otherwise it's just a point and the projection is also a point, unless it's at $(0,0,0)$, where it's undefined). If the line defined by $p,q$ passes through $(0,0,0)$, the projection is a point and the requirement is trivially satisfied. Now suppose this is not the case. The perspective projection onto the plane, with center of projection $(0,0,0)$, is given by $p' = \frac{p}{p_z}$, and $q' = \frac{q}{q_z}$. The line defined by $p,q$ is given as $r(\lambda) = p + \lambda(q-p), \lambda \in \mathbb{R}$. Its projection is given by $r'(\lambda) = \frac{r(\lambda)}{r_z(\lambda)}$. Since $z=1$ for all projected points, we need only study the $x,y$ coordinates. We want to show that any projected point $r'(\lambda)$ can be written as an affine combination of $p',q'$ (this defines a line). Taking that into account:$$p'+\mu(q'-p') = \frac{r(\lambda)}{r_z(\lambda)}$$$$(1-\mu)\frac{p}{p_z} + \mu\frac{q}{q_z} = \frac{p + \lambda(q-p)}{p_z + \lambda(q_z-p_z)}$$$$(1-\mu)q_z(p_z+\lambda(q_z-p_z))p + \mu p_z(p_z+\lambda(q_z-p_z))q = q_zp_z(p + \lambda(q-p))$$$$q_zp_z(p - \lambda p) + q^2_z\lambda p - \mu q_z(p_z+\lambda(q_z-p_z))p + \mu p_z(p_z+\lambda(q_z-p_z))q - q_zp_z(p - \lambda p) - q_zp_z\lambda q = 0$$$$(1-\mu)\lambda q^2_z p -\mu(1-\lambda)q_zp_z p + \mu(1-\lambda)p^2_zq - (1-\mu)\lambda p_zq_z q = 0$$$$(1-\mu)\lambda q_z(q_zp - p_zq) - \mu(1-\lambda)p_z(q_zp-p_zq) = 0$$$$((1-\mu)\lambda q_z - \mu(1-\lambda)p_z)(q_zp - p_zq) = 0$$ For this to be satisfied one of the terms must be $0$.
The first case concerns the second term: $p' = q'$, which we already considered as the trivial case. Then we must show that we can always pick a $\mu$ such that $(1-\mu)\lambda q_z - \mu(1-\lambda)p_z = 0$. Solve for $\mu$ and you get: $\mu = \frac{\lambda q_z}{(1-\lambda)p_z + \lambda q_z}$. Thus I have not only found that it is a straight line, but also the relationship between the parameterizations. In general a projection onto a different 2-manifold (not a plane) will not result in the lines remaining straight (think of a sphere or a cylinder).
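The result can be checked numerically. A quick sketch; the sample points are arbitrary choices of mine (with $z > 0$ so every point of the segment is in front of the camera):

```python
import numpy as np

# Check: the projection p -> p / p_z maps points of the 3D line through p, q
# onto the 2D line through p', q', with
# mu = lambda * q_z / ((1 - lambda) * p_z + lambda * q_z).
proj = lambda x: x / x[2]

p = np.array([1.0, 2.0, 2.0])
q = np.array([3.0, 1.0, 4.0])
pp, qp = proj(p), proj(q)

for lam in np.linspace(0.0, 1.0, 11):
    r = p + lam * (q - p)
    mu = lam * q[2] / ((1 - lam) * p[2] + lam * q[2])
    assert np.allclose(proj(r), pp + mu * (qp - pp))
print("projected points lie on the line through p', q'")
```

At $\lambda = 0$ the formula gives $\mu = 0$ and at $\lambda = 1$ it gives $\mu = 1$, as it should.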
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover consisting of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$, and which has no finite subcover. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact. I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure. The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly, without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure; that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology, and the definition of the measure. What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set. Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set. Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: if $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$, which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure. We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why, by doing what we have done, we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$? $\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series; so we first need to check that it is convergent, and then compute its value. The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly, without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure. What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set. (cont.) We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series. Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by $b$ and $a$ respectively. Each term in brackets is also nonnegative, by the Lebesgue outer measure of open intervals; together, let the differences be $c_i = q_{n(i)}-q_{m(i)}$. These form a series that is bounded from above and below. Hence: $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$ Consider the partial sums of the above series. Note every partial sum is telescoping, since in a finite series addition associates and thus we are free to cancel out. By the construction of the cover $C$, every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum, moving through the stages of the construction of $C$, i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable series is also telescoping and: @AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing, but re-expressed differently. If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$, you can't conclude anything about the topologies; if, however, the function is continuous, then you can say things about the topologies. @Overflow2341313 Could you send a picture or a screenshot of the problem? nvm, I overlooked something important. Each interval contains a rational, and there are only countably many rationals.
This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals; thus they are empty and do not contribute to the sum. So there are only countably many disjoint intervals in the cover $C$. @Perturbative Okay, similar problem, if you don't mind guiding me in the right direction. If a function $f$ exists, with the same setup $(X, t) \to (Y, S)$, that is 1-1, open, and continuous but not onto, construct a topological space which is homeomorphic to the space $(X, t)$. Simply restrict the codomain so that it is onto? Making it bijective and hence invertible. hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird... In a schematic, we have the following; I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set: @Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology $t$ on $\Bbb R$ such that $f\colon (\Bbb R, U) \to (\Bbb R, t)$ defined by $f(x) = x^2$ is an open map, where $U$ is the "usual" topology generated by the open intervals $(a,b)$. To do this... the smallest $t$ can be is the trivial topology on $\Bbb R$, $\{\emptyset, \Bbb R\}$. But we required that everything in $U$ be in $t$ under $f$?
@Overflow2341313 Also, for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse. I'm not sure if adding the additional condition that $f$ is an open map will make a difference. For those who are not very familiar with this interest of mine: besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships. An element of a proof space is a proof, which consists of steps forming a path in this space. For that I have a postulate: given two paths A and B in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or $B$ is unprovable under the current formal system. Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since $a$ and $b$ are real in $a + bi$. But you could define it that way and call it a "standard form", like $ax + by = c$ for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers $a + bi$ where $a$ and $b$ are integers are called Gaussian integers.
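For the velocity question above: $v(t) = 3t^2 - 12t + 9 = 3(t-1)(t-3)$, and the particle moves left exactly where $v(t) < 0$, i.e. between the roots, $1 < t < 3$. A quick numeric check (a sketch, using the quadratic formula):

```python
import math

# Roots of v(t) = 3t^2 - 12t + 9; the particle moves left where v(t) < 0.
a, b, c = 3.0, -12.0, 9.0
disc = math.sqrt(b * b - 4 * a * c)
t1, t2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
print(t1, t2)  # 1.0 3.0  -> moving left for 1 < t < 3

v = lambda t: 3 * t**2 - 12 * t + 9
assert v(2) < 0 < v(0) and v(4) > 0   # negative between the roots, positive outside
```

Since the parabola opens upward, $v$ is negative strictly between its two roots, which gives the interval directly.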
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD)... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there a theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from isomorphism. @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again. $O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$. For any transitive $G$-action on a set $X$ with stabilizer $H$, $G/H \cong X$ set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
Coupon bonds Exercise: XLF Co has issued a bond with a face value of $100,000 paying an annual coupon of 9%. The bond matures 10 years from now, the market yield is 8% pa, and a coupon has just been paid. Pricing The price of a bond \(P\) is given by the formula \(P=\sum\limits_{j=1}^T \dfrac{C_j} {(1+i)^{t_j}}\) where \(C_j\) is the \(j^{th}\) periodic cash flow, \(T\) is the number of cash flows, and \(i\) is the current periodic yield. Alternatively, \(P= { \sum\limits_{t=1}^N {CF_t \cdot (1+i)^{-t}} } \) in the spirit of Saunders and Cornett (2008). Task 1: In Excel, set up a table with columns showing the timing and magnitude of each coupon, plus the face value of the bond at maturity. Also include columns for the vector of discount factors, and the discounted value of each cash flow. Calculate the present value of the bond. Present the table in suitable business style. Duration Duration measures the weighted average time to maturity of the cash flows for a bond, and the measure is often used in financial risk analysis. A common measure of duration is the method documented by Macaulay, where duration \( D = \dfrac{ \sum\limits_{j=1}^T {t_j \cdot C_j \cdot (1+i)^{-t_j}}} { \sum\limits_{j=1}^T {C_j \cdot (1+i)^{-t_j}} } \) Or, expressed in the nomenclature of Saunders and Cornett (2008, p225), \( D = \dfrac{ \sum\limits_{t=1}^N {CF_t \cdot (1+i)^{-t} \cdot t}} { \sum\limits_{t=1}^N {CF_t \cdot (1+i)^{-t}} } = \dfrac{ \sum\limits_{t=1}^N {PV_t \cdot t}} { \sum\limits_{t=1}^N {PV_t }} \) Task 2 a: In Excel, set up a table to display the components used in the calculation of the duration measure shown in the equation. This table will be similar to the one developed in task 1, with the addition of the time weight component \(t_j\). Also include one linked cell that allows the user to vary the amount of the face value. Calculate the value of the Macaulay duration measure in years. Present the table in appropriate business style.
Task 2 b: Try different amounts for face value > 0 and examine the resultant change in the Macaulay duration number. The Excel solution file for this module: xlf-fin-exercise-bonds.xlsm 26KB The Excel solution file for this module: xlf-fin-exercise-bonds.xlsx 22KB [6 May 2019] This module was developed in Excel 2013 Pro 64 bit. Saunders A, and M Cornett, (2008), Financial Institutions Management, a Risk Management Approach, McGraw-Hill Irwin
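The price and duration calculations above can also be sketched in Python to check the spreadsheet's numbers. This is a hedged sketch using the exercise's inputs (face value 100,000, 9% annual coupon, 10 years, 8% yield); the variable names are my own, not from the module.

```python
# Bond price = sum of discounted cash flows; Macaulay duration = PV-weighted
# average time of the cash flows, per the formulas above.
face, coupon_rate, years, i = 100_000.0, 0.09, 10, 0.08
cashflows = [coupon_rate * face] * years
cashflows[-1] += face                      # final coupon plus face value at maturity

pv = [cf / (1 + i) ** t for t, cf in enumerate(cashflows, start=1)]
price = sum(pv)
duration = sum(t * v for t, v in enumerate(pv, start=1)) / price

print(f"price = {price:,.2f}")             # 106,710.08
print(f"duration = {duration:.2f} years")  # 7.10
```

Varying `face` (as Task 2 b asks) only rescales the final cash flow, so it changes the duration; scaling all cash flows together would leave duration unchanged.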
Particle Moving on the Surface of a Rotating Ellipse A particle on the surface of an ellipse in the \(xy\) plane is made to move on the surface of the ellipse as the ellipse rotates. What will be the velocity and acceleration of the particle? If the minor and major axes have lengths \(b, \; a\) respectively, the coordinates of a point on the surface of the ellipse, with origin at one focus, are \((r, \theta )\) in polar coordinates with \[r= \frac{b^2}{a(1- e\cos \theta )}\] and the radial velocity is \[\frac{dr}{dt}= - \frac{eb^2 \dot{\theta} \sin \theta }{a(1-e \cos \theta )^2}. \] The radial acceleration is \[\frac{d^2r}{dt^2}= - \frac{eb^2 (\ddot{\theta} \sin \theta + \dot{\theta}^2 \cos \theta )}{a(1-e \cos \theta )^2} + \frac{2e^2b^2 \dot{\theta}^2 \sin^2 \theta }{a(1-e \cos \theta )^3}. \]
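The derivative formulas can be checked against finite differences. This is a sketch of mine: the ellipse parameters and the test motion $\theta(t)$ are arbitrary choices, and note that when the differentiation of $dr/dt$ is carried out, the second term of the acceleration enters with a plus sign.

```python
import math

# Finite-difference check of dr/dt and d^2r/dt^2 for r = b^2 / (a (1 - e cos(theta))).
a, b = 2.0, 1.5
e = math.sqrt(1 - (b / a) ** 2)            # eccentricity from the axes

theta = lambda t: 0.3 + 0.5 * t + 0.1 * t * t   # arbitrary smooth test motion
thetadot = lambda t: 0.5 + 0.2 * t
thetaddot = 0.2

r = lambda t: b * b / (a * (1 - e * math.cos(theta(t))))

def dr(t):
    th, thd = theta(t), thetadot(t)
    return -e * b * b * thd * math.sin(th) / (a * (1 - e * math.cos(th)) ** 2)

def d2r(t):
    th, thd = theta(t), thetadot(t)
    den = 1 - e * math.cos(th)
    return (-e * b * b * (thetaddot * math.sin(th) + thd * thd * math.cos(th)) / (a * den ** 2)
            + 2 * e * e * b * b * thd * thd * math.sin(th) ** 2 / (a * den ** 3))

t0, h = 1.0, 1e-5
assert abs((r(t0 + h) - r(t0 - h)) / (2 * h) - dr(t0)) < 1e-6
assert abs((r(t0 + h) - 2 * r(t0) + r(t0 - h)) / h ** 2 - d2r(t0)) < 1e-4
print("formulas match finite differences")
```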
What is the molecular mechanism by which thermal conductivity increases with increasing temperature, at least for metals? I know that heat increases the oscillations of the atoms in the crystal, but how does that explain the increase in thermal conductivity? The mechanism for increasing the conductivity is phonon-assisted hopping. For a disordered system, one which does not preserve long-range order, the electronic wave function becomes localized. The wave function extent is typically much smaller than the system size and is characterized by the localization length $\xi$, a parameter of the theory. In this case, the electron propagates by hopping events along the applied electric field. The tunneling rate for an electron to hop from a localized state $i$ to another state $j$ depends exponentially on the distance between those states $r_{ij}$ and on temperature: \begin{equation} \Gamma_{ij}=\gamma_0\exp\left(-\frac{r_{ij}}{\xi}-\frac{|E_i-E_j|}{T}\right) \end{equation} where $\gamma_0$ is another parameter of the theory, which depends on the phonon DOS and the electron-phonon coupling. From this equation you may see how $\Gamma$ (and hence the conductivity) depends on $T$. Hopping theory was first introduced to describe electron transport in disordered semiconductors; a notable name here is N. F. Mott. Nowadays it is also used, for example, for organic materials. For a metal it might apply if the metal is disordered enough, in the sense of wave function localization as said above. "I know that heat increases the oscillations of the atoms in the crystal. But how does that explain the increase in thermal conductivity?" Yes, heat increases atomic displacements, but this applies to a "normal" (not disordered) metal. As a result, electron scattering increases and the conductivity decreases. Note that this is a different conduction mechanism: if the system under study shows conductivity increasing with $T$, it is hopping conduction.
A classic text book on the subject is Electronic Properties of Doped Semiconductors by Shklovskii and Efros.
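The temperature dependence in the hopping-rate formula above can be illustrated numerically. All parameter values here are made up for illustration; the point is only that raising $T$ shrinks the activation penalty $|E_i - E_j|/T$, so the rate grows monotonically with temperature.

```python
import math

# Gamma_ij = gamma0 * exp(-r_ij/xi - |E_i - E_j|/T), illustrative units.
def hop_rate(r_ij, dE, T, xi=1.0, gamma0=1.0):
    return gamma0 * math.exp(-r_ij / xi - abs(dE) / T)

rates = [hop_rate(r_ij=3.0, dE=0.5, T=T) for T in (0.05, 0.1, 0.2, 0.4)]
assert all(lo < hi for lo, hi in zip(rates, rates[1:]))  # monotone in T
print(rates)
```

As $T \to \infty$ the rate saturates at $\gamma_0 e^{-r_{ij}/\xi}$: the distance penalty remains, which is why hopping conductivity also depends strongly on the localization length.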
The problem: Now find a polynomial function $f$ of degree $n - 1$ such that $f(x_i) = a_i$, where $a_1, \ldots, a_n$ are given numbers. I found that this question had been asked before, but I did not understand the solution. Moving on to the actual problem: we are asked to use the formula derived in the previous problem, which is: $$f_i(x) = \prod^n_{j = 1, j \neq i}\frac{x - x_j}{x_i - x_j}.$$ This function is equal to $0$ at every $x_j$ with $j \neq i$, and equal to $1$ at $x_i$. Note that $x_j$ and $x_i$ come from the list of distinct numbers $x_1, \ldots, x_n$. My thinking as to the solution is that we simply need to multiply $f_i(x)$ by $a_i$, giving us $$f(x) = a_i \cdot \prod^n_{j = 1, j \neq i}\frac{x - x_j}{x_i - x_j}.$$ Then when we plug in $x_i$ we would have $$f(x_i) = a_i \cdot \prod^n_{j = 1, j \neq i}\frac{x_i - x_j}{x_i - x_j} = a_i.$$ However, in the answer key, the solution is $$f(x) = \sum^n_{i = 1} a_i \cdot \prod^n_{j = 1, j \neq i}\frac{x_i - x_j}{x_i - x_j}.$$ This seems like it must be false, as substituting in $x_i$ gives us $$f(x_i) = \sum^n_{i = 1} a_i \cdot \prod^n_{j = 1, j \neq i}\frac{x_i - x_j}{x_i - x_j} = \sum^n_{i = 1} a_i.$$ It looks to me like $\sum^n_{i = 1} a_i$ only equals $a_i$ if $n = 1$. As a result, I'm very confused as to how this answer could be correct. Any clarification would be appreciated.
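The intended answer-key formula has $x$, not $x_i$, in the numerator: $f(x) = \sum_{i=1}^n a_i \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}$. At $x = x_k$, every summand with $i \neq k$ contains the factor $(x_k - x_k)$ and vanishes, so $f(x_k) = a_k$. A quick sketch implementing it (function names are mine):

```python
# Lagrange interpolation: f(x) = sum_i a_i * prod_{j != i} (x - x_j)/(x_i - x_j).
def lagrange(xs, ys):
    def f(x):
        total = 0.0
        for i, (xi, ai) in enumerate(zip(xs, ys)):
            term = ai
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

f = lagrange([0.0, 1.0, 2.0], [1.0, 3.0, 7.0])   # fits 1 + x + x^2
print(f(0.0), f(1.0), f(2.0))   # 1.0 3.0 7.0 -- recovers each a_i
print(f(3.0))                   # 13.0
```

Summing the $a_i f_i(x)$ is exactly what makes one basis polynomial "switch on" at each node while all the others switch off.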
Building Blocks Summary The dtflearn object makes it easy to create and train neural networks of varying complexity. Activation Functions¶ Apart from the activation functions defined in tensorflow for scala, DynaML defines some additional activations. Hyperbolic Tangent val act = dtflearn.Tanh("SomeIdentifier") Cumulative Gaussian val act = dtflearn.Phi("OtherIdentifier") Generalized Logistic val act = dtflearn.GeneralizedLogistic("AnotherId") Layers¶ DynaML aims to supplement and extend the collection of layers available in org.platanios.tensorflow.api.layers; all the layers defined in DynaML's tensorflow package extend the Layer[T, R] class in org.platanios.tensorflow.api.layers. Radial Basis Function Network¶ Radial Basis Function (RBF) networks are built from an important class of basis functions, each of which decays with distance from a defined central node. The RBF layer implementation in DynaML treats the node center positions c_i and length scales \sigma_i as parameters to be learned via gradient-based back-propagation. import io.github.mandar2812.dynaml.tensorflow._ val rbf = dtflearn.rbf_layer(name = "rbf1", num_units = 10) Continuous Time RNN¶ Continuous time recurrent neural networks (CTRNN) are an important class of recurrent neural networks. They enable the modelling of non-linear and potentially complex dynamical systems of multiple variables, with feedback. Each state variable is modeled by a single neuron y_i; the evolution of the system y = (y_1, \cdots, y_n)^T is governed by a set of coupled ordinary differential equations, which can be expressed in vector form. The parameters of the system are: Time Constant/Decay Rate, Gain, Bias, Weights. In order to use the CTRNN model for modelling sequences of finite length, we need to solve its governing equations numerically. This gives us the trajectory of the state up to T steps, y^{0}, \cdots, y^{T}.
DynaML's implementation of the CTRNN can be used to learn the trajectory of dynamical systems up to a predefined time horizon. The parameters \Lambda, G, b, W are learned using gradient-based loss minimization. The CTRNN implementations are also instances of Layer[Output, Output], which take as input tensors of shape n and produce tensors of shape (n, T). There are two variants that users can choose from. Fixed Time Step Integration¶ When the integration time step \Delta t is user defined and fixed. import io.github.mandar2812.dynaml.tensorflow._ import org.platanios.tensorflow.api._ val ctrnn_layer = dtflearn.ctrnn( name = "CTRNN_1", units = 10, horizon = 5, timestep = 0.1) Dynamic Time Step Integration¶ When the integration time step \Delta t is a parameter that can be learned during the training process. import io.github.mandar2812.dynaml.tensorflow._ import org.platanios.tensorflow.api._ val dctrnn_layer = dtflearn.dctrnn( name = "DCTRNN_1", units = 10, horizon = 5) Stack & Concatenate¶ Often one needs to combine the outputs of previous layers in some manner; the following layers enable these operations. Stack Inputs¶ This is a computational layer which performs the function of dtf.stack(). import io.github.mandar2812.dynaml.tensorflow._ import org.platanios.tensorflow.api._ val stk_layer = dtflearn.stack_outputs("StackTensors", axis = 1) Concatenate Inputs¶ This is a computational layer which performs the function of dtf.concatenate(). import io.github.mandar2812.dynaml.tensorflow._ import org.platanios.tensorflow.api._ val concat_layer = dtflearn.concat_outputs("ConcatenateTensors", axis = 1) Collect Layers¶ A sequence of layers can be collected into a single layer which accepts a sequence of symbolic tensors.
import io.github.mandar2812.dynaml.tensorflow._ import org.platanios.tensorflow.api._ val layers = Seq( tf.learn.Linear("l1", 10), dtflearn.identity("Identity"), dtflearn.ctrnn( name = "CTRNN_1", units = 10, horizon = 5, timestep = 0.1 ) ) val combined_layer = dtflearn.stack_layers("Collect", layers) Input Pairs¶ To handle inputs consisting of pairs of elements, one can provide a separate layer for processing each of the elements. import io.github.mandar2812.dynaml.tensorflow._ import org.platanios.tensorflow.api._ val sl = dtflearn.tuple2_layer( "tuple2layer", dtflearn.rbf_layer("rbf1", 10), tf.learn.Linear("lin1", 10)) Combining the elements of a Tuple2 can be done as follows. import io.github.mandar2812.dynaml.tensorflow._ import org.platanios.tensorflow.api._ //Stack elements of the tuple into one tensor val layer1 = dtflearn.stack_tuple2("tuple2layer", axis = 1) //Concatenate elements of the tuple into one tensor val layer2 = dtflearn.concat_tuple2("tuple2layer", axis = 1) Stoppage Criteria¶ In order to train tensorflow models using iterative gradient-based methods, the user must define some stoppage criteria for the training process. This can be done via the method tf.learn.StopCriteria(). The following preset stop criteria call tf.learn.StopCriteria() under the hood. Iterations Based¶ val stopc1 = dtflearn.max_iter_stop(10000) Change in Loss¶ Absolute Value of Loss¶ val stopc2 = dtflearn.abs_loss_change_stop(0.1) Relative Value of Loss¶ val stopc3 = dtflearn.rel_loss_change_stop(0.1) Network Building Blocks¶ To make it convenient to build deeper stacks of neural networks, DynaML includes some common layer design patterns as ready-made, easy-to-use methods. Convolutional Neural Nets¶ Convolutional neural networks (CNN) are a crucial building block of deep neural architectures for visual pattern recognition.
In practice, CNN layers must be combined with other computational units such as rectified linear (ReLU) activations, dropout and max-pool layers. Currently two abstractions are offered for building large CNN-based network stacks.

Convolutional Unit

A single CNN unit is expressed as a convolutional layer followed by a ReLU activation and then a dropout layer.

```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

//Learn 16 filters of shape (2, 2, 4), suitable for 4 channel jpeg images.
//Slide the filters over the image in steps of 1 pixel in each direction.
val cnn_unit = dtflearn.conv2d_unit(
  shape = Shape(2, 2, 4, 16),
  stride = (1, 1),
  relu_param = 0.05f,
  dropout = true,
  keep_prob = 0.55f)(i = 1)
```

Convolutional Pyramid

A CNN pyramid builds a stack of CNN units, each with a stride multiplied by a factor of 2 and depth divided by a factor of 2 with respect to the previous unit.

```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

//Start with a CNN unit of shape (2, 2, 3, 16) and stride (1, 1)
//End with a CNN unit of shape (2, 2, 8, 4) and stride of (8, 8)
val cnn_stack = dtflearn.conv2d_pyramid(
  size = 2, num_channels_input = 3)(
  start_num_bits = 4, end_num_bits = 2)(
  relu_param = 0.1f, dropout = true, keep_prob = 0.6F)
```

Feed-forward Neural Nets

Feed-forward networks are the oldest and most frequently used components of neural network architectures; they are often stacked into a number of layers. With dtflearn.feedforward_stack(), you can define feed-forward stacks of arbitrary width and depth.
```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

val net_layer_sizes = Seq(10, 20, 13, 15)

val architecture = dtflearn.feedforward_stack(
  (i: Int) => dtflearn.Phi("Act_"+i), FLOAT64)(
  net_layer_sizes)
```

Building Tensorflow Models

After defining the key ingredients needed to build a tensorflow model, dtflearn.build_tf_model() builds a new computational graph and creates a tensorflow model and estimator which is trained on the provided data. In the following example, we bring together all the elements of model training: data, architecture, loss etc.

```scala
import ammonite.ops._
import io.github.mandar2812.dynaml.tensorflow.dtflearn
import org.platanios.tensorflow.api._
import org.platanios.tensorflow.api.ops.NN.SamePadding
import org.platanios.tensorflow.data.image.CIFARLoader
import java.nio.file.Paths

val tempdir = home/"tmp"

val dataSet = CIFARLoader.load(
  Paths.get(tempdir.toString()),
  CIFARLoader.CIFAR_10)

val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)

val trainData =
  trainImages.zip(trainLabels)
    .repeat()
    .shuffle(10000)
    .batch(128)
    .prefetch(10)

println("Building the classification model.")

val input = tf.learn.Input(
  UINT8,
  Shape(
    -1,
    dataSet.trainImages.shape(1),
    dataSet.trainImages.shape(2),
    dataSet.trainImages.shape(3))
)

val trainInput = tf.learn.Input(UINT8, Shape(-1))

val architecture =
  tf.learn.Cast("Input/Cast", FLOAT32) >>
    dtflearn.conv2d_pyramid(2, 3)(4, 2)(0.1f, true, 0.6F) >>
    tf.learn.MaxPool(
      "Layer_3/MaxPool",
      Seq(1, 2, 2, 1), 1, 1, SamePadding) >>
    tf.learn.Flatten("Layer_3/Flatten") >>
    dtflearn.feedforward(256)(id = 4) >>
    tf.learn.ReLU("Layer_4/ReLU", 0.1f) >>
    dtflearn.feedforward(10)(id = 5)

val trainingInputLayer =
  tf.learn.Cast("TrainInput/Cast", INT64)

val loss =
  tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
    tf.learn.Mean("Loss/Mean") >>
    tf.learn.ScalarSummary("Loss/Summary", "Loss")

val optimizer = tf.train.AdaGrad(0.1)

println("Training the linear regression model.")

val summariesDir = java.nio.file.Paths.get(
  (tempdir/"cifar_summaries").toString()
)

val (model, estimator) = dtflearn.build_tf_model(
  architecture, input, trainInput, trainingInputLayer,
  loss, optimizer, summariesDir,
  dtflearn.max_iter_stop(1000),
  100, 100, 100)(trainData)
```
In 1202, Leonardo of Pisa (filius Bonacci) published Liber Abaci, the "Book of Calculation". At the time, the book attracted little attention, partly because it used the then-unfamiliar Arabic numerals. However, one problem from that book remained famous: "A certain man put a pair of rabbits in a place surrounded by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets a new pair which from the second month on becomes productive?" Leonardo thereby established a law for the multiplication of a pair of rabbits, that is, a mathematical expression of organic growth, and this proved to have very broad applications. Analyzing the problem yields the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, … In mathematical terms: \(F_n = F_{n-1} + F_{n-2}\). The Fibonacci sequence is thus defined by a two-term additive recurrence. The ratio of two consecutive terms of the Fibonacci sequence approximates the golden number \(\Phi\), and the limit of this ratio is \(\Phi\), which satisfies \(\Phi^2 = \Phi + 1\). The question may arise whether a positive integer z is a Fibonacci number. Since \(F_n\) is the closest integer to $\Phi^n/\sqrt{5}$, the most straightforward, brute-force test is the identity: $F\bigg(\bigg\lfloor\log_\Phi(\sqrt{5}z)+\frac{1}{2}\bigg\rfloor\bigg)=z,$ which is true if and only if z is a Fibonacci number. Alternatively, a positive integer $z$ is a Fibonacci number if and only if one of $5z^2+4$ or $5z^2-4$ is a perfect square. The golden ratio also has a simple continued fraction representation. The number \(\Phi\) is closely linked to the Fibonacci sequence, which is defined by f[0] = 0, f[1] = 1, f[n] = f[n-1] + f[n-2] (for each n ≥ 2). This sequence models (in a naive way) the growth of a population of rabbits. It is assumed that rabbits breed in pairs once every month after they reach the age of two months, that the rabbits never die, and that each pair consists of one male and one female.
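The recurrence and the perfect-square membership test described above can be sketched in Python (the function names here are illustrative, not from any particular library):

```python
import math

def fib(n):
    """Return the n-th Fibonacci number (fib(1) = fib(2) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def is_fibonacci(z):
    """A positive integer z is a Fibonacci number iff
    5z^2 + 4 or 5z^2 - 4 is a perfect square."""
    for m in (5 * z * z + 4, 5 * z * z - 4):
        r = math.isqrt(m)
        if r * r == m:
            return True
    return False
```

The ratio of successive terms, e.g. `fib(20) / fib(19)`, rapidly approaches \(\Phi \approx 1.618\).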
Thus, the number of pairs of rabbits existing after n months is f[n]. We may then ask: what does \(\Phi\) have in common with the Fibonacci sequence? This is a remarkable piece of mathematics. To begin, note that \(\Phi\) can be written as an infinite continued fraction. Its partial fractions (the convergents) are all ratios of successive Fibonacci numbers, which motivates the theorem stating that, as n approaches infinity, the ratio of the (n+1)-th to the n-th term of the Fibonacci sequence approaches \(\Phi\). This is valid for any recurrent sequence satisfying f[n] = f[n-1] + f[n-2] (for each n ≥ 2), provided the first two terms are not both zero. Fascination of Fibonacci numbers The Fibonacci numbers have fascinated many scientists throughout history, including mathematicians, physicists and biologists, and continue to do so even today. History Fibonacci (1170 - 1240) is regarded as one of the greatest European mathematicians of the Middle Ages. He was born in Pisa, the Italian city famous for its leaning tower. His father was a customs officer in the North African town of Bougie, so Fibonacci grew up in the midst of North African civilization, making many trips along the Mediterranean coast. He thus met many Arab and Indian merchants, and from them he learned their arithmetic and the writing of Arabic numerals. Fibonacci is known as one of the first to introduce into Europe the Arabic numerals that we use to this day: 0, 1, 2, 3, … 9. A yupana is a calculating device that was used by the Incas. Researchers hypothesize that its calculations were based on Fibonacci numbers, to minimize the number of grains needed per field.
See also: Fibonacci series. A series of integers in which, from the third position onward, each term is the sum of the previous two is called in mathematics a Fibonacci series: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, … So 2=1+1; 3=1+2; 5=2+3; 8=3+5; 13=5+8; 21=13+8; 34=21+13, … Regarding the golden number, it is worth examining the series of powers of \(\varphi\): \(\varphi, \varphi^2, \varphi^3, \ldots, \varphi^n, \ldots\) We know that \(\varphi^2 = \varphi + 1\). Then we have: \(\varphi^3 = \varphi^2 \times \varphi = (\varphi+1)\varphi = \varphi^2 + \varphi = 2\varphi + 1\) and \(\varphi^4 = \varphi^3 \times \varphi = (2\varphi+1)\varphi = 2\varphi^2 + \varphi = 3\varphi + 2\). It can be shown that \(\varphi^n = F_n \varphi + F_{n-1}\), where \(F_n\) is the n-th term of the Fibonacci series, so any power of \(\varphi\) can be written in terms of \(\varphi\) itself and two integers. This shows that the Fibonacci sequence supplies the coefficients in the geometric progression of powers of \(\varphi\). Calculating the ratio of consecutive terms of the Fibonacci series: 2/1=2; 3/2=1.5; 5/3=1.66; 8/5=1.60; 13/8=1.625; 21/13=1.615385; 34/21=1.619048; 55/34=1.617647; 89/55=1.618182; 144/89=1.617978; 233/144=1.618056. The Egyptians and the Pythagoreans evidently worked in terms of the Fibonacci sequence; even without today's algebraic machinery, they demonstrated a deep practical knowledge of the laws of proportion, using integers and the relationships between them. Divine proportion in the four kingdoms The Fibonacci series models many growth processes well and appears virtually everywhere in nature, being closely linked to the Golden Section. It is known that the Fibonacci series exhibits a certain periodicity in its digits: considering only the final digit of each term, the sequence of last digits repeats with a period of 60. A shorter cycle appears if we bring all the terms inside the decimal cycle by taking the remainder of the division of each term by 9. We then get a cycle of 24 terms, though within the cycle they look rather "stirred": 1, 1, 2, 3, 5, 8, 4, 3, 7, 1, 8, 0, 8, 8, 7, 6, 4, 1, 5, 6, 2, 8, 1, 0, … (24 terms, each being the mod-9 sum of the previous two).
If, instead, we replace each term of the cycle by the smaller of itself and its "complement" with respect to 9 (note that each such pair sums to 9, with 0 mapping to itself), the 24-term cycle splits into two identical subcycles of 12 terms each: 1,1,2,3,4,1,4,3,2,1,1,0 and 1,1,2,3,4,1,4,3,2,1,1,0 (2 periods of 12 terms).
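The 24-term cycle of Fibonacci residues modulo 9, and the folding into two identical 12-term subcycles, can be checked with a short script (a sketch; the claims are exactly those stated above):

```python
def fib_mod(m, count):
    """First `count` Fibonacci residues modulo m, starting F(1) = F(2) = 1."""
    out, a, b = [], 1, 1
    for _ in range(count):
        out.append(a % m)
        a, b = b, a + b
    return out

residues = fib_mod(9, 48)

# The residues modulo 9 repeat with period 24.
assert residues[:24] == residues[24:48]

# Replacing each residue t by min(t, 9 - t) folds the 24-term cycle
# into two identical 12-term subcycles.
folded = [min(t, 9 - t) for t in residues[:24]]
assert folded[:12] == folded[12:24]
```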
What is the depreciation rate ($\delta$)? Assumptions: $Y_t = K_t^{\alpha} (A_tL_t)^{1-\alpha}$, $\alpha = 0.5$ ($\alpha$ = capital income share), $A_0 =1$ and $g = 0$ ($g$ = growth rate of technology), $s = 0.2$ ($s$ = savings rate), $n = 0.05$ ($n$ = growth rate of the labor force/population). Capital in the next period is equal to the capital from the current period, minus the part that depreciates, plus the extra investment (savings rate times production):$$K_{t+1} = K_t(1 - \delta) + s K_t^{\alpha} (A_t L_t)^{1-\alpha}$$ Multiply and divide LHS by $A_{t+1}L_{t+1}$ and multiply and divide RHS by $A_{t}L_{t}$. $$\frac{K_{t+1}}{A_{t+1}L_{t+1}} A_{t+1}L_{t+1} = A_t L_t \left[(1 - \delta) \frac{K_t}{A_t L_t} + s \frac{K_t^{\alpha} (A_t L_t)^{1-\alpha}}{A_t L_t} \right] $$ Then note that $A_t = A_0 (1+g)^t$ and $L_t = L_0 (1 + n)^t$ and capital per effective worker is $\tilde{k}_t = \frac{K_t}{A_t L_t}$. Using this, rewrite the above equation to obtain:$$\tilde{k}_{t+1} = \frac{(1 - \delta) \tilde{k}_t + s \tilde{k}^\alpha_t}{(1+g)(1+n)}$$ On the balanced growth path $\tilde{k}_t = \tilde{k}_{t+1} = \tilde{k}^\ast$. Using this, rewrite the above to get:$$\tilde{k}^\ast = \left( \frac{s}{g+n+gn+\delta} \right)^{ \frac{1}{1-\alpha} } $$ Since you implicitly assume that $A_t = 1, \forall t$ this balanced growth path capital per effective worker is equal to capital per worker: $$\tilde{k}^\ast = \frac{K}{AL} = \frac{K}{L} = \left( \frac{s}{g+n+gn+\delta} \right)^{ \frac{1}{1-\alpha} } = \left( \frac{0.2}{0.05+\delta} \right)^{2} $$ As you can see this $\frac{K}{L}$ is stable. 
For consumption per worker, we can consume what we don't invest:$$C = Y(1 - s)$$$$\frac{C}{L} = \frac{Y}{L} (1 - s)$$ Plugging the $\tilde{k}^\ast$ from above into the production function gives us:$$\tilde{y}^\ast = \frac{Y}{L} = \left( \frac{s}{g+n+gn+\delta} \right)^{\frac{\alpha}{1-\alpha}} $$ So,$$\frac{C}{L} = \left( \frac{s}{g+n+gn+\delta} \right)^{\frac{\alpha}{1-\alpha}} (1 -s) = \frac{0.16}{0.05 + \delta}$$
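As a numeric sanity check, one can iterate the capital-accumulation equation above and confirm that it converges to the closed-form steady state (a Python sketch; the value δ = 0.05 is assumed purely for illustration, since δ is the unknown of the exercise):

```python
alpha, s, n, g = 0.5, 0.2, 0.05, 0.0
delta = 0.05  # assumed value, for illustration only

# Iterate k_{t+1} = ((1 - delta) k_t + s k_t^alpha) / ((1 + g)(1 + n))
k = 1.0
for _ in range(2000):
    k = ((1 - delta) * k + s * k**alpha) / ((1 + g) * (1 + n))

# Closed-form balanced-growth-path values derived above
k_star = (s / (g + n + g * n + delta)) ** (1 / (1 - alpha))  # = (0.2/0.1)^2 = 4
y_star = k_star ** alpha
c_star = y_star * (1 - s)  # consumption per worker, = 0.16/(0.05 + delta) = 1.6

print(k, k_star, c_star)
```

The iterated `k` and the closed-form `k_star` agree, and `c_star` matches the expression \(0.16/(0.05+\delta)\) for the assumed δ.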
Algebra Review Knowledge of the following mathematical operations is required for STAT 200: Addition Subtraction Division Multiplication Radicals (i.e., square roots) Exponents Summations \(\left( \sum \right) \) Factorials (!) Additionally, the ability to perform these operations in the appropriate order is necessary. Use these materials to check your understanding and preparation for taking STAT 200. We want our students to be successful! And we know that students who do not possess a working knowledge of these topics will struggle to participate successfully in STAT 200. Review Materials Are you ready? As a means of helping students assess whether what they currently know and can do meets the expectations of instructors of STAT 200, the online program has put together a brief review of these concepts and methods. This is followed by a short self-assessment exam that will help give you an idea of whether this prerequisite knowledge is readily available for you to apply. Self-Assessment Procedure 1. Review the concepts and methods on the pages in this section of this website. 2. Download and complete the Self-Assessment Exam. 3. Review the Self-Assessment Exam Solutions and determine your score. Your score on this self-assessment should be 100%! If your score is below this, you should consider further review of these materials and are strongly encouraged to take MATH 021 or an equivalent course. If you have struggled with the methods that are presented in the self-assessment, you will indeed struggle in the courses that expect this foundation. Note: These materials are NOT intended to be a complete treatment of the ideas and methods used in algebra. These materials and the self-assessment are intended simply as an 'early warning signal' for students. Also, please note that completing the self-assessment successfully does not automatically ensure success in any of the courses that use these foundation materials.
Please keep in mind that this is a review only. It is not an exhaustive list of the material you need to have learned in your previous math classes. This review is meant only to be a simple guide of things you should remember and that are built upon in STAT 200. A.1 Order of Operations When performing a series of mathematical operations, begin with those inside parentheses or brackets. Next, calculate any exponents or square roots. This is followed by multiplication and division, and finally, addition and subtraction. Parentheses Exponents & Square Roots Multiplication and Division Addition and Subtraction Example A.1 Simplify: $(5+\dfrac{9}{3})^{2}=(5+3)^{2}=8^{2}=64$ Example A.2 Simplify: $\dfrac{5+6+7}{3}=\dfrac{18}{3}=6$ Example A.3 Simplify: $\dfrac{2^{2}+3^{2}+4^{2}}{3-1}=\dfrac{4+9+16}{2}=\dfrac{29}{2}=14.5$ A.2 Summations \[\sum\] This is the upper-case Greek letter sigma. A sigma tells us that we need to sum (i.e., add) a series of numbers. For example, four children are comparing how many pieces of candy they have: ID Child Pieces of Candy 1 Marty 9 2 Harold 8 3 Eugenia 10 4 Kevi 8 We could say that: \(x_{1}=9\), \(x_{2}=8\), \(x_{3}=10\), and \(x_{4}=8\). If we wanted to know how many total pieces of candy the group of children had, we could add the four numbers. The notation for this is: \[\sum x_{i}\] So, for this example, \(\sum x_{i}=9+8+10+8=35\) To conclude, combined, the four children have 35 pieces of candy. In statistics, some equations include the sum of all of the squared values (i.e., square each item, then add). The notation is: \[\sum x_{i}^{2}\] or \[\sum (x_{i}^{2})\] Here, \(\sum x_{i}^{2}=9^{2}+8^{2}+10^{2}+8^{2}=81+64+100+64=309\). Sometimes we want to square a series of numbers that have already been added. The notation for this is: \[(\sum x_{i})^{2}\] Here, \((\sum x_{i})^{2}=(9+8+10+8)^{2}=35^{2}=1225\) Note that \(\sum x_{i}^{2}\) and \((\sum x_{i})^{2}\) are different.
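The distinction between \(\sum x_{i}^{2}\) and \((\sum x_{i})^{2}\), as well as the order-of-operations examples, can be checked directly in Python:

```python
candy = [9, 8, 10, 8]   # Marty, Harold, Eugenia, Kevi

sum_x = sum(candy)                    # sum of x_i: add the values
sum_x_sq = sum(x**2 for x in candy)   # sum of x_i^2: square first, then add
sq_of_sum = sum(candy)**2             # (sum of x_i)^2: add first, then square

print(sum_x, sum_x_sq, sq_of_sum)     # 35 309 1225

# Order-of-operations examples A.1 - A.3
print((5 + 9 / 3) ** 2)                 # 64.0
print((5 + 6 + 7) / 3)                  # 6.0
print((2**2 + 3**2 + 4**2) / (3 - 1))   # 14.5
```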
Summations Here is a brief review of summations as they will be applied in STAT 200: A.3 Factorials Factorials are symbolized by exclamation points (!). A factorial is a mathematical operation in which you multiply the given number by all of the positive whole numbers less than it. In other words, \(n!=n \times (n-1) \times … \times 2 \times 1\). For example, "Four factorial" = \(4!=4\times3\times2\times1=24\) "Six factorial" = \(6!=6\times5\times4\times3\times2\times1=720\) When we discuss probability distributions in STAT 200 we will see a formula that involves dividing factorials. For example, \[\frac{3!}{2!}=\frac{3\times2\times1}{2\times1}=3\] Here is another example, \[\frac{6!}{2!(6-2)!}=\frac{6\times5\times4\times3\times2\times1}{(2\times1)(4\times3\times2\times1)}=\frac{6\times5}{2}=\frac{30}{2}=15\] Also note that 0! = 1 Factorials Here is a brief review of factorials as they will be applied in STAT 200: A.4 Self-Assess Self-Assessment Procedure Review the concepts and methods on the pages in this section of this website. Download and complete the STAT 200 Algebra Self-Assessment. Determine your score by reviewing the STAT 200 Algebra Self-Assessment: Solutions. Your score on this self-assessment should be 100%! If your score is below this, you should consider further review of these materials and are strongly encouraged to take MATH 021 or an equivalent course. If you have struggled with the methods that are presented in the self-assessment, you will indeed struggle in the courses above that expect this foundation. Note: These materials are NOT intended to be a complete treatment of the ideas and methods used in these algebra methods. These materials and the accompanying self-assessment are intended simply as an 'early warning signal' for students. Also, please note that completing the self-assessment successfully does not automatically ensure success in any of the courses that use this foundation.
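Python's standard library makes the factorial computations above easy to verify:

```python
import math

print(math.factorial(4))   # 24
print(math.factorial(6))   # 720
print(math.factorial(0))   # 1  (by convention, 0! = 1)

# Dividing factorials, as in the examples above
print(math.factorial(3) // math.factorial(2))                        # 3
print(math.factorial(6) // (math.factorial(2) * math.factorial(4)))  # 15

# The last expression is "6 choose 2", also available directly:
print(math.comb(6, 2))  # 15
```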
There is the standard argument, using the definition of the inner product, that $\langle f|A|g\rangle =\langle g|A|f\rangle ^{*}$ for a Hermitian operator $A$, given any wave vectors $|f\rangle,~ |g\rangle$. Also consider the following: Consider one-dimensional position space represented as an infinite-dimensional column vector of values of the wave function at discretized points along the $x$-axis. In the limit of infinitesimal differences between position values, we have $\frac{\mathrm df}{\mathrm dx}\approx \frac{1}{2h} (f(x+h)-f(x-h))$, where $(x+h)$ and $(x-h)$ are the discrete position values just succeeding and preceding the position value $x$, and $h$ is sufficiently small. Then we might talk of an infinite-dimensional matrix representation of ${\rm d/d}x$, where only the two off-diagonal "diagonals" adjacent to the actual diagonal have $1/2h$ and $-\:1/2h$ entries; everywhere else we have $0$ entries. This matrix is skew-symmetric. If we multiply this matrix by $\mathrm i$, the skew symmetry becomes Hermitian symmetry, which makes ${\rm i~ d/d}x$ Hermitian. Edit: As pointed out by tparker in an answer below, we get $\langle x|\partial|x^\prime\rangle =\frac{\partial}{\partial x}\langle x|x^\prime \rangle =\frac{\partial}{\partial x}\delta(x-x^\prime)$. Since we are dealing with a discrete set of points in space here, we must have the normalization given by the Kronecker delta $\delta_{x,x^\prime}$. Informally, we then have $\partial(\delta_{x,x^\prime})|_{x~=~(x^\prime-h)}\approx\frac{1}{2h}(\delta_{x^\prime,x^\prime}-\delta_{x^\prime-2h,x^\prime})=\frac{1}{2h}$, and also $\partial(\delta_{x,x^\prime})|_{x~=~(x^\prime+h)}\approx-\frac{1}{2h}, ~~\partial(\delta_{x,x^\prime})|_{x~=~x^\prime}\approx 0$, which again gives us a skew-symmetric matrix of the form obtained before. Here we scanned the matrix vertically in a given column, whereas in the previous calculation, we scanned a fixed row horizontally.
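A small finite truncation of this matrix makes the claim concrete (a plain-Python sketch; boundary rows of the truncated grid are simply left as they are, and the grid spacing 0.1 is an arbitrary choice):

```python
h, n = 0.1, 6  # grid spacing and (truncated) matrix size

# Central-difference matrix D for d/dx: +1/(2h) just above the diagonal,
# -1/(2h) just below it, zeros everywhere else.
D = [[0.0] * n for _ in range(n)]
for k in range(n - 1):
    D[k][k + 1] = 1 / (2 * h)
    D[k + 1][k] = -1 / (2 * h)

# D is skew-symmetric: D^T = -D
assert all(D[i][j] == -D[j][i] for i in range(n) for j in range(n))

# A = i*D is Hermitian: A[i][j] equals the complex conjugate of A[j][i]
A = [[1j * D[i][j] for j in range(n)] for i in range(n)]
assert all(A[i][j] == A[j][i].conjugate() for i in range(n) for j in range(n))
```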
On the Kuratowski measure of noncompactness for duality mappings Abstract Let $(X,\Vert \cdot\Vert ) $ be an infinite dimensional real Banach space having a Fréchet differentiable norm and $\varphi\colon \mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$ be a gauge function. Denote by $J_{\varphi}\colon X\rightarrow X^{\ast}$ the duality mapping on $X$ corresponding to $\varphi$. Then, for the Kuratowski measure of noncompactness of $J_{\varphi}$, the following estimate holds: $$ \alpha( J_{\varphi}) \geq \sup\bigg\{ \frac{\varphi(r) }{r}\ \bigg|\ r> 0\bigg\} . $$ In particular, for $-\Delta_{p}\colon W_{0}^{1,p}( \Omega)\rightarrow W^{-1,p^{\prime}}( \Omega) $, $1< p< \infty$, ${1}/{p}+{1}/{p^{\prime}} = 1$, viewed as duality mapping on $W_{0}^{1,p}(\Omega)$, corresponding to the gauge function $\varphi(t)=t^{p-1}$, one has $$ \alpha( -\Delta_{p}) =\begin{cases} 1 & \text{for }p=2,\\ \infty & \text{for }p\in( 1,2) \cup( 2,\infty). \end{cases} $$ Keywords Kuratowski measure of noncompactness; smooth Banach spaces; duality mappings; p-Laplacian
You're right about the second one, but your reasoning for both is sketchy. The definition of $f(n)=O(g(n))$ is that there exist constants $c,N$ both greater than zero such that $f(n)\le c\cdot g(n)$ for all $n\ge N$. Trying a single value for $n$ won't generally give anything in the way of an answer. Consider, though, the intuition we get from trying $n=4, 8, 16, 32, \dotsc$:$$\begin{array}{lcccccc}n: & 4 & 8 & 16 & 32 & 64 & 128\\\log^2 n: & 4 & 9 & 16 & 25 & 36 & 49\end{array}$$ It seems to be the case that $n$ grows much faster than $\log^2n$. In fact, for this example we have that, at least for powers of 2, $\log^2(n)=\log^2(2^k)=k^2\le 1\cdot2^k=1\cdot n$ for all $n \ge 16$, so we might suspect that $\log^2n=O(n)$, which is indeed the case, as it happens. What is most helpful is to get an intuition of the big-O relationship between frequently-used functions. Here are some examples (all of which should be proved, but that'll be up to you to do or look up). Assuming everything in sight is positive, then: $\log n = O(n^k)$ for any $k>0$. (polynomials beat logs, eventually) If $a \le b$, then $n^a=O(n^b)$. (polynomials behave as expected, by degree) If $a>1$, then $n^k=O(a^n)$. (exponentials beat polynomials) and so on. The point here is that it's far better to develop your intuition and only use the definition of big-O when faced with a problem where intuition doesn't give you any help. That's what the experts do.
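The intuition in the table can be checked numerically: for powers of two, \(\log^2 n = k^2\) stays below \(n = 2^k\) from \(n = 16\) onward.

```python
import math

# Reproduce the table: n alongside log2(n)^2 for n = 4, 8, ..., 128
for k in (2, 3, 4, 5, 6, 7):
    n = 2 ** k
    print(n, math.log2(n) ** 2)   # 4 4.0, 8 9.0, 16 16.0, 32 25.0, ...

# k^2 <= 2^k for every k >= 4, i.e. log2(n)^2 <= n for powers of two n >= 16
assert all(k * k <= 2 ** k for k in range(4, 64))
```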
In an economy with two agents whose utility functions are $$ U_A(x_1,x_2) = \alpha \cdot x_1 + x_2 \hskip 20pt U_B(y_1,y_2) = y_1 \cdot y_2. $$ The given allocations are bundle (4,0) for A and bundle (1,5) for B. Consider the following question: Taking into consideration the respective utilities for the bundles, we have $U_A=4\alpha$ and $U_B=5$. For this allocation to be a no-envy allocation, it has to be $4\alpha \geq 5$, which means $\alpha$ has to be greater than or equal to $\frac{5}{4}$. Is this the right approach to solve this problem? If it's not, please find the solution and show me the steps.
Let $S$ be a bounded and closed star domain of $\mathbb{R}^n$, $n>1$, with nonempty interior and with a "good" star-center $x_0$. I define a "good" star-center as a star-center $x_0$ such that for every $x \in S$, the straight line $\overline{x_0 x}$ through $x_0$ and $x$ intersects $\partial S$ in exactly one point. I have a sketch of a proof that if the boundary of $S$, $\partial S$, is locally path-connected, then $\partial S$ is path-connected. Proof: Let $B(x_0,r) \subset S^\circ$ be a closed ball centered at $x_0$ in the interior of $S$. Define a bijective map $f:\partial B \rightarrow \partial S$ by $f(x) = \partial S \cap \overline{x_0 x}$. As $S$ is bounded, $f(x)$ is defined for all $x \in \partial B$. Now consider two points $x,y \in \partial S$. As $\partial B$ is path-connected, there exists a path $\gamma$ on $\partial B$ that connects $f^{-1}(x)$ and $f^{-1}(y)$. Since $\partial S$ is locally path-connected, we can divide $\gamma$ into smaller segments $\{\gamma_i\}_{i=1}^m$ such that $f(\gamma_i)$ is a continuous path in $\partial S$. Hence the union $\cup_i f(\gamma_i)$ is a path that connects $x$ and $y$. Is this proof correct?
11.3.1 - Example: Gender and Online Learning A sample of 314 Penn State students was asked if they have ever taken an online course. Their genders were also recorded. The contingency table below was constructed. Use a chi-square test of independence to determine if there is a relationship between gender and whether or not someone has taken an online course. Have you taken an online course? Yes No Men 43 63 Women 95 113 \(H_0:\) There is not a relationship between gender and whether or not someone has taken an online course (they are independent) \(H_a:\) There is a relationship between gender and whether or not someone has taken an online course (they are dependent) Looking ahead to our calculations of the expected values, we can see that all expected values are at least 5. This means that the sampling distribution can be approximated using the \(\chi^2\) distribution. In order to compute the chi-square test statistic we must know the observed and expected values for each cell. We are given the observed values in the table above. We must compute the expected values. The table below includes the row and column totals. Have you taken an online course? Yes No Men 43 63 106 Women 95 113 208 138 176 314 \(E=\frac{row\;total \times column\;total}{n}\) \(E_{Men,\;Yes}=\frac{106\times138}{314}=46.586\) \(E_{Men,\;No}=\frac{106\times176}{314}=59.414\) \(E_{Women,\;Yes}=\frac{208\times138}{314}=91.414\) \(E_{Women,\;No}=\frac{208 \times 176}{314}=116.586\) Note that all expected values are at least 5, thus this assumption of the \(\chi^2\) test of independence has been met. Observed and expected counts are often presented together in a contingency table. In the table below, expected values are presented in parentheses. Have you taken an online course? 
Yes No Men 43 (46.586) 63 (59.414) 106 Women 95 (91.414) 113 (116.586) 208 138 176 314 \(\chi^2=\sum \frac{(O-E)^2}{E} \) \(\chi^2=\frac{(43-46.586)^2}{46.586}+\frac{(63-59.414)^2}{59.414}+\frac{(95-91.414)^2}{91.414}+\frac{(113-116.586)^2}{116.586}=0.276+0.216+0.141+0.110=0.743\) The chi-square test statistic is 0.743 \(df=(number\;of\;rows-1)(number\;of\;columns-1)=(2-1)(2-1)=1\) We can determine the p-value by constructing a chi-square distribution plot with 1 degree of freedom and finding the area to the right of 0.743. \(p = 0.388702\) \(p>\alpha\), therefore we fail to reject the null hypothesis. There is not sufficient evidence that gender and whether or not an individual has taken an online course are related. Note that we cannot say for sure that these two categorical variables are independent; we can only say that we do not have evidence that they are dependent.
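The computation above can be reproduced in plain Python. For one degree of freedom the chi-square survival function reduces to \(P(\chi^2 > x) = \operatorname{erfc}(\sqrt{x/2})\), so no statistics library is needed:

```python
import math

observed = {("Men", "Yes"): 43, ("Men", "No"): 63,
            ("Women", "Yes"): 95, ("Women", "No"): 113}
row_totals = {"Men": 106, "Women": 208}
col_totals = {"Yes": 138, "No": 176}
n = 314

# Chi-square statistic: sum of (O - E)^2 / E over the four cells,
# with E = (row total * column total) / n
chi2 = 0.0
for (r, c), o in observed.items():
    expected = row_totals[r] * col_totals[c] / n
    chi2 += (o - expected) ** 2 / expected

# Survival function of the chi-square distribution with 1 df
p_value = math.erfc(math.sqrt(chi2 / 2))

print(round(chi2, 3), round(p_value, 3))  # 0.743 0.389
```

In practice one would typically call `scipy.stats.chi2_contingency` instead, which returns the same statistic, p-value, and expected counts in one step.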
Suppose we have a population of voters, each of them casting a vote in some (not necessarily deterministic) voting system. Let w be the winning candidate. Consider the following experiment: Fix some number N. Draw a sample of N voters with replacement (i.e. you may draw the same voter more than once). Run the election again on the sample. What is the probability of the result of this vote being the same as the result of the vote on the whole population (if the voting system is non-deterministic, rerun the system on the whole population too)? Call this number S(N). I suspect S(N) might reveal interesting behaviours of voting systems, but I haven’t yet analyzed this in any great detail or thought about it much. It does however have the interesting property that it isn’t dependent on the type of voting system used – it works equally well for ranked, graded or scored votes, and for deterministic and non-deterministic systems, so it provides an interesting way of comparing potentially very different voting systems. Some interesting questions one may reasonably ask: What is the limiting behaviour of S(N) as \(N \to \infty\)? Is S(N) monotonic increasing? What is the behaviour like for very small N? (I don’t have a precise formulation of this yet) Is there anything special about S(original number of voters)? Two examples: For first past the post voting, \(S(1)\) is the fraction of the population who voted for the winning candidate, and \(S(N) \to 1\) as \(N \to \infty\), because as \(N \to \infty\) the fractions of the sample who vote for a given candidate concentrate on the fractions in the original population. I think \(S(N)\) is monotone increasing but have only proven it for the two candidate case. It’s intuitively plausible though. For random ballot, \(S(N) = \sum_{i = 1}^m p_i^2 \), where we have candidates \(1 \ldots m\) and \(p_i\) is the fraction of the population who voted for candidate \(i\). Note that this is independent of N.
This is because picking N voters and then picking a random one of them is exactly the same as just picking a random voter in the first place, so \(S(N)\) is just the probability of running the election twice and getting the same result. I haven’t worked out the answers for other, more complicated, systems. I would be interested to do so, and may well at some point, but if someone wants to do it for me or tell me if it’s already been done, that’d be cool too.
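For random ballot, the claim \(S(N) = \sum_i p_i^2\), independent of N, is easy to check by simulation (a Monte Carlo sketch; the 60/40 two-candidate electorate is an assumed example):

```python
import random

random.seed(0)

# An electorate where 60% vote A and 40% vote B,
# so sum(p_i^2) = 0.6^2 + 0.4^2 = 0.52
population = ["A"] * 60 + ["B"] * 40

def random_ballot(votes):
    """Random ballot: the winner is the candidate on a uniformly random ballot."""
    return random.choice(votes)

def estimate_S(N, trials=20000):
    """Estimate the probability that a size-N resample (with replacement)
    elects the same winner as a fresh run on the whole population."""
    agree = 0
    for _ in range(trials):
        full_result = random_ballot(population)
        sample = [random.choice(population) for _ in range(N)]
        if random_ballot(sample) == full_result:
            agree += 1
    return agree / trials

print(estimate_S(5), estimate_S(50))  # both close to 0.52, independent of N
```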
I'm wondering where the following equality came from: $$ \langle x , y \rangle = \|x \| \| y \| \cos \theta$$ where the thing on the LHS is the inner product and $\|\cdot\|$ is the norm induced by $\langle \cdot, \cdot \rangle$. Do we need the Cauchy Schwarz inequality to prove this? I'm asking because I'm reading my notes and there is a proof of the C. S. - inequality. It's fairly short but longer than the following: Claim: $|\langle x,y \rangle | \leq \|x \| \|y \|$ Proof: Since $\langle x , y \rangle = \|x \| \| y \| \cos \theta$ we have $-\|x \| \| y \| \leq \langle x , y \rangle \leq \|x \| \| y \|$. Thanks.
I'm trying to follow a proof of the fact that if $g$ is an element of a hyperbolic group $G$ with infinite order, then $\langle g \rangle$ is an undistorted subgroup of $G$. The proof relies on the following lemma: If $[p,q] \cup [q,r] \cup [r,s] \cup [s, p]$ is a geodesic quadrangle in a $\delta$-hyperbolic space $X$, then for any pair of points $x \in [p, q]$, $y \in [r, s]$ with $d(p, x) = d(s, y)$ we have $$d(x, y) \leq \max \{d(p, s), d(q, r) \} + 10\delta.$$ The problem is that the proof of this lemma is left as an exercise. I have now spent over 3 hours trying to prove this lemma, but I seem to make no progress whatsoever. This leads me to believe that I missed something elementary and I therefore wanted to ask if anyone could point me to a proof/give me an idea how to prove it. Thanks!
Advance warning: If you’re familiar with Bayesian reasoning it is unlikely this post contains anything new unless I’m making novel mistakes, which is totally possible. Second advance warning: If taken too seriously, this article may be hazardous to your health. Let me present you with a hypothetical, abstracted, argument: Me: C You: Not C! Me: B? You: *shrug* Me: A? You: Yes Me: A implies B? You: Yes? Me: B implies C? You: … Yes Me: Therefore C? You: C. :-( Does this seem like a likely scenario to you? We have had a disagreement. I have presented a logical argument from shared premises for my side of the disagreement. You have accepted that argument and changed your position. Yeah, it sounds pretty implausible to me too. A more likely response from you at the end is: You: Not C! I will of course find this highly irrational and be irritated by your response. …unless you’re a Bayesian reasoner, in which case you are behaving entirely correctly, and I’ll give you a free pass. Wait, what? Let’s start with a simplified example with only two propositions. Suppose you have propositions \(A\) and \(B\), which you believe with probabilities \(a\) and \(b\) respectively. You currently believe these two to be independent, so \(P(A \wedge B) = ab\) Now, suppose I come along and convince you that \(A \implies B\) is true (I’ll call this proposition \(I\)). What is your new probability for \(B\)? Well, by Bayes’ rule, \(P(B|I) = \frac{P(B \wedge I)}{P(I)} = P(B) \frac{P(I|B)}{P(I)}\) \(I = A \implies B = \neg\left( A \wedge \neg B\right)\). So \(P(I) = 1 - a(1 - b)\). \(P(I|B) = 1\) because everything implies a true proposition. Therefore \(P(B|I) = \frac{b}{1 - a(1 - b)}\). This is a slightly gross formula. Note however it does have the obviously desirable property that your belief in \(B\) goes up, or at least stays the same. Let’s quickly check it with some numbers.
\(a\)     \(b\)     \(P(B|I)\)
0.100   0.100   0.110
0.100   0.500   0.526
0.100   0.900   0.909
0.500   0.100   0.182
0.500   0.500   0.667
0.500   0.900   0.947
0.900   0.100   0.526
0.900   0.500   0.909
0.900   0.900   0.989

These look pretty plausible. Our beliefs do not seem to change to an unrealistic degree, but we have provided significant evidence in favour of \(B\). But as a good Bayesian reasoner, you shouldn't assign probabilities 0 or 1 to things. Certainty is poisonous to good probability updates. So when I came along and convinced you that \(A \implies B\), you really shouldn't have believed me completely. Instead you should have assigned some probability \(r\) to it. So what happens now? Well, we know the probability of \(B\) given \(I\), but what is the probability given \(\neg I\)? Well, \(\neg I = \neg (A \implies B) = A \wedge \neg B\), so \(P(B|\neg I) = 0\): the implication can only be false if \(B\) is (because everything implies a true statement). This means that your posterior probability for \(B\) should be \(r P(B|I)\). So \(r\) is essentially a factor slowing your update process. Note that because your posterior belief in \(B\) is \(b \frac{r}{P(I)}\), as long as my claim that \(A \implies B\) is at least as convincing as your prior belief in it (i.e. \(r \geq P(I)\)), my argument will increase your belief in \(B\). Now, let's suppose that you are in fact entirely convinced beforehand that \(A\) and that \(\neg B\), and my argument entirely convinces you that \(A \implies B\). Of course, we don't believe in certainty; things you are entirely convinced of may prove to be false. Suppose now that in the past you have noticed that when you're entirely convinced of something, you're right with about probability \(p\). Let's be over-optimistic and say that \(p\) is somewhere in the 0.9 range. What should your posterior probability for \(B\) now be? We have \(b = 1 - p\) and \(a = r = p\).
Then your posterior probability for \(B\) is \(r P(B | I) = p \frac{1 - p}{1 - p(1 - (1 - p))} = p \frac{1 - p}{1 - p^2} = \frac{p}{p+1} = 1 - \frac{1}{p+1}\). You know what the interesting thing about this is? The interesting thing is that it's always less than half. A perfectly convincing argument that a thing you completely believe in implies a thing you completely disbelieve in should never do more than create a state of complete uncertainty in your mind. It turns out that reasonable degrees of certainty get pretty close to that too. If you're right about things you're certain about with probability 0.9, then your posterior probability for \(B\) should be 0.47. If you're only right with probability 0.7, then it should be 0.41. Of course, if you're only right that often, then 0.41 isn't all that far from your threshold for certainty in the negative result. In conclusion: if you believe \(A\) and not \(B\), and I convince you that \(A \implies B\), you should not now go away and believe \(B\). Instead you should be confused, with a bias towards still assuming not \(B\), until you've resolved this. Now, let's go one step further to our original example. We are instead arguing about \(C\), and my argument proceeds via an intermediary \(B\). Your prior is that \(A\), \(B\) and \(C\) are all independent. You are certain that \(A\), certain that \(\neg C\), and have no opinion on \(B\) (i.e. you believe it with probability \(\frac{1}{2}\)). I now provide you with a \(p\)-convincing argument that \(A \implies B\). What is your posterior probability for \(B\)? Well, plugging it into our previous formula we get \(b' = p \frac{b}{1 - p(1 - b)} = \frac{p}{2 - p}\). Again, checking against some numbers: if \(p = 0.9\) then \(b' \approx 0.82\), which seems reasonable. Suppose now that I provide you \(p\)-convincing evidence that \(B \implies C\). What's your posterior for \(C\)?
Well, again with the previous formula, only replacing \(a\) with \(b'\) and \(b\) with \(c\), we have \[\begin{align*} c' &= \frac{p c}{1 - b'(1 - c)} \\ &= \frac{p(1-p)}{1 - \frac{p^2}{2 - p}} \\ &= \frac{p(1-p)(2 - p)}{2 - p - p^2}\\ \end{align*}\] This isn't a nice formula, but we can plug numbers in. Suppose your certainties are 0.9. Then your posterior is \(c' \approx 0.34\). You're no longer certain that \(C\) is false, but you're still pretty convinced, despite the fact that I've just presented you with an apparently water-tight argument to the contrary. This result is pretty robust with respect to your degree of certainty, too. Since \(2 - p - p^2 = (2 + p)(1 - p)\), the formula simplifies to \(c' = \frac{p(2 - p)}{2 + p}\), so as \(p \to 1\) this tends to \(\frac{1}{3}\), and for \(p = \frac{1}{2}\) (i.e. you're wrong half the time when you're certain!) we get \(c' = 0.3\). In conclusion: an apparently water-tight logical argument that goes from a single premise you believe in to a premise you disbelieve in, via something you have no opinion on, should not substantially update your beliefs, even if it casts some doubt on them. Of course, if you're a Bayesian reasoner, this post is an argument that starts from a premise you believe in, goes via something you have no opinion on, and concludes something you likely don't believe in. Therefore it shouldn't change your beliefs very much.
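As a postscript, the updates above are easy to reproduce in a few lines of Python. This is a quick sketch of the formulas derived in the post; the function and variable names are mine:

```python
def update(b: float, a: float, r: float) -> float:
    """Posterior for B, given r-convincing evidence that A implies B.

    a and b are the prior probabilities of A and B. With r = 1 this is
    P(B|I) = b / (1 - a*(1 - b)); in general the posterior is r * P(B|I).
    """
    return r * b / (1 - a * (1 - b))

# A couple of rows from the table (r = 1, i.e. a fully accepted implication):
assert abs(update(0.1, 0.1, 1) - 0.110) < 5e-4
assert abs(update(0.5, 0.9, 1) - 0.909) < 5e-4

# Certain of A and of not-B, p-convincing argument: posterior is p/(p+1) < 1/2.
p = 0.9
assert abs(update(1 - p, p, p) - p / (p + 1)) < 1e-12

# The chained argument about C: b' = p/(2-p), then c' ≈ 0.34 for p = 0.9.
b_new = update(0.5, p, p)        # belief in B after the first argument
c_new = update(1 - p, b_new, p)  # belief in C after the second
```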
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

Measurement of electrons from beauty hadron decays in pp collisions at √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ...

Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...

Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...

Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...

Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
Yeah, this software cannot be too easy to install; my installer is very professional looking, currently not tied into that code, but it directs the user how to search for their MikTeX and/or install it and does a test LaTeX rendering. Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on tangents. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^{\infty}(\cos\dots)$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway, here's some food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane? $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
An arithmetic series has a constant difference between consecutive terms. For example: 3 + 7 + 11 + 15 + ⋯ + 99. We name the first term \(a_{1}\), the common difference \(d\), and the number of terms in the series \(n\). We can find the sum of an arithmetic series by multiplying the number of terms by the average of the first and last terms. The formula for the sum of the terms of the arithmetic series \(\large x_{1}+x_{2}+x_{3}+\dots+x_{n}=\sum_{i=1}^{n}x_{i}\) is given as: \(\large Sum=n\left(\frac{a_{1}+a_{n}}{2}\right)\) or \(\large \frac{n}{2}\left[2a_{1}+\left(n-1\right)d\right]\) Solved Example Example: 3 + 7 + 11 + 15 + ⋯ + 99 has \(a_{1} = 3\) and \(d = 4\). Find \(n\), using the explicit formula for an arithmetic sequence. Solution: We solve \(3 + (n - 1) \times 4 = 99\) to get \(n = 25\). $Sum=25\left(\frac{3+99}{2}\right)=1275$ $Sum=\frac{25}{2}\left[2\cdot 3+\left(25-1\right)\cdot 4\right]=1275$
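The two closed forms can be checked against each other in a few lines of Python. This is an illustrative sketch; the function name is mine:

```python
def arithmetic_series_sum(a1: int, d: int, an: int) -> tuple[int, int]:
    """Return (n, sum) for the arithmetic series a1, a1+d, ..., an."""
    n = (an - a1) // d + 1      # solve an = a1 + (n - 1) * d for n
    total = n * (a1 + an) // 2  # Sum = n * (a1 + an) / 2
    # The alternative form (n/2) * [2*a1 + (n - 1)*d] must agree:
    assert total == n * (2 * a1 + (n - 1) * d) // 2
    return n, total

print(arithmetic_series_sum(3, 4, 99))  # the worked example → (25, 1275)
```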
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
The Poisson Random Variable The Poisson random variable is a discrete random variable that counts the number of times a certain event will occur in a specific interval. Learning Objectives Apply the Poisson random variable to fields outside of mathematics Key Takeaways Key Points The Poisson distribution predicts the degree of spread around a known average rate of occurrence. The distribution was first introduced by Siméon Denis Poisson (1781–1840) and published, together with his probability theory, in his work “Research on the Probability of Judgments in Criminal and Civil Matters” (1837). The Poisson random variable is the number of successes that result from a Poisson experiment. Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula: P(x; μ) = (e^−μ)(μ^x) / x!. Key Terms factorial: The result of multiplying a given number of consecutive integers from 1 to the given number. In equations, it is symbolized by an exclamation mark (!). For example, 5! = 1 * 2 * 3 * 4 * 5 = 120. Poisson distribution: A discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since the last event. disjoint: having no members in common; having an intersection equal to the empty set. The Poisson Distribution and Its History The Poisson distribution is a discrete probability distribution. It expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area, or volume. For example: Let’s suppose that, on average, a person typically receives four pieces of mail per day.
There will be a certain spread—sometimes a little more, sometimes a little less, once in a while nothing at all. Given only the average rate for a certain period of observation (i.e., pieces of mail per day, phone calls per hour, etc.), and assuming that the process that produces the event flow is essentially random, the Poisson distribution specifies how likely it is that the count will be 3, 5, 10, or any other number during one period of observation. It predicts the degree of spread around a known average rate of occurrence. The distribution was first introduced by Siméon Denis Poisson (1781–1840) and published, together with his probability theory, in 1837 in his work Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile (“Research on the Probability of Judgments in Criminal and Civil Matters”). The work focused on certain random variables N that count, among other things, the number of discrete occurrences (sometimes called “events” or “arrivals”) that take place during a time interval of given length. Properties of the Poisson Random Variable A Poisson experiment is a statistical experiment that has the following properties: The experiment results in outcomes that can be classified as successes or failures. The average number of successes (μ) that occurs in a specified region is known. The probability that a success will occur is proportional to the size of the region. The probability that a success will occur in an extremely small region is virtually zero. Note that the specified region could take many forms: a length, an area, a volume, a period of time, etc. The Poisson random variable, then, is the number of successes that result from a Poisson experiment, and the probability distribution of a Poisson random variable is called a Poisson distribution.
Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula: [latex]\text{P}(\text{x}; \mu ) = ((\text{e}^{-\mu }) (\mu ^\text{x})) / \text{x}![/latex], where: e = a constant equal to approximately 2.71828 (actually, e is the base of the natural logarithm system); μ = the mean number of successes that occur in a specified region; x = the actual number of successes that occur in a specified region; P(x; μ) = the Poisson probability that exactly x successes occur in a Poisson experiment, when the mean number of successes is μ; and x! = the factorial of x. The Poisson random variable satisfies the following conditions: The number of successes in two disjoint time intervals is independent. The probability of a success during a small time interval is proportional to the entire length of the time interval. The mean of the Poisson distribution is equal to μ. The variance is also equal to μ. Apart from disjoint time intervals, the Poisson random variable also applies to disjoint regions of space. Example The average number of homes sold by the Acme Realty company is 2 homes per day. What is the probability that exactly 3 homes will be sold tomorrow? This is a Poisson experiment in which we know the following: μ = 2, since 2 homes are sold per day, on average; x = 3, since we want to find the likelihood that 3 homes will be sold tomorrow; e = 2.71828, since e is a constant equal to approximately 2.71828. We plug these values into the Poisson formula as follows: [latex]\text{P}(\text{x}; \mu ) = ((\text{e}^{-\mu }) (\mu ^\text{x})) / \text{x}![/latex] [latex]\text{P}(3; 2) = ((2.71828^{-2}) (2^3)) / 3![/latex] [latex]\text{P}(3; 2) = ((0.13534) (8)) / 6[/latex] [latex]\text{P}(3; 2) = 0.180[/latex] Thus, the probability of selling 3 homes tomorrow is 0.180.
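The worked example above is easy to reproduce in code. This is a quick sketch using only Python's standard library; the function name is mine:

```python
import math

def poisson_pmf(x: int, mu: float) -> float:
    """P(x; mu) = e^(-mu) * mu^x / x!"""
    return math.exp(-mu) * mu**x / math.factorial(x)

# Acme Realty example: mean of 2 homes/day, probability of exactly 3 sales.
assert abs(poisson_pmf(3, 2) - 0.180) < 1e-3
```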
Applications of the Poisson Random Variable Applications of the Poisson distribution can be found in many fields related to counting: electrical system example: telephone calls arriving in a system astronomy example: photons arriving at a telescope biology example: the number of mutations on a strand of DNA per unit length management example: customers arriving at a counter or call center civil engineering example: cars arriving at a traffic light finance and insurance example: number of losses/claims occurring in a given period of time Examples of events that may be modelled as a Poisson distribution include: the number of soldiers killed by horse-kicks each year in each corps in the Prussian cavalry (this example was made famous by a book of Ladislaus Josephovich Bortkiewicz (1868–1931)); the number of yeast cells used when brewing Guinness beer (this example was made famous by William Sealy Gosset (1876–1937)); the number of goals in sports involving two competing teams; the number of deaths per year in a given age group; and the number of jumps in a stock price in a given time interval. The Hypergeometric Random Variable A hypergeometric random variable is a discrete random variable characterized by a fixed number of trials with differing probabilities of success. Learning Objectives Contrast hypergeometric distribution and binomial distribution Key Takeaways Key Points The hypergeometric distribution applies to sampling without replacement from a finite population whose elements can be classified into two mutually exclusive categories like pass/fail, male/female or employed/unemployed. As random selections are made from the population, each subsequent draw decreases the population causing the probability of success to change with each draw. It is in contrast to the binomial distribution, which describes the probability of [latex]\text{k}[/latex] successes in [latex]\text{n}[/latex] draws with replacement.
Key Terms binomial distribution: the discrete probability distribution of the number of successes in a sequence of $n$ independent yes/no experiments, each of which yields success with probability $p$ Bernoulli Trial: an experiment whose outcome is random and can be either of two possible outcomes, “success” or “failure” hypergeometric distribution: a discrete probability distribution that describes the number of successes in a sequence of $n$ draws from a finite population without replacement The hypergeometric distribution is a discrete probability distribution that describes the probability of [latex]\text{k}[/latex] successes in [latex]\text{n}[/latex] draws without replacement from a finite population of size [latex]\text{N}[/latex] containing a maximum of [latex]\text{K}[/latex] successes. This is in contrast to the binomial distribution, which describes the probability of [latex]\text{k}[/latex] successes in [latex]\text{n}[/latex] draws with replacement. The hypergeometric distribution applies to sampling without replacement from a finite population whose elements can be classified into two mutually exclusive categories like pass/fail, male/female or employed/unemployed. As random selections are made from the population, each subsequent draw decreases the population causing the probability of success to change with each draw. The following conditions characterize the hypergeometric distribution: The result of each draw can be classified into one or two categories. The probability of a success changes on each draw. 
A random variable follows the hypergeometric distribution if its probability mass function is given by: [latex]\displaystyle \text{P}(\text{X}=\text{k}) = \frac{{{\text{K}}\choose{\text{k}}}{{\text{N}-\text{K}}\choose{\text{n}-\text{k}}}}{{{\text{N}}\choose{\text{n}}}}[/latex] Where: [latex]\text{N}[/latex] is the population size, [latex]\text{K}[/latex] is the number of success states in the population, [latex]\text{n}[/latex] is the number of draws, [latex]\text{k}[/latex] is the number of successes, and [latex]\displaystyle {{\text{a}}\choose{\text{b}}}[/latex] is a binomial coefficient. A hypergeometric probability distribution is the outcome resulting from a hypergeometric experiment. The characteristics of a hypergeometric experiment are: You take samples from 2 groups. You are concerned with a group of interest, called the first group. You sample without replacement from the combined groups. For example, you want to choose a softball team from a combined group of 11 men and 13 women. The team consists of 10 players. Each pick is not independent, since sampling is without replacement. In the softball example, the probability of picking a woman first is [latex]\frac{13}{24}[/latex]. The probability of picking a man second is [latex]\frac{11}{23}[/latex], if a woman was picked first. It is [latex]\frac{10}{23}[/latex] if a man was picked first. The probability of the second pick depends on what happened in the first pick. You are not dealing with Bernoulli Trials.
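The softball example can be worked through with the pmf directly. This is a sketch using Python's `math.comb`, with the numbers from the example above; the function name is mine:

```python
from math import comb

def hypergeom_pmf(k: int, N: int, K: int, n: int) -> float:
    """P(X = k): k successes in n draws without replacement, K of N are successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# A 10-player team drawn from 13 women and 11 men (N = 24, K = 13, n = 10).
N, K, n = 24, 13, 10
probs = [hypergeom_pmf(k, N, K, n) for k in range(n + 1)]
assert abs(sum(probs) - 1.0) < 1e-12     # the pmf sums to 1
mean = sum(k * p for k, p in enumerate(probs))
assert abs(mean - n * K / N) < 1e-9      # E[X] = n * K / N, as for sampling proportions
```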