Approach
My question asks whether or not the elapsed proper time between meetings is always less for A than it is for B, for every possible radius of A's orbit.
One way to answer this would be to calculate the ratio of the two elapsed proper times as in reference [1] cited by user m4r35n357, but my question is less demanding: I only want to know which of the two elapsed proper times is smaller. This less-demanding question can be answered using a less-demanding approach.
Start with the proper-time equation shown in the question, hereafter called equation (1), and consider any worldline of the form$$ r(t) = r_A\hskip2cm \phi(t) = \omega t\tag{2}$$where $r_A$ and $\omega$ are constants, meaning that they are independent of the coordinate $t$. Object A's circular orbit has the form (2) for a special value of $\omega$ that we won't need here except for the fact that it's not zero. Now, consider one cycle of A's orbit. The second of equations (2) says that the elapsed
coordinate time during one cycle is $$ \Delta t=\frac{2\pi}{\omega}\tag{3}$$According to equation (1) in the question, the corresponding elapsed proper time is$$ \Delta \tau_A = \left(1-\frac{R}{r_A}-r_A^2\omega^2\right)^{1/2} \,\Delta t.\tag{4}$$Now let $\Delta\tau_0$ denote the elapsed proper time for an object that hovers at $r=r_A$ and $\phi=0$. This object meets A once per orbit. Equation (4) gives the elapsed proper time between meetings according to A. The elapsed proper time between meetings according to the hovering object is$$ \Delta \tau_0 = \left(1-\frac{R}{r_A}\right)^{1/2}\,\Delta t\tag{5}$$with the same value (3) of $\Delta t$. (The elapsed coordinate time depends only on the two meeting-events, not on the worldline that connects them.) Equations (4)-(5) give$$ \Delta\tau_A < \Delta\tau_0.\tag{6}$$This doesn't answer the question yet, because the hovering object is not the one we want. We want an object B that is in radial free-fall, meeting A twice with coordinate time interval (3) between the two meetings.
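As a quick numerical check of inequality (6), equations (4) and (5) can be evaluated directly. The sketch below is my own check, not part of the original argument: it uses geometrized units with $G=c=1$, takes $R$ as the Schwarzschild radius, and assumes A moves on a circular geodesic, for which $\omega^2 = R/2r_A^3$.

```python
import math

def proper_time_ratios(R=1.0, r_A=4.0):
    """Elapsed proper times per unit coordinate time Delta t.

    Units with G = c = 1; R is the Schwarzschild radius.
    Assumes A is on a circular geodesic, for which omega^2 = R/(2 r_A^3).
    """
    omega2 = R / (2.0 * r_A**3)
    dtau_A = math.sqrt(1 - R / r_A - r_A**2 * omega2)  # eq. (4), per unit Delta t
    dtau_0 = math.sqrt(1 - R / r_A)                    # eq. (5), per unit Delta t
    return dtau_A, dtau_0

for r_A in (3.1, 5.0, 50.0, 5000.0):
    dtau_A, dtau_0 = proper_time_ratios(r_A=r_A)
    assert dtau_A < dtau_0   # inequality (6)
```

Under this assumption the ratio is $\Delta\tau_A/\Delta\tau_0=\sqrt{1-3R/2r_A}\,\big/\sqrt{1-R/r_A}$, which is below 1 for every $r_A>\frac{3}{2}R$, consistent with (6).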
Let $\Delta\tau_B$ denote the elapsed proper time between meetings according to the radially-falling object B. The goal is to prove that $$ \Delta\tau_0 < \Delta\tau_B.\tag{7}$$If we can prove (7), then combining the inequalities (6)-(7) will answer the question: the orbiting object A ages less than B between meetings, regardless of the orbital radius $r_A$.
How to prove (7)
Object B is in free-fall (as is A). In other words, the worldline of object B is a geodesic. To prove (7), I will use this fact:
Among worldlines with the given endpoints, a geodesic gives a local maximum of the elapsed proper time.
A geodesic doesn't necessarily give a global maximum (that's the point of the question), but it does give a local maximum. To exploit this, we can continuously deform the hovering worldline into B's worldline in such a way that the elapsed proper time continuously increases by virtue of this local-maximum principle.
Start with the hovering worldline. The coordinate-times of the two meetings with object A are $t=0$ and $t=\Delta t$, where $\Delta t$ is given by equation (3). Write $[a,b]$ for the coordinate-time interval from $a$ to $b$. Choose an intermediate time $T$, with $T\in[0,\Delta t]$, and consider the worldline $W_T$ defined like this:
During the interval $t\in[0,T]$, the worldline $W_T$ is a radial geodesic that starts and ends at $r=r_A$ (at times $0$ and $T$, respectively).
During the interval $t\in[T,\Delta t]$, the worldline $W_T$ is that of an object hovering at $r=r_A$.
The angular coordinate is $\phi=0$ during both segments, so the motion (if any) is purely radial. For any value of $T$, the worldline $W_T$ meets A twice, namely at times $t=0$ and $t=\Delta t$. By varying $T$ continuously from $0$ to $\Delta t$, the worldline $W_T$ varies continuously from that of an always-hovering object to that of an object that is in radial free fall during the whole interval between meetings. The latter case, namely $W_{T=\Delta t}$, corresponds to object B in the question.
From here, the proof is easy. The quantity $\Delta\tau_0$ in the inequality (6) corresponds to the worldline $W_{T=0}$. For any $T$, the worldline $W_{T}$ is a geodesic at times $t\in[0, T]$. For infinitesimal $\delta T$, the worldline $W_{T-\delta T}$ is not a geodesic during part of the interval $[0,T]$, and it differs only infinitesimally from $W_T$, which is a geodesic during all of that same interval. Therefore, the elapsed proper time for $W_T$ in the interval $[0,T]$ is greater than the elapsed proper time for $W_{T-\delta T}$ in the same interval $[0,T]$, because a geodesic locally maximizes the proper time. Since the two worldlines are identical to each other during the remaining interval $[T,\Delta t]$, we infer that the elapsed proper time for $W_T$ in the full interval $[0,\Delta t]$ is greater than the elapsed proper time for $W_{T-\delta T}$ in $[0,\Delta t]$. This is true for all $T$, so the elapsed proper time for B's worldline $W_{\Delta t}$ is greater than the elapsed proper time for the hovering worldline $W_0$. This gives the inequality (7), and combining this with (6) gives $\Delta \tau_A<\Delta \tau_B$, for arbitrary $r_A$. This answers the question.
Answer: Between meetings, the orbiting object A ages less than the radially-falling object B, regardless of the orbital radius $r_A$. This conclusion is consistent with the result of the more laborious calculations in [1], where the ratio of the elapsed proper times is calculated quantitatively.
Comments
This approach does not assume that A's worldline is a geodesic, only that A's worldline follows a circular path. In other words, the approach works even if A is moving faster or slower than orbital speed. However, the question is most interesting when A's worldline is a geodesic, because then it illustrates the fact that geodesics don't necessarily maximize the proper time globally, even though they do locally.
This approach was inspired by the worldline-deformation arguments used in Witten's excellent review [2], which also mentions a variant of the scenario I described here.
References:
[1] "Proper Time for Intersecting Orbits", https://www.mathpages.com/rr/s6-05/6-05.htm
[2] Witten (2019), "Light rays, singularities, and all that," https://arxiv.org/abs/1901.03928
|
Crossing Bridge in Crowds Problem
Solution
If there is nobody on the bridge at noon, no one has entered it in the five-minute interval before noon. Since there are $144$ intervals of $5$ minutes in $12$ hours, the probability that an individual enters the bridge in a specific one is $\displaystyle\frac{1}{144}.$ The probability that none of the $1000$ individuals enters in that interval is
$\displaystyle\left(1-\frac{1}{144}\right)^{1000}=\left[\left(1-\frac{1}{144}\right)^{144}\right]^{\frac{1000}{144}}\approx e^{-\frac{125}{18}}\approx 0.00096.$
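A two-line check of this arithmetic (the exact value is about $0.00094$; the exponential approximation overshoots it by roughly 2%):

```python
import math

p_exact = (1 - 1/144) ** 1000   # exact probability that no one entered
p_approx = math.exp(-1000/144)  # the e^(-125/18) approximation
assert 0.0009 < p_exact < 0.001
assert abs(p_approx - p_exact) / p_exact < 0.03
```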
Acknowledgment
This is problem 68 from the Canadian Crux Mathematicorum (v1, 1975). The problem is by E.G. Dworschak, the solution is by G.D. Kaye.
|
If I take a series RLC circuit connected to an AC voltage source, the impedance is minimized when $\omega = \frac{1}{\sqrt{LC}}$.
I also know that the series RLC circuit is analogous to a damped driven harmonic oscillator. However, the resonant frequency of a damped driven harmonic oscillator is reduced due to the damping. It is given by $\omega = \sqrt{\omega_0^2 - \gamma^2}$ where $\gamma$ is a damping parameter.
I am unable to see why the analogy fails here: How come the RLC circuit's resonant frequency has no dependence on $R$ but the harmonic oscillator does?
By definition, the resonant frequency is the frequency of the driving force (or voltage) that maximizes the amplitude (or current).
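To make the contrast concrete, here is a small numerical sketch (component values are made up). The current amplitude $V/|Z|$ peaks at $\omega_0=1/\sqrt{LC}$ for every $R$, while the charge amplitude $|q|=|I|/\omega$, the analogue of the oscillator's displacement, peaks at the damping-shifted frequency $\sqrt{\omega_0^2-R^2/2L^2}$ (from $L\ddot q+R\dot q+q/C=V$):

```python
import math

L_, C_, V = 1e-3, 1e-6, 1.0
w0 = 1.0 / math.sqrt(L_ * C_)

def I_amp(w, R):
    # current amplitude V/|Z| for a series RLC circuit
    return V / math.hypot(R, w * L_ - 1.0 / (w * C_))

ws = [w0 * (0.5 + 0.001 * k) for k in range(1001)]   # sweep 0.5*w0 .. 1.5*w0
for R in (1.0, 10.0, 50.0):
    w_peak_I = max(ws, key=lambda w: I_amp(w, R))
    assert abs(w_peak_I - w0) / w0 < 2e-3            # current peaks at w0, any R

R = 10.0
w_peak_q = max(ws, key=lambda w: I_amp(w, R) / w)    # charge amplitude |I|/w
w_shift = math.sqrt(w0**2 - R**2 / (2 * L_**2))      # damping-shifted frequency
assert abs(w_peak_q - w_shift) / w0 < 2e-3           # R-dependent peak below w0
```

So the analogy does not fail: current corresponds to velocity, whose resonance is at $\omega_0$ in the mechanical case too; the shifted resonance belongs to displacement, i.e. to charge.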
|
Let $(a_n)$ be any sequence and let $b_n=n(a_n-a_{n+1})$.
Prove that if $\sum a_n$ and $\sum b_n$ converge, then $\lim_{n\to \infty}na_n=0$ and $\sum a_n= \sum b_n$.
The second part, assuming the first part, I've shown. I'm having trouble showing $\lim_{n\to \infty}na_n=0$. I know that $\lim a_n=0$ and $\lim n(a_n-a_{n+1})=0$. How can I use these facts to show the first one?
I would greatly appreciate any help.
From the answers below I got that $\lim na_n$ exists. So I tried to show that the limit is $0$ by assuming that it is not.
First, assume that the limit $l \gt 0$. Then $\liminf na_n=l$. Hence for $l/2 \gt 0$, there is some $N$ such that for $n \ge N$, we have $na_n \gt l/2$. This implies that for $n \ge N$, $a_n \gt l/(2n)$. But this is a contradiction since we assumed that $\sum a_n$ converges.
Finally, assume that $l \lt 0$. Then $\limsup na_n=l$. So for $-l/2$, there is some $N$ such that if $n\ge N$ then $na_n\lt l/2$. This implies that $-a_n\gt -l/(2n)\gt 0$. Thus, again by comparison, we get that $-\sum a_n$ diverges, which is a contradiction.
Hence the limit must be $0$.
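One way to see what is going on numerically: summation by parts gives $\sum_{n\le N} b_n = \sum_{n\le N} a_n - Na_{N+1}$, so the two partial sums differ by exactly $Na_{N+1}$. A quick illustration with the (arbitrarily chosen) example $a_n=1/n^2$:

```python
# Sanity check with a_n = 1/n^2: both series converge, n*a_n -> 0, and the
# partial sums of b_n = n(a_n - a_{n+1}) approach the same limit as those of a_n.
N = 200_000

def a(n):
    return 1.0 / n**2

sum_a = sum(a(n) for n in range(1, N + 1))
sum_b = sum(n * (a(n) - a(n + 1)) for n in range(1, N + 1))
# By summation by parts, sum_b - sum_a = -N * a(N+1), which tends to 0.
assert abs(sum_b - sum_a) < 1e-4
assert N * a(N) < 1e-4
```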
|
This is a question in the theory of random dynamical systems.
Let $(X,d)$ be a compact metric space, let $(I,\mathcal{I},\nu)$ be a probability space, and let $(f_\alpha)_{\alpha \in I}$ be an $I$-indexed family of continuous functions $f_\alpha:X \to X$ such that the map $(\alpha,x) \mapsto f_\alpha(x)$ is jointly measurable. Suppose we have a measurable function $a:I^\mathbb{N} \to X$ with the property that for $\nu^{\otimes \mathbb{N}}$-almost every sequence $(\alpha_1,\alpha_2,\alpha_3,\ldots) \in I^\mathbb{N}$,
$\ \ \ \ f_{\alpha_1}(a(\alpha_2,\alpha_3,\ldots)) \ = \ a(\alpha_1,\alpha_2,\alpha_3,\ldots)$.
Let $\rho$ be the image measure of $\nu^{\otimes \mathbb{N}}$ under $a$.
Is it necessarily the case that for $(\rho \otimes \rho \otimes \nu^{\otimes \mathbb{N}})$-almost every $(x,y,\alpha_1,\alpha_2,\ldots) \in X \times X \times I^\mathbb{N}$,
$\ \ d(f_{\alpha_n} \circ \ldots \circ f_{\alpha_1}(x),f_{\alpha_n} \circ \ldots \circ f_{\alpha_1}(y)) \to 0 \ \textrm{as} \ n \to \infty\,$?
[The papers which I have seen never seem to work on the basis that the answer is "yes" - but I don't know if this is because there are known counterexamples, or if this is because the answer is not known.]
Remark: It is known that for $\nu^{\otimes \mathbb{N}}$-almost all $\omega \! = \! (\alpha_1,\alpha_2,\alpha_3,\ldots) \in I^\mathbb{N}$, for every $\varepsilon>0$, if we let $B_{\omega,\varepsilon}$ denote the ball of radius $\varepsilon$ about $a(\omega)$ then
$\ \ \ \rho((f_{\alpha_1} \circ \ldots \circ f_{\alpha_n})^{-1}(B_{\omega,\varepsilon})) \to 1 \ \textrm{as} \ n \to \infty$.
As a result, it is known that for every $\varepsilon>0$,
$\ \ \rho \otimes \rho \otimes \nu^{\otimes n}( \, (x,y,\alpha_1,\ldots,\alpha_n) \, : \, d(f_{\alpha_n} \circ \ldots \circ f_{\alpha_1}(x),f_{\alpha_n} \circ \ldots \circ f_{\alpha_1}(y)) < \varepsilon \, ) \, \to \, 1$
as $n \to \infty$.
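For intuition, here is a toy sketch of the trivially affirmative case: when every $f_\alpha$ is a uniform contraction, random compositions synchronize at a deterministic rate, regardless of the driving sequence. (The maps below are my own invented example; the question, of course, concerns what happens without uniform contraction.)

```python
import random

# Toy case: f_alpha(x) = (x + alpha)/2 with alpha drawn from {0, 1}.
# Each map contracts distances by 1/2, so for ANY driving sequence,
# d(f_{a_n} o ... o f_{a_1}(x), f_{a_n} o ... o f_{a_1}(y)) = |x - y| / 2^n -> 0.
random.seed(0)

def f(alpha, x):
    return (x + alpha) / 2.0

x, y = 0.0, 1.0
for _ in range(40):
    alpha = random.choice((0, 1))
    x, y = f(alpha, x), f(alpha, y)
assert abs(x - y) < 1e-10
```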
|
Welcome to TeX SE!
The key issue is that ebgaramond-maths issues the following:
\DeclareSymbolFont{letters} {OML} {EBGaramond-Maths} {m} {it}
This overwrites the existing letters font. This enables all of the characters which the font does provide and which are used in the OML encoding. However, it does this by telling LaTeX that whenever a maths symbol uses the letters font, it should use OML/EBGaramond-Maths/m/it. So LaTeX does this whether the symbol exists in the font or not. It cannot tell, essentially, one way or another.
To make this a bit clearer, it is helpful to look at the definition of \partial from fontmath.ltx:
\DeclareMathSymbol{\partial}{\mathord}{letters}{"40}
This tells LaTeX that \partial corresponds to the character in the relevant slot in the letters font. So before ebgaramond-maths is loaded, this symbol will be taken from whichever font is currently configured to provide letters (Computer Modern by default, or whatever newtxmath is configured to use). After ebgaramond-maths is loaded, the symbol will be taken from whichever font is currently configured to provide letters. But now that font is EBGaramond-Maths which, as we know, does not provide the symbol. What it provides is an empty slot - a blank.
The point of loading newtxmath is for the symbols it provides in other encodings (i.e. not OML), because these complement EBGaramond better than the defaults. Hence, ebgaramond-maths does not 'cancel out' what newtxmath does. Rather, it overrides one part of what newtxmath does.
In fact, ebgaramond-maths knocks out symbols from the default fonts, too. (You can still reproduce the problem if you comment out the line loading newtxmath.)
So the sequence is as follows:
Default maths fonts are set up by LaTeX. (In particular, letters is defined.)
newtxmath is loaded and overwrites large parts of the default configuration for maths fonts. (In particular, letters is redefined.)
ebgaramond-maths is loaded and overwrites a part of the previous configuration for maths fonts. (In particular, letters is redefined.)
There are alternative approaches but I think that this is the most practical and least incompatible one. Please see the question I asked about this and David Carlisle's response for the background to my decision.
[I am the package author.]
Redefining missing symbols
Note that you must do this in the PREAMBLE but AFTER loading ebgaramond-maths!
If you try to use one of the missing symbols, the package now issues an error. If you ask for help, it tells you how to define the missing symbols using the \re@DeclareMathSymbol command from newtxmath.
Here is an example of how to do this using one of the options for letters from newtxmath. Here, a new maths font, ntxletters, is set up and the missing symbols are defined using it. \makeatletter...\makeatother is required as the redeclaration command includes the @ symbol.
\makeatletter
\DeclareSymbolFont{ntxletters}{OML}{ntxmi}{m}{it}
\SetSymbolFont{ntxletters}{bold}{OML}{ntxmi}{b}{it}
\re@DeclareMathSymbol{\leftharpoonup}{\mathrel}{ntxletters}{"28}
\re@DeclareMathSymbol{\leftharpoondown}{\mathrel}{ntxletters}{"29}
\re@DeclareMathSymbol{\rightharpoonup}{\mathrel}{ntxletters}{"2A}
\re@DeclareMathSymbol{\rightharpoondown}{\mathrel}{ntxletters}{"2B}
\re@DeclareMathSymbol{\triangleleft}{\mathbin}{ntxletters}{"2F}
\re@DeclareMathSymbol{\triangleright}{\mathbin}{ntxletters}{"2E}
\re@DeclareMathSymbol{\partial}{\mathord}{ntxletters}{"40}
\re@DeclareMathSymbol{\flat}{\mathord}{ntxletters}{"5B}
\re@DeclareMathSymbol{\natural}{\mathord}{ntxletters}{"5C}
\re@DeclareMathSymbol{\star}{\mathbin}{ntxletters}{"3F}
\re@DeclareMathSymbol{\smile}{\mathrel}{ntxletters}{"5E}
\re@DeclareMathSymbol{\frown}{\mathrel}{ntxletters}{"5F}
\re@DeclareMathSymbol{\sharp}{\mathord}{ntxletters}{"5D}
\re@DeclareMathAccent{\vec}{\mathord}{ntxletters}{"7E}
\makeatother
|
Wavepackets in inhomogeneous periodic media: effective particle-field dynamics and Berry curvature
Date: 2017-04-23
Abstract
We consider a model of an electron in a crystal moving under the influence of an external electric field: Schr\"{o}dinger's equation with a potential which is the sum of a periodic function and a general smooth function. We identify two dimensionless parameters: (re-scaled) Planck's constant and the ratio of the lattice spacing to the scale of variation of the external potential. We consider the special case where both parameters are equal and denote this parameter $\epsilon$. In the limit $\epsilon \downarrow 0$, we prove the existence of solutions known as semiclassical wavepackets which are asymptotic up to `Ehrenfest time' $t \sim \ln 1/\epsilon$. To leading order, the center of mass and average quasi-momentum of these solutions evolve along trajectories generated by the classical Hamiltonian given by the sum of the Bloch band energy and the external potential. We then derive all corrections to the evolution of these observables proportional to $\epsilon$. The corrections depend on the gauge-invariant Berry curvature of the Bloch band, and a coupling to the evolution of the wave-packet envelope which satisfies Schr\"{o}dinger's equation with a time-dependent harmonic oscillator Hamiltonian. This infinite dimensional coupled `particle-field' system may be derived from an `extended' $\epsilon$-dependent Hamiltonian. It is known that such coupling of observables (discrete particle-like degrees of freedom) to the wave-envelope (continuum field-like degrees of freedom) can have a significant impact on the overall dynamics.
Type: Journal article
Permalink: http://hdl.handle.net/10161/14060
Published Version (please cite this version): 10.1063/1.4976200
Publication Info: Lu, Jianfeng; Watson, AB; & Weinstein, Michael I (2017). Wavepackets in inhomogeneous periodic media: effective particle-field dynamics and Berry curvature. 10.1063/1.4976200. Retrieved from http://hdl.handle.net/10161/14060.
This is constructed from limited available data and may be imprecise. To cite this article, please review & use the official citation provided by the journal.
Associate Professor of Mathematics
Jianfeng Lu is an applied mathematician interested in mathematical analysis and algorithm development for problems from computational physics, theoretical chemistry, materials science and other related fields. More specifically, his current research focuses on: electronic structure and many-body problems; quantum molecular dynamics; multiscale modeling and analysis; rare events and sampling techniques.
|
I refer to hand-waving a lot in this post. That is not to say that it was inappropriate. Feynman pretty much said at the outset that his treatment of thermal physics was going to be less than rigorous.
I've been rereading Feynman's Lectures on thermal physics with the intent of summarizing the material. I have not been able to sort out the actual argument concluding that the molecular speed distribution is homogeneous for an ideal gas in thermal equilibrium. That is, the shape of the distribution curve doesn't change with molecular density.
I accept his argument that the velocity distribution is isotropic at any given location in the body of the gas. That is, statistically, the velocity has no preferred direction.
I accept his hand-wavy argument that the mean kinetic energy per atom is a function of temperature, alone.
But just because the mean velocity is the same everywhere doesn't automatically require the distribution to be the same everywhere.
In Vol I, 40-1 The exponential atmosphere Feynman appears to acknowledge that he has not yet shown that the velocity distribution is necessarily homogeneous:
"So, these are the two questions that we shall try to answer: How are the molecules distributed in space when there are forces acting on them, and how are they distributed in velocity?
"It turns out that the two questions are completely independent, and that the distribution of velocities is always the same. We already received a hint of the latter fact when we found that the average kinetic energy is the same, $\frac{1}{2}kT$ per degree of freedom, no matter what forces are acting on the molecules. The distribution of the velocities of the molecules is independent of the forces, because the collision rates do not depend upon the forces."
However, in Vol. I, section 40-4, on The distribution of molecular speeds, Feynman claims that he has already shown the distribution to be homogeneous:
"We know already that the distribution of velocities is the same, after the argument we made earlier about the temperature being constant all the way through the atmosphere."
I see nothing in the intervening text that implies the velocity distribution is homogeneous. So I am left to wonder if the development of the velocity distribution function is needed to show that the velocity distribution is homogeneous, or if the development depends on the assumption that it is homogeneous.
It seems to me that the homogeneity of velocity distribution follows from the derivation of the velocity distribution formula. But that begs the question of whether a homogeneous velocity distribution is necessary for the assumption that the atoms can be treated as not interacting, for the sake of this development.
This is my method of deriving the one-dimensional velocity distribution formula. It seems to make clearer the connection between gravitational potential energy and the velocity distribution.
If I accept the hand-waving that allows me to treat the molecules as if they are not interacting, and only consider the relationship between upward speed at height $0$ and potential energy at height $h$, then there is an obvious correlation between upward speed distribution and particle density at the corresponding height.
The attached graphic shows how an interval of the velocity distribution maps onto a depiction of the gas in a column. Each point $v$ in the velocity range corresponds to the height at which $\frac{1}{2}mv^{2}=mgh$. In the velocity distribution diagram, the colored regions are of equal width. In the depiction of the column of gas, they are of different widths because $h=\frac{v^{2}}{2g}$.
An ideal gas in thermal equilibrium is confined to a tall cylinder. Assign height $h_{0}=0$ to some arbitrary horizontal plane. The equation giving density as a function of height is already established: $$n_{h}=n_{0}e^{-h\frac{mg}{kT}}$$
The only component of velocity under discussion is the positive vertical component, denoted $v$. Height is related to velocity by $mgh=\frac{1}{2}mv^{2}$, or
$$h=\frac{v^{2}}{2g}.$$
So
$$n_{h}=n_{v}=n_{0}e^{-v^{2}\frac{m}{2kT}}$$
Assume there is some velocity distribution $\mathscr{P}\left[v\right]$. The proportion of atoms with velocity between $v$ and $v+\Delta{v}$ is given by $$\mathscr{P}\left[v,v+\Delta v\right]=\int_{v}^{v+\Delta v}\mathscr{P}\left[v\right]dv.$$
The atoms in that range will come to rest between the heights $h_{v}=\frac{v^{2}}{2g}$ and $h_{v+\Delta v}=\frac{\left(v+\Delta v\right)^{2}}{2g}$. The number of those atoms is equal to the difference in density between those heights.
So the rate at which atoms with velocities between $v$ and $v+\Delta{v}$ pass $h_{0}$ is the rate at which atoms come to rest between $h_{v}$ and $h_{v+\Delta{v}}$, which, assuming $\Delta{v}$ is infinitesimal, is $-dn_{h}=-dn_{v}$ (with a minus sign because the density decreases as $v$, and hence $h$, increases). This rate is proportional to the number density $n_{0}$, the velocity $v$, and the proportion given by $\mathscr{P}\left[v\right]dv$:
$$\alpha v\,n_{0}\mathscr{P}\left[v\right]dv=-dn_{v}=v\frac{m}{kT}n_{0}e^{-v^{2}\frac{m}{2kT}}dv$$
Canceling out common factors gives:
$$\mathscr{P}\left[v\right]dv=\frac{m}{\alpha kT}e^{-v^{2}\frac{m}{2kT}}dv$$
The value of $\alpha$ is given by integrating both sides and setting the result equal to unity. The expression obtained using Feynman's development is
$$\mathscr{P}\left[v\right]dv=\sqrt{\frac{m}{2\pi kT}}e^{-v^{2}\frac{m}{2kT}}dv.$$
Comparing these results gives:
$$\sqrt{\frac{m}{2\pi kT}}=\frac{m}{\alpha kT}.$$
Solving for $\alpha$ gives
$$\alpha = \sqrt{\frac{2\pi m}{kT}}$$.
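The central step of this derivation, that the flux-weighted fraction of upward speeds exceeding $\sqrt{2gh}$ reproduces the barometric factor $e^{-mgh/kT}$, is easy to verify numerically. A minimal sketch of my own, with $m/kT$ and $g$ set to 1:

```python
import math

# The flux of upward-moving atoms with speed above sqrt(2 g h) at h_0,
# relative to the total upward flux, should equal the barometric factor.
m_over_kT = 1.0
g = 1.0

def flux_integral(v_lo, v_hi=50.0, steps=100_000):
    # trapezoid rule for the flux integral of v * exp(-v^2 m / (2 k T))
    dv = (v_hi - v_lo) / steps
    f = lambda v: v * math.exp(-0.5 * m_over_kT * v**2)
    s = 0.5 * (f(v_lo) + f(v_hi)) + sum(f(v_lo + k * dv) for k in range(1, steps))
    return s * dv

total = flux_integral(0.0)
for h in (0.5, 1.0, 2.0):
    frac = flux_integral(math.sqrt(2 * g * h)) / total
    assert abs(frac - math.exp(-m_over_kT * g * h)) < 1e-5
```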
I did not explicitly assume the velocity distribution is homogeneous in establishing the distribution formula. But the fact that the derivation gives the same result regardless of the arbitrary choice of $h_{0}$ shows the distribution to be homogeneous.
So the question remains. Is the assumption that the atoms can be treated as non-interacting for purposes of this derivation implicitly assuming the velocity distribution to be homogeneous?
|
Answer
$39.7mg \space C_{21}H_{26}N_{2}S_{2}$ for one tablet
Work Step by Step
Let's use the entire 12-tablet data and divide the answer by 12 to get the thioridazine content of one tablet. We are given $0.301g$ BaSO$_{4}$, so we need to work backwards through the reaction, because we started with thioridazine and ended with BaSO$_{4}$: $\frac{0.301g\space BaSO_{4}}{1}\times\frac{1 mol\space BaSO_{4}}{233.4g\space BaSO_{4}}\times\frac{1 mol\space SO_{4}^{-2}}{1 mol\space BaSO_{4}}\times\frac{1 mol\space C_{21}H_{26}N_{2}S_{2}}{1 mol\space SO_{4}^{-2}}\times\frac{370.2g\space C_{21}H_{26}N_{2}S_{2}}{1 mol\space C_{21}H_{26}N_{2}S_{2}}\times\frac{1000mg}{1g} = 477mg\space C_{21}H_{26}N_{2}S_{2} $ This mass is for 12 tablets, so divide by 12 to get the mass for one tablet: $477mg\space C_{21}H_{26}N_{2}S_{2} \div 12 = 39.7mg \space C_{21}H_{26}N_{2}S_{2}$
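The chain of conversions is easy to reproduce programmatically. The sketch below follows the posted 1:1 sulfate-to-thioridazine ratio and molar masses exactly; note as an aside that the formula $C_{21}H_{26}N_{2}S_{2}$ contains two sulfur atoms, so if both were converted to sulfate the factor per mole of drug would instead be one half.

```python
# Reproducing the posted dimensional analysis (1:1 SO4^2- : drug, as in the
# solution; molar masses: BaSO4 = 233.4 g/mol, C21H26N2S2 = 370.2 g/mol).
m_BaSO4 = 0.301                      # g BaSO4 recovered from all 12 tablets
mol_drug = m_BaSO4 / 233.4           # mol BaSO4 = mol SO4^2- = mol drug (1:1)
mg_total = mol_drug * 370.2 * 1000   # mg thioridazine in 12 tablets (~477 mg)
mg_per_tablet = mg_total / 12
assert 39.7 < mg_per_tablet < 39.9
```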
|
Package
GLMMadaptive provides a suite of functions for fitting and post-processing mixed effects models for grouped/clustered outcomes which have a distribution other than a normal distribution. In particular, let \(y_i\) denote the vector of grouped/clustered outcomes for the \(i\)-th sample unit (\(i = 1, \ldots, n\)). The conditional distribution of \(y_i\) given a vector of random effects \(b_i\) is assumed to be a member of the extended exponential family, with linear predictor given by \[g\{E(y_i \mid b_i)\} = X_i \beta + Z_i b_i,\] where \(g(\cdot)\) denotes a monotonic link function, \(X_i\) a design matrix for the fixed effects coefficients \(\beta\), and \(Z_i\) a design matrix for the random effects coefficients \(b_i\). Typically, matrix \(Z_i\) is assumed to be a subset of \(X_i\). The random effects are assumed to follow a normal distribution with mean 0 and variance-covariance matrix \(D\). In addition, the distribution \([y_i \mid b_i]\) may potentially have extra dispersion/shape parameters \(\phi\).
The package focuses on settings in which the distribution \([y_i \mid b_i]\) is not normal and/or the link function \(g(\cdot)\) is not the identity. In these settings, the estimation of the model is complicated by the fact that the marginal log-likelihood function of the observed \(y_i\) cannot be derived analytically. In particular, the log-likelihood function has the form: \[\begin{eqnarray*} \ell(\theta) & = & \sum_{i = 1}^n \log p(y_i; \theta)\\ & = & \sum_{i = 1}^n \log \int p(y_i \mid b_i; \theta) \, p(b_i; \theta) \, db_i, \end{eqnarray*}\] where \(\theta\) denotes the full parameter vector, including the fixed effects, the potential extra dispersion/shape parameters \(\phi\), and the unique elements of the covariance matrix \(D\), and \(p(\cdot)\) denotes a probability density or probability mass function. The integral in the definition of \(\ell(\theta)\) does not have a closed-form solution, and numerical approximations are required to obtain the maximum likelihood estimates.
In the literature several approaches have been proposed to approximate such integrals; a nice overview is given in Pinheiro and Chao (2006). A typical approach is the Laplace approximation. However, the general consensus has been that in the standard but difficult cases of binary/dichotomous data and count data with small counts and few repeated measurements, the accuracy of this approximation is rather low. For this reason, the gold-standard numerical approximation method is the adaptive Gaussian quadrature rule (note: we focus here on maximum likelihood estimation; under the Bayesian paradigm, approaches such as MCMC and Hamiltonian Monte Carlo also provide accurate evaluation of the integrals). This is more computationally intensive but also more accurate. This package provides an efficient implementation of the adaptive Gaussian quadrature rule, allowing for multiple correlated random effects (e.g., random intercepts, linear and quadratic random slopes) but currently a single grouping factor (i.e., no nested or crossed random effects designs).
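To illustrate what the quadrature is doing, here is a minimal non-adaptive Gauss-Hermite sketch (in Python, with made-up data) for one cluster of a random-intercept logistic model; the adaptive rule implemented by the package additionally recenters and rescales the nodes around the mode of each cluster's integrand.

```python
import numpy as np

# One cluster of a random-intercept logistic model (made-up data/parameters):
# marginal likelihood = integral of prod_j p(y_j | b) * phi(b; 0, sigma^2) db
y = np.array([1, 0, 1, 1])   # binary responses of one sample unit
eta = -0.3                   # fixed-effects part of the linear predictor
sigma = 1.2                  # standard deviation of the random intercept

def integrand(b):
    p = 1.0 / (1.0 + np.exp(-(eta + b)))
    return float(np.prod(p**y * (1 - p)**(1 - y)))

# Gauss-Hermite: with b = sqrt(2)*sigma*x, the integral becomes
# (1/sqrt(pi)) * sum_k w_k * integrand(sqrt(2)*sigma*x_k)
x, w = np.polynomial.hermite.hermgauss(25)
gh = sum(wk * integrand(np.sqrt(2) * sigma * xk)
         for xk, wk in zip(x, w)) / np.sqrt(np.pi)

# brute-force check: trapezoid rule on a dense grid
bs = np.linspace(-12 * sigma, 12 * sigma, 20001)
g = (np.array([integrand(b) for b in bs])
     * np.exp(-bs**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi)))
brute = float(np.sum((g[1:] + g[:-1]) / 2) * (bs[1] - bs[0]))
assert abs(gh - brute) < 1e-5
```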
A hybrid optimization procedure is implemented starting with an EM algorithm, treating the random effects as ‘missing data’, followed by a direct optimization procedure with a quasi-Newton algorithm. Fine control of this procedure is allowed with a series of control arguments.
We illustrate the use of the package in the standard case of a mixed effects logistic regression. That is, the distribution of \([y_i \mid b_i]\) is binomial, and the distribution of \([b_i]\) multivariate normal.
We start by simulating some data for a binary longitudinal outcome:
set.seed(1234)
n <- 100 # number of subjects
K <- 8 # number of measurements per subject
t_max <- 15 # maximum follow-up time
# we construct a data frame with the design:
# everyone has a baseline measurement, and then measurements at random follow-up times
DF <- data.frame(id = rep(seq_len(n), each = K),
time = c(replicate(n, c(0, sort(runif(K - 1, 0, t_max))))),
sex = rep(gl(2, n/2, labels = c("male", "female")), each = K))
# design matrices for the fixed and random effects
X <- model.matrix(~ sex * time, data = DF)
Z <- model.matrix(~ time, data = DF)
betas <- c(-2.13, -0.25, 0.24, -0.05) # fixed effects coefficients
D11 <- 0.48 # variance of random intercepts
D22 <- 0.1 # variance of random slopes
# we simulate random effects
b <- cbind(rnorm(n, sd = sqrt(D11)), rnorm(n, sd = sqrt(D22)))
# linear predictor
eta_y <- drop(X %*% betas + rowSums(Z * b[DF$id, ]))
# we simulate binary longitudinal data
DF$y <- rbinom(n * K, 1, plogis(eta_y))
We fit a mixed effects logistic regression for y, assuming random intercepts for the random-effects part. The basic model-fitting function in GLMMadaptive is called mixed_model(), and it has four required arguments, namely fixed, a formula for the fixed effects; random, a formula for the random effects; family, a family object specifying the type of response variable; and data, a data frame containing the variables in the previously mentioned formulas. Hence, the call to fit the random intercepts logistic regression is:
fm1 <- mixed_model(fixed = y ~ sex * time, random = ~ 1 | id, data = DF, family = binomial())
The summary method gives a detailed output of the model:
summary(fm1)
#>
#> Call:
#> mixed_model(fixed = y ~ sex * time, random = ~1 | id, data = DF,
#> family = binomial())
#>
#> Data Descriptives:
#> Number of Observations: 800
#> Number of Groups: 100
#>
#> Model:
#> family: binomial
#> link: logit
#>
#> Fit statistics:
#> log.Lik AIC BIC
#> -374.5197 759.0394 772.0653
#>
#> Random effects covariance matrix:
#> StdDev
#> (Intercept) 2.064622
#>
#> Fixed effects:
#> Estimate Std.Err z-value p-value
#> (Intercept) -2.6182 0.4448 -5.8862 < 1e-04
#> sexfemale -1.3242 0.6436 -2.0575 0.039639
#> time 0.2851 0.0396 7.2089 < 1e-04
#> sexfemale:time 0.0402 0.0545 0.7372 0.461005
#>
#> Integration:
#> method: adaptive Gauss-Hermite quadrature rule
#> quadrature points: 11
#>
#> Optimization:
#> method: hybrid EM and quasi-Newton
#> converged: TRUE
We continue by checking the impact of the chosen number of quadrature points on the parameter estimates and the log-likelihood value at convergence. First, we refit the model with an increasing number of quadrature points. The default when the number of random effects is at most two is 11 points. We then refit with 15 and 21 points:
We now extract from the models the estimated parameters for the fixed effects (using function fixef()), the variance of the random effects, and the log-likelihood (using function logLik()):
extract <- function (obj) {
c(fixef(obj), "var_(Intercept)" = obj$D[1, 1], "logLik" = logLik(obj))
}
sapply(models, extract)
#> nAGQ=11 nAGQ=15 nAGQ=21
#> (Intercept) -2.61816557 -2.61679077 -2.6168440
#> sexfemale -1.32418678 -1.32349829 -1.3235049
#> time 0.28514773 0.28501148 0.2850151
#> sexfemale:time 0.04015372 0.04020003 0.0401995
#> var_(Intercept) 4.26266211 4.26351799 4.2636311
#> logLik -374.51972108 -374.51983340 -374.5198046
We observe a rather stable model with virtually no differences between the different choices of quadrature points.
We first compare the model with a simple logistic regression that does not include any random effects:
We obtain a highly significant p-value suggesting that there are correlations in the data that cannot be ignored.
Note: the anova() method that performs the likelihood ratio test calculates the p-value using the standard \(\chi^2\) distribution, here with one degree of freedom. However, because the null hypothesis for testing variance parameters is on the boundary of the corresponding parameter space, it would be more appropriate to use a mixture of \(\chi^2\) distributions.
We extend model fm1 by also including a random slopes term; however, we assume that the covariance between the random intercepts and random slopes is zero. This is achieved by using the || symbol in the specification of the random argument, i.e.,
The likelihood ratio test between the two models is computed with function
anova(). When two
"MixMod" objects are provided, the function assumes that the first object represents the model under the null hypothesis, and the second object the model under the alternative, i.e.,
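As a sketch (the model names `fm1` and `fm2`, and the binomial family, are assumptions based on the surrounding text), the extended fit and the test would be:

```r
# random intercepts and slopes with zero covariance between them (note `||`)
fm2 <- mixed_model(fixed = y ~ sex * time, random = ~ 1 + time || id,
                   data = DF, family = binomial())
# likelihood ratio test: first argument = null model, second = alternative
anova(fm1, fm2)
```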
The results suggest that we need the random slopes term. We continue by testing whether the covariance between the random effects terms is zero. The model under the alternative hypothesis is (results not shown):
And again the likelihood ratio test is performed by a call to
anova() (results not shown):
The results now suggest that indeed the covariance between the two random effects terms is not statistically different from zero.
We continue our illustration with a Poisson mixed effects model. We start again by simulating some data for a Poisson longitudinal outcome:
set.seed(1234)
n <- 100 # number of subjects
K <- 8 # number of measurements per subject
t_max <- 15 # maximum follow-up time
# we construct a data frame with the design:
# everyone has a baseline measurement, and then measurements at random follow-up times
DF <- data.frame(id = rep(seq_len(n), each = K),
time = c(replicate(n, c(0, sort(runif(K - 1, 0, t_max))))),
sex = rep(gl(2, n/2, labels = c("male", "female")), each = K))
# design matrices for the fixed and random effects
X <- model.matrix(~ sex * time, data = DF)
betas <- c(2.13, -0.25, 0.24, -0.05) # fixed effects coefficients
D11 <- 0.48 # variance of random intercepts
# we simulate random effects
b <- rnorm(n, sd = sqrt(D11))
# linear predictor
eta_y <- drop(X %*% betas + b[DF$id])
# we simulate Poisson longitudinal data
DF$y <- rpois(n * K, exp(eta_y))
We fit the mixed effects Poisson regression for
y assuming random intercepts for the random-effects part.
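The fitting call itself is elided here, but it can be read off the `Call:` field of the summary output:

```r
gm1 <- mixed_model(fixed = y ~ sex * time, random = ~ 1 | id,
                   data = DF, family = poisson())
```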
As previously, the summary method gives a detailed output of the model:
summary(gm1)
#>
#> Call:
#> mixed_model(fixed = y ~ sex * time, random = ~1 | id, data = DF,
#> family = poisson())
#>
#> Data Descriptives:
#> Number of Observations: 800
#> Number of Groups: 100
#>
#> Model:
#> family: poisson
#> link: log
#>
#> Fit statistics:
#> log.Lik AIC BIC
#> -2782.306 5574.611 5587.637
#>
#> Random effects covariance matrix:
#> StdDev
#> (Intercept) 0.9781393
#>
#> Fixed effects:
#> Estimate Std.Err z-value p-value
#> (Intercept) 2.9447 0.3082 9.5545 < 1e-04
#> sexfemale -0.8611 0.2699 -3.1906 0.0014199
#> time 0.2404 0.0016 151.3464 < 1e-04
#> sexfemale:time -0.0511 0.0028 -18.3982 < 1e-04
#>
#> Integration:
#> method: adaptive Gauss-Hermite quadrature rule
#> quadrature points: 11
#>
#> Optimization:
#> method: hybrid EM and quasi-Newton
#> converged: TRUE
In settings with few subjects or few repeated measurements, or in the presence of separation effects, the package also allows placing a penalty on the fixed-effects regression coefficients \(\beta\). The penalty/prior has the form of a Student’s t distribution with mean 0, scale parameter 1, and 3 degrees of freedom, and it is placed on all \(\beta\) coefficients except for the intercept. The penalized model can be fitted by setting argument
penalized to
TRUE, i.e.,
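The call is elided in the text; it would look roughly like this (a sketch, refitting the Poisson model above with the penalty switched on):

```r
gm1_pen <- mixed_model(fixed = y ~ sex * time, random = ~ 1 | id,
                       data = DF, family = poisson(), penalized = TRUE)
```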
In this example we observe small differences between the penalized and unpenalized models. The users have the option to alter the specification of the Student’s t penalty by directly specifying the mean, scale and degrees of freedom arguments of the distribution. For example, a ridge penalty could be placed by setting the degrees of freedom to a high value. The call in this case should be:
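A sketch of such a call follows; the list element names (`pen_mu`, `pen_sigma`, `pen_df`) follow the package's list interface for this argument and should be checked against the documentation:

```r
# approximate a ridge penalty with a high-df Student's t prior
gm1_ridge <- mixed_model(fixed = y ~ sex * time, random = ~ 1 | id,
                         data = DF, family = poisson(),
                         penalized = list(pen_mu = 0, pen_sigma = 1, pen_df = 200))
```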
|
Some readers may be familiar with Bob Palais’ article “π Is Wrong”. Within it Palais argues that π is the wrong choice of circle constant. This quote, from the author’s website, summarizes his main argument:
As noted in the last page of the pdf, I suggest calling the alternate constant 2π = 6.283… ‘1 turn’, so that 90 degrees is ‘a quarter turn’, just as we would say in natural language. The main point is that the historical choice of the value of π obscures the benefit of radian measure. It is easy to see that 1/4 turn is more natural than 90°, but π/2 seems almost as arbitrary. It is apparent that we can’t eliminate π, but it is worthwhile to be aware of its pitfalls, and to introduce an alternative for those who might wish to use one.
— Bob Palais
Palais then goes on to define a “newpi” symbol through a TeX macro, which resembles the fusion of two π:
The aforementioned article has been in print since 2001, and very little has changed on this front since then. The ideas it put forth are an amusing opinion that many of us tend to agree with, but 2π has not been adopted by the mathematical community.
Today Michael Hartl announced “The Tau Manifesto” on what he calls Tau Day (6/28 for 6.28…). In this document, Hartl echoes the good points that Palais made and builds upon them to construct a strong case in favor of adopting a circle constant which is the ratio of a circle’s circumference to its radius, not its diameter. Inspired by Palais’ use of the word “turn”, he proposes calling this constant τ (tau).
As Hartl argues, this symbol already exists (unlike the odd symbol that Palais introduced), it’s still generally available in mathematics, and it resembles π.
This new constant would not only be an improvement from a pedagogical standpoint (as shown in the diagram above), but would also “improve” several formulas. For example, Euler’s identity:
[tex]\displaystyle e^{i\pi} + 1 = 0[/tex]
Or:
[tex]\displaystyle e^{i\pi} = -1[/tex]
Which would become neater as:
[tex]\displaystyle e^{i\tau} = 1[/tex]
This makes sense intuitively (a rotation in the complex plane by one turn is 1).
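As a quick numerical sanity check of that intuition (Python here purely for illustration):

```python
import cmath
import math

tau = 2 * math.pi
# One full turn in the complex plane brings you back to where you started:
assert abs(cmath.exp(1j * tau) - 1) < 1e-12
# ...while half a turn (pi radians) lands on -1:
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
```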
(The Tau Manifesto addresses the issue of how this too can relate to the “five most important numbers in mathematics” with a slight rearrangement.)
What are your thoughts on this? As mathematics evolves, is it time to start using “Let τ = 2π” as a means of adopting a better circle constant?
Get more stuff like this
Get interesting math updates directly in your inbox.
Thank you for subscribing. Please check your email to confirm your subscription.
Something went wrong.
|
June 19th, 2018, 07:45 AM
# 3
Math Team
Joined: Jan 2015
From: Alabama
Posts: 3,264
Thanks: 902
I had not seen the term "pseudo-quadratic equation" before but it appears to be what I have seen called "an equation of quadratic type". That is, any equation that can be made into a quadratic by a substitution. For example, the equation $\displaystyle x^4- 6x^2+ 3x= 0$, while quartic, is "of quadratic type" because the substitution $\displaystyle y= x^2$ converts it to a quadratic equation (in $\displaystyle y$).
With that substitution, we have $\displaystyle y^2- 6y+ 3= 0$. Completing the square, $\displaystyle y^2- 6y+ 9- 6= 0$, $\displaystyle (y- 3)^2= 6$, $\displaystyle y= 3\pm\sqrt{6}$ so that $\displaystyle x^2= 2\pm\sqrt{6}$ and the original equation has the four roots $\displaystyle x= \sqrt{2+ \sqrt{6}}$, $\displaystyle x= -\sqrt{2+\sqrt{6}}$, $\displaystyle x= \sqrt{2- \sqrt{6}}$, and $\displaystyle x= -\sqrt{2- \sqrt{6}}$.
Similarly, $\displaystyle \sin^2(x)+ 3\sin(x)- 4= 0$ is "of quadratic type" because the substitution $\displaystyle y= \sin(x)$ converts it to $\displaystyle y^2+ 3y- 4= 0$, a quadratic equation. That can be factored as $\displaystyle (y+ 4)(y- 1)$ so we have $\displaystyle y= \sin(x)= -4$ or $\displaystyle y= \sin(x)= 1$. Of course, $\displaystyle \sin(x)$ cannot equal -4, so the roots are given by $\displaystyle \sin(x)= 1$: $\displaystyle x= (2n+1)\frac{\pi}{2}$.
Last edited by skipjack; June 19th, 2018 at 10:54 AM.
June 19th, 2018, 10:51 AM
# 4
Global Moderator
Joined: Dec 2006
Posts: 20,978
Thanks: 2229
The equation $\displaystyle x^4- 6x^2+ 3x= 0$ isn't of quadratic type.
I assume you intended to give $\displaystyle x^4 - 6x^2 + 3 = 0$.
Its solutions are given by $\displaystyle x = \pm\sqrt{3 \pm \sqrt6}$, not $\displaystyle x = \pm\sqrt{2 \pm \sqrt6}$.
The solution of $\sin(x) = 1$ is $\displaystyle x = (4n + 1)\frac\pi2$, not $\displaystyle x = (2n + 1)\frac\pi2$.
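For readers following along, the corrected roots are easy to verify numerically (a small Python check):

```python
import math

# skipjack's corrected solutions of x^4 - 6x^2 + 3 = 0: x = ±sqrt(3 ± sqrt(6))
roots = [s1 * math.sqrt(3 + s2 * math.sqrt(6)) for s1 in (1, -1) for s2 in (1, -1)]
for x in roots:
    assert abs(x**4 - 6 * x**2 + 3) < 1e-9
```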
|
Blackbody radiation is a cornerstone in the study of quantum mechanics. Work on this problem is what led to the discovery of a field that would revolutionize physics and chemistry. Quantum mechanics gives a more complete understanding of the fundamental mechanisms at the sub-atomic level.
Introduction
The work done at the turn of the 20th century on blackbody radiation was the beginning of a totally new field of science. Blackbody radiation is a theoretical concept in quantum mechanics in which a material or substance completely absorbs all frequencies of light. Because of the laws of thermodynamics, this ideal body must also re-emit as much light as it absorbs. Although there is no material that can truly be a blackbody, some have come close. Carbon in its graphite form is about 96% efficient in its absorption of light.
The concept of blackbody radiation is seen in many different places. The intensity of the energy coming from the radiator is a function only of temperature. A good example of this temperature dependence is a flame. The flame starts out emitting low-frequency red light in the visible range; as the temperature increases, the flame turns white and then blue, moving across the visible spectrum. Also, to each temperature corresponds a new maximum radiance that can be emitted. As the temperature increases, the total radiation emitted also increases, reflected in an increase in the area under the curve.
Electromagnetic spectrum (figure)
Lord Rayleigh and J. H. Jeans developed an equation which explained blackbody radiation at low frequencies. The equation was built upon all the known assumptions of physics at the time. The big assumption Rayleigh and Jeans made was that infinitesimal amounts of energy were continuously added to the system when the frequency was increased. Classical physics assumed that energy emitted by atomic oscillations could have any continuous value; this was true for everything that had been studied up to that point, including quantities like acceleration, position, or energy. The result is the
Rayleigh-Jeans law, and the equation they derived was
\[ d\rho \left( \nu ,T \right) = \rho_{\nu} \left( T \right) d \nu = \frac{8 \pi k_B T}{c^3} \nu^2 d\nu \]
Experiments performed on blackbody radiators showed slightly different results than what was expected from the
Rayleigh-Jeans law. The law had been studied and widely accepted by many physicists of the day, but the experimental results did not lie: something was different between what was theorized and what actually happens. The experimental results showed a bell-shaped curve, but according to the Rayleigh-Jeans law the energy density diverged as the frequency neared the ultraviolet region. This inconsistency was termed the ultraviolet catastrophe.
Ultraviolet Catastrophe
During the 19th century much attention was given to the study of heat properties of various objects. An idealised model that was considered was the Black Body, an object which absorbs all incident radiation and then re-emits all this energy again. We can think of the radiating energy as standing waves inside our blackbody cavity. The energy of the radiating waves at a given frequency ν should be proportional to the number of modes at this frequency. Classical physics states that all these modes have the same energy kT (a result derived from classical thermodynamics), and the number of modes is proportional to \(\nu^2\):
\[E \propto \nu^2 kT \label{1.1.1}\]
This implies that we would expect most of the energy at higher frequencies, and this energy diverges with frequency. If we try to sum the energies at each frequency, we find that there is an infinite energy in the system! This paradox was called the
ULTRAVIOLET CATASTROPHE.
It was left to Planck to resolve this gaping paradox; he postulated that the energy of the modes could only come in discrete packets – quanta – of energy:
\[E = h\nu, 2h\nu, 3h\nu, \ldots \qquad \Delta E = h\nu \label{1.1.2}\]
Using statistical mechanics Planck found that the modes at higher frequency were less likely excited so the average energy of these modes would decrease with the frequency. The exact expression for the average energy of each mode is given by the Planck distribution:
\[\langle E \rangle = \frac{h\nu}{\exp\left(\frac{h\nu}{k_B T}\right) - 1} \label{1.1.3}\]
You can see that if the frequency is low, the average energy tends towards the classical result, and as the frequency goes to infinity, the average energy goes to zero, as expected.
Max Planck was the first person to properly explain this experimental data. Rayleigh and Jeans made the assumption that energy is continuous, but Planck took a slightly different approach. He said energy must come in certain unit intervals instead of being any arbitrary value. He instead “quantized” energy in the form of \(E= nh\nu\), where \(n\) is an integer, \(h\) is a constant, and \(\nu\) is the frequency. This assumption proved to be the missing piece of the puzzle, and Planck derived an expression which could explain the experimental data.
\[ d\rho \left( \nu ,T \right) = \rho_{\nu} \left( T \right) d\nu = \dfrac{8 \pi h}{c^3} \dfrac{\nu^3}{e^{h\nu/k_B T}-1} d\nu \]
This now famous equation is known as the
Planck Distribution Law for Blackbody Radiation. The \(h\) in this equation is the famous Planck’s constant, which has a value of \(6.626 \times 10^{-34}\; \text{J s}\).
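The contrast between the two laws is easy to see numerically. Below is a small illustrative sketch (the constants are rounded, and the temperature and frequencies are arbitrary choices): at low frequency the Planck distribution reproduces the Rayleigh-Jeans result, while in the ultraviolet the classical law wildly over-predicts.

```python
import math

h = 6.626e-34      # Planck's constant, J s
k_B = 1.381e-23    # Boltzmann constant, J/K
c = 2.998e8        # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Classical spectral energy density: grows without bound like nu**2."""
    return 8 * math.pi * k_B * T * nu**2 / c**3

def planck(nu, T):
    """Planck spectral energy density: suppressed exponentially at high nu."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k_B * T))

T = 5000.0  # K
# At low frequency (h*nu << k_B*T) the two laws agree to better than 1%...
assert abs(planck(1e10, T) / rayleigh_jeans(1e10, T) - 1) < 0.01
# ...but in the ultraviolet the classical law is off by orders of magnitude.
assert planck(1e16, T) < rayleigh_jeans(1e16, T) * 1e-3
```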
|
Volume 60, № 8, 2008
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1011–1026
We prove a Hadamard-type theorem which connects the generalized order of growth $\rho^*_f(\alpha, \beta)$ of an entire transcendental function $f$ with the coefficients of its expansion into the Faber series. The theorem is an original extension of a certain result by S. K. Balashov to the case of a finite simply connected domain $G$ with boundary $\gamma$ belonging to the S. Ya. Al'per class $\Lambda^*.$
This enables us to obtain boundary equalities that connect $\rho^*_f(\alpha, \beta)$ with the sequence of the best polynomial approximations of $f$ in some Banach spaces of functions analytic in $G$.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1027–1034
We establish conditions for the existence of an optimal impulsive control for an implicit operator differential equation with quadratic cost functional. The results obtained are applied to the filtration problem.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1035–1041
We consider maximal stable orders on semigroups that belong to a certain class of inverse semigroups of finite rank.
Lower bound for the best approximations of periodic summable functions of two variables and their conjugates in terms of Fourier coefficients
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1042–1050
In terms of Fourier coefficients, we establish lower bounds for the sum of norms and the sum of the best approximations by trigonometric polynomials for functions from the space L(Q²) and functions conjugate to them with respect to each variable and with respect to both variables, provided that these functions are summable.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1051–1057
We investigate approximative characteristics of classes of ψ-differentiable multivariable functions introduced by A. I. Stepanets. We give asymptotics of the approximation of functions from these classes.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1058–1074
We investigate generalizations of classes of monotone dynamical systems in a partially ordered Banach space. We establish algebraic conditions for the stability of equilibrium states of differential systems on the basis of linearization and application of derivatives of nonlinear operators with respect to a cone. Conditions for the positivity and absolute stability of a certain class of differential systems with delay are proposed. Several illustrative examples are given.
Rate of convergence of the price of European option on a market for which the jump of stock price is uniformly distributed over an interval
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1075–1086
We consider a model of the market such that a jump of share price is uniformly distributed on some symmetric interval and establish the rate of convergence of fair prices of European options by using the theorem on asymptotic decompositions of the distribution function for a sum of independent identically distributed random variables. We show that, in the prelimit model, there exists a martingale measure on the market such that the rate of convergence of prices of European options to the Black–Scholes price is of order 1/n^{1/2}.
On the problem of approximation of functions by algebraic polynomials with regard for the location of a point on a segment
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1087–1098
We obtain a correction of an estimate of the approximation of functions from the class $W^r H_\omega$ (here, $\omega(t)$ is a convex modulus of continuity such that $t\omega'(t)$ does not decrease) by algebraic polynomials with regard for the location of a point on an interval.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1099–1109
The problem of optimal control of differential equations with interaction is considered. It is proved that the optimal control satisfies the maximum principle and that a generalized optimal control exists. It is shown that, in the problem considered, new technical aspects arise as compared with the usual problem of optimal control.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1110–1118
The structure of the family of finite subgroups of the form $L_g = \langle a, a^g \rangle$ in a periodic Shunkov group is studied. As corollaries of the result obtained, two characterizations of periodic Shunkov groups follow.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1119 – 1127
We obtain exact Jackson-type inequalities for approximations in $L_2(\mathbb{R})$ of functions $f \in L_2(\mathbb{R})$ with the use of partial sums of the wavelet series in the case of the Meyer wavelets and the Shannon–Kotelnikov wavelets.
Interval distribution function of a bounded chaotic sequence as a basis of nonaxiomatic probability theory
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1128–1137
We introduce the notion of interval distribution function of random events on the set of elementary events and the notion of interval function of the frequencies of these events. In the limiting case, the interval function turns into the ordinary distribution function and the interval function of frequencies (under certain conditions) turns into the density of distribution of random events. The case of discrete sets of elementary events is also covered. This enables one to introduce the notion of the probability of occurrence of random events as a result of the limit transition.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1138–1143
For a system of classical particles interacting via stable pairwise integrable and positive many-body (nonpairwise) finite-range potentials, we prove the existence of a solution of the symmetrized Kirkwood-Salsburg equation.
Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1144–1152
We consider the Euler transform of the power series of an analytic function playing the role of its expansion in a series in a system of polynomials and study the domain of convergence of the transform depending on the parameter of transformation and the character of singular points of the function. It is shown that the transform extends the function beyond the boundaries of the disk of convergence of its series on the interval of the boundary located between two singular points of the function. In particular, it is established that the power series of the function whose singular points are located on a single ray is summed by the transformation in the half plane.
|
Article
Keywords: general connection; classical linear connection; bundle functor; natural operator
Summary: Let $G$ be a bundle functor of order $(r,s,q)$, $s\geq r\leq q$, on the category $\Cal F\Cal M_{m,n}$ of $(m,n)$-dimensional fibered manifolds and local fibered diffeomorphisms. Given a general connection $\Gamma$ on an $\Cal F\Cal M_{m,n}$-object $Y\to M$ we construct a general connection $\Cal G(\Gamma,\lambda,\Lambda)$ on $GY\to Y$ by means of an auxiliary $q$-th order linear connection $\lambda$ on $M$ and an $s$-th order linear connection $\Lambda$ on $Y$. Then we construct a general connection $\Cal G (\Gamma,\nabla_1,\nabla_2)$ on $GY\to Y$ by means of auxiliary classical linear connections $\nabla_1$ on $M$ and $\nabla_2$ on $Y$. In the case $G=J^1$ we determine all general connections $\Cal D(\Gamma,\nabla)$ on $J^1Y\to Y$ from general connections $\Gamma$ on $Y\to M$ by means of torsion free projectable classical linear connections $\nabla$ on $Y$.
|
Let $X \in \mathbb{R}^{a \times b}$ and
$$\|X\|_2 = \sigma_{\max}(X) = \sqrt{\lambda_{\max} \left( X^T X \right)}$$
How can I compute $\nabla_X \|AX\|_2$, where $A \in \mathbb{R}^{c \times a}$ is some known matrix?
Consider a matrix and its SVD $$Y = \sum_{k=1}^r\sigma_ku_kv_k^T$$ and let $\,\phi=\|Y\|_2=\sigma_1\,$ be the spectral norm $($assuming that the singular values are ordered such that $\sigma_1>\sigma_2>\sigma_3>\ldots>\sigma_r>0\,)$
The gradient of the norm is $$\frac{\partial\phi}{\partial Y} = u_1v_1^T$$
Write the differential in terms of this gradient and perform a change of variables $Y=AX$ $$\eqalign{ d\phi &= u_1v_1^T:dY \cr &= u_1v_1^T:A\,dX \cr &= A^Tu_1v_1^T:dX \cr \frac{\partial\phi}{\partial X} &= A^Tu_1v_1^T \cr }$$ to obtain the desired gradient.
A colon is used to denote the trace/Frobenius product, i.e. $A:B={\rm tr}(A^TB),\,$ in some of the steps above.
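To make the result concrete, here is a quick finite-difference check of the final formula (a sketch; the dimensions are arbitrary, and the check relies on the top singular value of $AX$ being simple, which holds almost surely for random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
c, a, b = 5, 4, 3                      # arbitrary dimensions
A = rng.standard_normal((c, a))
X = rng.standard_normal((a, b))

def phi(X):
    # spectral norm of A X
    return np.linalg.norm(A @ X, 2)

# analytic gradient A^T u1 v1^T, with u1, v1 from the SVD of Y = A X
U, s, Vt = np.linalg.svd(A @ X)
grad = A.T @ np.outer(U[:, 0], Vt[0, :])

# central finite-difference approximation of the gradient
eps = 1e-6
num = np.zeros_like(X)
for i in range(a):
    for j in range(b):
        E = np.zeros_like(X)
        E[i, j] = eps
        num[i, j] = (phi(X + E) - phi(X - E)) / (2 * eps)

assert np.allclose(grad, num, atol=1e-4)
```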
If the first few singular values are identical, $($e.g. $\sigma_1=\sigma_2=\sigma_3)$, then the result changes slightly $$\eqalign{ \frac{\partial\phi}{\partial X} &= \sum_{k=1}^3A^Tu_kv_k^T \cr }$$
|
Quote:
Originally Posted by
yaser
Correct. The solution was also given in slide 11 of Lecture 12 (regularization).
yes my point was how do you solve this numerically - given that people will already have a good least squares code ( doing SVD on Z to avoid numerical ill conditioning), there is no need to implement (poorly) a new regularised least squares solver
you can just add a few data points at the end of your training data and feed it into your least squares solver. ie
\lambda \|w\|^2 = \sum_i (y_i - \sqrt{\lambda} w_i)^2, where the appended targets y_i are all zero
ie if w is d dimensional you append to your Z matrix the additional matrix=sqrt(lambda)*eye(d) and append a d vector of zeros to your y
(eye(d) is d by d identity matrix) [ but this is much better explained in the notes i linked to]
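A small numerical demonstration of the trick (Python/NumPy rather than the course's own code; the data are random and illustrative): plain least squares on the augmented system reproduces the ridge solution exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 4
Z = rng.standard_normal((n, d))        # design matrix
y = rng.standard_normal(n)             # targets
lam = 0.7                              # regularisation strength

# direct ridge solution: w = (Z'Z + lam I)^{-1} Z'y
w_ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ y)

# the same solution from a *plain* least-squares solver on augmented data:
# append sqrt(lam) * I to Z and d zeros to y
Z_aug = np.vstack([Z, np.sqrt(lam) * np.eye(d)])
y_aug = np.concatenate([y, np.zeros(d)])
w_ls, *_ = np.linalg.lstsq(Z_aug, y_aug, rcond=None)

assert np.allclose(w_ridge, w_ls)
```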
|
Under the CEV model the stock price has the following dynamics:
$dS_t=\mu S_tdt+\sigma S_t^\gamma dW_t$, where $\sigma\geq0, $ $\gamma\geq0$.
According to Wikipedia, if $\gamma <1$ the volatility of the stock increases as the price falls.
But why is this true? Shouldn't be the exponent negative in order to have an inverse relationship between stock price and the volatility term?
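One way to see the point numerically (parameter values below are hypothetical): the statement is about the volatility of *returns* dS/S, whose diffusion coefficient is σS^{γ−1}, and for γ < 1 that exponent is indeed negative, so it rises as S falls.

```python
# sanity check with hypothetical parameters gamma = 0.5, sigma = 0.2:
sigma, gamma = 0.2, 0.5
vol_of_returns = lambda S: sigma * S ** (gamma - 1)
# the return volatility is higher at the lower price level
assert vol_of_returns(50.0) > vol_of_returns(100.0)
```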
|
CryptoDB Paper: New (and Old) Proof Systems for Lattice Problems
Authors: Navid Alamati, Chris Peikert, Noah Stephens-Davidowitz
Download: DOI: 10.1007/978-3-319-76581-5_21
Conference: PKC 2018
Abstract: We continue the study of statistical zero-knowledge (SZK) proofs, both interactive and noninteractive, for computational problems on point lattices. We are particularly interested in the problem $\textsf{GapSPP}$ of approximating the $\varepsilon$-smoothing parameter (for some $\varepsilon < 1/2$) of an $n$-dimensional lattice. The smoothing parameter is a key quantity in the study of lattices, and $\textsf{GapSPP}$ has been emerging as a core problem in lattice-based cryptography, e.g., in worst-case to average-case reductions. We show that $\textsf{GapSPP}$ admits SZK proofs for remarkably low approximation factors, improving on prior work by up to roughly $\sqrt{n}$. Specifically:
- There is a noninteractive SZK proof for $O(\log (n) \sqrt{\log (1/\varepsilon)})$-approximate $\textsf{GapSPP}$. Moreover, for any negligible $\varepsilon$ and a larger approximation factor $\widetilde{O}(\sqrt{n \log (1/\varepsilon)})$, there is such a proof with an efficient prover.
- There is an (interactive) SZK proof with an efficient prover for $O(\log n + \sqrt{\log (1/\varepsilon)/\log n})$-approximate $\textsf{coGapSPP}$. We show this by proving that $O(\log n)$-approximate $\textsf{GapSPP}$ is in $\mathsf{coNP}$.

In addition, we give an (interactive) SZK proof with an efficient prover for approximating the lattice covering radius to within an $O(\sqrt{n})$ factor, improving upon the prior best factor of $\omega(\sqrt{n \log n})$.
BibTeX
@inproceedings{pkc-2018-28904,
title={New (and Old) Proof Systems for Lattice Problems},
booktitle={Public-Key Cryptography – PKC 2018},
series={Public-Key Cryptography – PKC 2018},
publisher={Springer},
volume={10770},
pages={619-643},
doi={10.1007/978-3-319-76581-5_21},
author={Navid Alamati and Chris Peikert and Noah Stephens-Davidowitz},
year=2018
}
|
Loaded Dice Problem
Solution 1
Let $p_i$ be the probability of $i,$ $i=1,\ldots,6,$ coming up for the first die and $q_i$ for the second. Assuming all the sums come up with the same probability, the latter equals $\displaystyle \frac{1}{11}.$ Then $p_{\small{1}}q_{\small{1}}$ is the probability of the sum being $2$ such that $p_{\small{1}}q_{\small{1}}=\displaystyle \frac{1}{11}.$ Similarly, $p_{\small{6}}q_{\small{6}}=\displaystyle \frac{1}{11}$ is the probability of the sum being $12.$
The probability of the sum being $7$ is
$\displaystyle\begin{align} \frac{1}{11}&=\sum_{k+j=7}p_{\small{k}}q_{\small{j}}\gt p_{\small{1}}q_{\small{6}}+p_{\small{6}}q_{\small{1}}\\ &=\frac{p_{\small{1}}}{11p_{\small{6}}}+\frac{p_{\small{6}}}{11p_{\small{1}}}=\frac{1}{11}\left(\frac{p_{\small{1}}}{p_{\small{6}}}+\frac{p_{\small{6}}}{p_{\small{1}}}\right)\ge\frac{1}{11}\cdot 2, \end{align}$
by the AM-GM inequality. That's a contradiction.
Solution 2
Let $\displaystyle P(x)=\sum_{k=1}^6p_kx^{k-1}$ be a probability generating function for the first die, $\displaystyle Q(x)=\sum_{k=1}^6q_kx^{k-1}$ that for the second. Since $p_6,q_6\ne 0,$ each of $P(x),Q(x)$ is a polynomial of degree $5$ and, as such, has a real root.
On the other hand,
$\displaystyle P(x)Q(x)=\sum_{k=2}^{12}\frac{1}{11}x^{k-2}=\frac{1}{11}\cdot\frac{x^{11}-1}{x-1}.$
The contradiction comes from the fact that the cyclotomic polynomial $\displaystyle \sum_{k=0}^{10}x^{k}$ does not have real roots.
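A quick numerical confirmation of that last fact (the roots of $1+x+\cdots+x^{10}$ are the nontrivial 11th roots of unity, none of which is real):

```python
import numpy as np

# coefficients of 1 + x + ... + x^10, highest degree first
coeffs = np.ones(11)
roots = np.roots(coeffs)
# every one of the 10 roots has a nonzero imaginary part
assert all(abs(r.imag) > 1e-9 for r in roots)
```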
Solution 3
Let $X$ denote the sum of the two dice. Suppose, for contradiction, that there exists some $p \in (0,1)$ such that
$p = P(X=2) = P(X=3) = \cdots = P(X=12).$
On the one hand, we have
$1 = P( 2 \leq X \leq 12) = P(X=2) + P(X=3) + \cdots + P(X=12) = 11p,$
which yields $\displaystyle p = \frac{1}{11}.$
On the other hand, for a fixed $x \in \{2,3, \dots, 12 \}$, we may write
$\displaystyle \begin{align} p &= P(X = x)\\ &= \small{\frac{\# \text{ of elements in the sample space such that the sum of the two dice is } x} {\text{total } \# \text{ of elements in the sample space}}}\\ &= \frac{\text{a positive integer}}{36}. \end{align}$
We have established that
$\displaystyle \frac{1}{11} = \frac{\text{a positive integer}}{36} \quad \Rightarrow \quad 36 = 11 \times (\text{a positive integer}),$
which is absurd.
Generalization
The first two solutions extend automatically to dice of different shapes, e.g., all Platonic solids. However, the second solution would not work for a polyhedron with an odd number of faces, e.g., for a cube with a corner sawed off.
|
Here is a geometric explanation. See if it makes sense to you.
$(X,Y)$ is a uniform random point in the square $[0,1] \times [0,1]$.
But what is $(W,Z) \equiv (\min(X,Y), \max(X,Y))$?
Imagine folding the square along the $y=x$ line, aka $+45°$ line, folding the lower triangle onto the upper triangle. Then $(W,Z)$ is where $(X,Y)$ would end up:
If $Y>X$, the original point is in the upper triangle to begin with, and remains there (folding has no effect). In this case $(W,Z) = (X,Y)$.
If $X > Y$, the original point is in the lower triangle to begin with, and ends up in the upper triangle at $(Y,X)$ (that's the effect of folding). In this case $(W,Z) = (Y,X)$.
By symmetry, therefore, sampling $(X,Y)$ uniformly in the square and then computing $W=\min(X,Y)$ and $Z=\max(X,Y)$ is equivalent to sampling $(W,Z)$ uniformly from
just the upper triangle.
I hope it is now obvious, because a
triangle is not a square, that $W, Z$ will be dependent.
As for the sign of the correlation:
Very roughly speaking positive correlation means the area is "biased / slanted with a positive slope" and negative correlation means the area is "biased / slanted with a negative slope". I hope it is also obvious that the upper triangle implies a positive correlation.
(Incidentally, as you probably know, dependence can still mean zero correlation, and some examples would be uniformly sampling from e.g. a circle, a $45°$ rotated square, etc.)
Sorry the geometric argument is kinda vague, but I thought this kind of "intuitive" picture is what your question is asking (since you already know the answer is $1/36$ and just need an explanation which is not the actual calculation of covariance). If you want a slightly less "visual" reason, simply note that, conditioned on $W=w, Z \sim Uniform(w,1)$, so as $w$ increases $Z$ certainly tends to also increase.
Re: your multiple samples argument: that doesn't quite apply. If you sample many $(W,Z)$ in the upper triangle, then sure $\min_j W_j \rightarrow 0$ and $\max_j Z_j \rightarrow 1$, but so what? That doesn't say anything about how in one particular sample, knowing $W$ gives you info about $Z$ and vice versa. (In fact, just for fun, it is also true that $\max_j W_j \rightarrow 1$ and $\min_j Z_j \rightarrow 0$, so what do you make of that? ;) )
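If the picture is not convincing enough, a short simulation (sample size and seed arbitrary) confirms both the positive sign and the value $1/36$ mentioned in the question:

```python
import random

random.seed(0)
N = 200_000
ws, zs = [], []
for _ in range(N):
    x, y = random.random(), random.random()
    ws.append(min(x, y))
    zs.append(max(x, y))

mw = sum(ws) / N
mz = sum(zs) / N
cov = sum((w - mw) * (z - mz) for w, z in zip(ws, zs)) / N
# exact covariance is 1/36 ~ 0.0278, and in particular positive
assert cov > 0
assert abs(cov - 1 / 36) < 0.005
```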
|
I'm currently trying to import a set of data from a file (comma or tabbed delimited, doesn't matter) for creating a table. Below is an example of this data (DH parameters in case you're curious):
Link  \alpha           a  \theta      d
0     0                0  0           0
1     \frac{\pi}{2}    0  \theta_{1}  340
2     \frac{-\pi}{2}   0  \theta_{2}  0
3     \frac{-\pi}{2}   0  \theta_{3}  400
4     \frac{\pi}{2}    0  \theta_{4}  0
5     \frac{\pi}{2}    0  \theta_{5}  400
6     \frac{-\pi}{2}   0  \theta_{6}  0
7     0                0  \theta_{7}  0
I've tried both pgfplots and csvsimple without any success. From the errors I receive, they always seem to have difficulty with the special characters. Interestingly, csvsimple's example with "Weißbäck" only works if "Weißbäck" isn't in the header row.
Would anyone happen to have a workflow for importing non-standard data with special symbols for tables?
*** EDIT: Simple Test Cases Below...
= Given TeX file:
\documentclass{letter}
\usepackage[utf8]{inputenc}
\usepackage{csvsimple}
\begin{document}
\csvautotabular{./csv/test.csv}
\csvautotabular{./csv/test2.csv}
\end{document}
= Given test.csv (to prove simple case works):
a,b,c
2,2,8
3,6,9
= Given test2.csv:
\ss,b,c
2,2,8
3,6,9
ERROR: ! Package csvsimple Error: File './csv/test2.csv' starts with an empty line!.
= Given test2.csv:
$\ss$,b,c
2,2,8
3,6,9
ERROR: ! Missing \endcsname inserted. \OT1\ss
l.9 \csvautotabular{./csv/test2.csv}
The control sequence marked should not appear between \csname and \endcsname.
! Extra \endcsname.
= Given test2.csv (similar to csvsimple example, but special character is in header row):
\ss{},b,c
2,2,8
3,6,9
ERROR: ! Package csvsimple Error: File './csv/test2.csv' starts with an empty line!.
= Given test2.csv (similar to csvsimple example, special character is in the body):
a,b,c
1,\ss{},8
3,6,9
Works as expected!
|
Good morning fellow mathematicians! Today we are going to discuss an integration method, which I find to be pretty helpful and applicable in many situations. After deriving the formula we are interested in, I'm going to provide you with an example, where this very method might come in handy.
Let us consider the following integral
$$\mathcal{I}=\int\limits_{a}^{b}\frac{h(x)}{f(x)+g(x)}dx\text{ ,}$$
where $f,g,h$ are real or complex valued functions which are differentiable in the interval $[a,b]$. We would like to construct ourselves another integral $\mathcal{J}$, such that $\mathcal{I}+\mathcal{J}=(b-a)$. Let us thus define some real or complex valued function $i$, such that $h(x)+i(x)=f(x)+g(x)$ and
$$\mathcal{J}=\int\limits_{a}^{b}\frac{i(x)}{f(x)+g(x)}dx\text{ .}$$
It is now quite obvious that $\mathcal{I}+\mathcal{J}$ is going to give us $(b-a)$, since:
$$\begin{align*}
\mathcal{I}+\mathcal{J}&=\int\limits_{a}^{b}\frac{h(x)+i(x)}{f(x)+g(x)}dx\\
&=\int\limits_{a}^{b}\frac{f(x)+g(x)}{f(x)+g(x)}dx\\
&=\int\limits_{a}^{b}dx\\
&=(b-a)\\
\end{align*}$$
The first identity has thus been established. But what about $\mathcal{I}-\mathcal{J}$? Let us now place a restriction on $f(x)+g(x)$: since $f$ and $g$ are differentiable by assumption (and hence so is their sum), we can connect the derivative of their sum to the other two functions $h$ and $i$. We consider the two cases:
$$\dfrac{\text{d}}{\text{d}x}[f(x)+g(x)]=\pm[h(x)-i(x)]$$
Then, introducing the substitution $t=f(x)+g(x)$, which immediately implies $\text{d}t=\pm[h(x)-i(x)]\,\text{d}x$, we get for the difference of the two integrals:
$$\begin{align*}
\mathcal{I}-\mathcal{J}&=\int\limits_{a}^{b}\frac{h(x)-i(x)}{f(x)+g(x)}dx\\
&=\int\limits_{f(a)+g(a)}^{f(b)+g(b)}\pm\frac{dt}{t}\\
&=\pm\text{ln}\left(\frac{f(b)+g(b)}{f(a)+g(a)}\right)
\end{align*}$$
With those calculations out of the way, we can now add both equations together, and thus solve our system of equations to get a value for $\mathcal{I}$:
$$2\mathcal{I}=(b-a)\pm\text{ln}\left(\frac{f(b)+g(b)}{f(a)+g(a)}\right)$$
$$\iff\mathcal{I}=\frac{(b-a)}{2}\pm\frac{1}{2}\text{ln}\left(\frac{f(b)+g(b)}{f(a)+g(a)}\right)$$
That was quite some work! But we have successfully arrived at a final expression for the integral in question. Now for the applications: let $f(x)=i(x)=\text{cos}(x)$ and $g(x)=h(x)=\text{sin}(x)$ on $[0,\frac{\pi}{2}]$. I'm leaving it as an exercise to the reader to check that all restrictions on the functions do indeed hold in this case and that we have to choose the negative branch when doing the substitution. It now follows that:
$$\begin{align*}
\mathcal{I}&=\int\limits_{0}^{\frac{\pi}{2}}\frac{\text{sin}(x)}{\text{cos}(x)+\text{sin}(x)}dx\\
&=\frac{(\frac{\pi}{2}-0)}{2}-\frac{1}{2}\text{ln}\left(\frac{\text{cos}(\frac{\pi}{2})+\text{sin}(\frac{\pi}{2})}{\text{cos}(0)+\text{sin}(0)}\right)\\
&=\frac{\pi}{4}
\end{align*}$$
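If you would like to double-check this value numerically, here is a small sketch of mine (a trapezoidal-rule check, not part of the argument):

```python
import numpy as np

x = np.linspace(0.0, np.pi / 2, 100_001)
f = np.sin(x) / (np.cos(x) + np.sin(x))
dx = x[1] - x[0]
integral = np.sum((f[1:] + f[:-1]) / 2) * dx  # trapezoidal rule
print(integral, np.pi / 4)  # both ≈ 0.7853981...
```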
As you might notice, we were able to calculate the value of this integral pretty fast this way! Feel free to try this out using just the exponential function or other ones which satisfy the given conditions.
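Taking up that suggestion with the exponential function: choose $f(x)=i(x)=e^{-x}$ and $g(x)=h(x)=e^{x}$, so that $h+i=f+g$ and $(f+g)'=e^{x}-e^{-x}=h-i$ (the positive branch). A quick numerical check of the resulting formula $\mathcal{I}=\frac{b-a}{2}+\frac{1}{2}\text{ln}\left(\frac{e^{b}+e^{-b}}{e^{a}+e^{-a}}\right)$, again just my own sketch:

```python
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 100_001)
integrand = np.exp(x) / (np.exp(x) + np.exp(-x))
dx = x[1] - x[0]
numeric = np.sum((integrand[1:] + integrand[:-1]) / 2) * dx  # trapezoidal rule

closed_form = (b - a) / 2 + 0.5 * np.log(
    (np.exp(b) + np.exp(-b)) / (np.exp(a) + np.exp(-a)))
print(numeric, closed_form)  # the two agree to ~1e-10
```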
I hope you enjoyed the first contribution to this site and up until the next post,
--have a flammable day $\int d \tau$
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
This question is an offshoot of this earlier MSE post.
Citing Banks et al.: "Let us call an integer $n$ a
Descartes number if $n$ is odd, and if $n = km$ for two integers $k, m > 1$ such that $\sigma(k)(m + 1) = 2n.$"
From the same paper, we have the divisibility constraints $$2k - \sigma(k) \mid k$$ and $$2k - \sigma(k) \mid \sigma(k).$$
Following the answer to this MSE post, is it possible to prove that $$\gcd(k,\sigma(k)) = 2k - \sigma(k)?$$
Here is my attempt:
$$\sigma(k)(m + 1) = 2n = 2km \Longleftrightarrow \sigma(k) = m(2k - \sigma(k))$$ $$\Longleftrightarrow 2k = \frac{(m+1)\sigma(k)}{m}=(m + 1)(2k - \sigma(k))$$
Can we now conclude that $\gcd(k, \sigma(k)) = 2k - \sigma(k)$?
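As a sanity check (my own sketch, which of course proves nothing in general), the claimed identity can be verified on the classical Descartes example $k = 3^2 \cdot 7^2 \cdot 11^2 \cdot 13^2$, $m = 22021$:

```python
from math import gcd, isqrt

def sigma(n):
    """Sum of divisors of n by trial division."""
    total = 0
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

k = 3**2 * 7**2 * 11**2 * 13**2   # 9018009
m = 22021
n = k * m

assert sigma(k) * (m + 1) == 2 * n            # Descartes condition
assert gcd(k, sigma(k)) == 2 * k - sigma(k)   # the conjectured identity
print(sigma(k), 2 * k - sigma(k))             # 18035199 819
```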
|
Theory Seminar: Function-Inversion Problem: Barriers and Opportunities. Speaker: Dima Kogan (Stanford University). Date: Wednesday, 2.1.2019, 12:30. Location: Taub 201.
In the function-inversion problem, an algorithm gets black-box access to a
function $f:[N] \to [N]$ and takes as input a point $y \in [N]$, along with
$S$ bits of auxiliary information about $f$. After running for time $T$, the
algorithm must output an $x \in [N]$ such that $f(x) = y$, if one exists. This
problem, first studied by Hellman (1980), has manifold applications to
cryptanalysis.
Hellman’s algorithm for this problem achieves the time-space tradeoff $S = T = \tilde{O}(N^{2/3})$ when $f$ is a random function, while the best known lower bound, due to Yao (1990) shows that $ST = \tilde{\Omega}(N)$, which admits the possibility of an $S = T = \tilde{O}(N^{1/2})$ algorithm. There remains a long-standing and vexing gap between these upper and lower bounds. In this talk, I will present some new connections between function inversion and other areas of theoretical computer science. These connections suggest that making progress on either the lower-bound or upper-bound side of this problem might be challenging. Moreover, we will see how these connections—in concert with Hellman-style algorithms—improve the best upper bounds for well-studied problems in communication complexity and data structures. Joint work with Henry Corrigan-Gibbs.
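To make Hellman's time-space tradeoff concrete, here is a toy sketch of the chain idea (my own illustration, not from the talk; real constructions need many tables and re-randomization to cope with chain collisions):

```python
import random

random.seed(1)
N = 1 << 10                                  # domain size
f_table = [random.randrange(N) for _ in range(N)]
f = lambda x: f_table[x]                     # "black-box" random function

M, T = 128, 32                               # number of chains, chain length

# Precomputation: walk M chains of T steps, store only endpoint -> starts.
starts = random.sample(range(N), M)
endpoints = {}
for s in starts:
    z = s
    for _ in range(T):
        z = f(z)
    endpoints.setdefault(z, []).append(s)

def invert(y):
    """Try to find x with f(x) == y; fails if y is not covered by a chain."""
    z = y
    for _ in range(T):
        if z in endpoints:
            for s in endpoints[z]:
                x = s                        # re-walk this chain from its start
                for _ in range(T):
                    if f(x) == y:
                        return x
                    x = f(x)
        z = f(z)
    return None

y = f(f(starts[0]))                          # a value covered by the table
x = invert(y)
print(x is not None and f(x) == y)           # True
```

The table stores $M$ pairs instead of $N$ values, while inversion costs up to roughly $T^2$ evaluations of $f$ in this naive version; tuning $M$ and $T$ trades space for time, which is the origin of Hellman's tradeoff curve.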
|
Bernoulli Distribution Introduction
The Bernoulli distribution is our first attempt to connect data to mathematical statistics. We will learn that mathematical statistics has a deep theory about what exactly produces data. As with much of mathematics, statistics theorizes that functions are the culprits behind data.
To better understand these functions, we introduce early in this course the fact that probability density functions are the theoretical construct behind data and at the same time lead to the sample mean.
Warm Up
In this section we begin to see why statistics is broadly applicable. We do this by better explaining the connection between (named) random variables and data, and by showing how the patterns of random variables can abstractly explain simple phenomena. This section ends with the formal logic/calculus that underpins much of the world of statistics, data science, and machine learning today.
As a warm up, let’s introduce a few new words that will come up repeatedly throughout this course.
population: A broad group of nouns whose characteristics are of interest. For example, all US adults.
sample: A subset of the population. For example, the adults in this class room. Random samples are preferred, but harder to obtain.
individual/observation: A single entity from the population/sample, for which a measurement was taken. For example, a US adult's height.
parameter: A characteristic of the population. For example, the average height of US adults.
statistic: Any function of the sampled observations, hence a characteristic of the sample. For example, the mean height of this class's US adults.
dataframe: A two dimensional array that stores data. Observations are stored in the rows and named variables are stored in the columns.
probability density function: A pattern, represented as a function, of a random variable. For instance, the constant function which is equal to $1/6$ for all of its inputs $1,2,3,4,5,6$.
discrete random variable: A random variable that can only take on a countable (possibly infinite) number of values. For example, a Uniform(1,6) random variable representing a die.
categorical: A type of variable that represents names or labels, non-mathable values. For instance, eye color.
level: The set of values a categorical variable can take on. For example, brown, blue, green.
proportion: A fraction formed as the count of things of interest divided by the total number of things. For example, the fraction of heads observed in 10 flips of a coin.
independent: A probabilistic relationship between observations, where the two observations occurred without relation. For example, consecutive flips of a coin.
identically distributed: Random variables that have the same probability density function (pdf), and the same parameters of this pdf, are called identically distributed. For example, consecutive flips of the same coin.
likelihood: A function defined by multiplying together the PDFs associated with the observed random variables.
Estimating Proportions Goal of Statistics
Statistics seeks to describe characteristics of a broad group (population) using only a subset of information (sample). For instance, making statements about all of Chico’s graduates would be difficult; we’d first have to find them all and then extract data from each person. Instead, statistics uses a sample of all graduates to infer characteristics about the population of Chico’s graduates.
In proper language, a statistician uses a random sample to calculate sample statistics which provide estimates of population parameters. Relative to the image above, the population is depicted on the left, the sample is depicted on the right, and the arrows indicate that we took a simple random sample of individuals (or observations) from the population. With these data, we make inferences about the population parameters. The discipline of statistics studies how to properly use data to make best guesses about the population. To be useful, we must carefully interpret these best guesses.
At the mathematical level, we make an association between the population’s individuals and random variables. For instance, a flip of a coin is analogous to a random variable whose outcome is not observed until after the flip. Characteristics of the population are analogous to parameters that give structure to the functions that describe the associated random variables. Data is then theoretically generated from the population’s probability density function. The data, thus carrying information about this function, are used to estimate the population parameters. As the data side is likely to be more tangible, we’ll start there.
Data
Since this class is both an introduction to Python and a statistics course, we’ll waste no time introducing code. Let’s load two common data science packages and use them to load and store a dataset as a dataframe. We’ll then plot the data. For every analysis, big or small, that we perform in this class, these steps should come first.
import pandas as pd
import numpy as np
import bplot as bp

bp.dpi(300)
df = pd.read_csv("https://raw.githubusercontent.com/roualdes/data/master/carnivora.csv")
df[['SuperFamily', 'Family']].sample(6)
    SuperFamily      Family
59   Feliformia  Viverridae
90   Feliformia   Hyaenidae
98   Feliformia     Felidae
91   Feliformia   Hyaenidae
94   Feliformia     Felidae
29   Caniformia  Mustelidae
df['SuperFamily_col'] = bp.CatColors(df['SuperFamily'])
for name, gdf in df.groupby('SuperFamily'):
    bp.point(gdf['BW'], gdf['SB'], label=name, color=gdf['SuperFamily_col'])
bp.labels(y='Body Weight (kg)', x='Birth Weight (g)', size=18)
bp.legend()
If these data are truly a random sample (and we’re to believe they are), then the proportions of the colors (not the numbers) depict a population parameter. Here, $p$ might be the population proportion of animals from the order Carnivora that are in the Super Family Caniformia. As we don’t know what value $p$ takes on, we will estimate it with data.
As far as this class is concerned, estimating population parameters from data takes quite a bit of machinery. The first necessary piece is the (assumed) functional form that represents a proportion. A common choice for proportions is the Bernoulli distribution. The Bernoulli distribution will provide us with a function, dependent on some unknown value $p$, from which we generate data and which we then manipulate to estimate $p$.
Bernoulli Distribution
The probability density function of the Bernoulli distribution is
$$f(x \mid p) = p^{x}(1-p)^{1-x}$$
for $x \in \{0, 1\}$ and $p \in [0, 1]$. Notice that $x$ only takes on a finite set of values. When a random variable can take on only a countable number of values, it is called a discrete random variable.
def bernoulli(x, p):
    return np.power(p, x) * np.power(1 - p, 1 - x)

x = np.asarray([0, 1])
p = 0.25
df = pd.DataFrame({'x': x, 'f': bernoulli(x, p)})
bp.point(df['x'], df['f'])
bp.LaTeX()
bp.labels(x='$x$', y='$f(x)$', size=18)
Example
Since $x$ only ever takes on two values $0$ or $1$, this matches perfectly with our binary categorical variable
SuperFamily. The trick is, the levels of the variable
SuperFamily will correspond to the values that the input $x$ of the Bernoulli random variable can take on, namely $0$ and $1$. How we map from $\{\text{Caniformia}, \text{Feliformia}\}$ to $\{0, 1\}$ is mathematically unimportant, but convention suggests that you are interested in one of the two levels more than the other.
Symbolically, we write $X_n \sim_{iid} \text{Bernoulli}(p)$ for $n = 1, \ldots, N$. The random variables $X_n$ correspond to the sequence of $0$s and $1$s that tell us which observations belong to the Super Family Caniformia. The population parameter $p$ is unknown, but can be estimated with the data $X_n$. Notice that for Bernoulli data the sample mean (the same statistic, just applied to data) returns a proportion, since a sum of $N$ values that are each $0$ or $1$ is at most $N$.
carnivora = pd.read_csv("~/data/carnivora.csv")
phat = np.round(np.mean(carnivora['SuperFamily'] == 'Caniformia'), 2)
phat
0.51
Much of this class involves interpreting statistics such as the one above. We’d say that, based on our data, approximately $51$% of the animals in the Order Carnivora are in the Super Family Caniformia.
On the one hand, we need to recognize that this proportion is an estimate of the (population) parameter $p$ in the context of our study – sampled animals, hopefully randomly, from the Super Families Caniformia and Feliformia within a specific geographic region. On the other hand, since this is a random sample, there’s some uncertainty in this estimate.
The remainder of this class will focus on two subtle points: how do we estimate parameters, and how do we quantify the uncertainty in our estimates? The answers (as far as this class is concerned) are the likelihood and the bootstrap, respectively.
Likelihood
The likelihood function enables estimation of population parameters given a sample of data. The likelihood function is not the only means for estimating population parameters, but it is the only method we will cover in this class.
Definition
Given a random sample of independent and identically distributed data, $X_n \sim_{iid} F(\theta)$ for $n = 1, \ldots, N$, the likelihood function is
$$L(\theta \mid \mathbf{X}) = \prod_{n=1}^{N} f(X_n \mid \theta),$$
where $\mathbf{X}$ is just notation for our random sample of data, $\theta \in \mathbb{R}^d$ denotes the parameter(s) to be estimated, and $f(x\vert \theta)$ is the probability density function associated with the distribution $F$. Given the likelihood function $L$, estimates are produced by finding the value of $\theta$ that maximizes the likelihood function.
Maximum Likelihood Estimators
Assume we have data $X_n \sim_{iid} F(\theta)$. The maximum likelihood estimator (MLE) of $\theta$ is
$$\hat{\theta} = \underset{\theta}{\text{argmax}}\, L(\theta \mid \mathbf{X}).$$
Finding MLEs
The definitions above are intimidating upon first sight. However, with a brief recap of some algebraic manipulations and a few tips on getting started, finding MLEs can often be reduced to a rehearsal of the maximization/minimization problems of calculus 1. Let’s start with some quick exercises to refresh our minds of the algebra we’ll need to find MLEs.
How can we rewrite (simplify?) the following expressions?
1. $\log{(a \cdot b)}$
2. $\log{(x_1 \cdot x_2)}$
3. $\log{(x_1 \cdot x_2 \cdot \ldots \cdot x_n)}$
4. $\log{(\prod_{i=1}^n x_i)}$
5. $\log{(\prod_{i=1}^n f(x_i))}$
How can we rewrite (simplify?) the following expressions?
1. $\frac{d}{d \theta} (a \cdot \theta + b \cdot \theta)$
2. $\frac{d}{d \theta} (x_1 \cdot \theta + x_2 \cdot \theta)$
3. $\frac{d}{d \theta} (x_1 \cdot \theta + \ldots + x_n \cdot \theta)$
4. $\frac{d}{d \theta} \sum_{i=1}^n x_i \cdot \theta$
Evaluate the following expression,
Next, a hint for getting started on maximizing a function with nasty exponents. It’s immensely helpful (to both humans and computers) to work with the natural log of the likelihood function, cleverly named the log-likelihood function:
$$\ell(\theta \mid \mathbf{X}) = \log L(\theta \mid \mathbf{X}) = \sum_{n=1}^{N} \log f(X_n \mid \theta).$$
Example
Consider $X_n \sim_{iid} \text{Bernoulli}(p)$ for $n = 1, \ldots, N$. Find the MLE of $p$ and call it $\hat{p}$.
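A numerical sketch of this exercise (my own, with made-up data): grid-search the log-likelihood and compare the maximizer to the sample mean, which is what the calculus will also give you.

```python
import numpy as np

x = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # made-up Bernoulli data

def log_likelihood(p, x):
    # log of the product of p^x_n * (1 - p)^(1 - x_n)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

grid = np.linspace(0.001, 0.999, 999)
ll = np.array([log_likelihood(p, x) for p in grid])
p_hat = grid[np.argmax(ll)]
print(p_hat, x.mean())  # both ≈ 0.625
```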
|
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2531-2550, 2019.
Abstract
In this work we provide an estimator for the covariance matrix of a heavy-tailed multivariate distribution. We prove that the proposed estimator $\widehat{\mathbf{S}}$ admits an \textit{affine-invariant} bound of the form \[ (1-\varepsilon) \mathbf{S} \preccurlyeq \widehat{\mathbf{S}} \preccurlyeq (1+\varepsilon) \mathbf{S} \] in high probability, where $\mathbf{S}$ is the unknown covariance matrix, and $\preccurlyeq$ is the positive semidefinite order on symmetric matrices. The result only requires the existence of fourth-order moments, and allows for $\varepsilon = O(\sqrt{\kappa^4 d\log(d/\delta)/n})$ where $\kappa^4$ is a measure of kurtosis of the distribution, $d$ is the dimensionality of the space, $n$ is the sample size, and $1-\delta$ is the desired confidence level. More generally, we can allow for regularization with level $\lambda$, in which case $d$ gets replaced with the degrees of freedom number. Denoting $\text{cond}(\mathbf{S})$ the condition number of $\mathbf{S}$, the computational cost of the novel estimator is $O(d^2 n + d^3\log(\text{cond}(\mathbf{S})))$, which is comparable to the cost of the sample covariance estimator in the statistically interesting regime $n \ge d$. We consider applications of our estimator to eigenvalue estimation with relative error, and to ridge regression with heavy-tailed random design.
|
dpsmith December 3rd, 2017 07:50 AM
Normal Subgroups of p-groups
Let G be a p-group and N ≠ (1) be a normal subgroup of G. Then N and Z(G) have a nontrivial intersection. Is this an induction problem?
johng40 December 3rd, 2017 04:04 PM
Since you mention induction, I assume that G is finite; as an aside, I realize I don't know whether this is true for infinite p-groups. Here's most of a solution that uses a minimal counterexample, a form of induction:
Assume false and let G be a minimal counterexample; i.e., for any p-group of order less than the order of G the statement is true. Now the center of any finite p-group is non-trivial (easy proof via the class equation), so let $Z=Z(G)$ and $Z_2/Z=Z(G/Z)$. If $N\le Z$ we are done, so applying minimality to $G/Z$ and the normal subgroup $NZ/Z$, $(NZ\cap Z_2)/Z=(N\cap Z_2)Z/Z$ is not trivial; i.e. $N\cap Z_2$ is non-trivial. Let $1\neq z\in N\cap Z_2$. Then for any $g\in G$, $[g,z]Z=g^{-1}z^{-1}gzZ=Z$ in $G/Z$; since $N$ is normal we also have $[g,z]\in N$, so $[g,z]\in Z\cap N=\langle 1\rangle$. Thus $z$ commutes with every $g\in G$, and $1\neq z\in N\cap Z$, contradiction.
dpsmith December 4th, 2017 02:48 AM
It is assumed to be a finite group - left that out. I thought the Isomorphism THM would be used - but couldn't get the key move.
dpsmith December 15th, 2017 11:33 AM
Found an Easier Way
Actually, you do not need the Isomorphism THM machinery to do this. We have that N is a normal subgroup of G. So if a is in N, every conjugate of a by an element of N is in N.
Hence, constructing a conjugacy class equation for N makes sense. Since N is bigger than (1), you now do the same proof as that which shows that a nontrivial p-group has a nontrivial center. (That is the case when N = G.)
johng40 December 16th, 2017 08:15 AM
Well done. If you want to seriously study groups, factor groups should become second nature for you.
dpsmith December 21st, 2017 05:00 PM
They are. Abstract Algebra was my favorite topic in grad school, which was decades ago. I now am getting into math history and the interesting (and sometimes crazy) people who got involved.
|
Mitchell Feigenbaum obituary in the New York Times
"Mitchell J. Feigenbaum, a pioneer in the field of mathematical physics known as chaos, died on June 30 in Manhattan. He was 74." So starts Feigenbaum's obituary in the
Times, July 18, 2019. The reporter is Kenneth Chang, who surveys Feigenbaum's career (at his death he was the Toyota Professor and director of the Center for Studies in Physics and Biology at Rockefeller University) and focuses on his most important mathematical discovery, the numerical constant that now bears his name. "Dr. Feigenbaum's lifestyle and his Renaissance intellect were a poor fit to the demands of modern publish-or-perish academia. But by following his own path, he uncovered a pattern of chaos that is universal in math and in nature."
"At Los Alamos National Laboratory in New Mexico in the mid-1970s, Dr. Feigenbaum, using a programmable calculator, found what seemed at first a mathematical curiosity. A simple equation generated a sequence of numbers, which were initially trivial: the same number over and over. But as a parameter in the equation shifted, the output became more varied. First the numbers bounced back and forth between two values, then they cycled among four values, then eight, and so on, with the rate of the change quickening until the patterns lost all hint of repeating cycles. The dynamics had, in the terminology of physics, passed into the realm of deterministic chaos. That is, each number of the sequence could be computed precisely, but the resulting pattern appeared to be complex and random."
"Dr. Feigenbaum looked at another simple equation, and it exhibited the same behavior, known as period doubling. More startling, the number that characterized the rate of doubling was the same: As the periods multiplied, each doubling occurred about 4.669 times as quickly as the previous one. This number is now known as the Feigenbaum constant. Dr. Feigenbaum was able to prove why it is a universal mathematical value, much as pi —the ratio of the circumference of a circle to its diameter— is the same for all circles."
"In 1979, a French scientist, Albert J. Libchaber, observed the same cascade of period doublings in the temperature fluctuations in the center of a convecting fluid. Dr. Feigenbaum's theory of the transition from order to chaos now described phenomena in the real world."
Chang quotes Kenneth Brecher (Astronomy, Boston University): "There aren't too many fundamental constants, and he was the only living person that had one."
Mathematics and the Visual Cortex
"High-dimensional geometry of population responses in visual cortex" ran in
Nature, June 26, 2019. It is the product of a 5-person collaboration between the HHMI Janelia Laboratory (Ashburn, VA) and University College, London, led by Carsen Stringer and Marius Pachitariu. They used "resonance-scanning two-photon calcium microscopy, using 11 imaging planes spaced at 35 μm" to record the simultaneous responses of 12,578 neurons in about 1/3 cubic millimeter of the visual cortex of a mouse, while the mouse was being presented with a number of images (from 32 to 2800 according to the experiment).
"Mean responses (trial-averaged) of 65 randomly chosen neurons to 32 image stimuli." The color represents the variance from the mean, measured in standard deviations (scale bar on the right). "Stimuli were presented twice in the same order." As is clear from the two displays, most (in fact about 80%) of the neurons showed high correlation between repeats. Image from
Nature 571 361-365, used with permission.
The read-out from the targeted population of neurons can be considered as an
encoding of the visual field. The authors investigated this "population code for visual stimuli" by a method they call "cross-validated principal component analysis."
Principal Component Analysis (PCA) is a very important application of elementary linear algebra. In this case, each one of the $p$ stimuli encodes as a point in $N$-dimensional space, where $N$ is the number of neurons sampled, and the $k$-th coordinate is the variance of the response to this stimulus by the $k$-th neuron. The result of PCA is a new set of coordinates for $N$-dimensional space. The first coordinate is in the direction ${\vec e}_1$ along which the $p$ points are maximally spread out (if there isn't one, then that set of points is not amenable to PCA). This will be the single coordinate that best distinguishes between the points. With that settled, project everything onto the $(N-1)$-dimensional sub-space perpendicular to ${\vec e}_1$. In that sub-space locate ${\vec e}_2$ to maximize spread, as before. The coordinate in that direction is the one that
next best distinguishes between the points. Continue (project onto the subspace spanned by ${\vec e}_1$ and ${\vec e}_2$, etc.). At the end, a point ${\vec x}$ will be represented as $x_1{\vec e}_1 + x_2{\vec e}_2 + \cdots$. The coordinates $x_1, x_2, \dots$ are its principal components. For example, if all the points lie along a line in $N$-dimensional space, then $x_2, x_3, \dots$ will all be zero, and $x_1$ will locate points along that line. More generally, using only $x_1, \dots, x_n$ will give the best $n$-dimensional approximation to the distribution of those points in $N$-dimensional space.
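In practice PCA is computed with the singular value decomposition rather than by the sequential projections just described. A minimal sketch on synthetic data (my own illustration; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "responses": 500 points in 50 dimensions, spread mostly along
# 5 directions, plus a little isotropic noise.
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 50)) \
    + 0.1 * rng.normal(size=(500, 50))

Xc = X - X.mean(axis=0)                 # center each coordinate
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

var = S**2 / (len(Xc) - 1)              # variance along e_1, e_2, ...
explained = np.cumsum(var) / var.sum()  # cumulative fraction of variance
print(np.round(explained[:6], 3))       # ~5 components carry nearly everything
```

The rows of `Vt` play the role of the directions ${\vec e}_1, {\vec e}_2, \dots$, and the variances come out sorted in decreasing order by construction.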
The way the quality of that approximation increases with $n$ is an intrinsic geometric property of the original set of points. The authors report: "This method revealed that the visual population responses did not lie on any low-dimensional plane within the space of possible firing patterns. The amount of variance explained continued to increase as further dimensions were included [see next image], without saturating at any dimensionality below the maximum possible. As a control analysis, we applied cvPCA to the neural responses obtained when only 32 images were shown many times —the reliable component of these responses must, by definition, lie in a 32-dimensional subspace— and as expected we observed a saturation of the variance after 32 dimensions."
"Cumulative fraction of variance in planes of increasing dimension, for an ensemble of 2,800 stimuli (blue) and for 96 repeats of 32 stimuli (green). The dashed line indicates 32 dimensions." Image from Nature 571, 361-365, used with permission.
The authors made another, unexpected observation: "the fraction of neural variance in planes of successively larger dimensions followed a power law. ... [T]he variance of the $n$th principal component had a magnitude that was approximately proportional to $1/n$."
A log-log plot of the magnitude of the variance of the $n$th principal component as a function of $n$ (this is an example of an eigenspectrum). The black line shows the linear fit of $1/n^{\alpha}$, $\alpha = 1.04$.
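To make the $1/n^{\alpha}$ fit concrete, here is a sketch on synthetic data (not the paper's): given variances that follow an exact power law, the exponent is recovered as minus the slope of a least-squares line on a log-log plot, which is how such fits are typically done.

```python
import math

# Variances following an exact power law 1/n^alpha with alpha = 1.04,
# mimicking the shape of the reported eigenspectrum.
alpha_true = 1.04
variances = [n ** (-alpha_true) for n in range(1, 1001)]

# Least-squares line fit in log-log coordinates; alpha = -slope.
xs = [math.log(n) for n in range(1, 1001)]
ys = [math.log(v) for v in variances]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(-slope, 2))   # -> 1.04
```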
Qualitatively, as the authors remark, "this reflects successively less variance in dimensions that encode finer stimulus features." But there is a subtler, mathematical reason for this phenomenon, which they were able to tease out. "Power-law eigenspectra are observed in many scientific domains, and are related to the smoothness of the underlying functions. For example, if a function of one variable is differentiable, its Fourier spectrum must decay asymptotically faster than a power law of exponent 1 [i.e. strictly faster than $1/n$; online reference from TUMünchen]. ... We therefore theorized that the variance power law might be related to smoothness of the neural responses. We showed mathematically that if the sensory stimuli presented can be characterized by $d$ parameters, and if the mapping from these parameters to (noise-free) neural population responses is differentiable, then the population eigenspectrum must decay asymptotically faster than a power law of exponent $\alpha = 1 + 2/d$. Conversely, if the eigenspectrum decays slower than this, a smooth neural code is impossible: its derivative tends to infinity with the number of neural dimensions, and the neural responses must lie on a fractal rather than a differentiable manifold." More simply: "If the eigenspectrum were to decay slower than $n^{-1-2/d}$ then the neural code would emphasize fine stimulus features so strongly that it could not be differentiable." This requires quite a bit of analysis (see their Supplementary material, 2).
The authors conclude: "Neural representations with close-to-critical power-law eigenspectra may provide the brain with codes that are as efficient and flexible as possible while still allowing robust generalization."
Much ado about PEMDAS
PEMDAS (mnemonic: Please Excuse My Dear Aunt Sally) is taught to children in certain schools so they can decipher mathematical expressions involving several operations, where the answer may depend on the order in which they are performed. Precedence is taken first by Parentheses, then by Exponentiation, then by Multiplication and Division (equal precedence), and finally by Addition and Subtraction (equal). Operations of equal precedence are to be executed in left-to-right order.
The current flap over PEMDAS originated on Twitter and seems first to have been reported in Popular Mechanics (July 31, 2019). Andrew Daniels wrote "This Simple Math Problem Drove Our Entire Staff Insane. Can You Solve It?" He posts the tweet $8\div 2(2+2)=?$ and comments " ...this maddening math problem has gone viral, following in the grand tradition of such traumatic events as The Dress and Yanny/Laurel. These kinds of conundrums are purposely meant to divide and conquer, and predictably, the seemingly simple problem posed in the offending tweet — $8\div 2(2+2)$ — practically caused a civil war in the Popular Mechanics office ... " He goes on to document the way the staff wasted their time that day. Towards the end he called the AMS, and posts "A Brief Statement from Mike Breen, the Public Awareness Officer for the American Mathematical Society, Whose Job Is to 'Try to Tell People How Great Math Is,'" where Mike explains how, by the rules, the answer is 16. "But the way it's written, it's ambiguous. ... I wouldn't hit someone on the wrist with a ruler if they said 1." Daniels' posting was picked up the same day by Frank Miles of the Fox News Network. "Viral math problem baffles many on Internet: Can you solve $8\div 2(2+2)$? The equation went online this week on Twitter causing major confusion over the right answer."
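Mike Breen's "by the rules, the answer is 16" can be checked directly in Python, which follows the same left-to-right convention for operators of equal precedence. Note that Python has no implicit multiplication (writing `8/2(2+2)` raises a `TypeError`), so each reading must be parenthesized explicitly:

```python
# Python's * and / share precedence and associate left to right, as in PEMDAS.
print(8 / 2 * (2 + 2))    # -> 16.0  the left-to-right (PEMDAS) reading
print(8 / (2 * (2 + 2)))  # -> 1.0   the "implicit multiplication first" reading
```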
By August 2, the commotion had reached the New York Times. Steven Strogatz takes up the question and carefully explains how the rules mandate the answer 16. But he is somewhat apologetic about it: "Now realize, following Aunt Sally is purely a matter of convention. In that sense, PEMDAS is arbitrary. Furthermore, in my experience as a mathematician, expressions like $8\div 2\times4$ look absurdly contrived. No professional mathematician would ever write something so obviously ambiguous. We would insert parentheses to indicate our meaning and to signal whether the division should be carried out first, or the multiplication." Many readers disagreed with Strogatz, who came back to the question in the Times on August 5. "After reading through the many comments on the article, I realized most of these respondents were using a different (and more sophisticated) convention than the elementary PEMDAS convention I had described in the article. In this more sophisticated convention, which is often used in algebra, implicit multiplication is given higher priority than explicit multiplication or explicit division, in which those operations are written explicitly with symbols like $\times * /$ or $\div$. Under this more sophisticated convention, the implicit multiplication in $2(2 + 2)$ is given higher priority than the explicit division in $8\div 2(2 + 2)$. In other words, $2(2+2)$ should be evaluated first. Doing so yields $8\div2(2 + 2) = 8\div8 = 1.$" His analysis was summarized in the subtitle: "The confusion (likely intentional) boiled down to a discrepancy between the math rules used in grade school and in high school." On August 6 (viral, or what?) there was another piece in the New York Times: Kenneth Chang contributes "Why Mathematicians Hate That Viral Equation". [Image of two lemurs with an abacus]. "It's formatted to confuse people, and there are no interesting underlying concepts." One last salvo from Kenneth Chang, in the Times on August 21: How Many Triangles Are There? Here's How to Solve the Puzzle, where he explains how to solve a puzzle he proposed on the 6th; this one does have "interesting underlying concepts." Re PEMDAS: an earlier and cleverer viral tweet, reported in the Hindustan Times on July 16, involved the equation $230-220\times 0.5 =?$ And "You probably won't believe it but the answer is 5!" (Indeed, $230-220\times 0.5 = 120 = 5!$.)
Tony Phillips
Stony Brook University
tony at math.sunysb.edu
|
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1158-1160, 2019.
Abstract
We study the logistic bandit, in which rewards are binary with success probability $\exp(\beta a^\top \theta) / (1 + \exp(\beta a^\top \theta))$ and actions $a$ and coefficients $\theta$ are within the $d$-dimensional unit ball. While prior regret bounds for algorithms that address the logistic bandit exhibit exponential dependence on the slope parameter $\beta$, we establish a regret bound for Thompson sampling that is independent of $\beta$. Specifically, we establish that, when the set of feasible actions is identical to the set of possible coefficient vectors, the Bayesian regret of Thompson sampling is $\tilde{O}(d\sqrt{T})$. We also establish a $\tilde{O}(\sqrt{d\eta T}/\Delta)$ bound that applies more broadly, where $\Delta$ is the worst-case optimal log-odds and $\eta$ is the “fragility dimension,” a new statistic we define to capture the degree to which an optimal action for one model fails to satisfice for others. We demonstrate that the fragility dimension plays an essential role by showing that, for any $\epsilon > 0$, no algorithm can achieve $\mathrm{poly}(d, 1/\Delta)\cdot T^{1-\epsilon}$ regret.
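For readers who want to see the setting concretely, here is a toy one-dimensional sketch of Thompson sampling with logistic rewards. The action set, parameter grid, and all values are invented for illustration; this is not the paper's algorithm or analysis.

```python
import math
import random

random.seed(0)
beta = 3.0                 # slope parameter
theta_true = 0.8           # unknown coefficient (scalar here, d = 1)
actions = [-1.0, -0.5, 0.5, 1.0]
grid = [k / 5 for k in range(-5, 6)]              # candidate theta values
posterior = {th: 1.0 / len(grid) for th in grid}  # uniform prior

def p_success(a, th):
    # logistic success probability exp(beta*a*th) / (1 + exp(beta*a*th))
    return 1.0 / (1.0 + math.exp(-beta * a * th))

def sample_theta():
    # draw one theta from the discrete posterior
    u, acc = random.random(), 0.0
    for th, w in posterior.items():
        acc += w
        if u <= acc:
            return th
    return grid[-1]

pulls = []
for t in range(500):
    th_s = sample_theta()                                   # 1. sample a model
    a = max(actions, key=lambda act: p_success(act, th_s))  # 2. act greedily for it
    r = random.random() < p_success(a, theta_true)          # 3. Bernoulli reward
    for th in posterior:                                    # 4. exact Bayes update
        posterior[th] *= p_success(a, th) if r else 1.0 - p_success(a, th)
    z = sum(posterior.values())
    posterior = {th: w / z for th, w in posterior.items()}
    pulls.append(a)

# the optimal action (a = 1.0, since theta_true > 0) dominates late rounds
frac_optimal = sum(1 for a in pulls[-200:] if a == 1.0) / 200
print(frac_optimal > 0.8)   # -> True
```

The posterior concentrates near $\theta = 0.8$, after which sampled models all point to the same optimal action.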
Dong, S., Ma, T. & Van Roy, B.. (2019). On the Performance of Thompson Sampling on Logistic Bandits. Proceedings of the Thirty-Second Conference on Learning Theory, in PMLR 99:1158-1160
This site last compiled Sat, 17 Aug 2019 00:05:37 +0000
|
2017, Universitario. Storia dell'arte, ISBN 9788859617211, Volume 3, 230 pages
Book
2008, Testi e studi, ISBN 9788859604082, Volume 22, 130
Book
2006, Testi e studi, ISBN 8859601525, Volume 17, 100
Book
Journal of High Energy Physics, ISSN 1029-8479, 4/2019, Volume 2019, Issue 4, pp. 1 - 36
The resonant structure of the doubly Cabibbo-suppressed decay D +→K − K + K + is studied for the first time. The measurement is based on a sample of...
Particle and resonance production | Charm physics | Hadron-Hadron scattering (experiments) | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | SYSTEM | HIGH-STATISTICS | PHYSICS, PARTICLES & FIELDS | Luminosity | Resonance scattering | Tensors | Amplitudes | Decay | Physics - High Energy Physics - Experiment
Journal Article
5. Combinations of single-top-quark production cross-section measurements and $|f_{\rm LV}V_{tb}|$ determinations at $\sqrt{s}=7$ and 8 TeV with the ATLAS and CMS experiments
02/2019
JHEP 05 (2019) 088 This paper presents the combinations of single-top-quark production cross-section measurements by the ATLAS and CMS Collaborations, using...
Physics - High Energy Physics - Experiment
Journal Article
6. Observation of Light-by-Light Scattering in Ultraperipheral Pb+Pb Collisions with the ATLAS Detector
Physical review letters, ISSN 0031-9007, 08/2019, Volume 123, Issue 5, pp. 052001 - 052001
Journal Article
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 01/2019, Volume 122, Issue 1, pp. 011802 - 011802
Journal Article
10/2018
The production of $\Upsilon(nS)$ mesons ($n=1,2,3$) in $p$Pb and Pb$p$ collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{NN}}=8.16$ TeV is...
Journal Article
PLoS ONE, ISSN 1932-6203, 07/2013, Volume 8, Issue 7, pp. e69661 - e69661
Zebrafish are gaining momentum as a laboratory animal species for the study of anxiety-related disorders in translational research, whereby they serve a...
ALCOHOL | SCOTOTAXIS | MULTIDISCIPLINARY SCIENCES | LIGHT/DARK PREFERENCE TEST | FISH | STIMULI | DANIO-RERIO RESPONDS | ETHANOL | PHENOTYPES | MODEL | WITHDRAWAL | Ethanol - pharmacology | Animals | Anti-Anxiety Agents - pharmacology | Anxiety - physiopathology | Behavior, Animal - physiology | Zebrafish | Anxiety - drug therapy | Robotics | Robotics industry | Research | Analysis | Fishes | Robots | Neurosciences | Laboratories | Shelters | Manufacturing engineering | Alcohol | Habituation | Rodents | Light | Physiology | Anxiety | Behavior | Paradigms | Automation | Ethanol | Fatigue | Gene expression | Aerospace engineering | Studies | Brain research | Habituation (learning) | Anxieties | Industrial robots | Stimuli | Index Medicus
Journal Article
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 07/2019, Volume 123, Issue 3
Journal Article
Physical Review Letters, ISSN 0031-9007, 03/2019, Volume 122, Issue 21
Phys. Rev. Lett. 122, 211803 (2019) A search for charge-parity ($C\!P$) violation in $D^0 \to K^- K^+$ and $D^0 \to \pi^- \pi^+$ decays is reported, using $pp$...
Physics - High Energy Physics - Experiment
Journal Article
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2019, Volume 122, Issue 23, pp. 231802 - 231802
We report a measurement of the mass difference between neutral charm-meson eigenstates using a novel approach that enhances sensitivity to this parameter. We...
PHYSICS, MULTIDISCIPLINARY | Sensitivity enhancement | Eigenvectors | Parameter sensitivity | Large Hadron Collider | Particle collisions | Charm (particle physics) | Physics - High Energy Physics - Experiment
Journal Article
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2019, Volume 122, Issue 23
Journal Article
Journal of High Energy Physics, ISSN 1029-8479, 6/2019, Volume 2019, Issue 6, pp. 1 - 28
The first untagged decay-time-integrated amplitude analysis of B s 0 → K S 0 K ± π ∓ decays is performed using a sample corresponding to 3.0 fb−1 of pp...
B physics | Branching fraction | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Amplitudes | Decay
Journal Article
Physical review letters, ISSN 0031-9007, 05/2019, Volume 122, Issue 21, pp. 211803 - 211803
Journal Article
Journal of High Energy Physics, ISSN 1126-6708, 04/2019, Volume 2019, Issue 4, pp. 1 - 18
Journal Article
|
Applied/ACMS/absS18

Contents
1 ACMS Abstracts: Spring 2018
1.1 Thomas Fai (Harvard)
1.2 Michael Herty (RWTH-Aachen)
1.3 Lee Panetta (Texas A&M)
1.4 Francois Monard (UC Santa Cruz)
1.5 Haizhao Yang (National University of Singapore)
1.6 Eric Keaveny (Imperial College London)
1.7 Anne Gelb (Dartmouth)
1.8 Molei Tao (Georgia Tech)
1.9 Boualem Khouider (UVic)
1.10 Anru Zhang (UW-Madison, statistics)

ACMS Abstracts: Spring 2018

Thomas Fai (Harvard) The Lubricated Immersed Boundary Method
Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.
Michael Herty (RWTH-Aachen) Opinion Formation Models and Mean field Games Techniques
Mean-Field Games are games with a continuum of players that incorporate the time dimension through a control-theoretic approach. Recently, simpler approaches relying on reply strategies have been proposed. Based on an example in opinion formation modeling we explore the link between differentiability notions and mean-field game approaches. For numerical purposes a model predictive control framework is introduced consistent with the mean-field game setting that allows for efficient simulation. Numerical examples are also presented as well as stability results on the derived control.
Lee Panetta (Texas A&M) Traveling waves and pulsed energy emissions seen in numerical simulations of electromagnetic wave scattering by ice crystals
The numerical simulation of single particle scattering of electromagnetic energy plays a fundamental role in remote sensing studies of the atmosphere and oceans, and in efforts to model aerosol "radiative forcing" processes in a wide variety of models of atmospheric and climate dynamics. I will briefly explain the main challenges in the numerical simulation of single particle scattering and describe how work with 3-d simulations of scattering of an incident Gaussian pulse, using a Pseudo-Spectral Time Domain method to numerically solve Maxwell's Equations, led to an investigation of episodic bursts of energy that were observed at various points in the near field during the decay phase of the simulations. The main focus of the talk will be on simulations in dimensions 1 and 2, simple geometries, and a single refractive index (ice at 550 nanometers). The periodic emission of pulses is easy to understand and predict on the basis of Snell's law in the 1-d case considered. In much more interesting 2-d cases, simulations show traveling waves within the crystal that give rise to pulsed emissions of energy when they interact with each other or when they enter regions of high surface curvature. The time-dependent simulations give a more dynamical view of "photonic nanojets" reported earlier in steady-state simulations in other contexts, and of energy release in "morphology-dependent resonances."
Francois Monard (UC Santa Cruz) Inverse problems in integral geometry and Boltzmann transport
The Boltzmann transport (or radiative transfer) equation describes the transport of photons interacting with a medium via attenuation and scattering effects. Such an equation serves as the model for many imaging modalities (e.g., SPECT, Optical Tomography) where one aims at reconstructing the optical parameters (absorption/scattering) or a source term, out of measurements of intensities radiated outside the domain of interest.
In this talk, we will review recent progress on the inversion of some of the inverse problems mentioned above. In particular, we will discuss an interesting connection between the inverse source problem (where the optical parameters are assumed to be known) and a problem from integral geometry, namely the tensor tomography problem (or how to reconstruct a tensor field from knowledge of its integrals along geodesic curves).
Haizhao Yang (National University of Singapore) A Unified Framework for Oscillatory Integral Transform: When to use NUFFT or Butterfly Factorization?
This talk introduces fast algorithms for the matvec $g=Kf$ for $K\in \mathbb{C}^{N\times N}$, which is the discretization of the oscillatory integral transform $g(x) = \int K(x,\xi) f(\xi)\,d\xi$ with a kernel function $K(x,\xi)=\alpha(x,\xi)e^{2\pi i\Phi(x,\xi)}$, where $\alpha(x,\xi)$ is a smooth amplitude function, and $\Phi(x,\xi)$ is a piecewise smooth phase function with $O(1)$ discontinuous points in $x$ and $\xi$. A unified framework is proposed to compute $Kf$ with $O(N\log N)$ time and memory complexity via the non-uniform fast Fourier transform (NUFFT) or the butterfly factorization (BF), together with an $O(N)$ fast algorithm to determine whether NUFFT or BF is more suitable. This framework works in two cases: 1) explicit formulas for the amplitude and phase functions are known; 2) only indirect access to the amplitude and phase functions is available. Especially in the case of indirect access, our main contributions are: 1) an $O(N\log N)$ algorithm for recovering the amplitude and phase functions is proposed, based on a new low-rank matrix recovery algorithm; 2) a new stable and nearly optimal BF with amplitude and phase functions in the form of a low-rank factorization (IBF-MAT) is proposed to evaluate the matvec $Kf$. Numerical results are provided to demonstrate the effectiveness of the proposed framework.
Eric Keaveny (Imperial College London) Linking the micro- and macro-scales in populations of swimming cells
Swimming cells and microorganisms are as diverse in their collective dynamics as they are in their individual shapes and swimming mechanisms. They are able to propel themselves through simple viscous fluids, as well as through more complex environments where they must interact with other microscopic structures. In this talk, I will describe recent simulations that explore the connection between dynamics at the scale of the cell with that of the population in the case where the cells are sperm. In particular, I will discuss how the motion of the sperm’s flagella can greatly impact the overall dynamics of their suspensions. Additionally, I will discuss how in complex environments, the density and stiffness of structures with which the cells interact impact the effective diffusion of the population.
Anne Gelb (Dartmouth) Reducing the effects of bad data measurements using variance based weighted joint sparsity
We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.
Molei Tao (Georgia Tech) Explicit high-order symplectic integration of nonseparable Hamiltonians: algorithms and long time performance
Symplectic integrators preserve the phase-space volume and have favorable performances in long time simulations. Methods for an explicit symplectic integration have been extensively studied for separable Hamiltonians (i.e., H(q,p)=K(p)+V(q)), and they lead to both accurate and efficient simulations. However, nonseparable Hamiltonians also model important problems, such as non-Newtonian mechanics and nearly integrable systems in action-angle coordinates. Unfortunately, implicit methods had been the only available symplectic approach for general nonseparable systems.
This talk will describe a recent result that constructs explicit and arbitrary high-order symplectic integrators for arbitrary Hamiltonians. Based on a mechanical restraint that binds two copies of phase space together, these integrators have good long time performance. More precisely, based on backward error analysis, KAM theory, and some additional multiscale analysis, a pleasant error bound is established for integrable systems. This bound is then demonstrated on a conceptual example and the Schwarzschild geodesics problem. For nonintegrable systems, some numerical experiments with the nonlinear Schrodinger equation will be discussed.
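Tao's construction targets nonseparable Hamiltonians, but the long-time payoff of symplecticity is easy to demonstrate in the separable case with standard leapfrog (Störmer-Verlet). The following sketch (a harmonic oscillator, with step size and durations chosen for illustration) contrasts leapfrog's bounded energy error with explicit Euler's unbounded drift at the same step size:

```python
# H(q, p) = p**2/2 + q**2/2. Leapfrog is symplectic: its energy error stays
# bounded over arbitrarily long runs. Explicit Euler is not: its energy grows
# by a factor (1 + dt**2) every step.

def leapfrog_max_drift(q, p, dt, steps):
    e0 = 0.5 * (p * p + q * q)
    max_drift = 0.0
    for _ in range(steps):
        p -= 0.5 * dt * q        # half kick:  dp/dt = -dH/dq = -q
        q += dt * p              # full drift: dq/dt =  dH/dp =  p
        p -= 0.5 * dt * q        # half kick
        max_drift = max(max_drift, abs(0.5 * (p * p + q * q) - e0))
    return max_drift

def euler_final_energy(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return 0.5 * (p * p + q * q)

drift = leapfrog_max_drift(1.0, 0.0, dt=0.05, steps=20000)   # t up to 1000
blowup = euler_final_energy(1.0, 0.0, dt=0.05, steps=20000)
print(drift < 1e-3, blowup > 100.0)   # -> True True
```

For this linear system one can show leapfrog exactly conserves the "shadow" energy $(1-dt^2/4)\,q^2/2 + p^2/2$, which is why the true energy merely oscillates within a band of width $O(dt^2)$.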
Boualem Khouider (UVic) Using a stochastic convective parametrization to improve the simulation of tropical modes of variability in a GCM
Convection in the tropics is organized into a hierarchy of scales ranging from the individual cloud of 1 to 10 km to cloud clusters and super-clusters of 100's km and 1000's km, respectively, and their planetary scale envelopes. These cloud systems are strongly coupled to large scale dynamics in the form of wave disturbances going by the names of meso-scale systems, convectively coupled equatorial waves (CCEW), and intraseasonal oscillations, including the eastward propagating Madden Julian Oscillation (MJO) and poleward moving monsoon intraseasonal oscillation (MISO). Coarse resolution climate models (GCMs) have serious difficulties in representing these tropical modes of variability, which are known to impact weather and climate variability in both the tropics and elsewhere on the globe. Atmospheric rivers, for example, such as the Pineapple Express that brings heavy rainfall to the Pacific Northwest, are believed to be directly connected to the MJO.
The deficiency in the GCMs is believed to be rooted in the inadequacy of the underlying cumulus parameterizations to represent the variability at the multiple spatial and temporal scales of organized convection and the associated two-way interactions between the wave flows and convection; these parameterizations are based on the quasi-equilibrium closure, under which convection is essentially slaved to the large scale dynamics. To overcome this problem we employ a stochastic multi-cloud model (SMCM) convective parametrization, which mimics the interactions at sub-grid scales of multiple cloud types, as seen in observations. The new scheme is incorporated into the National Centers for Environmental Prediction (NCEP) Climate Forecast System version 2 (CFSv2) model (CFSsmcm) in lieu of the pre-existing simplified Arakawa-Schubert (SAS) cumulus scheme.
Significant improvements are seen in the simulation of the MJO, CCEWs, and the Indian MISO. These improvements appear in the form of improved variability, morphology, and physical features of these wave flows. This particularly confirms the multicloud paradigm of organized tropical convection, on which the SMCM design was based, namely, congestus, deep, and stratiform cloud decks that interact with each other to form the building block for multiscale convective systems. An adequate account of the dynamical interactions of this cloud hierarchy thus constitutes an important requirement for cumulus parameterizations to succeed in representing atmospheric tropical variability. SAS fails to fulfill this requirement, as is evident in the unrealistic physical structures of the major intra-seasonal modes simulated by the default CFSv2.
Anru Zhang (UW-Madison, statistics) Singular value decomposition for high-dimensional high-order data
High-dimensional high-order data arise in many modern scientific applications including genomics, brain imaging, and social science. In this talk, we consider the methods, theories, and computations for tensor singular value decomposition (tensor SVD), which aims to extract the hidden low-rank structure from high-dimensional high-order data. First, comprehensive results are developed on both the statistical and computational limits for tensor SVD under the general scenario. This problem exhibits three different phases according to signal-noise-ratio (SNR), and the minimax-optimal statistical and/or computational results are developed in each of the regimes. In addition, we further consider the sparse tensor singular value decomposition which allows more robust estimation under sparsity structural assumptions. A novel sparse tensor alternating thresholding algorithm is proposed. Both the optimal theoretical results and numerical analyses are provided to guarantee the performance of the proposed procedure.
|
Edit: After thinking about it some more I came up with something much simpler than the phase-locked loop.
The problem you are having is because you are filtering with a boxcar. The boxcar filter has a lot of ripples in the frequency domain, so if you choose the wrong width you don't get good attenuation of your approximately 10Hz signal.
If you use a Butterworth filter you will get a frequency response that has no ripples.
Everything above (say) 5 Hz will be attenuated by at least 3 dB, with the attenuation steepening beyond the cutoff (about 18 dB per octave for a 3rd-order filter). A Butterworth filter is also cheap to calculate.
I went to http://www-users.cs.york.ac.uk/~fisher/mkfilter/trad.html and asked for a 3rd order Butterworth low-pass filter with a cutoff frequency of 5Hz on a sample rate of 1000Hz and got the following recurrence relation:
y[n] = ( 1 * x[n- 3])
+ ( 3 * x[n- 2])
+ ( 3 * x[n- 1])
+ ( 1 * x[n- 0])
+ ( 0.9390989403 * y[n- 3])
+ ( -2.8762997235 * y[n- 2])
+ ( 2.9371707284 * y[n- 1])
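One caveat worth checking before using the recurrence as printed: its DC gain, $\sum b_k/(1-\sum a_k)$, is enormous (roughly $2.7\times 10^5$), because the mkfilter page normally reports that constant separately as an overall gain to divide by. A pure-Python sketch that normalizes the output accordingly:

```python
import math

# Coefficients copied from the recurrence above.
b = [1.0, 3.0, 3.0, 1.0]                          # x[n-3] .. x[n-0]
a = [0.9390989403, -2.8762997235, 2.9371707284]   # y[n-3], y[n-2], y[n-1]

# DC gain of y[n] = sum(b_k x[n-k]) + sum(a_k y[n-k]) is sum(b) / (1 - sum(a)).
dc_gain = sum(b) / (1.0 - sum(a))                 # roughly 2.7e5

def butter3_lowpass(x):
    """3rd-order Butterworth low-pass (5 Hz at fs = 1000 Hz), gain-normalized."""
    y = [0.0, 0.0, 0.0]
    for n in range(3, len(x)):
        y.append(b[0] * x[n-3] + b[1] * x[n-2] + b[2] * x[n-1] + b[3] * x[n]
                 + a[0] * y[n-3] + a[1] * y[n-2] + a[2] * y[n-1])
    return [v / dc_gain for v in y]

# DC passes at unit gain; a 10 Hz tone is attenuated by roughly 8x.
fs = 1000.0
dc = butter3_lowpass([1.0] * 3000)
tone = butter3_lowpass([math.sin(2 * math.pi * 10 * n / fs) for n in range(3000)])
print(round(dc[-1], 3), max(abs(v) for v in tone[-500:]) < 0.25)   # -> 1.0 True
```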
You'd probably get a better result by using Matlab to design an elliptic filter (and probably one with higher order than 3). You'll get sharper attenuation beyond the cutoff frequency.
Here is my original answer about phase-locked loops:
I would try a discrete time phase-locked loop.
So something like this:
The idea is to multiply your input signal by a sinusoid of the same frequency, followed by a low-pass filter. This shifts the sinusoidal part of your input signal to (nearly) zero frequency. By observing the changes in the output of the low-pass filter you get an estimate of how far your estimated frequency is from the actual frequency. So you feed back the adjustment.
The output of the multiplier in the picture above is:$$Be^{2\pi i \theta_n} + Ae^{2 \pi i (\theta_n + \omega n)} + Ae^{2 \pi i (\theta_n - \omega n)} + \varepsilon'(n).$$$\theta_n$ is the running sum of the estimated frequency, $\hat{\omega}_n$. When $\hat{\omega}_n = \omega$ then $\theta_n-\omega n$ is constant, so $A e^{2 \pi i (\theta_n - \omega n)}$ is constant.
The derivative you want here is the angular derivative. Given samples from the low pass filter, $r_ne^{2\pi i \alpha_n}$ and $r_{n+1}e^{2\pi i \alpha_{n+1}}$, you really want $\alpha_{n+1}-\alpha_{n}$. Dividing low-passed sample $n+1$ by low-passed sample $n$ and taking the argument (the phase angle) of the quotient gets you essentially the right thing.
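For instance, a minimal sketch with Python's `cmath` (the sample values here are made up for illustration):

```python
import cmath

# two successive low-passed samples with phases alpha_n = 0.10 and
# alpha_{n+1} = 0.13 (in cycles), and deliberately different magnitudes
z_n  = 0.9 * cmath.exp(2j * cmath.pi * 0.10)
z_n1 = 1.1 * cmath.exp(2j * cmath.pi * 0.13)

# dividing and taking the argument recovers alpha_{n+1} - alpha_n,
# independent of the magnitudes r_n, r_{n+1}
dalpha = cmath.phase(z_n1 / z_n) / (2 * cmath.pi)
print(dalpha)  # ≈ 0.03
```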
The "adjust $\hat{\omega}_n$" box is also non-trivial. The estimated phase error is likely to be a little noisy so you might want to low-pass filter the phase error and you might want to only adjust $\hat{\omega}_n$ by a fraction of the estimated phase error, rather than by the entire phase error.
$\hat{\omega}_n$ is the estimated frequency, and $1/\hat{\omega}_n$ is the low-pass filter width you are looking for in your question. I think you can apply the same trick to the low-pass filter in the phase-locked loop. By dynamically adjusting the phase-locked loop low-pass filter width to $1/\hat{\omega}_n$ you completely eliminate $Be^{2\pi i \theta_n}$.
|
April 21st, 2015, 04:15 PM
$\int\limits_{\gamma} \frac{z}{(z-1)(z-2)}$, $\gamma(\theta) = re^{i\theta}$, $2 < r < \infty$
For $0 < r < 2$, we can use Cauchy's integral formula, choosing the holomorphic function $f(z) = \frac{z}{z - 2}$ since $z = 1$ is the only pole inside. If $r > 2$, then both poles $z = 1$ and $z = 2$ are inside the contour, so I expected to use partial fractions to put the integral into a form where Cauchy's integral formula applies. However, the solution says that, by Cauchy's theorem, $\int\limits_{\gamma} \frac{z}{(z-1)(z-2)} = 0$ when $2 < r < \infty$ because the integrand is holomorphic inside the contour; it claims that $|z| > 2$, so since the values of $z$ lie outside the path, the integrand is holomorphic? The picture included with the solution is a circle on the complex plane centered at the origin with radius $2$, with everything outside the contour shaded. I am confused about this; can someone clarify please?
|
The Fibonacci sequence reappears a bit later in Dan Brown’s book ‘The Da Vinci Code’ where it is used to login to the bank account of Jacques Sauniere at the fictitious Parisian branch of the Depository Bank of Zurich.
Last time we saw that the Hankel matrix of the Fibonacci series $F=(1,1,2,3,5,\dots)$ is invertible over $\mathbb{Z}$
\[ H(F) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \in SL_2(\mathbb{Z}) \] and we can use the rule for the co-multiplication $\Delta$ on $\Re(\mathbb{Q})$, the algebra of rational linear recursive sequences, to determine $\Delta(F)$.
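A quick integer-arithmetic check of this (a sketch; function names are mine): the $2\times 2$ Hankel matrix of $F$ has determinant $1$, while the $3\times 3$ one is singular, reflecting the fact that $F$ satisfies a depth-$2$ recurrence.

```python
# Fibonacci series F = (1, 1, 2, 3, 5, ...) as above
F = [1, 1]
while len(F) < 8:
    F.append(F[-1] + F[-2])

def hankel(seq, n):
    """n x n Hankel matrix with entries H_ij = seq[i+j] (0-indexed)."""
    return [[seq[i + j] for j in range(n)] for i in range(n)]

def det(M):
    """Determinant by first-row expansion, for 2x2 and 3x3 matrices."""
    if len(M) == 2:
        return M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

print(det(hankel(F, 2)))  # 1  -> H(F) lies in SL_2(Z)
print(det(hankel(F, 3)))  # 0  -> the depth-2 recurrence forces singularity
```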
For a general integral linear recursive sequence the corresponding Hankel matrix is invertible over $\mathbb{Q}$, but rarely over $\mathbb{Z}$. So we need another approach to compute the co-multiplication on $\Re(\mathbb{Z})$.
Any integral sequence $a = (a_0,a_1,a_2,\dots)$ can be seen as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the integral polynomial ring $\mathbb{Z}[x]$ to $\mathbb{Z}$ itself via the rule $\lambda_a(x^n) = a_n$.
If $a \in \Re(\mathbb{Z})$, then there is a monic polynomial with integral coefficients of a certain degree $n$
\[
f(x) = x^n + b_1 x^{n-1} + b_2 x^{n-2} + \dots + b_{n-1} x + b_n \]
such that for every integer $m$ we have that
\[
a_{m+n} + b_1 a_{m+n-1} + b_2 a_{m+n-2} + \dots + b_{n-1} a_{m+1} + b_n a_m = 0 \]
Alternatively, we can look at $a$ as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the quotient ring $\mathbb{Z}[x]/(f(x))$ to $\mathbb{Z}$.
The multiplicative structure on $\mathbb{Z}[x]/(f(x))$ dualizes to a co-multiplication $\Delta_f$ on the set of all such linear maps $(\mathbb{Z}[x]/(f(x)))^{\ast}$ and we can compute $\Delta_f(a)$.
We see that the set of all integral linear recursive sequences can be identified with the direct limit
\[ \Re(\mathbb{Z}) = \underset{\underset{f|g}{\rightarrow}}{lim}~(\frac{\mathbb{Z}[x]}{(f(x))})^{\ast} \] (where the directed system is ordered via division of monic integral polynomials) and so is equipped with a co-multiplication $\Delta = \underset{\rightarrow}{lim}~\Delta_f$.
Btw. the ring structure on $\Re(\mathbb{Z}) \subset (\mathbb{Z}[x])^{\ast}$ comes from restricting to $\Re(\mathbb{Z})$ the dual structures of the co-ring structure on $\mathbb{Z}[x]$ given by
\[ \Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1 \]
From this description it is clear that you need to know a hell of a lot number theory to describe this co-multiplication explicitly.
As most of us prefer to work with rings rather than co-rings it is a good idea to begin to study this co-multiplication $\Delta$ by looking at the dual ring structure of
\[ \Re(\mathbb{Z})^{\ast} = \underset{\underset{ f | g}{\leftarrow}}{lim}~\frac{\mathbb{Z}[x]}{(f(x))} \] This is the completion of $\mathbb{Z}[x]$ at the multiplicative set of all monic integral polynomials.
This is a horrible ring and very little is known about it. Some general remarks were proved by Kazuo Habiro in his paper Cyclotomic completions of polynomial rings.
In fact, Habiro got interested in a certain subring of $\Re(\mathbb{Z})^{\ast}$ which we now know as the
Habiro ring and which seems to be a red herring in all stuff about the field with one element, $\mathbb{F}_1$ (more on this another time). Habiro's ring is
\[
\widehat{\mathbb{Z}[q]} = \underset{\underset{n|m}{\leftarrow}}{lim}~\frac{\mathbb{Z}[q]}{(q^n-1)} \]
and its elements are all formal power series of the form
\[ a_0 + a_1 (q-1) + a_2 (q^2-1)(q-1) + \dots + a_n (q^n-1)(q^{n-1}-1) \dots (q-1) + \dots \] with all coefficients $a_n \in \mathbb{Z}$.
Here’s a funny property of such series. If you evaluate them at $q \in \mathbb{C}$ these series are likely to diverge almost everywhere,
but they do converge in all roots of unity!
Some people say that these functions are ‘leaking out of the roots of unity’.
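The reason is elementary: at a primitive $m$-th root of unity $q$, every term with $n \geq m$ contains the factor $q^m - 1 = 0$, so the series truncates to a finite sum. A small numerical illustration (a sketch):

```python
import cmath

m = 3
q = cmath.exp(2j * cmath.pi / m)   # a primitive 3rd root of unity

def term_factor(n):
    """The product (q^n - 1)(q^{n-1} - 1) ... (q - 1) multiplying a_n."""
    prod = 1.0 + 0j
    for k in range(1, n + 1):
        prod *= q**k - 1
    return prod

# every factor-product with n >= m vanishes, so any Habiro-type series
# evaluated at q is really a finite sum
print(abs(term_factor(3)))  # ~0 (contains the factor q^3 - 1 = 0)
print(abs(term_factor(7)))  # ~0
```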
If the ring $\Re(\mathbb{Z})^{\ast}$ is controlled by the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$, then Habiro's ring is controlled by the abelianization $Gal(\overline{\mathbb{Q}}/\mathbb{Q})^{ab} \simeq \hat{\mathbb{Z}}^{\ast}$.
|
Doing this with angles, as Jyrki suggested, is cumbersome and difficult to generalize to different dimensions. Here is an answer that's essentially a generalization of WimC's, which also fixes an error in his answer. At the end, I show why this works, since the proof is simple and nice.
The algorithm
Given a distance matrix $D_{ij}$, define$$M_{ij} = \frac {D^2_{1j}+D^2_{i1}-D^2_{ij}} 2 \,.$$One thing that is good to know in case the dimensionality of the data that generated the distance matrix is not known is that the smallest (Euclidean) dimension in which the points can be embedded is given by the rank $k$ of the matrix $M$. No embedding is possible if $M$ is not positive semi-definite.
The coordinates of the points can now be obtained by eigenvalue decomposition: if we write $M = USU^T$, then the matrix $X = U \sqrt S$ (you can take the square root element by element) gives the positions of the points (each row corresponding to one point). Note that, if the data points can be embedded in $k$-dimensional space, only $k$ columns of $X$ will be non-zero (corresponding to $k$ non-zero eigenvalues of $M$).
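A NumPy sketch of the algorithm as described above (function and variable names are mine):

```python
import numpy as np

def embed_from_distances(D):
    """Recover point coordinates (up to rotation + translation) from a
    Euclidean distance matrix D, via the matrix M defined above."""
    D2 = np.asarray(D, dtype=float) ** 2
    # M_ij = (D_1j^2 + D_i1^2 - D_ij^2) / 2   (index 1 of the text is row 0 here)
    M = 0.5 * (D2[0, :][None, :] + D2[:, 0][:, None] - D2)
    w, U = np.linalg.eigh(M)          # eigenvalue decomposition M = U S U^T
    w = np.clip(w, 0.0, None)         # clip tiny negative eigenvalues (round-off)
    return U * np.sqrt(w)             # rows are the recovered points

# sanity check: distances of the recovered points match the input
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 3.]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = embed_from_distances(D)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(D, D_rec))  # True
```

Note that only $k$ columns of `X` carry non-negligible entries, matching the rank statement above.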
Why does this work?
If $D$ comes from distances between points, then there are $\mathbf x_i \in \mathbb R^m$ such that$$D_{ij}^2 = (\mathbf x_i - \mathbf x_j)^2 = \mathbf x_i^2 + \mathbf x_j^2 - 2\mathbf x_i \cdot \mathbf x_j \,.$$Then the matrix $M$ defined above takes on a particularly nice form:$$M_{ij} = (\mathbf x_i - \mathbf x_1) \cdot (\mathbf x_j - \mathbf x_1) \equiv \sum_{a=1}^m \tilde x_{ia} \tilde x_{ja}\,,$$where the elements $\tilde x_{ia} = x_{ia} - x_{1a}$ can be assembled into an $n \times m$ matrix $\tilde X$. In matrix form,$$M = \tilde X \tilde X^T \,.$$Such a matrix is called a Gram matrix. Since the original vectors were given in $m$ dimensions, the rank of $M$ is at most $m$ (assuming $m \le n$).
The points we get by the eigenvalue decomposition described above need not exactly match the points that were put into the calculation of the distance matrix. However, they can be obtained from them by a rotation and a translation. This can be proved for example by doing a singular value decomposition of $\tilde X$, and showing that if $\tilde X \tilde X^T = X X^T$ (where $X$ can be obtained from the eigenvalue decomposition, as above, $X = U\sqrt S$), then $X$ must be the same as $\tilde X$ up to an orthogonal transformation.
|
$\ A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & a & a-2 \\ 0 & -2 & 0 \end{bmatrix} \\ a \in \mathbb R$
I need to find the values of $\ a$ for which $\ A $ is not diagonalizable.
I thought of working by elimination: first find the values of $\ a$ for which $\ A $ can be diagonalized.
so the characteristic polynomial of $\ A $ is $\ p(t) = (\lambda-3)(\lambda^2-a(\lambda-2)-4) $
But after trying several values of $\ a$ $(0, 1, 2, -1)$, I see that this is wrong, because too many values of $\ a $ make the matrix diagonalizable. So maybe I should instead figure out which values of $\ a$ give fewer distinct eigenvalues than needed (?)
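Individual values of $a$ can be tested quickly with SymPy (a sketch, not a proof; the candidates below include $a=4$, where the quadratic factor has discriminant $(a-4)^2=0$, and $a=5$, where it shares the root $\lambda=3$ with the linear factor):

```python
import sympy as sp

def A_of(a):
    return sp.Matrix([[3, 0, 0],
                      [0, a, a - 2],
                      [0, -2, 0]])

# the quadratic factor λ² - aλ + (2a - 4) has discriminant (a - 4)²,
# so repeated eigenvalues can only occur at a = 4 (double root λ = 2)
# or a = 5 (where λ = 3 becomes a double eigenvalue of A)
for a in [0, 1, 2, -1, 4, 5]:
    print(a, A_of(a).is_diagonalizable())
```

This suggests that $a=4$ is the interesting case: the repeated eigenvalue $\lambda=2$ there has a one-dimensional eigenspace, while at $a=5$ the double eigenvalue $\lambda=3$ still has a two-dimensional eigenspace.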
|
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it, it is really "too elementary", but I like surprises, if they're good.
It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors)
Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
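The trailing-zero count from Legendre's formula and the bound $k<n/4$ derived above are easy to sanity-check numerically (a sketch):

```python
def trailing_zeros_factorial(n):
    """Number of trailing zeros of n!, by Legendre's formula (powers of 5)."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

# e.g. 100! ends in 24 zeros, and the bound k < n/4 holds in each case
for n in [10, 25, 100, 1000]:
    k = trailing_zeros_factorial(n)
    print(n, k, k < n / 4)
```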
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ has at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture!Motivated by Catalan's conjecture and a recent question of mine, I conjecture thatFor distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$It is of anticipation that there will be much fewer solutions for incr...
|
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
|
Learning Objective
Draw a Lewis electron dot diagram for an atom or a monatomic ion.
In almost all cases, chemical bonds are formed by interactions of valence electrons in atoms. To facilitate our understanding of how valence electrons interact, a simple way of representing those valence electrons would be useful.
A
Lewis electron dot diagram (or electron dot diagram or a Lewis diagram or a Lewis structure) is a representation of the valence electrons of an atom that uses dots around the symbol of the element. The number of dots equals the number of valence electrons in the atom. These dots are arranged to the right and left and above and below the symbol, with no more than two dots on a side. (It does not matter what order the positions are used.) For example, the Lewis electron dot diagram for hydrogen is simply
\[\mathbf{H}\mathbf{\cdot}\]
Because the side is not important, the Lewis electron dot diagram could also be drawn as follows:
\[\mathbf{\dot{H}}\; \; or\; \mathbf{\cdot}\mathbf{H}\; \; \; or\; \; \; \mathbf{\underset{.}H}\]
The electron dot diagram for helium, with two valence electrons, is as follows:
\[\mathbf{He}\mathbf{:}\]
By putting the two electrons together on the same side, we emphasize the fact that these two electrons are both in the 1s subshell; this is the common convention we will adopt, although there will be exceptions later. The next atom, lithium, has an electron configuration of 1s²2s¹, so it has only one electron in its valence shell. Its electron dot diagram resembles that of hydrogen, except the symbol for lithium is used:
\[\mathbf{Li}\mathbf{\cdot}\]
Beryllium has two valence electrons in its 2s shell, so its electron dot diagram is like that of helium:
\[\mathbf{Be}\mathbf{:}\]
The next atom is boron. Its valence electron configuration is 2s²2p¹, so it has three valence electrons. The third electron will go on another side of the symbol:
\[\mathbf{\dot{B}}\mathbf{:}\]
Again, it does not matter on which sides of the symbol the electron dots are positioned.
For carbon, there are four valence electrons, two in the 2s subshell and two in the 2p subshell. As usual, we will draw two dots together on one side, to represent the 2s electrons. However, conventionally, we draw the dots for the two p electrons on different sides. As such, the electron dot diagram for carbon is as follows:
\[\mathbf{\cdot \dot{C}}\mathbf{:}\]
With N, which has three p electrons, we put a single dot on each of the three remaining sides:
\[\mathbf{\cdot}\mathbf{\dot{\underset{.}N}}\mathbf{:}\]
For oxygen, which has four p electrons, we now have to start doubling up on the dots on one other side of the symbol. When doubling up electrons, make sure that a side has no more than two electrons.
\[\mathbf{\cdot}\mathbf{\ddot{\underset{.}O}}\mathbf{:}\]
Fluorine and neon have seven and eight dots, respectively:
\[\mathbf{:}\mathbf{\ddot{\underset{.}F}}\mathbf{:}\]
\[\mathbf{:}\mathbf{\ddot{\underset{.\: .}Ne}}\mathbf{:}\]
With the next element, sodium, the process starts over with a single electron because sodium has a single electron in its highest-numbered shell, the n = 3 shell. By going through the periodic table, we see that the Lewis electron dot diagrams of atoms will never have more than eight dots around the atomic symbol.
Example \(\PageIndex{1}\): Lewis Dot Diagrams
What is the Lewis electron dot diagram for each element?
a. aluminum
b. selenium
SOLUTION
The valence electron configuration for aluminum is 3s²3p¹. So it would have three dots around the symbol for aluminum, two of them paired to represent the 3s electrons:
\[\mathbf{\dot{Al}}\mathbf{:} \nonumber\]
The valence electron configuration for selenium is 4s²4p⁴. In the highest-numbered shell, the n = 4 shell, there are six electrons. Its electron dot diagram is as follows:
\[\mathbf{\cdot }\mathbf{\dot{\underset{.\: .}Se}}\mathbf{:} \nonumber\]
Exercise \(\PageIndex{1}\)
What is the Lewis electron dot diagram for each element?
a. phosphorus
b. argon
Answer a
\[\mathbf{\cdot }\mathbf{\dot{\underset{.}P}}\mathbf{:} \nonumber\]
Answer b
\[\mathbf{:}\mathbf{\ddot{\underset{.\, .}Ar}}\mathbf{:} \nonumber\]
Summary
Lewis electron dot diagrams use dots to represent valence electrons around an atomic symbol. Lewis electron dot diagrams for ions have fewer (for cations) or more (for anions) dots than the corresponding atom.
|
Currently, I'm trying to implement a Finite Difference (FD) method in Matlab for my thesis (Quantitative Finance). I implemented the FD method for Black-Scholes already and got correct results. However, I want to extend it to work for the SABR volatility model. Although some information on this model can be found on the internet, this mainly regards Hagan's approximate formula for EU options. I am particularly interested in the numerical solution (FD).
I have taken the following steps already:
Substitute the derivatives in the SABR PDE with their finite difference approximations. I used the so-called implicit method (https://en.wikipedia.org/wiki/Finite_difference_method#Implicit_method) for this. The possibility of a variable transformation from $F$ to $x$ and from $\alpha$ to $y$ is incorporated in the PDE discretization, but not applied yet (hence, $\dfrac{\partial x}{\partial F}=\dfrac{\partial y}{\partial \alpha}=1$ and $\dfrac{\partial^2 x}{\partial F^2}=\dfrac{\partial^2 y}{\partial \alpha^2}=0$ in the formulas below). Substituting a forward difference in time and central differences in the space dimensions and rewriting gives me the following equation:
$$ V_{i,j,k} = V_{i-1,j-1,k+1}[-c] + V_{i,j-1,k+1}[-d+e+g] + V_{i+1,j-1,k+1}[c] + V_{i-1,j,k+1}[-a+b+f] + V_{i,j,k+1}[2a+2d+(1-h)] + V_{i+1,j,k+1}[-a-b-f] + V_{i-1,j+1,k+1}[c] + V_{i,j+1,k+1}[-d-e-g] + V_{i+1,j+1,k+1}[-c], \tag{1} $$
where
$a = 0.5\sigma_x^2\dfrac{1}{dx^2}(\dfrac{\partial x}{\partial F})^2d\tau$
$b = 0.5\sigma_x^2\dfrac{1}{2dx}\dfrac{\partial^2 x}{\partial F^2}d\tau$
$c = \rho\sigma_x\sigma_{y}\dfrac{\partial x}{\partial F}\dfrac{\partial y}{\partial \alpha}\dfrac{1}{2dxdy}d\tau$
$d = 0.5\sigma_{y}^2\dfrac{1}{dy^2}(\dfrac{\partial y}{\partial \alpha})^2d\tau$
$e = 0.5\sigma_y^2\dfrac{1}{2dy}\dfrac{\partial^2 y}{\partial \alpha^2}d\tau$
$f = \mu_x\dfrac{\partial x}{\partial F}\dfrac{1}{2dx}d\tau$
$g = \mu_{y}\dfrac{\partial y}{\partial \alpha}\dfrac{1}{2dy}d\tau$
$h = - r\,d\tau$
Writing in matrix notation,
$V_{k+1} = A^{-1}( V_{k} - C_{k+1} )$
Note that vector $C$ contains the values that cannot be incorporated via the $A$ matrix, as they depend on boundary grid points.
Edit June 19: Since my other post is focused on the upper boundary in the $F$ dimension, let's discuss the upper bound in the vol direction here, since @Yian_Pap provided an answer below regarding this. Note that I corrected the cross derivative to be $\dfrac{\partial^2 V}{\partial F \partial \alpha}=\dfrac{V_{i+1,j+1} - V_{i+1,j-1} - V_{i-1,j+1} + V_{i-1,j-1}}{2\Delta F\Delta \alpha}$, which does not contain $V_{i,j}$.
Now, as vol bound I set $\dfrac{\partial V}{\partial \alpha}=0$.
Substituting a first-order accurate backward FD approximation,
$\dfrac{1}{\Delta \alpha}(V_{i,M}-V_{i,M-1})=0$,
since the term in front is not zero, it should hold that,
$V_{i,M}-V_{i,M-1}=0$,
hence,
$V_{i,M}=V_{i,M-1}$ (2),
This can be implemented in the coefficient matrix $A$. Given (1),
$V_{i,j,k} = z_1 V_{i-1,j-1,k+1} + z_2 V_{i,j-1,k+1} + z_3 V_{i+1,j-1,k+1}+ z_4V_{i-1,j,k+1} + z_5V_{i,j,k+1} + z_6V_{i+1,j,k+1}+ z_7V_{i-1,j+1,k+1} + z_8V_{i,j+1,k+1} + z_9V_{i+1,j+1,k+1}$,
Impose the condition (2) as follows:
$V_{i,M,k} = z_1 V_{i-1,M-1,k+1} + z_2 V_{i,M-1,k+1} + z_3 V_{i+1,M-1,k+1}+ (z_4+z_7)V_{i-1,M,k+1} + (z_5+z_8)V_{i,M,k+1} + (z_6+z_9)V_{i+1,M,k+1}$,
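In code, folding the condition into the stencil amounts to adding the $j{+}1$ coefficients onto the $j$ coefficients in the boundary rows of $A$. A minimal sketch (function name and coefficient ordering are mine, assuming the scalars $z_1,\dots,z_9$ are precomputed):

```python
def boundary_row_j_max(z):
    """Stencil coefficients at j = M: the j+1 neighbours are eliminated
    by adding their coefficients (z7, z8, z9) onto the j columns, as in
    the folded equation above."""
    z1, z2, z3, z4, z5, z6, z7, z8, z9 = z
    # columns: (i-1,M-1), (i,M-1), (i+1,M-1), (i-1,M), (i,M), (i+1,M)
    return [z1, z2, z3, z4 + z7, z5 + z8, z6 + z9]

print(boundary_row_j_max(range(1, 10)))  # [1, 2, 3, 11, 13, 15]
```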
Main question: Am I implementing this bound correctly? Side question: When setting $\nu=0$ and $\beta=1$, $z_7$, $z_8$ and $z_9$ are equal to zero, correct? So, in this case, boundary condition for vol will not affect FD results?
Best,
Pim
|
In order to begin to make a connection between the microscopic and macroscopic worlds, we need to better understand the microscopic world and the laws that govern it. We will begin placing Newton's laws of motion in a formal framework which will be heavily used in our study of classical statistical mechanics.
First, we begin by restricting our discussion to systems for which the forces are purely
conservative. Such forces are derivable from a potential energy function \(U (r_1, \cdots , r_N)\) by differentiation:
\[ F_i = - \frac {\partial U}{\partial r_i} \]
It is clear that such forces cannot contain dissipative or friction terms. An important property of systems whose forces are conservative is that they conserve the total energy
\[ E = K + U = \frac {1}{2} \sum _{i=1}^N m_i \dot {r} ^2_i + U (r_1, \cdots , r_N ) \]
To see this, simply differentiate the energy with respect to time:
\[\frac {dE}{dt} = \sum _{i=1}^N m_i \dot {r} _i \cdot \ddot {r}_i + \sum _{i=1}^N \frac {\partial U}{\partial r_i} \cdot \dot {r} _i \]
\[=\sum _{i=1}^N \dot {r} _i \cdot F_i - \sum _{i=1}^N F_i \cdot \dot {r} _i \]
\[= 0\]
where, in the second line, the facts that \(\ddot {r} _i = \frac {F_i}{m_i} \) (Newton's law) and \(F_i = - \frac {\partial U}{\partial r_i} \) (conservative force definition) have been used. This is known as the law of conservation of energy.
The Lagrangian Formulation
The Lagrangian function, \(L\), for a system is defined to be the difference between the kinetic and potential energies expressed as a function of positions and velocities. In order to make the nomenclature more compact, we shall introduce a shorthand for the complete set of positions in an \(N\)-particle system: \(r \equiv r_1, \cdots , r_N \) and for the velocities: \(\dot {r} \equiv \dot {r}_1, \cdots , \dot {r} _N \). Then, the Lagrangian is defined as follows:
\[ L (r, \dot {r} ) = K - U = \sum _{i=1}^N \frac {1}{2} m_i \dot {r}^2_i - U (r_1, \cdots , r_N )\]
In terms of the Lagrangian, the classical equations of motion are given by the so-called
Euler-Lagrange equation:
\[ \frac {d}{dt} \left ( \frac {\partial L}{\partial \dot {r} _i} \right ) - \frac {\partial L}{\partial r_i} = 0 \]
The equations that result from application of the Euler-Lagrange equation to a particular Lagrangian are known as the
equations of motion. The solution of the equations of motion for a given initial condition is known as a trajectory of the system. The Euler-Lagrange equation results from what is known as an action principle. We shall defer further discussion of the action principle until we study the Feynman path integral formulation of quantum statistical mechanics in terms of which the action principle emerges very naturally. For now, we accept the Euler-Lagrange equation as a definition. The Euler-Lagrange formulation is completely equivalent to Newton's second law. In order to see this, note that
\[\frac {\partial L}{\partial \dot {r} _i} = m_i \dot {r} _i\]
\[\frac {\partial L}{\partial r _i}= - \frac {\partial U}{\partial r _i} = F_i \]
Therefore,
\[ \frac {d}{dt} \left ( \frac {\partial L}{\partial \dot {r} _i} \right ) - \frac {\partial L}{\partial r _i} = m_i \ddot {r} _i - F_i = 0 \]
which is just Newton's equation of motion.
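To make this equivalence concrete, here is a small numerical check (my own addition, using the assumed example \(U(x) = x^2/2\), \(m = 1\)): the known solution \(x(t) = \cos t\) of Newton's equation should satisfy the Euler-Lagrange equation \(m\ddot{x} + U'(x) = 0\) for \(L = \frac{1}{2} m \dot{x}^2 - U(x)\).

```python
import math

# Assumed example: U(x) = x^2/2, m = 1, exact solution x(t) = cos(t).
m = 1.0
x = math.cos
h = 1e-4

for t in [0.0, 0.5, 1.3, 2.7]:
    # d/dt (dL/dxdot) = m * xddot, estimated by a central second difference
    xddot = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
    # -dL/dx = U'(x) = x, so the Euler-Lagrange residual should vanish
    assert abs(m * xddot + x(t)) < 1e-5
```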
An important property of the Lagrangian formulation is that it can be used to obtain the equations of motion of a system in
any set of coordinates, not just the standard Cartesian coordinates, via the Euler-Lagrange equation (see problem set #1).
|
Three Complex Numbers Satisfy Fermat's Identity For Prime Powers Problem
Solution 1
Note that $x^6=y^6=2^6=z^6.$ Also, $x+y=2=z$ and $\displaystyle \frac{1}{x}+\frac{1}{y}=\frac{1}{2}=\frac{1}{z}.$
Now, all prime numbers are in the form $p=6m\pm 1.$ Assume $p=6m+1.$ Then
$\begin{align} x^{p}+y^{p}&=x\cdot x^{6m}+y\cdot y^{6m}=x\cdot (x^6)^m+y\cdot (y^6)^m\\ &=2^{6m}(x+y)=2^{6m+1}=2^p\\ &=z^p. \end{align}$
For $p=6m-1$ the derivation is practically the same:
$\displaystyle \begin{align} x^{p}+y^{p}&=\frac{1}{x}\cdot x^{6m}+\frac{1}{y}\cdot y^{6m}=\frac{1}{x}\cdot (x^6)^m+\frac{1}{y}\cdot (y^6)^m\\ &=2^{6m}(\frac{1}{x}+\frac{1}{y})=2^{6m}\cdot\frac{1}{2}=2^{6m-1}=2^p\\ &=z^p. \end{align}$
Solution 2
$\displaystyle \begin{align} f&=\left(\frac{x}{2}\right)^p+\left(\frac{y}{2}\right)^p=\left(\frac{1}{2}+i\frac{\sqrt{3}}{2}\right)^p+\left(\frac{1}{2}-i\frac{\sqrt{3}}{2}\right)^p\\ &=\exp\left(\frac{ip\pi}{3}\right)+\exp\left(\frac{-ip\pi}{3}\right)\\ &=2\cos\frac{p\pi}{3}. \end{align}$
Every prime greater than $3$ is of the form $6k+1$ or $6k-1$ where $k$ is a positive integer. Thus,
$\displaystyle f=2\cos\left(2k\pi\pm\frac{\pi}{3}\right)=2\cos\left(\frac{\pi}{3}\right)=1.$
Acknowledgment
The problem came from Ross Honsberger's Mathematical Morsels, (MAA, 1978, #20). Honsberger refers to problem E518 from AMM (1943), p. 63. Solved by J. Rosenbaum, CT.
Solution 2 is by Amit Itagi.
Remark
It must be noted that the above solutions work for all $6m\pm 1,$ $m\ge 1,$ not necessarily prime. Thus the imposition that $p$ needs to be a prime is a diversion and, therefore, a red herring.
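The remark can be checked numerically. Reading off $x = 1 + i\sqrt{3}$ and $y = 1 - i\sqrt{3}$ from Solution 2 (where $x/2 = 1/2 + i\sqrt{3}/2$), and $z = 2$:

```python
import math

# From Solution 2: x/2 = 1/2 + i*sqrt(3)/2, y is the conjugate, z = 2.
x = 1 + 1j * math.sqrt(3)
y = 1 - 1j * math.sqrt(3)
z = 2

# x^p + y^p = z^p for every p = 6m +/- 1, prime (5, 7, 11, 13) or not (25, 35):
for p in [5, 7, 11, 13, 25, 35]:
    assert abs(x ** p + y ** p - z ** p) < 1e-6 * z ** p
```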
|
Let $p(x)=x^4+ax^3+bx^2+cx+d$ where a,b,c,d are constants. If $p(1)=10$, $p(2)=20$, $p(3)=30$, compute $\frac {p(12)+p(-8)}{10}$. I have tried so far. \begin{align} a+b+c+d=&9\\8a+4b+2c+d=&4\\27a+9b+3c+d=&-51 \end{align} Manipulating these, I got $6a+b=-25$. Now, $$\frac {p(12)+p(-8)}{10}=\frac{24832+1216a+208b+4c+2d}{10}$$ $$=\frac{24832+202(6a+b)+(4a+4b+4c)+2b+2d}{10}$$ $$=\frac{19782+(36-4d)+2b+2d}{10}$$ $$=\frac{19818+2b-2d}{10}$$ How do I get rid of the $2b-2d$?
You have $p(x) = x^4 + ax^3 + bx^2 + cx + d$, and you're given that $a + b + c + d = 9$, $8a + 4b + 2c + d = 4$, and $27a + 9b + 3c + d = -51$
Now, $p(12) + p(-8) = 12^4 + 8^4 + (12^3 - 8^3) a + (12^2 + 8^2) b + (12 - 8) c + 2d = 24832 + 1216 a + 208 b + 4c + 2d = 24832 + 1216 a + 208 b + 2 (2c+d)$. You note that $2c + d = 4 - 8a - 4b$, and substitute that into the equation to get $p(12) + p(-8) = 24832 + 1216 a + 208 b + 8 - 16a - 8b = 24840 + 1200 a + 200b = 24840 + 200(6a + b).$
Plug in the $6a + b = -25$ and you get $p(12) + p(-8) = 24840 - 5000 = 19840$. Divide it by $10$ and you get $\displaystyle \frac{p(12) + p(-8)}{10} = 1984$.
Remember, $d + d \ne d$.
(2012's answer is correct, but the algebraic manipulations don't reveal what is happening. This answer explains why we can calculate the expression despite not having enough information to determine the polynomial.)
By the remainder factor theorem, since $p(x) - 10x$ is a monic quartic polynomial with roots 1, 2, and 3, hence
$$p(x) - 10x = ( x-1) (x-2) ( x-3) (x-k), $$
where $k$ is some constant.
Hence, $ p(12) - 120 = 11 \times 10 \times 9 \times (12-k)$ and $p(-8) - (-80) = (-9) \times (-10) \times (-11) \times (-8-k)$.
Note that the coefficients of $ (12-k) $ and $-(-8-k)$ are the same, namely $11\times 10 \times 9$, so we can add them up to get:
$$p(12) + p(-8) = 120 + (-80) + 11 \times 10 \times 9 \times (12+8) = 19840.$$
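Since the value should not depend on the unknown constant $k$, this is easy to verify numerically (a quick check of my own, not part of the original answer):

```python
# From the factorization above: p(x) = (x-1)(x-2)(x-3)(x-k) + 10x.
def make_p(k):
    return lambda x: (x - 1) * (x - 2) * (x - 3) * (x - k) + 10 * x

for k in [-3, 0, 2.5, 7]:
    p = make_p(k)
    # the given conditions hold for every k ...
    assert (p(1), p(2), p(3)) == (10, 20, 30)
    # ... and so does the answer
    assert abs((p(12) + p(-8)) / 10 - 1984) < 1e-9
```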
If you want a similar problem to practice what you learnt in this problem, try this math problem on Brilliant.
As you noticed, you have three equations for four unknowns $(a,b,c,d)$. Eliminate $a$, $b$ and $c$, expressing them as functions of $d$. Then compute your expression. By magic, $d$ disappears! Does this help?
|
The simplest antenna is a short (total length $l$ much smaller than one wavelength $\lambda$) dipole antenna, which is shown above as two colinear conductors (e.g., wires or conducting rods). Since they are driven at the small gap between them by a current source (a transmitter), the current in the bottom conductor is 180 deg out of phase with the current in the top conductor. The radiation from a dipole depends on frequency, so we consider a driving current $I$ varying sinusoidally with angular frequency $\omega = 2 \pi \nu$: $$I = I_0 \cos(\omega t)~,$$ where $I_0$ is the peak current going into each half of the dipole. It is computationally convenient to replace the trigonometric function $\cos(\omega t)$ with its exponential equivalent, the real part of
$$e^{-i \omega t} = \cos(\omega t) - i \sin(\omega t)$$ so the driving current can be rewritten as $$I = I_0 e^{-i \omega t}$$ with the implicit understanding that only the real part of $I$ represents this current. The driving current accelerates charges in the antenna conductors, so we can use Larmor's formula to calculate the radiation from the antenna by converting from the language of charges and accelerations to time-varying currents.
It is a common misconception to believe that the velocities $v$ of individual electrons in a wire are comparable with the speed of light $c$ because electrical signals do travel down wires at nearly the speed of light. A wire filled with electrons is like a garden hose already filled with an incompressible fluid—water. When the faucet is turned on, water flows from the other end of a full hose almost immediately, even though individual water molecules are moving slowly along the hose. As a specific example, consider a current of 1 ampere flowing through a copper wire of cross section $\sigma = 1$ mm$^2 = 10^{-6}$ m$^2$. The number density of free electrons is about equal to the number density of copper atoms in the wire, $n \approx 10^{29}$ m$^{-3}$. In mks units, the charge of an electron is
$$e \approx 4.80 \times 10^{-10} {\rm ~statcoul} \times {1 {\rm ~coul} \over 3 \times 10^9 {\rm ~statcoul}} \approx 1.60 \times 10^{-19} {\rm ~coul}$$ One ampere is one coulomb per second, so the number $N$ of electrons flowing past any point along the wire in one second is $$N = {I \over \vert e \vert} \approx {1 {\rm ~coul~s}^{-1} \over 1.60 \times 10^{-19}{\rm ~coul}} \approx 6.25 \times 10^{18}{\rm ~s}^{-1}$$ The average electron velocity is only $$ v \approx {N \over \sigma n} \approx { 6.25 \times 10^{18}{\rm ~s}^{-1} \over 10^{-6}{\rm ~m}^2 \times 10^{29}{\rm ~m}^{-3}} \approx 6 \times 10^{-5} {\rm ~m~s}^{-1} \ll c$$ Thus the nonrelativistic Larmor equation may be used directly to calculate the radiation from a wire.
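The same estimate in a few lines of code (a restatement of the arithmetic above, added for illustration):

```python
# Drift-velocity estimate: 1 A through a 1 mm^2 wire with ~1e29 free
# electrons per cubic meter, as in the worked example above.
I = 1.0            # current, A (= C/s)
e = 1.60e-19       # magnitude of the electron charge, C
sigma = 1e-6       # wire cross-section, m^2
n = 1e29           # free-electron number density, m^-3

N = I / e                # electrons passing a point per second
v = N / (sigma * n)      # mean drift velocity, m/s

assert 6e-5 < v < 7e-5   # ~6e-5 m/s, vastly less than c
```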
Most practical dipoles are half-wave dipoles ($l \approx \lambda/2$) because half-wave dipoles are resonant, meaning that they provide a nearly resistive load to the transmitter. When each half of the dipole is $\lambda/4$ long, the standing-wave current is highest at the center and naturally falls as $\cos(2 \pi z / \lambda)$ to almost zero at the ends of the conductors.
|
There are these notes about Gaussian quadrature, and I am trying to understand what the sentence "is exact for all polynomials of degree up to $2n+1$" actually means.
Gaussian Quadrature - General $n$:
Given an interval $[a,b]$ and a natural number $n$, we want to find constants $A_i$ and $x_i\in[a,b]$ such that the approximation $$\int_a^bf(x)dx\approx\sum_{i=0}^nA_if(x_i)$$
is exact for all polynomials of degree up to $2n+1$.
My doubts:
Does the sentence mean that for every $f(x)$ of degree between $0$ and $2n+1$ we have $\int_a^bf(x)dx=\sum_{i=0}^nA_if(x_i)$? What does the word "exact" mean here? And what does "for all polynomials" refer to?
Thanks for the help!
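To make "exact" concrete, here is a small self-contained illustration (my own, using the classical two-point Gauss–Legendre rule on $[-1,1]$, i.e. $n = 1$): the sum reproduces the integral of every polynomial of degree $\le 2n+1 = 3$ with equality, not merely approximately, but fails already at degree $4$.

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1] (n = 1 in the notation above):
# nodes x_0, x_1 = -+ 1/sqrt(3), weights A_0 = A_1 = 1.
nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]
weights = [1.0, 1.0]

def quad(f):
    return sum(A * f(x) for A, x in zip(weights, nodes))

# "Exact" means equality for every polynomial of degree 0, 1, ..., 2n+1 = 3:
for k in range(4):
    exact = (1 - (-1) ** (k + 1)) / (k + 1)   # integral of x^k over [-1, 1]
    assert abs(quad(lambda x: x ** k) - exact) < 1e-12

# ...but not beyond: for x^4 the rule gives 2/9 while the integral is 2/5.
assert abs(quad(lambda x: x ** 4) - 2 / 9) < 1e-12
```

Equality here holds up to floating-point roundoff; algebraically it is an identity, which is what the notes mean by "exact".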
|
Problem:
Let $M$ be a Riemannian manifold. Consider the function $f: M \rightarrow \mathbb{R}$ where $f(x)=\text{dist}_M^2(p,x)$, and $p \in M$ is fixed. Show that $\text{grad}(f)=-2\exp^{-1}_x(p)$ as vectors in $T_xM$. (Assuming that $\exp^{-1}$ exists and is smooth etc.)
My attempt at a proof:
We must show that $\langle -2\exp^{-1}_x(p), \cdot \rangle_x = df(\cdot)$ as 1-forms at $x$.
Let $e_1,\dots,e_n$ be an orthonormal basis of $T_pM$ and introduce normal coordinates at $p$: $x=(x^1,\dots,x^n) \leftrightarrow x=\exp_p(x^i e_i)$. If $q=\exp_p(v)$ then $\text{dist}_M (p,q)=||v||_p$ and so $f(x)=||x^i e_i||^2_p=\sum (x^i)^2$, where the second equality follows from the fact that the metric is the identity at $p$ (in these coordinates).
So $df=\sum 2x^i dx^i$. $(*)$
On the other hand, the geodesic "from x to p" is the same as the geodesic "from p to x", except that the direction is reversed. It should therefore be true that $p=\exp_x(-S_{p \rightarrow x}x^i e_i )=\exp_x(-x^i S_{p \rightarrow x} e_i)$ where $S_{p \rightarrow x}$ is the parallel transport from $p$ to $x$. Hence $\exp^{-1}_x(p)=-\sum x^i S_{p \rightarrow x} e_i $. Hence $\langle -2\exp^{-1}_x(p), \cdot \rangle_x=\langle 2x^i S_{p \rightarrow x}e_i, \cdot \rangle = 2x^i \langle S_{p \rightarrow x} e_i, \cdot \rangle = 2x^i dx_i$ which gives the result on comparison with $(*)$. $\square$
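As a sanity check of the formula itself (my own addition, separate from the Riemannian proof): in flat $\mathbb{R}^n$ one has $\exp_x^{-1}(p) = p - x$, so the claim reduces to $\text{grad}\,f(x) = -2(p-x) = 2(x-p)$, which a finite-difference computation confirms.

```python
# Flat R^2 sanity check: f(x) = dist(p, x)^2 = |p - x|^2 and
# exp_x^{-1}(p) = p - x, so grad f(x) should equal -2 * (p - x).
p = (0.7, -0.3)
x = (1.5, 2.0)

def f(q):
    return (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2

h = 1e-6
grad = [(f((x[0] + h, x[1])) - f((x[0] - h, x[1]))) / (2 * h),
        (f((x[0], x[1] + h)) - f((x[0], x[1] - h))) / (2 * h)]
exp_inv = [p[0] - x[0], p[1] - x[1]]      # exp_x^{-1}(p) in flat space

for g, e in zip(grad, exp_inv):
    assert abs(g - (-2) * e) < 1e-5
```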
My question:
When writing that out, I felt like I was manipulating symbols without really understanding what they mean. For example, when I write $x^i$ I'm not sure whether I mean the coordinate function $x^i$ or its particular value at the point $x$. I was also concerned that when I talk about parallel transport I should really have been talking about the derivative of $\exp$. There's a lemma in Do Carmo which states $\langle (D\exp_p)_v(v),D\exp_p)_v(w)\rangle=\langle v,w \rangle$ for $v \in T_pM$ and $w \in T_v(T_pM) \simeq T_pM$. I can see this might be useful, though I'm not sure precisely how.
I would appreciate it very much if you checked to see if this proof is correct and "expand it out", particularly the second half) including more details to make clear what is going on.
|
I want to find Euler-Lagrange equation for the following:
$$J(u) = \int \left( \frac{\psi(x) u + \dot{u}}{\psi(x)u - \dot{u}} \right)dx, \text{where} \ \psi(x) \ \text{is an explicit function of} \ x.$$
First, I have made the following substitution:
$$y = \frac{\dot{u}}{u} \implies \int \left( \frac{\psi(x) u + \dot{u}}{\psi(x)u - \dot{u}} \right)dx = \int \left( \frac{\psi(x) + y}{\psi(x) - y} \right)dx$$
This substitution should reduce the Euler-Lagrange equation to a first-order differential equation.
I know that the Euler-Lagrange equation, in general, looks like this:
$$\frac{d}{dt}(f_{\dot{x}}) - f_x = 0$$
Should the Euler-Lagrange equation for this particular functional look like this:
$$f(\psi(x), y, x) = \frac{\psi(x) + y}{\psi(x) - y} \implies \frac{d}{dx}(f_{\psi(x)}) - f_y = 0$$
|
I will start with examples in order to more easily explain what I'm after.
The first example shows how I would like four equations arranged. The vertical alignment of the equal signs in each row is perfect, of course, because each is a single line in an align structure. But, not all four equations can be numbered. The other down side is lack of control of the horizontal spread without using a parbox.
The second example shows how I would like them numbered, but sacrifices the vertical alignment of the equal signs in each row, because they each have different vertical heights.
The third example is almost perfect. By swapping two of the equations and aligning the parboxes by [b], in this arrangement, the heights match better, but up close you can see the second row is very slightly off. It's not bad and it's the current solution I'm using.
In the second and third examples, the horizontal spread is exactly how I want it, being controlled by the parbox widths.
Here's the code:
\documentclass[10pt,a5paper]{book}
\usepackage[paperheight=9.5cm,paperwidth=13cm, margin=6mm]{geometry}
\usepackage{amsmath}
\usepackage{mathtools}
\newcommand{\pd}{\partial}
\newcommand{\gep}{\epsilon}
\newcommand{\gm}{\mu}
\newcommand{\gr}{\rho}
\thispagestyle{empty}
\begin{document}
Example 1
\begin{subequations}
\begin{align}
  \vec{\nabla}\times \vec{E} +\frac{\pd \vec{B}}{\pd t} & = 0 & \vec{\nabla}\cdot\vec{B} & = 0 \label{eqn:hB} \\
  \vec{\nabla}\cdot\vec{E} &= \frac{\gr}{\gep_0} & \vec{\nabla}\times \vec{B} - \gm_0 \gep_0 \frac{\pd \vec{E}}{\pd t} &= \gm_0 \vec{j}\label{eqn:ihB}
\end{align}
\end{subequations}
Example 2
\begin{subequations}
\parbox[c]{\textwidth*7/16}{%
\begin{align}
  \vec{\nabla}\times \vec{E} +\frac{\pd \vec{B}}{\pd t} & = 0 \\
  \vec{\nabla}\cdot\vec{B} & = 0 \label{eqn:hB}
\end{align}}%
\parbox[c]{\textwidth*7/16}{%
\begin{align}
  \vec{\nabla}\cdot\vec{E} &= \frac{\gr}{\gep_0} \\
  \vec{\nabla}\times \vec{B} - \gm_0 \gep_0 \frac{\pd \vec{E}}{\pd t} &= \gm_0 \vec{j}\label{eqn:ihB}
\end{align}}
\end{subequations}
Example 3
\begin{subequations}
\parbox[b]{\textwidth*7/16}{%
\begin{align}
  \vec{\nabla}\times \vec{E} +\frac{\pd \vec{B}}{\pd t} & = 0 \\
  \vec{\nabla}\cdot\vec{E} &= \frac{\gr}{\gep_0}
\end{align}}%
\parbox[b]{\textwidth*7/16}{%
\begin{align}
  \vec{\nabla}\cdot\vec{B} & = 0 \label{eqn:hB} \\
  \vec{\nabla}\times \vec{B} - \gm_0 \gep_0 \frac{\pd \vec{E}}{\pd t} &= \gm_0 \vec{j}\label{eqn:ihB}
\end{align}}
\end{subequations}
\end{document}
The AMS align stuff does horizontal alignment of multiple equations above each other in a column well, it also does vertical alignment of multiple equations beside each other in a row, but how does one get multiple equation numbers on one line?
Is there a solution that gives full control of all three aspects simultaneously: multiple equation numbers on one line, horizontal alignment and vertical alignment?
Addendum
As there is more than one answer already, I'm adding some notes here, so as not to have to repeat myself in comments.
I will use phantoms and other structural tweaks when nothing else will do and that happens more than occasionally, unfortunately. If the answer to my question is that tweaks are the only solution, then so be it. But it seems to me that creating an align structure that allows having more than one equation number horizontally would solve all of this with zero tweaks. Is anyone from AMS development listening?
I appreciate all the suggestions below, because in lieu of having a built in solution, tweaks is all I have left and it's good to see how others solve this stuff.
An extra note on the semantics of vertical and horizontal alignment: I have at least two software programs that use these alignment terms to refer to the direction in which you move the elements to obtain alignment. Hence, vertical alignment takes elements in a row and moves them up or down to line them up and horizontal alignment takes elements in a column and moves them left or right to line them up. It also makes sense to me that the basic direction of the elements themselves could be used to define these terms, that is, above each other in a vertical line and beside each other in a horizontal line, giving the opposite definition to the above. The former definition refers to the method, the latter refers to the result.
|
This is related to the question Hall-Littlewood functions and functions on the nilpotent cone, and arises in the construction of Coulomb branches of gauge theories. The motivation is explained at the bottom.
Let us prepare some notation. Let $\lambda$ be a dominant coweight of $GL(N)$, i.e., a tuple of (not necessarily positive) integers $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N$. We set $|\lambda| = \lambda_1+\dots+\lambda_N$, and define $b_\lambda(t)$ (denoted by $P_{U(N)}(t,\lambda)$ in 1403.0585) by $$ \prod_{i\in\mathbb Z} \varphi_{m_i(\lambda)}(t), \qquad \varphi_r(t) = (1-t)(1-t^2)\cdots (1-t^r), $$ where $m_i(\lambda)$ denotes the number of times $i$ occurs as a part of $\lambda$. This is the inverse of the Poincare polynomial of the classifying space of the stabilizer of $\lambda$ in $GL(N)$ in view of 1601.03586.
Let $\lambda^1$, $\lambda^2$, $\dots$, $\lambda^{N-1}$ be dominant coweights of $GL(N-1)$, $GL(N-2)$, $\dots$, $GL(1)$. We define $ \Delta(\lambda,\lambda^1,\dots,\lambda^{N-1}) $ by $$ \frac12\left( \sum_{j=1}^{N-1} \sum_{i,i'} |\lambda^{j-1}_i - \lambda^{j}_{i'}| \right) - \sum_{j=1}^{N-1} \sum_{i < i'} |\lambda^j_i - \lambda^j_{i'}|, $$ where $\lambda^0 = \lambda$. We fix $\lambda$ and consider $$ H[T_{(1^N)}(SU(N))](t,x_1,\dots,x_N,\lambda) := x_1^{|\lambda|} \sum_{\lambda^1,\lambda^2,\dots,\lambda^{N-1}} t^{\Delta(\lambda,\lambda^1,\dots,\lambda^{N-1})} \times \prod_{j=1}^{N-1} \left(\frac{x_{j+1}}{x_j}\right)^{|\lambda^j|} \frac1{b_{\lambda^j}(t)}. $$ This is the monopole formula in 1403.0585 for the special case $\rho=(1^N)$. Now 1403.0585 claims that $$ H[T_{(1^N)}(SU(N))](t,x_1,\dots,x_N,\lambda) = t^{\frac{(N-1)|\lambda|}2 - n(\lambda)} R_\lambda(x_1,\dots,x_N;t) \prod_{i\neq i'} \frac1{1 - x_i^{-1}x_{i'} t} $$ where $$ n(\lambda) = \sum (i-1)\lambda_i, $$ $$ R_\lambda(x_1,\dots,x_N;t) = \sum_{w\in S_N} w\left( x_1^{\lambda_1}\cdots x_N^{\lambda_N} \prod_{i<i'} \frac{x_i - tx_{i'}}{x_i-x_{i'}}\right). $$ As far as I understand, this is checked numerically in many cases, but no rigorous proof is given.
Together with Braverman and Finkelberg, I am trying to give a geometric proof of this result, but we are wondering whether it is already known, or has a purely combinatorial proof.
Here is the motivation. In 1503.03676 (see also 1601.03586), I conjectured that the space of sections of a line bundle over the cotangent bundle of the flag variety for $SL(N)$ is realized by the Borel-Moore homology of a certain space, whose Poincare polynomial has a combinatorial expression, known as a monopole formula by Cremonesi, et al. Since the former is given by Hall-Littlewood polynomials, we arrive at a combinatorial expression of Hall-Littlewood polynomials as above.
|
The beta of a plasma, symbolized by β, is the ratio of the plasma pressure (p = n k_B T) to the magnetic pressure (p_mag = B²/(2μ₀)). The term is commonly used in studies of the Sun and Earth's magnetic field, and in the field of fusion power designs.
In the fusion power field, plasma is often confined using large superconducting magnets that are very expensive. Since the temperature of the fuel scales with pressure, reactors attempt to reach the highest pressures possible. The cost of large magnets roughly scales like β^½. Therefore beta can be thought of as a ratio of money out to money in for a reactor, and beta can be thought of (very approximately) as an economic indicator of reactor efficiency. To make an economically useful reactor, betas better than 5% are needed.
The same term is also used when discussing the interactions of the solar wind with various magnetic fields. For example, the beta in the corona of the Sun is about 1%.
Background
Fusion basics
Nuclear fusion occurs when the nuclei of two atoms approach closely enough for the nuclear force to pull them together into a single larger nucleus. The strong force is opposed by the electrostatic force created by the positive charge of the nuclei's protons, pushing the nuclei apart. The amount of energy that is needed to overcome this repulsion is known as the Coulomb barrier. The amount of energy released by the fusion reaction when it occurs may be greater or less than the Coulomb barrier. Generally, lighter nuclei with a smaller number of protons and greater number of neutrons will have the greatest ratio of energy released to energy required, and the majority of fusion power research focusses on the use of deuterium and tritium, two isotopes of hydrogen.
Even using these isotopes, the Coulomb barrier is large enough that the nuclei must be given great amounts of energy before they will fuse. Although there are a number of ways to do this, the simplest is to simply heat the gas mixture, which, according to the Maxwell–Boltzmann distribution, will result in a small number of particles with the required energy even when the gas as a whole is relatively "cool" compared to the Coulomb barrier energy. In the case of the D-T mixture, rapid fusion will occur when the gas is heated to about 100 million degrees. [1]
Confinement
This temperature is well beyond the physical limits of any material container that might contain the gasses, which has led to a number of different approaches to solving this problem. The main approach relies on the nature of the fuel at high temperatures. When the fusion fuel gasses are heated to the temperatures required for rapid fusion, they will be completely ionized into a plasma, a mixture of electrons and nuclei forming a globally neutral gas. As the particles within the gas are charged, this allows them to be manipulated by electric or magnetic fields. This gives rise to the majority of controlled fusion concepts.
Even if this temperature is reached, the gas will be constantly losing energy to its surroundings (cooling off). This gives rise to the concept of the "confinement time", the amount of time the plasma is maintained at the required temperature. However, the fusion reactions might deposit their energy back into the plasma, heating it back up, which is a function of the density of the plasma. These considerations are combined in the Lawson criterion, or its modern form, the fusion triple product. In order to be efficient, the rate of fusion energy being deposited into the reactor would ideally be greater than the rate of loss to the surroundings, a condition known as "ignition".
Magnetic confinement fusion approach
In magnetic confinement fusion (MCF) reactor designs, the plasma is confined within a vacuum chamber using a series of magnetic fields. These fields are normally created using a combination of electromagnets and electrical currents running through the plasma itself. Systems using only magnets are generally built using the stellarator approach, while those using current only are the pinch machines. The most studied approach since the 1970s is the tokamak, where the fields generated by the external magnets and internal current are roughly equal in magnitude.
In all of these machines, the density of the particles in the plasma is very low, often described as a "poor vacuum". This limits its approach to the triple product along the temperature and time axis. This requires magnetic fields on the order of tens of teslas, currents in the megaampere range, and confinement times on the order of tens of seconds. [2]
Generating currents of this magnitude is relatively simple, and a number of devices from large banks of capacitors to homopolar generators have been used. However, generating the required magnetic fields is another issue, generally requiring expensive superconducting magnets. For any given reactor design, the cost is generally dominated by the cost of the magnets.
Beta
Given that the magnets are a dominant factor in reactor design, and that density and temperature combine to produce pressure, the ratio of the pressure of the plasma to the magnetic energy density naturally becomes a useful figure of merit when comparing MCF designs. In effect, the ratio illustrates how effectively a design confines its plasma. This ratio, beta, is widely used in the fusion field:
\beta = \frac{p}{p_{mag}} = \frac{n k_B T}{B^2/(2\mu_0)}
[3]
\beta is normally measured in terms of the total magnetic field. However, in any real-world design, the strength of the field varies over the volume of the plasma, so to be specific, the average beta is sometimes referred to as the "beta toroidal". In the tokamak design the total field is a combination of the external toroidal field and the current-induced poloidal one, so the "beta poloidal" is sometimes used to compare the relative strengths of these fields. And as the external magnetic field is the driver of reactor cost, "beta external" is used to consider just this contribution.
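For a sense of the magnitudes, here is a back-of-the-envelope evaluation of the definition (the plasma parameters are illustrative round numbers chosen for this sketch, not those of any particular machine):

```python
import math

mu0 = 4e-7 * math.pi        # vacuum permeability, H/m
kB = 1.380649e-23           # Boltzmann constant, J/K

# Illustrative round numbers, not any specific device:
n = 1e20                    # particle number density, m^-3
T = 1e8                     # plasma temperature, K
B = 5.0                     # magnetic field, T

p = n * kB * T              # plasma pressure, Pa
p_mag = B ** 2 / (2 * mu0)  # magnetic pressure, Pa
beta = p / p_mag

assert 0.01 < beta < 0.02   # about 1.4% for these numbers
```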
Troyon beta limit
For a stable plasma, \beta is always smaller than 1 (otherwise it would collapse). [4] Ideally, a MCF device would want to approach this limit as closely as possible, as this would imply the minimum amount of magnetic force needed for confinement. In practice, it is difficult to come even close to this, and production machines generally operate at betas around 0.1, or 10%. The record was set by the START device at 0.4, or 40%. [5]
These low achievable betas are due to instabilities in the plasma generated through the interaction of the fields and the motion of the particles due to the induced current. As the amount of current is increased in relation to the external field, these instabilities become uncontrollable. In early pinch experiments the current dominated the field components and the kink and sausage instabilities were common, today collectively referred to as "low-n instabilities". As the relative strength of the external magnetic field is increased, these simple instabilities are damped out, but at a critical field other "high-n instabilities" will invariably appear, notably the ballooning mode. For any given reactor design, there is a limit to the beta it can sustain. As beta is a measure of economic merit, a practical reactor must be able to sustain a beta above some critical value, which is calculated to be around 5%. [6]
Through the 1980s the understanding of the high-n instabilities grew considerably. Shafranov and Yurchenko first published on the issue in 1971 in a general discussion of tokamak design, but it was the work by Wesson and Sykes in 1983 [7] and Francis Troyon in 1984 [8] that developed these concepts fully. Troyon's considerations, or the "Troyon limit", closely matched the real-world performance of existing machines. It has since become so widely used that it is often known simply as the beta limit.
The Troyon limit is given as:
\beta_{max} = \frac{\beta_N I}{a B_0}
[9]
where I is the plasma current, B_0 is the external magnetic field, and a is the minor radius of the tokamak (see torus for an explanation of the directions). \beta_N was determined numerically, and is normally given as 0.028 if I is measured in megaamperes. However, it is also common to use 2.8 if \beta_{max} is expressed as a percentage. [9]
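Plugging representative numbers into the Troyon formula (the current, radius and field below are made-up, roughly ITER-scale values used purely for illustration):

```python
beta_N = 0.028   # Troyon coefficient, with I in megaamperes
I = 15.0         # plasma current, MA (illustrative)
a = 2.0          # minor radius, m (illustrative)
B0 = 5.3         # external toroidal field, T (illustrative)

beta_max = beta_N * I / (a * B0)

assert 0.03 < beta_max < 0.05   # about 4% for these numbers
```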
Given that the Troyon limit suggested a \beta_{max} around 2.5 to 4%, and a practical reactor had to have a \beta_{max} around 5%, the Troyon limit was a serious concern when it was introduced. However, it was found that \beta_N changed dramatically with the shape of the plasma, and non-circular systems would have much better performance. Experiments on the DIII-D machine (the second D referring to the cross-sectional shape of the plasma) demonstrated higher performance, [10] and the spherical tokamak design outperformed the Troyon limit by about 10 times. [11]
Astrophysics
Beta is also sometimes used when discussing the interaction of plasma in space with different magnetic fields. A common example is the interaction of the solar wind with the magnetic fields of the Sun [12] or Earth. [13] In this case, the betas of these natural phenomena are generally much smaller than those seen in reactor designs; the Sun's corona has a beta around 1%. [12] Active regions have much higher beta, over 1 in some cases, which makes the area unstable. [14]
Notes
1. Bromberg, pg. 18
2. "Conditions for a fusion reaction", JET
3. Wesson, J: "Tokamaks", 3rd edition, page 115, Oxford University Press, 2004
4. Kenrō Miyamoto, "Plasma Physics and Controlled Nuclear Fusion", Springer, 2005, pg. 62
5. Alan Sykes, "The Development of the Spherical Tokamak", ICPP, Fukuoka, September 2008
6. "Scientific Progress in Magnetic Fusion, ITER, and the Fusion Development Path", SLAC Colloquium, 21 April 2003, pg. 17
7. Alan Sykes et al., Proceedings of the 11th European Conference on Controlled Fusion and Plasma Physics, 1983, pg. 363
8. F. Troyon et al., Plasma Physics and Controlled Fusion, Volume 26, pg. 209
9. Friedberg, pg. 397
10. T. Taylor, "Experimental Achievement of Toroidal Beta Beyond That Predicted by 'Troyon' Scaling", General Atomics, September 1994
11. Sykes, pg. 29
12. Alan Hood, "The Plasma Beta", Magnetohydrostatic Equilibria, 11 January 2000
13. G. Haerendel et al., "High-beta plasma blobs in the morningside plasma sheet", Annales Geophysicae, Volume 17, Number 12, pg. 1592-1601
14. G. Allan Gary, "Plasma Beta Above a Solar Active Region: Rethinking the Paradigm", Solar Physics, Volume 203 (2001), pg. 71–86
Bibliography
Joan Lisa Bromberg, "Fusion: Science, Politics, and the Invention of a New Energy Source", MIT Press, 1982
Jeffrey Freidberg, "Plasma Physics and Fusion Energy", Cambridge University Press, 2007
|
It’s been a while, so let’s include a recap : a (transitive) permutation representation of the modular group $\Gamma = PSL_2(\mathbb{Z}) $ is determined by the conjugacy class of a cofinite subgroup $\Lambda \subset \Gamma $, or equivalently, to a dessin d’enfant. We have introduced a
quiver (aka an oriented graph) which comes from a triangulation of the compactification of $\mathbb{H} / \Lambda $ where $\mathbb{H} $ is the hyperbolic upper half-plane. This quiver is independent of the chosen embedding of the dessin in the Dedekind tessellation. (For more on these terms and constructions, please consult the series Modular subgroups and Dessins d’enfants).
Why are quivers useful? To start, any quiver $Q $ defines a noncommutative algebra, the
path algebra $\mathbb{C} Q $, which has as a $\mathbb{C} $-basis all oriented paths in the quiver and multiplication is induced by concatenation of paths (when possible, or zero otherwise). Usually, it is quite hard to make actual computations in noncommutative algebras, but in the case of path algebras you can just see what happens.
Moreover, we can also
see the finite dimensional representations of this algebra $\mathbb{C} Q $. Up to isomorphism they are all of the following form : at each vertex $v_i $ of the quiver one places a finite dimensional vector space $\mathbb{C}^{d_i} $ and any arrow in the quiver [tex]\xymatrix{\vtx{v_i} \ar[r]^a & \vtx{v_j}}[/tex] determines a linear map between these vertex spaces, that is, to $a $ corresponds a matrix in $M_{d_j \times d_i}(\mathbb{C}) $. These matrices determine how the paths of length one act on the representation; longer paths act via multiplication of matrices along the oriented path.
A
necklace in the quiver is a closed oriented path in the quiver up to cyclic permutation of the arrows making up the cycle. That is, we are free to choose the start (and end) point of the cycle. For example, in the one-cycle quiver
[tex]\xymatrix{\vtx{} \ar[rr]^a & & \vtx{} \ar[ld]^b \\ & \vtx{} \ar[lu]^c &}[/tex]
the basic necklace can be represented as $abc $ or $bca $ or $cab $. How does a necklace act on a representation? Well, the matrix-multiplication of the matrices corresponding to the arrows gives a square matrix in each of the vertices in the cycle. Though the dimensions of this matrix may vary from vertex to vertex, what does not change (and hence is a property of the necklace rather than of the particular choice of cycle) is the
trace of this matrix. That is, necklaces give complex-valued functions on representations of $\mathbb{C} Q $ and by a result of Artin and Procesi there are enough of them to distinguish isoclasses of (semi)simple representations! That is, linear combinations of necklaces (aka super-potentials) can be viewed, after taking traces, as complex-valued functions on all representations (similar to character-functions).
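A quick numerical illustration (a sketch with randomly chosen arrow matrices, not part of the original series): for the one-cycle quiver above, the three cyclic readings $abc$, $bca$, $cab$ of the necklace give square matrices of different sizes at the three vertices, but the same trace.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, d3 = 2, 3, 4                 # dimension vector of the representation
Ma = rng.standard_normal((d2, d1))   # arrow a : v1 -> v2
Mb = rng.standard_normal((d3, d2))   # arrow b : v2 -> v3
Mc = rng.standard_normal((d1, d3))   # arrow c : v3 -> v1

# Each cyclic reading composes the arrow maps around the cycle,
# landing in a (differently sized) square matrix at the start vertex.
t_abc = np.trace(Mc @ Mb @ Ma)       # 2x2 matrix at v1
t_bca = np.trace(Ma @ Mc @ Mb)       # 3x3 matrix at v2
t_cab = np.trace(Mb @ Ma @ Mc)       # 4x4 matrix at v3
assert np.allclose([t_abc, t_bca], t_cab)
```

The equality is just cyclicity of the trace, which is why the trace is a property of the necklace rather than of a particular choice of starting point.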
In physics, one views these functions as potentials and one is then interested in the points (representations) where this function is extremal (minimal) : the
vacua. Clearly, this does not make much sense in the complex-case but is relevant when we look at the real-case (where we look at skew-Hermitian matrices rather than all matrices). A motivating example (the Yang-Mills potential) is given in Example 2.3.2 of Victor Ginzburg’s paper Calabi-Yau algebras.
Let $\Phi $ be a super-potential (again, a linear combination of necklaces) then our commutative intuition tells us that extrema correspond to zeroes of all partial differentials $\frac{\partial \Phi}{\partial a} $ where $a $ runs over all coordinates (in our case, the arrows of the quiver). One can make sense of differentials of necklaces (and super-potentials) as follows : the partial differential with respect to an arrow $a $ occurring in a term of $\Phi $ is defined to be the
path in the quiver one obtains by removing, one at a time, each occurrence of $a $ in the necklaces (defining $\Phi $) and rearranging terms to get a maximal broken necklace (using the cyclic property of necklaces). An example : for the cyclic quiver above let us take as super-potential $abcabc $ (2 cyclic turns); then, for example,
$\frac{\partial \Phi}{\partial b} = cabca+cabca = 2 cabca $
(the first term corresponds to the first occurrence of $b $, the second to the second). Okay, but then the vacua-representations will be the representations of the quotient-algebra (which I like to call the
vacualgebra)
$\mathcal{U}(Q,\Phi) = \frac{\mathbb{C} Q}{(\partial \Phi/\partial a, \forall a)} $
which in ‘physical relevant settings’ (whatever that means…) turn out to be
Calabi-Yau algebras.
But, let us return to the case of subgroups of the modular group and their quivers. Do we have a natural
super-potential in this case? Well yes, the quiver encoded a triangulation of the compactification of $\mathbb{H}/\Lambda $ and if we choose an orientation it turns out that all ‘black’ triangles (with respect to the Dedekind tessellation) have their arrow-sides defining a necklace, whereas for the ‘white’ triangles the reverse orientation makes the arrow-sides into a necklace. Hence, it makes sense to look at the cubic superpotential $\Phi $ being the sum over all triangle-sides-necklaces with a +1-coefficient for the black triangles and a -1-coefficient for the white ones. Let’s consider an index three example from a previous post [tex]\xymatrix{& & \rho \ar[lld]_d \ar[ld]^f \ar[rd]^e & \\ i \ar[rrd]_a & i+1 \ar[rd]^b & & \omega \ar[ld]^c \\ & & 0 \ar[uu]^h \ar@/^/[uu]^g \ar@/_/[uu]_i &}[/tex]
In this case the super-potential coming from the triangulation is
$\Phi = -aid+agd-cge+che-bhf+bif $
and therefore we have a noncommutative algebra $\mathcal{U}(Q,\Phi) $ associated to this index 3 subgroup. Contrary to what I believed at the start of this series, the algebras one obtains in this way from dessins d’enfants are
far from being Calabi-Yau (in whatever definition). For example, using a GAP-program written by Raf Bocklandt I’ve checked that the growth rate of the above algebra is similar to that of $\mathbb{C}[x] $, so in this case $\mathcal{U}(Q,\Phi) $ can be viewed as a noncommutative curve (with singularities).
However, this is not the case for all such algebras. For example, the vacualgebra associated to the second index three subgroup (whose fundamental domain and quiver were depicted at the end of this post) has growth rate similar to that of $\mathbb{C} \langle x,y \rangle $…
I have an outlandish conjecture about the growth-behavior of all algebras $\mathcal{U}(Q,\Phi) $ coming from dessins d’enfants :
the algebra sees what the monodromy representation of the dessin sees of the modular group (or of the third braid group). I can make this more precise, but perhaps it is wiser to calculate one or two further examples…
|
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
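A one-line numeric check (illustrative values only): a path difference of a quarter wavelength corresponds to a phase difference of $\pi/2$, via $\Delta\phi = k\,\Delta x$.

```python
import math

wavelength = 500e-9               # example value: 500 nm light
k = 2 * math.pi / wavelength      # wavenumber
path_difference = 125e-9          # a quarter wavelength
phase_difference = k * path_difference
assert math.isclose(phase_difference, math.pi / 2)
```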
|
Two Equilateral Triangles on Sides of a Square Problem
Solution 1
Consider the counterclockwise rotation $r$ around $C$ through $60^{\circ}.$ $r(E)=D,$ $r(F)=B,$ and let $r(A)=A'.$ Then $\Delta AA'C$ is equilateral. In particular, $AA'=CA'.$ It follows that $AA'CD$ is a kite so that $A'D$ is the perpendicular bisector of $AC.$ But so is $BD,$ making $A',B,D$ collinear.
Rotations conserve collinearity, so that $A=r^{-1}(A'),$ $F=r^{-1}(B),$ and $E=r^{-1}(D)$ are collinear.
Illustration to Solution 1
Solution 2
$\displaystyle \frac{1}{2}\div\frac{1-\sqrt{3}}{2}=\frac{1+\sqrt{3}}{2}\div\frac{1}{2}$ with $K$ midpoint of $AD$ and $I$ extension of $AD$ such that $\angle AIE=90^{\circ}.$ Thus $\displaystyle \frac{AK}{KF}=\frac{AI}{IE}.$
Therefore $A,F,E$ are collinear.
Solution 3
The angles of $\Delta AFB$ are $75-75-30,$ so $\angle AFB=75^{\circ}.$ Also $\angle ADE=15^{\circ}$ and $|AD|=|DE|,$ so that $\angle DEA = 15^{\circ},$ making $\angle AEC = 45^{\circ}$ and, in turn, $\angle CFE = 45^{\circ}.$ $\angle AFB + \angle BFC + \angle CFE = 180^{\circ},$ therefore $A,$ $F,$ and $E$ are collinear.
Solution 4
Assume $+X$ axis along $AB$ and $+Y$ axis along $AD$. WLOG, let the side of the square be of unit length. Let $s=\sin\angle FBC$ and $c=\cos\angle FBC$.
$\begin{align} \vec{AF}&=(1-s)\hat{x}+c\hat{y} , \\ \vec{AE}&=c\hat{x}+(1+s)\hat{y}, \\ |\vec{AF}\times\vec{AE}|&=|(1-s)(1+s)-c^2|=|1-(s^2+c^2)|=0. \end{align}$
Note, I have not used $60^{\circ}$ anywhere. Thus, this proof can be generalized to the following case:
$BF=DE=AB$ and $\angle FBC=\angle EDC$.
The equilateral triangle condition is not necessary.
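As a sketch, the cross-product computation of Solution 4 can be checked numerically; angles other than $60^{\circ}$ illustrate the generalization just noted.

```python
import math

def collinearity_cross(theta):
    # theta = angle FBC = angle EDC; side of the square taken as 1
    s, c = math.sin(theta), math.cos(theta)
    AF = (1 - s, c)
    AE = (c, 1 + s)
    # z-component of the cross product AF x AE; zero iff A, F, E collinear
    return AF[0] * AE[1] - AF[1] * AE[0]

# zero for the equilateral case (60 degrees) and indeed for any angle,
# since (1 - s)(1 + s) - c^2 = 1 - s^2 - c^2 = 0
for deg in (60, 10, 45, 80):
    assert abs(collinearity_cross(math.radians(deg))) < 1e-12
```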
Solution 5
Solution 6
(Solved with rational trigonometry.)
Let the quadrance of $AB = Q.$ Then the spread between $BF$ and $BC$ is $\displaystyle \frac{3}{4}$ by the cross law. Then the $\displaystyle s(BA, BF) = \frac{1}{4} = s(AD, DE)$ by sum of spreads in right triangle. By cross law $Q(AF) = Q\cdot (2-\sqrt{3})$ and $Q(AE) = Q(2+\sqrt{3}),$ and $Q(FE) =2.$
$Q(AF),$ $Q(AE),$ $Q(FE)$ form a quad triple, therefore $A, E, F$ are collinear.
Some ambiguity: I denoted the quadrance of a side by $Q,$ and also use $Q(\text{segment})$ to denote the quadrance of that segment.
For reference
Solution 7
Take coordinates of square as $A(0,0),$ $B(1,0),$ $C(1,1),$ $D(0,1).$ Then, $F$ is $\displaystyle \left(\frac{1}{2},\frac{\sqrt{3}}{2}\right)$ and $\displaystyle E\left(1+\frac{\sqrt{3}}{2},\frac{1}{2}\right).$ Area of $\Delta DFE$ equals $0.$ Hence, the three points are collinear.
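A sketch verifying Solution 7's coordinates with the shoelace formula: the signed area of $\Delta DFE$ vanishes.

```python
import math

D = (0.0, 1.0)
F = (0.5, math.sqrt(3) / 2)
E = (1 + math.sqrt(3) / 2, 0.5)

# Shoelace formula: twice the signed area of triangle DFE
twice_area = (D[0] * (F[1] - E[1])
              + F[0] * (E[1] - D[1])
              + E[0] * (D[1] - F[1]))
assert abs(twice_area) < 1e-12   # zero area, so D, F, E are collinear
```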
Acknowledgment
The problem (and the solution) come from Michel Bataille's column "FOCUS ON..." in the Canadian
Crux Mathematicorum (v 44, n 1, pp 25-27). Solution 2 is by Marcos Carreira and Luca Moroni; Solution 3 is by Ibrahim Akalin; Solution 4 is by Amit Itagi; Solution 5 is by Calendari Matemàtic; Solution 6 is by HIGHOctane; Solution 7 is by Indiana Jones.
The illustration to Solution 1 is by Crawl.
|
A square is a regular quadrilateral: all four sides and all four angles are equal. Each of the four angles measures 90 degrees, that is, a right angle. A square may also be considered a special case of a rectangle in which two adjacent sides have equal length.
In this section, we will learn about the square formulas – a list of the formula related to squares which will help you compute its area, perimeter, and length of its diagonals. They are enlisted below:
\(Area\; of\; a\; Square = a^{2}\)
\(Perimeter\; of\; a\; Square = 4a\)
\(Diagonal\; of\; a\; Square = a\sqrt{2}\)
Where ‘a’ is the length of a side of the square.
Properties of a Square:
The lengths of all its four sides are equal.
The measurements of all its four angles are equal.
The two diagonals bisect each other at right angles, that is, 90°.
The opposite sides of a square are both parallel and equal in length.
The lengths of the diagonals of a square are equal.
Derivations:
Consider a square whose side and diagonal have lengths
a and d units respectively.
Formula for the area of a square: The area of a square is the region enclosed within its boundary. As mentioned, a square is nothing but a rectangle whose two adjacent sides are equal in length. Hence, we express the area as:
The area of a rectangle = Length × Breadth
Here,
\(Area\; of\; square\; = a \times a = a^{2}\)
The formula for the perimeter of a square: Perimeter of the square is the length of its boundary. The sum of the length of all sides of a square represents its boundary. Hence, the formula can be given by:
Perimeter = length of 4 sides
Perimeter = a+ a + a + a
Perimeter of square = 4a
The formula for the diagonal of a square: A diagonal is a line segment joining two opposite vertices of a polygon. To calculate the length of the diagonal of a square, we make use of the Pythagorean theorem.
In the above figure, the diagonal ‘d’ divides the square into two right-angled triangles. Note that since the adjacent sides of a square are equal in length, each right-angled triangle is also isosceles, with both legs of length ‘a’.
Hence, we can conveniently apply the Pythagorean theorem to these triangles, with base and perpendicular being ‘a’ units and hypotenuse being ‘d’ units. So we have:
\(d^{2}=a^{2}+a^{2}\)
Or \(d=\sqrt{2a^{2}}=a\sqrt{2}\;units\)
Solved examples:
Question 1: A square has one of its sides measuring 23 cm. Calculate its area, perimeter, and length of its diagonal.
Solution: Given,
Side of the square = 23 cm
Area of the square:
Area of the square =
\(a^{2}=23^{2}=529\; cm^{2}\)
Perimeter of the square: Perimeter of the square = 4a = 4 × 23 = 92 cm
Diagonal of a square:
Diagonal of a square = \(a\sqrt{2}=23\sqrt{2}\;cm=32.53\;cm\)
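The three formulas in Question 1 can be checked with a few lines (a minimal sketch; the function name is ours):

```python
import math

def square_properties(a):
    """Return (area, perimeter, diagonal) of a square with side length a."""
    return a * a, 4 * a, a * math.sqrt(2)

area, perimeter, diagonal = square_properties(23)
assert area == 529
assert perimeter == 92
assert round(diagonal, 2) == 32.53   # 23 * sqrt(2) = 32.5269...
```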
Question 2: A rectangular floor is 50 m long and 20 m wide. Square tiles, each of 5 m side length, are to be used to cover the floor. Find the total number of tiles required to cover the floor.
Solution: Given,
Length of the floor = 50 m
Breadth = 20 m
Area of the rectangular floor = length x breadth = 50 m x 20 m = 1000 sq. m
Side of one tile = 5 m
Area of one such tile = side x side = 5 m x 5 m = 25 sq. m
\(No.\; of\; tiles\; needed = \frac{Area\; of\; floor}{Area\; of\; one\; tile}=\frac{1000}{25}=40\; tiles\)
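The tile count in Question 2 is a one-line arithmetic check:

```python
floor_area = 50 * 20          # m^2
tile_area = 5 * 5             # m^2
tiles_needed = floor_area // tile_area
assert tiles_needed == 40
```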
To solve more problems on the topic, download Byju’s -The Learning App.
|
Notation
Hiding constants
Unless explicitly stated otherwise, \(O(\cdot)\)-notation hides absolute multiplicative constants. Concretely, every occurrence of \(O(x)\) is a placeholder for some function \(f(x)\) that satisfies \(\forall x\in \R.\, \abs{f(x)}\le C\abs{x}\) for some absolute constant \(C>0\). Similarly, \(\Omega(x)\) is a placeholder for a function \(g(x)\) that satisfies \(\forall x\in \R.\, \abs{g(x)} \ge \abs{x}/C\) for some absolute constant \(C>0\).
Vectors
All vectors are column vectors unless specified otherwise. In particular, the notation \((a,b,c)\) is shorthand for a column vector with entries \(a,b,c\in \R\). We denote the coordinate basis of \(\R^n\) by \(\set{e_i}_{i\in [n]}\). For a vector \(v\in\R^n\), we let \(\transpose v\) be the corresponding row vector.
Inner products and norms
For vectors \(u,v\in \R^n\) with \(u=(u_1,\ldots,u_n)\) and \(v=(v_1,\ldots,v_n)\), we define the inner product of \(u\) and \(v\), unless specified otherwise, \[ \iprod{u,v}=\transpose u v=\sum_{i=1}^n u_i \cdot v_i\,. \] The (Euclidean) norm of a vector \(v\) is \(\norm{v}=\iprod{v,v}^{1/2}\). For \(p\ge 1\), we define the \(\ell^p\)-norm of \(v\), \[ \norm{v}_p = \Paren{\sum_{i=1}^n \abs{v_i}^p}^{1/p}\,. \] For \(p=\infty\), we take the limit, so that \[ \norm{v}_\infty =\max_{i\in [n]} \abs{v_i}\,. \]
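A small numerical sketch of these definitions (using NumPy's built-in norms as a cross-check):

```python
import numpy as np

v = np.array([3.0, -4.0, 1.0])
# Euclidean norm via the inner product <v, v>^(1/2)
assert np.isclose(np.sqrt(v @ v), np.linalg.norm(v))
# l^p norms for a few values of p
for p in (1, 2, 4):
    assert np.isclose((np.abs(v) ** p).sum() ** (1 / p), np.linalg.norm(v, p))
# the p -> infinity limit is the maximum absolute entry
assert np.linalg.norm(v, np.inf) == np.abs(v).max() == 4.0
```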
Kronecker product
For two matrices \(A\) and \(B\), their Kronecker product is the matrix \(A\otimes B\) with entries \((A\otimes B)_{ii',jj'} = A_{i,j} B_{i',j'}\). This operation also applies to row and column vectors (viewed as matrices with only one column or one row). We use the notation \(A^{\otimes k}=A\otimes \cdots \otimes A\) (\(k\)-times) for the \(k\)-fold tensor power of a matrix \(A\).
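The entry formula can be verified directly against NumPy's `np.kron` (a sketch with small integer matrices):

```python
import numpy as np

A = np.arange(4).reshape(2, 2)   # 2x2
B = np.arange(6).reshape(2, 3)   # 2x3
K = np.kron(A, B)                # 4x6

# entry (ii', jj') of A (x) B equals A[i, j] * B[i', j']
rB, cB = B.shape
for i in range(2):
    for ip in range(rB):
        for j in range(2):
            for jp in range(cB):
                assert K[i * rB + ip, j * cB + jp] == A[i, j] * B[ip, jp]
```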
Matrices
For matrices with more than two indices, we separate row and column indices by a comma. For example if \(A\) is a linear combination of matrices of the form \(e_i \transpose{(e_j \otimes e_k)}\), we denote the entries of \(A\) by \(A_{i,jk}\). (Note that this convention is consistent with the above notation for Kronecker products.)
Traces
The trace is cyclic, that is, for all matrices \(A\in \R^{m\times n}\) and \(B\in \R^{n\times m}\), \[ \Tr AB = \Tr BA \,. \] A consequence of this property is that for \(x,y\in \R^{n}\) and \(A\in \R^{n\times n}\), \[ \Tr A x \transpose y = \Tr \transpose y A x = \iprod{y, A x}\,. \]
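Both identities are easy to sanity-check numerically (a sketch with random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
# cyclicity: Tr(AB) = Tr(BA), even for non-square factors
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

M = rng.standard_normal((4, 4))
x, y = rng.standard_normal(4), rng.standard_normal(4)
# consequence: Tr(M x y^T) = <y, M x>
assert np.isclose(np.trace(M @ np.outer(x, y)), y @ (M @ x))
```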
Polynomials
Let \(\R[x]\) be the set of polynomials with real coefficients in variables \(x=(x_1,\ldots,x_n)\). For \(d\in \N\), let \(\R[x]_{\le d}\) be the set of polynomials of degree at most \(d\).
|
Large time behavior of ODE type solutions to nonlinear diffusion equations
1.
Mathematical Institute, Tohoku University, Aoba, Sendai 980-8578, Japan
2.
Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan
$ \begin{equation} \left\{ \begin{array}{ll} \partial_t u = \Delta u^m+u^\alpha & \quad\mbox{in}\quad{\bf R}^N\times(0,\infty),\\ u(x,0) = \lambda+\varphi(x)>0 & \quad\mbox{in}\quad{\bf R}^N, \end{array} \right. \end{equation} $
where $ m>0 $, $ \alpha\in(-\infty,1) $, $ \lambda>0 $, and $ \varphi\in BC({\bf R}^N)\,\cap\, L^r({\bf R}^N) $ for some $ 1\le r<\infty $ with $ \inf_{x\in{\bf R}^N}\varphi(x)>-\lambda $. The solution behaves like the solution of the ODE $ \zeta' = \zeta^\alpha $ on $ (0,\infty) $, which diverges to $ +\infty $ as $ t\to\infty $.
Keywords: ODE type solutions, nonlinear diffusion equation, large time behavior, higher order asymptotic expansions, Gauss kernel.
Mathematics Subject Classification: Primary: 35B40, 35K55.
Citation: Junyong Eom, Kazuhiro Ishige. Large time behavior of ODE type solutions to nonlinear diffusion equations. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2019229
|
Consider the simple integral:
\[ I = \lim_{\lambda\rightarrow\infty}\int_{-\infty}^{\infty}dx\;e^{-\lambda f(x)} \]
Assume \(f (x) \) has a global minimum at \(x = x_0 \), such that \(f' (x_0) = 0 \). If this minimum is well separated from other minima of \(f (x) \) and the value of \(f (x) \) at the global minimum is significantly lower than it is at other minima, then the dominant contributions to the above integral, as \(\lambda \rightarrow \infty \), will come from the integration region around \(x_0 \). Thus, we may expand \(f (x) \) about this point:
\[ f(x) = f(x_0) + f'(x_0)(x-x_0) + {1 \over 2}f''(x_0)(x-x_0)^2 + \cdots \]
Since \(f' (x_0) = 0 \), this becomes:
\[ f(x) \approx f(x_0) + {1 \over 2}f''(x_0)(x-x_0)^2 \]
Inserting the expansion into the expression for \(I\) gives
\[ I = \lim_{\lambda\rightarrow\infty}e^{-\lambda f(x_0)}\int_{-\infty}^{\infty}dx\;e^{-{\lambda \over 2}f''(x_0)(x-x_0)^2} = \lim_{\lambda\rightarrow\infty}\left[{2\pi \over \lambda f''(x_0)}\right]^{1/2}e^{-\lambda f(x_0)} \]
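As a quick numerical sanity check (a sketch with an arbitrarily chosen test function, not from the original text), the Gaussian estimate can be compared against brute-force quadrature; the relative error shrinks roughly like \(1/\lambda\):

```python
import numpy as np

# sample f with a single global minimum at x0 = 1: f(x0) = 0, f''(x0) = 2
f = lambda x: (x - 1.0) ** 2 + 0.1 * (x - 1.0) ** 4
x0, f0, fpp = 1.0, 0.0, 2.0

xs = np.linspace(-9.0, 11.0, 2_000_001)
dx = xs[1] - xs[0]
for lam in (10.0, 100.0, 1000.0):
    exact = np.exp(-lam * f(xs)).sum() * dx            # brute-force quadrature
    approx = np.sqrt(2 * np.pi / (lam * fpp)) * np.exp(-lam * f0)
    assert abs(exact - approx) / exact < 10.0 / lam    # error falls off like 1/lam
```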
Corrections can be obtained by further expansion of higher order terms. For example, consider the expansion of \(f (x) \) up to fourth order:
\[ f(x) \approx f(x_0) + {1 \over 2}f''(x_0)(x-x_0)^2 + {1 \over 6}f'''(x_0)(x-x_0)^3+ {1 \over 24}f^{(iv)}(x_0)(x-x_0)^4 \]
Substituting this into the integrand and further expanding the exponential would give, as the lowest order nonvanishing correction:
\[ I = \lim _{\lambda \rightarrow \infty } e^{-\lambda f(x_0) } \int _{-\infty}^{\infty} dx\, e^{-\frac {\lambda}{2} f'' (x_0) (x - x_0)^2} \left [ 1 - {\lambda \over 24} f^{(iv)} (x_0) (x - x_0 )^4 \right ] \]
This approximation is known as the
stationary phase or saddle point approximation. The former name may seem a little out of place, since there is no phase in the problem, but we have formulated it this way in anticipation of its application to the path integral, where \(\lambda \) is an imaginary rather than a real quantity.
The application to the path integral follows via a similar argument. Consider the path integral expression for the density matrix:
\[ \rho(x,x';\beta) = \int_{x(0)=x}^{x(\beta\hbar)=x'}{\cal D}[x]e^{-S_{\rm E}[x]/\hbar} \]
We showed that the classical path satisfying
is a stationary point of the Euclidean action , i.e., . Thus, we can develop a stationary phase or saddle point approximation for the density matrix by introducing an expansion about the classical path according to
where the correction , satisfying has been expanded in a complete set of orthonormal functions , which are orthonormal on the interval and satisfy as well as the orthogonality condition:
Setting all the expansion coefficients to 0 recovers the classical path. Thus, we may expand the action (the ``E'' subscript will henceforth be dropped from this discussion) with respect to the expansion coefficients:
Since
the expansion can be worked out straightforwardly by substitution and subsequent differentiation:
where the fourth and eighth lines are obtained from an integration by parts. Let us write the integral in the last line in the suggestive form:
which emphasizes the fact that we have matrix elements of the operator with respect to the basis functions. Thus, the expansion for can be written as
and the density matrix becomes
where the prefactor is an overall normalization constant. The integral over the coefficients becomes a generalized Gaussian integral, which brings down a factor of :
where the last line is the abstract representation of the determinant. The determinant is called the
Van Vleck-Pauli-Morette determinant.
If we choose the basis functions to be eigenfunctions of the operator appearing in the above expression, so that they satisfy
Then,
and the determinant can be expressed as a product of the eigenvalues. Thus,
The product must exclude any 0-eigenvalues.
Incidentally, by performing a Wick rotation back to real time according to , the saddle point or stationary phase approximation to the real-time propagator can be derived. The derivation is somewhat tedious and will not be given in detail here, but the result is
where satisfies
and is an integer, called the Maslov index, that increases by 1 each time the determinant vanishes along the classical path. It is important to note that because the classical paths satisfy an endpoint problem, rather than an initial value problem, there can be more than one solution. In this case, one must sum the result over classical paths:
|
In 1923, Louis de Broglie, a French physicist, proposed a hypothesis to explain the theory of the atomic structure. Through a series of substitutions, de Broglie hypothesized that particles possess the properties of waves. Within a few years, de Broglie's hypothesis was tested by scientists shooting electrons and rays of light through slits. They discovered that the electron stream acted the same way as light, proving de Broglie correct.
Definition of Wave-Particle Duality
The behavior of the electron does not allow it to be observed purely as a particle or purely as a wave. This two-sided nature of the electron is known as wave-particle duality: particles possess the properties of waves, and waves possess the properties of particles. The duality is not very noticeable in large matter. The wave characteristics of the electron underlie many of the electron's particle behaviors.
Planck's Hypothesis of the Quantum Theory states that energy is emitted in quanta, little packets of energy, instead of a continuous emission. He stated that energy emitted is related to the frequency of the light emitted. Planck's hypothesis states that a quantum of energy was related to the frequency by his equation \(E = h\nu\).
Waves & Particles Behaviors of Light
An easy way to demonstrate the duality between a particle and a wave is to observe light. At the time, many scientists believed that light is a wave. Since light behaves like a wave, it has the ability to diffract, reflect, refract, interfere, and so on. Yet light behaved strangely at certain times, and scientists were befuddled until . . .
(Illustrations: diffraction and interference.)
Albert Einstein's theory of the photoelectric effect contributed greatly to de Broglie's theory and provided proof that waves and particles could overlap. Light can also be observed as a particle known as a photon. When light is shone on certain objects, electrons are released. A certain amount of energy is needed to remove an electron from the surface of a substance, so if a photon carrying more than that energy hits the solid, an electron will be emitted.
The following picture also illustrates the threshold \(V_0\), below which a photon does not have enough energy to knock off an electron.
When the electrons are released, they also carry kinetic energy. Classical wave theory states that the greater the intensity, the greater the energy. Since the energy of a wave is directly proportional to its amplitude, it was puzzling for scientists to find that brighter light (higher intensity) did not affect the electrons' kinetic energy.
However, scientists did discover that the frequency of light effectively changed the amount of kinetic energy. Since certain objects do not emit electrons below certain frequencies, a threshold \(V_0\) is used; it describes the amount of kinetic energy needed for a photon to knock off an electron. They arrived at a linear relation between frequency and kinetic energy, given by a rough sketch whose slope was confirmed to be Planck's constant, \(h = 6.63 \times 10^{-34}\) J s.
Using the graph, we are given the same equation as before: \(E_k = h\nu\). Since the energy of waves and the energy of light do not coincide, we can conclude that light is a particle that carries the properties of waves.
De Broglie Wavelength
De Broglie derived his equation using well established theories through the following series of substitutions:
1. De Broglie first used Einstein's famous equation relating matter and energy:
\[ E = mc^2 \]
E= energy, m = mass, c = speed of light
2. Using Planck's theory, which states that every quantum of a wave has a discrete amount of energy given by Planck's equation:
\[ E= h \nu\]
E = energy, h = Planck's constant (\(6.62607 \times 10^{-34}\) J s), \(\nu\) = frequency
3. Since de Broglie believed particles and waves have the same traits, he equated the two energies:
\[ mc^2 = h\nu\]
4. Because real particles do not travel at the speed of light, de Broglie substituted \(v\), the velocity of the particle, for \(c\), the speed of light.
\[ mv^2 = h\nu \]
5. Through the wave relation \(v = \lambda \nu\), de Broglie substituted \(v/\lambda\) for \(\nu\) and arrived at the final expression that relates wavelength to the particle's speed:
\[ mv^2 = \dfrac{hv}{\lambda} \]
Hence:
\[ \lambda = \dfrac{hv}{mv^2} = \dfrac{h}{mv} \]
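The final relation \(\lambda = h/mv\) is easy to exercise numerically (a sketch using the standard values \(h \approx 6.626 \times 10^{-34}\) J s and the electron mass \(9.109 \times 10^{-31}\) kg; note that the speed in the first case comes out around \(3.6 \times 10^{12}\) m/s, far above the speed of light, so the numbers are purely illustrative):

```python
H = 6.626e-34          # Planck's constant, J s
M_E = 9.109e-31        # electron mass, kg

def de_broglie_wavelength(m, v):
    """lambda = h / (m v)"""
    return H / (m * v)

def velocity_from_wavelength(m, lam):
    """v = h / (m lambda), rearranged from the relation above"""
    return H / (m * lam)

# Problem 1 below: electron with wavelength 2.0e-16 m
v = velocity_from_wavelength(M_E, 2.0e-16)

# Problem 2 below: wavelength 6.5e-14 m at speed 2.1e7 m/s
m = H / (6.5e-14 * 2.1e7)
```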
Although de Broglie was credited for his hypothesis, he had no actual experimental evidence for his conjecture. In 1927, Clinton J. Davisson and Lester H. Germer shot electrons at a nickel crystal. What they saw was the diffraction of the electrons, similar to the diffraction of waves (such as x-rays) by crystals. In the same year, an English physicist, George P. Thomson, fired electrons at thin metal foil, obtaining the same results as Davisson and Germer.
De Broglie's theory can also be seen at work in Young's double-slit experiment.
Problems
1. The de Broglie wavelength of an electron is \(2.0 \times 10^{-16}\) m; find its velocity.
2. A particle with a speed of \(2.1 \times 10^{7}\) m/s has a de Broglie wavelength of \(6.5 \times 10^{-14}\) m. What is the mass of the particle?
3. Find the energy of a particle that weighs 0.000300 g and has a de Broglie wavelength of \(1.9 \times 10^{-36}\) m.
4. Determine the remaining two of frequency, wavelength, and energy when one is given:
a. frequency = 105 MHz
b. wavelength = 527 nm
c. energy = \(3.20 \times 10^{-17}\) J
d. frequency = \(34.2 \times 10^{15}\) Hz
Answers:
1. \(3.6 \times 10^{12}\) m/s
2. \(4.9 \times 10^{-28}\) kg
3. \(4.1 \times 10^{5}\) J
4.
a. wavelength: 2.86 m, energy: \(6.96 \times 10^{-26}\) J
b. frequency: \(5.69 \times 10^{14}\) Hz, energy: \(3.77 \times 10^{-19}\) J
c. frequency: \(4.74 \times 10^{16}\) Hz, wavelength: 6.33 nm
d. wavelength: 8.77 nm, energy: \(2.27 \times 10^{-17}\) J
References
Cutnell, John D. & Kenneth W. Johnson, Physics, Sixth Edition. Southern Illinois University at Carbondale.
Petrucci, Ralph H., William S. Harwood, F. Geoffrey Herring, & Jeffry D. Madura, General Chemistry: Principles and Modern Applications, Ninth Edition. Upper Saddle River, New Jersey 07458.
Smoot, Robert C., Chemistry, A Modern Course. Columbus, Ohio.
Contributors
Duy Nguyen (UCD)
|
Answer
325
Work Step by Step
Applying the proper formula, we find: $$\sum _{k=1}^nk=\frac{1}{2}n\left(n+1\right) \\ \frac{1}{2}\cdot \:25\left(25+1\right) \\ 325$$
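The closed form can be checked directly (a minimal sketch):

```python
n = 25
total = sum(range(1, n + 1))     # 1 + 2 + ... + 25
assert total == n * (n + 1) // 2 == 325
```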
|
Abstract
Let $(\sigma_{1}, \ldots, \sigma_d)$ be a finite sequence of independent random permutations, chosen uniformly either among all permutations or among all matchings on $n$ points. We show that, in probability, as $n\to\infty$, these permutations viewed as operators on the $n-1$ dimensional vector space $\{(x_1,\ldots, x_n)\in \mathbb{C}^n, \sum x_i=0\}$, are asymptotically strongly free. Our proof relies on the development of a matrix version of the non-backtracking operator theory and a refined trace method.
As a byproduct, we show that the non-trivial eigenvalues of random $n$-lifts of a fixed base graph approximately achieve the Alon-Boppana bound with high probability in the large $n$ limit. This result generalizes Friedman’s Theorem stating that with high probability, the Schreier graph generated by a finite number of independent random permutations is close to Ramanujan.
Finally, we extend our results to tensor products of random permutation matrices. This extension is especially relevant in the context of quantum expanders.
|
I have a generalized gamma distribution with the following density:
$$ f(x) = \frac{\lambda^{a\tau}\tau x^{a\tau - 1}}{\Gamma(a)}e^{-{(x\lambda)}^\tau} $$
and log-likelihood function
$$ l(a, \lambda, \tau) = a \tau n \log{\lambda} + n \log{\tau} - n \log{\Gamma(a)} + (a \tau - 1) \displaystyle\sum_{i=1}^{n} \log{x_i} - \lambda^\tau \displaystyle\sum_{i=1}^{n} x^\tau_i $$
I have several related questions in two topics. I want to test the hypothesis $ H_0: \tau = 1 $ vs. $ H_1: \tau \neq 1 $. I also have to generate the distribution of the test statistic when $ H_0 $ holds (question regarding this has been moved to another topic).
By my understanding, the likelihood-ratio test statistic (the quantity to which Wilks' theorem applies) in this case should be:
$$ \lambda = 2 (l(\hat{a}, \hat{\lambda}, \hat{\tau}) - l(\tilde{a}, \tilde{\lambda}, 1)) = \\ 2 ( \hat{a} \hat{\tau} n \log{\hat{\lambda}} + n \log{\hat{\tau}} - n \log{\Gamma(\hat{a})} + (\hat{a} \hat{\tau} - 1) \displaystyle\sum_{i=1}^{n} \log{x_i} - \hat{\lambda}^{\hat{\tau}} \displaystyle\sum_{i=1}^{n} x^{\hat{\tau}}_i \\ - \tilde{a} n \log{\tilde{\lambda}} + n \log{\Gamma(\tilde{a})} - (\tilde{a} - 1) \displaystyle\sum_{i=1}^{n} \log{x_i} + \tilde{\lambda} \displaystyle\sum_{i=1}^{n} x_i ) $$
My question now is, is this correct? Is the Wilks statistic for generalized gamma distribution in this case surely chi-square distributed (and if not, why not?)? I am asking, because I am trying to generate the distribution of this statistic under the null hypothesis and I am not getting chi-square distribution (see another topic).
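For reference, here is a small simulation sketch of the statistic under $H_0$ (the starting values, optimizer choices, and log-parameterization are my own, not from the question; the logs just keep $a$, $\lambda$, $\tau$ positive). Since $H_0$ fixes a single interior parameter, Wilks' theorem suggests the statistic should be asymptotically $\chi^2_1$ under standard regularity conditions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.0 / 1.5, size=500)  # a sample satisfying H0 (tau = 1)
n, slogx = len(x), np.log(x).sum()

def negll(theta, tau=None):
    # theta holds log(a), log(lambda)[, log(tau)] so the optimizer is unconstrained
    if tau is None:
        a, lam, tau = np.exp(theta)
    else:
        a, lam = np.exp(theta)
    return -(a * tau * n * np.log(lam) + n * np.log(tau) - n * gammaln(a)
             + (a * tau - 1.0) * slogx - lam ** tau * np.sum(x ** tau))

# restricted fit (tau fixed at 1), then the full fit started from the restricted optimum
fit_null = minimize(lambda th: negll(th, tau=1.0), np.log([1.0, 1.0]), method="Nelder-Mead")
fit_full = minimize(negll, np.append(fit_null.x, 0.0), method="Nelder-Mead")
lrt = 2.0 * (fit_null.fun - fit_full.fun)  # the likelihood-ratio statistic
```

Repeating this over many simulated samples and comparing the empirical distribution of `lrt` against `scipy.stats.chi2(df=1)` is the usual way to check the asymptotics.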
|
Definition:Cotangent
In the above right triangle, we are concerned about the angle $\theta$.
The cotangent of $\angle \theta$ is defined as being $\dfrac {\text{Adjacent}} {\text{Opposite}}$.
Let a tangent line be drawn to touch $C$ at $A = \left({0, 1}\right)$.
Then the cotangent of $\theta$ is defined as the length of $AB$.
Let $x \in \R$ be a real number.
The real function $\cot x$ is defined as:
$\cot x = \dfrac {\cos x} {\sin x} = \dfrac 1 {\tan x}$
This definition is valid for all $x \in \R$ such that $\sin x \ne 0$.
Let $z \in \C$ be a complex number.
The complex function $\cot z$ is defined as:
$\cot z = \dfrac {\cos z} {\sin z} = \dfrac 1 {\tan z}$
This definition is valid for all $z \in \C$ such that $\sin z \ne 0$.
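As a quick numeric sanity check of the identity $\cot x = \dfrac {\cos x} {\sin x} = \dfrac 1 {\tan x}$ (a sketch; Python's math module has no built-in cotangent):

```python
import math

x = 0.7  # any x with sin x != 0 and tan x != 0
cot_ratio = math.cos(x) / math.sin(x)
cot_recip = 1.0 / math.tan(x)
# the two expressions agree; cot(pi/4) = 1 is another easy check
```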
Like tangent, the word cotangent comes from the Latin tangens, that which is touching, the present participle of tangere, to touch.
It is pronounced with an equal emphasis on both the first and second syllables: co-tan-jent.

Also see:
Shape of Cotangent Function
Cotangent is Cosine divided by Sine
Cotangent is Reciprocal of Tangent
Cotangent of Complement equals Tangent

Results about the cotangent function can be found here.
|
Chapter Review Exercises

True or False? Justify your answer with a proof or a counterexample.
1) The rectangular coordinates of the point \(\displaystyle (4,\frac{5π}{6})\) are \(\displaystyle (2\sqrt{3},−2).\)
2) The equations \(\displaystyle x=cosh(3t), y=2sinh(3t)\) represent a hyperbola.
Solution: True.
3) The arc length of the spiral given by \(\displaystyle r=\frac{θ}{2}\) for \(\displaystyle 0≤θ≤3π\) is \(\displaystyle \frac{9}{4}π^3\).
4) Given \(\displaystyle x=f(t)\) and \(\displaystyle y=g(t)\), if \(\displaystyle \frac{dx}{dy}=\frac{dy}{dx}\), then \(\displaystyle f(t)=g(t)+C,\) where \(\displaystyle C\) is a constant.
Solution: False. Imagine \(\displaystyle y=t+1, x=−t+1.\)
For the following exercises, sketch the parametric curve and eliminate the parameter to find the Cartesian equation of the curve.
5) \(\displaystyle x=1+t, y=t^2−1, −1≤t≤1\)
6) \(\displaystyle x=e^t, y=1−e^{3t}, 0≤t≤1\)
Solution: \(\displaystyle y=1−x^3\)
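A quick numeric spot-check of the answer to 6) (a sketch; the sample values of \(t\) are arbitrary points in \([0, 1]\)):

```python
import math

# x = e^t, y = 1 - e^(3t) should satisfy y = 1 - x^3 for every t
ok = all(
    math.isclose(1.0 - math.exp(3.0 * t), 1.0 - math.exp(t) ** 3)
    for t in (0.0, 0.3, 1.0)
)
```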
7) \(\displaystyle x=sinθ, y=1−cscθ, 0≤θ≤2π\)
8) \(\displaystyle x=4cosϕ, y=1−sinϕ, 0≤ϕ≤2π\)
Solution: \(\displaystyle \frac{x^2}{16}+(y−1)^2=1\)
For the following exercises, sketch the polar curve and determine what type of symmetry exists, if any.
9) \(\displaystyle r=4sin(\frac{θ}{3})\)
10) \(\displaystyle r=5cos(5θ)\)
Solution: Symmetric about polar axis
For the following exercises, find the polar equation for the curve given as a Cartesian equation.
11) \(\displaystyle x+y=5\)
12) \(\displaystyle y^2=4+x^2\)
Solution: \(\displaystyle r^2=\frac{4}{sin^2θ−cos^2θ}\)
For the following exercises, find the equation of the tangent line to the given curve. Graph both the function and its tangent line.
13) \(\displaystyle x=ln(t), y=t^2−1, t=1\)
14) \(\displaystyle r=3+cos(2θ), θ=\frac{3π}{4}\)
Solution: \(\displaystyle y=\frac{3\sqrt{2}}{2}+\frac{1}{5}(x+\frac{3\sqrt{2}}{2})\)
15) Find \(\displaystyle \frac{dy}{dx}, \frac{dx}{dy},\) and \(\displaystyle \frac{d^2x}{dy^2}\) of \(\displaystyle y=(2+e^{−t}), x=1−sin(t)\)
For the following exercises, find the area of the region.
16) \(\displaystyle x=t^2, y=ln(t), 0≤t≤e\)
Solution: \(\displaystyle \frac{e^2}{2}\)
17) \(\displaystyle r=1−sinθ\) in the first quadrant
For the following exercises, find the arc length of the curve over the given interval.
18) \(\displaystyle x=3t+4, y=9t−2, 0≤t≤3\)
Solution: \(\displaystyle 9\sqrt{10}\)
19) \(\displaystyle r=6cosθ, 0≤θ≤2π.\) Check your answer by geometry.
For the following exercises, find the Cartesian equation describing the given shapes.
20) A parabola with focus \(\displaystyle (2,−5)\) and directrix \(\displaystyle x=6\)
Solution: \(\displaystyle (y+5)^2=−8x+32\)
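The answer to 20) can be spot-checked against the defining property of a parabola, equidistance from the focus and the directrix (a sketch using one sample point):

```python
import math

# take the point on (y + 5)^2 = -8x + 32 with y = -3
y = -3.0
x = (32.0 - (y + 5.0) ** 2) / 8.0       # gives x = 3.5
d_focus = math.hypot(x - 2.0, y + 5.0)  # distance to the focus (2, -5)
d_directrix = abs(6.0 - x)              # distance to the directrix x = 6
# both distances equal 2.5
```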
21) An ellipse with a major axis length of 10 and foci at \(\displaystyle (−7,2)\) and \(\displaystyle (1,2)\)
22) A hyperbola with vertices at \(\displaystyle (−2,3)\) and \(\displaystyle (−2,−5)\) and foci at \(\displaystyle (−2,−6)\) and \(\displaystyle (−2,4)\)
Solution: \(\displaystyle \frac{(y+1)^2}{16}−\frac{(x+2)^2}{9}=1\)
For the following exercises, determine the eccentricity and identify the conic. Sketch the conic.
23) \(\displaystyle r=\frac{6}{1+3cos(θ)}\)
24) \(\displaystyle r=\frac{4}{3−2cosθ}\)
Solution: \(\displaystyle e=\frac{2}{3}\), ellipse
25) \(\displaystyle r=\frac{7}{5−5cosθ}\)
26) Determine the Cartesian equation describing the orbit of Pluto, the most eccentric orbit around the Sun. The length of the major axis is 39.26 AU and minor axis is 38.07 AU. What is the eccentricity?
Solution: \(\displaystyle \frac{y^2}{19.03^2}+\frac{x^2}{19.63^2}=1, e=0.2447\)
27) The C/1980 E1 comet was observed in 1980. Given an eccentricity of 1.057 and a perihelion (point of closest approach to the Sun) of 3.364 AU, find the Cartesian equations describing the comet’s trajectory. Are we guaranteed to see this comet again? (Hint: Consider the Sun at point \(\displaystyle (0,0)\).)
|
Some results on ordinary words of standard Reed-Solomon codes
1 Mathematical College, Sichuan University, Chengdu 610064, P. R. China;
2 Department of Mathematics, Sichuan Tourism University, Chengdu 610100, P. R. China
A received word ${u}\in{\mathbb F}_q^{q-1}$ is called an ordinary word of $RS_q({\mathbb F}_q^*, k)$ if the error distance $d({u}, RS_q({\mathbb F}_q^*, k))=n-\deg(u(x))$, with $u(x)$ being the Lagrange interpolation polynomial of ${u}$. In this paper, we make use of the polynomial method and, in particular, the König-Rados theorem on the number of nonzero solutions of polynomial equations over finite fields, to show that if $q\geq 4$ and $2\leq{k}\leq{q-2}$, then a received word ${u}\in{\mathbb F}_q^{q-1}$ of degree $q-2$ is an ordinary word of $RS_q({\mathbb F}_q^*, k)$ if and only if its Lagrange interpolation polynomial $u(x)$ is of the form
$$u(x)=\lambda\sum\limits_{i=k}^{q-2}a^{q-2-i}x^i+f_{\leq k-1}(x)$$
with $a, \lambda\in{\mathbb F}_q^*$ and $f_{\leq k-1}(x)\in {\mathbb F}_q[x]$ of degree at most $k-1$. This partially answers an open problem proposed by J. Y. Li and D. Q. Wan in [On the subset sum problem over finite fields, Finite Fields Appl. 14 (2008), 911-929].

References
1. Q. Cheng and E. Murray,
On deciding deep holes of Reed-Solomon codes. In:J.Y. Cai, S.B. Cooper, H. Zhu(eds) Theory and Applications of Models of Computation. TAMC 2007, Lecture Notes in Computer Science, vol. 4484, Springer, Berlin, Heidelberg.
6. R. Lidl and H. Niederreiter, Finite fields, Encyclopedia of Mathematics and its Applications, 2 Eds., Cambridge:Cambridge University Press, 1997.
7. G. Rados,
Zur Theorie der Congruenzen höheren Grades, J. reine angew. Math., 99 (1886), 258-260.
8. G. Raussnitz,
Zur Theorie der Congruenzen höheren Grades, Math. Naturw. Ber. Ungarn., 1 (1882/83), 266-278.
12. G. Z. Zhu and D. Q. Wan.
Computing error distance of Reed-Solomon codes. In:TAMC 2012 Proceedings of the 9th Annual international conference on Theory and Applications of Models of Computation, (2012), 214-224.
© 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
|
Ordinary Differential Equations (ODEs) describe the rate of change of dependent variables with respect to a single independent variable and are used in many fields to model the behavior of systems. There are many good C libraries available to solve (i.e., integrate) systems of ODEs, and SUNDIALS, available from the Lawrence Livermore National Laboratory, is one of the most popular and well-respected C libraries for solving non-stiff and stiff systems of ODEs.
Currently, this package provides an interface to the CVODE and CVODES functions (serial version) in the library, which are used to solve ODEs (or Initial Value Problems) and calculate sensitivities.
The two exported functions from the package are:
CVODE - an interface to the CVODE function in SUNDIALS to solve a system of ODEs.
CVODES - an interface to the CVODES function in SUNDIALS to calculate forward sensitivities with respect to parameters of the ODE system.
In the future, we plan to provide interfaces for the other solvers in the library (i.e., IDA/IDAS and ARKODE) as well. Right now, this package serves as a test case for providing an interface to the SUNDIALS library for R users.
One of the advantages of using this package is that all the source code of the SUNDIALS library is bundled with the package itself, so it does not require the SUNDIALS library to be installed on the machine separately (which is sometimes non-trivial on a Windows machine).
As described in the link above, the problem is from chemical kinetics, and consists of the following three rate equations:
\[ \begin{aligned} \frac{dy_1}{dt} &= -.04 \times y_1 + 10^4 \times y_2 \times y_3 \\ \frac{dy_2}{dt} &= .04 \times y_1 - 10^4 \times y_2 \times y_3 - 3 \times 10^7 \times y_2^2 \\ \frac{dy_3}{dt} &= 3 \times 10^7 \times y_2^2 \end{aligned} \]
with time interval from \(t = 0.0\) to \(t = 4 \times 10^{10}\) and initial conditions: \[ y_1 = 1.0 , ~y_2 = y_3 = 0 \]
The problem is stiff.
The original example, while integrating the system, also uses the rootfinding feature to find the points at which \(y_1 = 1 \times 10^{-4}\) or at which \(y_3 = 0.01\), but root-finding is not supported in this version. As in the original example, this package also solves the problem with the BDF method and Newton iteration with the SUNDENSE dense linear solver, however without a user-supplied Jacobian routine (unlike the original example). Future versions may include the ability to provide a Jacobian calculated analytically or via automatic differentiation.
CVODE uses a scalar relative tolerance and a vector absolute tolerance (which can be provided as an input). Output is printed in decades from \(t = 0.4\) to \(t = 4 \times 10^{10}\) in this example.
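As an independent sanity check (not part of sundialr; a sketch using SciPy's BDF integrator with the same tolerances), the stiff system above can be integrated up to \(t = 0.4\) and compared against the first output row shown later in this vignette:

```python
import numpy as np
from scipy.integrate import solve_ivp

def roberts(t, y):
    # the three Roberts rate equations from the text
    return [-0.04 * y[0] + 1e4 * y[1] * y[2],
            0.04 * y[0] - 1e4 * y[1] * y[2] - 3e7 * y[1] ** 2,
            3e7 * y[1] ** 2]

sol = solve_ivp(roberts, (0.0, 0.4), [1.0, 0.0, 0.0], method="BDF",
                rtol=1e-4, atol=[1e-8, 1e-14, 1e-6])
y_at_04 = sol.y[:, -1]  # state at t = 0.4; y1 should be near 0.98516
```

Note that the total concentration \(y_1 + y_2 + y_3\) is conserved by the equations, which gives a second easy check.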
Differential equations can be written as an R function or as an Rcpp function. The differential equations function must be written as function(t, y, p), where t represents time, y is the vector describing the values of the states/entities of the ODE system at time t, and p is the vector of parameters used to define the ODEs. The output of this function is a vector of the rates of change of the entities of y.
The key aspect to keep in mind is that the signature of the function must be function(t, y, p). As an example, we try to solve the cv_Roberts_dns.c problem described above (the original code can be found here). An example of an R function is as follows:
ODE_R <- function(t, y, p){
  ## initialize the derivative vector
  ydot <- vector(mode = "numeric", length = length(y))
  ## p (parameter vector input) is c(-0.04, 1e04, 3e07)
  ydot[1] <- p[1]*y[1] + p[2]*y[2]*y[3]
  ydot[2] <- -p[1]*y[1] - p[2]*y[2]*y[3] - p[3]*y[2]*y[2]
  ydot[3] <- p[3]*y[2]*y[2]
  ydot ## return ydot
}
where p is a parameter vector with the values c(-0.04, 1e04, 3e07).
Also, since this package uses Rcpp to bundle the C code, we can use the Rcpp notation to describe the system of ODEs. The cv_Roberts_dns problem described above can be written as an Rcpp function as follows (indices in C++ start from 0, functions need to declare their return type, here NumericVector, and every expression ends in a semicolon, ;):
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector ODE_Rcpp (double t, NumericVector y, NumericVector p){
  // Initialize ydot filled with zeros
  NumericVector ydot(y.length());
  // p (parameter vector) is [-0.04 1e04 3e07]
  ydot[0] = p[0]*y[0] + p[1]*y[1]*y[2];
  ydot[1] = -p[0]*y[0] - p[1]*y[1]*y[2] - p[2]*y[1]*y[1];
  ydot[2] = p[2]*y[1]*y[1];
  return ydot;
}
The above is a re-write of the cvRoberts_dns.c example in the CVODE documentation. The original example can be found in the document here.
The entire R file to create the right-hand side of the ODE function (which calculates the rates of change) is as follows (also found in /inst/examples/cv_Roberts_dns.r):
# ODEs described by an R function
ODE_R <- function(t, y, p){
  ## initialize the derivative vector
  ydot <- vector(mode = "numeric", length = length(y))
  ## p (parameter vector) is c(-0.04, 1e04, 3e07)
  ydot[1] <- p[1]*y[1] + p[2]*y[2]*y[3]
  ydot[2] <- -p[1]*y[1] - p[2]*y[2]*y[3] - p[3]*y[2]*y[2]
  ydot[3] <- p[3]*y[2]*y[2]
  ydot ## return ydot
}

# ODEs can also be described using Rcpp
Rcpp::sourceCpp(code = '
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector ODE_Rcpp (double t, NumericVector y, NumericVector p){
  // Initialize ydot filled with zeros
  NumericVector ydot(y.length());
  // p (parameter vector) is [-0.04 1e04 3e07]
  ydot[0] = p[0]*y[0] + p[1]*y[1]*y[2];
  ydot[1] = -p[0]*y[0] - p[1]*y[1]*y[2] - p[2]*y[1]*y[1];
  ydot[2] = p[2]*y[1]*y[1];
  return ydot;
}')

# R code to generate the time vector, initial conditions and solve the equations
time_vec <- c(0.0, 0.4, 4.0, 40.0, 4E2, 4E3, 4E4, 4E5, 4E6, 4E7, 4E8, 4E9, 4E10)
IC <- c(1, 0, 0)
params <- c(-0.04, 10000, 30000000)  # signs match the parameter comment in ODE_R above
reltol <- 1e-04
abstol <- c(1e-8, 1e-14, 1e-6)

## Solving the ODEs using the cvode function
df1 <- cvode(time_vec, IC, ODE_R, params, reltol, abstol)    ## using R
df2 <- cvode(time_vec, IC, ODE_Rcpp, params, reltol, abstol) ## using Rcpp

## Check that both solutions are identical
# identical(df1, df2)
The final output is the df1 matrix, in which the first column is time and the second, third and fourth columns are the values of y1, y2 and y3 respectively.
> df1 [,1] [,2] [,3] [,4] [1,] 0e+00 1.000000e+00 0.000000e+00 0.00000000 [2,] 4e-01 9.851641e-01 3.386242e-05 0.01480205 [3,] 4e+00 9.055097e-01 2.240338e-05 0.09446793 [4,] 4e+01 7.158016e-01 9.185043e-06 0.28418924 [5,] 4e+02 4.505209e-01 3.222826e-06 0.54947590 [6,] 4e+03 1.832217e-01 8.943516e-07 0.81677741 [7,] 4e+04 3.898091e-02 1.621669e-07 0.96101893 [8,] 4e+05 4.936971e-03 1.984450e-08 0.99506301 [9,] 4e+06 5.170103e-04 2.069098e-09 0.99948299[10,] 4e+07 5.204927e-05 2.082078e-10 0.99994795[11,] 4e+08 5.184946e-06 2.073989e-11 0.99999482[12,] 4e+09 5.246212e-07 2.098486e-12 0.99999948[13,] 4e+10 6.043000e-08 2.417200e-13 0.99999994
Sensitivity with respect to the parameters of the ODE system can be calculated using the CVODES function. This package implements Forward Sensitivity Analysis from CVODES (see the example cvRoberts_FSA_dns.c from the link here). Briefly, given the ODE system as described below
\[ \begin{aligned}\frac{dy_1}{dt} &= -p_1y_1 + p_2y_2y_3 \\ \frac{dy_2}{dt} &= p_1y_1 - p_2y_2y_3 - p_3y_2^2 \\ \frac{dy_3}{dt} &= p_3y_2^2 \end{aligned}\] with the same initial conditions as above (i.e., \(y_1 = 1, y_2 = y_3 = 0\)) and \(p_1 = 0.04, \quad p_2 = 10^4, \quad p_3 = 3\times10^7\). The system of Sensitivity equations (taken from
cvs_guide.pdf) that is solved can be given by \[\begin{aligned}\frac{ds}{dt} = \left[\begin{array}{ccc}-p_1 & p_2y_3 & p_2y_2 \\p_1 & -p_2y_3-2p_3y_2 & -p_2y_2 \\0 & 2p_3y_2 & 0\end{array}\right]s_i + \frac{\partial f}{\partial p_i}, \quad s_i(t_0) = \left[\begin{array} {c} 0 \\ 0 \\ 0 \end{array}\right], \quad i = 1, 2, 3 \end{aligned}\] where \[\frac{\partial f}{\partial p_1} = \left[\begin{array} {c} -y_1 \\ y_1 \\ 0 \end{array}\right],\quad \frac{\partial f}{\partial p_2} = \left[\begin{array} {c} y_2y_3 \\ -y_2y_3 \\ 0 \end{array}\right],\quad\frac{\partial f}{\partial p_3} = \left[\begin{array} {c} 0 \\ -y_2^2 \\ y_2^2 \end{array}\right]\] In the original
CVODES interface from SUNDIALS, the sensitivity equations can either be provided by the user or approximated numerically by the solver. Here, only the numerical version is included, and currently the user cannot specify the sensitivity equations; future versions will provide the ability to specify a user-defined Jacobian as well as user-defined sensitivity equations. Also, forward sensitivities are currently calculated with respect to all parameters of the system. I plan to provide, in the future, the ability to specify the particular parameters for which sensitivities are desired. Currently, the SIMULTANEOUS and STAGGERED methods of sensitivity calculation from the SUNDIALS library are supported in this package.
CVODES
Once the system of ODEs has been defined using the instructions provided above, sensitivities can be calculated with the cvodes function. In addition to the arguments of cvode, cvodes takes the sensitivity calculation method to be used (STG for STAGGERED or SIM for SIMULTANEOUS) and a flag for error control (either T or F).
The output of cvodes is a matrix with the number of rows equal to the length of the time vector (time_vec) and the number of columns equal to length(y) \(\times\) length(p) + 1. The first column is time. Currently, the sensitivity of every entity is calculated with respect to every parameter in the model. For example, for the current model with 3 entities (ODEs) and 3 parameters, a total of 9 sensitivities are calculated at each output time. The first 3 (length(y)) columns after the time column give the sensitivities of y1, y2 and y3 w.r.t. the first parameter, the next 3 (length(y)) columns give their sensitivities w.r.t. the second parameter, and so on.
In the sensitivity matrix output for the system of equations described above, the first column gives the output time, the next 3 columns give the sensitivities of y1, y2 and y3 w.r.t. the first parameter (p1), the next three columns give the sensitivities of y1, y2 and y3 w.r.t. the second parameter (p2), and so on. The output sensitivity matrix is given below. The sensitivity values match the values provided in the CVODES documentation.
> df1 [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [1,] 0e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 [2,] 4e-01 -3.561085e-01 3.902252e-04 3.557183e-01 9.483149e-08 -2.132509e-10 -9.461823e-08 -1.573297e-11 -5.289692e-13 1.626194e-11 [3,] 4e+00 -1.876130e+00 1.792229e-04 1.875951e+00 2.961233e-06 -5.830758e-10 -2.960650e-06 -4.932970e-10 -2.762408e-13 4.935732e-10 [4,] 4e+01 -4.247395e+00 4.592812e-05 4.247349e+00 1.372964e-05 -2.357270e-10 -1.372941e-05 -2.288274e-09 -1.138015e-13 2.288387e-09 [5,] 4e+02 -5.958192e+00 3.545986e-06 5.958189e+00 2.273754e-05 -2.260807e-11 -2.273752e-05 -3.789554e-09 -4.994795e-14 3.789604e-09 [6,] 4e+03 -4.750132e+00 -5.991971e-06 4.750138e+00 1.880937e-05 2.312156e-11 -1.880939e-05 -3.134824e-09 -1.875976e-14 3.134843e-09 [7,] 4e+04 -1.574902e+00 -2.761679e-06 1.574905e+00 6.288404e-06 1.100645e-11 -6.288415e-06 -1.047876e-09 -4.536508e-15 1.047881e-09 [8,] 4e+05 -2.363168e-01 -4.584043e-07 2.363173e-01 9.450741e-07 1.832930e-12 -9.450760e-07 -1.574929e-10 -6.362045e-16 1.574935e-10 [9,] 4e+06 -2.566355e-02 -5.105587e-08 2.566361e-02 1.026491e-07 2.042044e-13 -1.026493e-07 -1.711080e-11 -6.851356e-17 1.711087e-11[10,] 4e+07 -2.597859e-03 -5.190342e-09 2.597864e-03 1.039134e-08 2.076100e-14 -1.039136e-08 -1.732552e-12 -6.930923e-18 1.732559e-12[11,] 4e+08 -2.601996e-04 -5.199259e-10 2.602002e-04 1.040802e-09 2.079717e-15 -1.040804e-09 -1.737821e-13 -6.951356e-19 1.737828e-13[12,] 4e+09 -2.648142e-05 -5.616896e-11 2.648147e-05 1.059193e-10 2.246502e-16 -1.059195e-10 -1.804535e-14 -7.218146e-20 1.804542e-14[13,] 4e+10 -2.899376e-06 -7.759920e-12 2.899383e-06 1.159764e-11 3.104024e-17 -1.159768e-11 -1.727574e-15 -6.910296e-21 1.727581e-15
In the future, I intend to provide options to select the specific entities and parameters with respect to which sensitivities are to be computed, as the sensitivity matrix can get very large for medium to large models.
The package sundialr provides a way to interface with the famous SUNDIALS C library to solve initial value problems. The package allows the system of differential equations to be written in R or using Rcpp. Currently only a single initialization of the ODE system is supported, but we plan to add repeated initializations (e.g., repeated dosing) in the near future for CVODE. For sensitivities, calculation of forward sensitivities of all entities with respect to all the parameters in the model is currently implemented; the ability to select specific entities and parameters for which sensitivities are to be calculated will be added soon. As a note, since this package is under active development, the interfaces of both CVODE and CVODES (i.e., the function signatures) may change in future versions. Please keep this in mind if you intend to use sundialr in your applications. In the near future, interfaces for other solvers from the C library, such as IDA/IDAS and ARKODE, will also be added.
|
Definition:Linearly Dependent/Set

Let $\struct {G, +_G, \circ}_R$ be a unitary $R$-module.
Let $S \subseteq G$.
Then $S$ is a linearly dependent set if there exists a nontrivial linear combination of distinct elements of $S$ which equals $e$, that is, such that:
$\displaystyle \exists \set {\lambda_k: 1 \le k \le n} \subseteq R: \sum_{k \mathop = 1}^n \lambda_k \circ a_k = e$
where $a_1, a_2, \ldots, a_n$ are distinct elements of $S$, and at least one of the $\lambda_k$ is not equal to $0_R$.
Let $\left({\R^n, +, \cdot}\right)_{\R}$ be a real vector space.
Let $S \subseteq \R^n$.
Then $S$ is a linearly dependent set if there exists a nontrivial linear combination of vectors of $S$ which equals $\mathbf 0$, that is, such that:
$\displaystyle \exists \left\{{\lambda_k: 1 \le k \le n}\right\} \subseteq \R: \sum_{k \mathop = 1}^n \lambda_k \mathbf v_k = \mathbf 0$
where $\left\{{\mathbf v_1, \mathbf v_2, \ldots, \mathbf v_n}\right\} \subseteq S$, and at least one of the $\lambda_k$ is not equal to $0$.
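For a concrete instance in $\R^3$ (the vectors are my own example), linear dependence can be confirmed numerically via the rank of the matrix whose rows are the vectors:

```python
import numpy as np

# v3 = v1 + 2*v2, so {v1, v2, v3} is linearly dependent:
# 1*v1 + 2*v2 + (-1)*v3 = 0 is a nontrivial vanishing combination
V = np.array([[1.0, 0.0, 1.0],   # v1
              [0.0, 1.0, 0.0],   # v2
              [1.0, 2.0, 1.0]])  # v3
rank = np.linalg.matrix_rank(V)  # rank 2 < 3 confirms the dependence
```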
Also see
|
Quasi-categories (or $\infty$-categories, as they are often called) are a very convenient setting for doing abstract homotopy theory. One of their amazing features is the following: Given a diagram of quasi-categories, we can form its homotopy limit, yielding a quasi-category again. For example, the inverse (homotopy) limit of the diagram
$\cdots \xrightarrow{\Omega} \mathcal{S}_* \xrightarrow{\Omega} \mathcal{S}_* \xrightarrow{\Omega} \mathcal{S}_*$
(for $\mathcal{S}_*$ the quasi-category of pointed spaces and $\Omega$ denotes the loop space) gives the quasi-category of spectra. These homotopy limits can be (abstractly) defined to be the homotopy limits in the Joyal model structure on simplicial sets, where the quasi-categories are just the fibrant objects. There is also a more explicit description given in Lurie's Higher Topos Theory (we come back to an example later in this question).
Many important examples of quasi-categories are constructed from a simplicial model category $\mathcal{M}$ in the following way: The sub simplicial category $\mathcal{M}^\circ$ of bifibrant objects forms a Bergner fibrant simplicial category and taking the coherent nerve produces a quasi-category $N(\mathcal{M}^\circ)$. Thus, the following question seems to be natural: Can we reconstruct the homotopy limit of the coherent nerves of a diagram of model categories as the coherent nerve of a "homotopy limit" of model categories?
A candidate is given in Julie Bergner's paper Homotopy limits of model categories and more general homotopy theories, Definition 3.1. We won't recall the general definition here, but only indicate it in the case that we index over a diagram with one object and a group $G$ of automorphisms (i.e., we have a group action on our model category): an object in $holim_G \mathcal{M}$ is then an object $X \in \mathcal{M}$ together with morphisms $f_g: X \to g\cdot X$ (for $g\in G$) such that $f_e = id_X$ and $f_{hg} = (h\cdot f_g)\circ f_h$ (i.e., objects with a twisted $G$-action). At least in this case, the homotopy limit has a model structure (Bergner mentions the injective one, but at least sometimes it also has the projective one, which is Quillen equivalent); it is moreover a simplicial model structure if $\mathcal{M}$ was a simplicial model category. Indeed, it is the simplicial subcategory of $G$-equivariant morphisms in $Fun(EG, \mathcal{M})$, where $EG$ denotes the contractible groupoid associated to $G$. Thus, more precisely, our question is:
Is $N((holim_G \mathcal{M})^\circ)$ categorically equivalent to $holim_G N(\mathcal{M}^\circ)$ as quasi-categories?
There are several pieces of evidence for this:
If I am not mistaken, the description of homotopy limits in Higher Topos Theory implies that the homotopy fixed points of a quasi-category $\mathcal{C}$ are given as $Map(N(EG), \mathcal{C})^G$ (where Map denotes the internal Hom of simplicial sets and $()^G$ denotes strict fixed points). Thus the question is equivalent to whether $Map(N(EG), N(\mathcal{M}^\circ))^G$ is categorically equivalent to $N((Fun(EG, \mathcal{M})^G)^\circ)$. By strictification of homotopy coherent diagrams, a similar statement holds if we don't take $G$-fixed points, but I don't see how to prove the statement involving the $G$-fixed points.
Even more convincingly, Julie Bergner shows in her paper that homotopy limits of model categories are compatible with homotopy limits in the complete Segal space model structure on simplicial spaces. More precisely, she shows that the classification diagram functor commutes with homotopy limits up to weak equivalence (Theorem 4.1). Now, one could get the impression that we are finished, since the complete Segal space model structure and the Joyal model structure are Quillen equivalent. But this is not sufficient: one has to prove that the classification diagram functor is sent under this Quillen equivalence to something weakly equivalent to the coherent nerve. Although one gets some compatibility results from the papers Quasi-categories vs. Simplicial Categories (by Andre Joyal) and Complete Segal spaces arising from simplicial categories (by Julie Bergner), I didn't quite find the right statement to make the comparison work.
As a last word of motivation, I want to add that I stumbled upon these questions when I thought about Galois descent, where one often considers objects with twisted group actions.
|
Periodic solutions of some classes of continuous second-order differential equations
1.
Departament de Matemátiques, Universitat Autónoma de Barcelona, 08193 Bellaterra, Barcelona, Catalonia, Spain
2.
Department of Mathematics, Laboratory LMA, University of Annaba, Elhadjar, 23 Annaba, Algeria
We study the periodic solutions of the second-order differential equations of the form $ \ddot x ± x^{n} = μ f(t), $ or $ \ddot x ± |x|^{n} = μ f(t), $ where $n=4,5,...$, $f(t)$ is a continuous $T$-periodic function such that $\int_0^T {f\left( t \right)} dt\ne 0$, and $μ$ is a positive small parameter. Note that the differential equations $ \ddot x ± x^{n} = μ f(t)$ are only continuous in $t$ and smooth in $x$, and that the differential equations $ \ddot x ± |x|^{n} = μ f(t)$ are only continuous in $t$ and locally-Lipschitz in $x$.
Mathematics Subject Classification:Primary: 37G15, 37C80, 37C3. Citation:Jaume Llibre, Amar Makhlouf. Periodic solutions of some classes of continuous second-order differential equations. Discrete & Continuous Dynamical Systems - B, 2017, 22 (2) : 477-482. doi: 10.3934/dcdsb.2017022
|
HALP: High-Accuracy Low-Precision Training by Chris De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Chris Aberger, Kunle Olukotun, and Chris Ré Using fewer bits of precision to train machine learning models limits training accuracy—or does it? This post describes cases in which we can get high-accuracy solutions using low-precision computation via a technique called bit centering, and our theory to explain what's going on.
Low-precision computation has been gaining a lot of traction in machine learning. Companies have even started developing new hardware architectures that natively support and accelerate low-precision operations including Microsoft's Project Brainwave and Google's TPU. Even though using low precision can have a lot of systems benefits, low-precision methods have been used primarily for inference—not for training. Previous low-precision training algorithms suffered from a
fundamental tradeoff: when calculations use fewer bits, more round-off error is added, which limits training accuracy. According to conventional wisdom, this tradeoff limits practitioners' ability to deploy low-precision training algorithms in their systems. But is this tradeoff really fundamental? Is it possible to design algorithms that use low precision without it limiting their accuracy?
It turns out that yes, it is sometimes possible to get high-accuracy solutions from low-precision training—and here we'll describe a new variant of stochastic gradient descent (SGD) called high-accuracy low-precision (HALP) that can do it. HALP can do better than previous algorithms because it reduces the two sources of noise that limit the accuracy of low-precision SGD: gradient variance and round-off error. To reduce noise from gradient variance, HALP uses a known technique called stochastic variance-reduced gradient (SVRG). SVRG periodically uses full gradients to decrease the variance of the gradient samples used in SGD. To reduce noise from quantizing numbers into a low-precision representation, HALP uses a new technique we call bit centering. The intuition behind bit centering is that as we get closer to the optimum, the gradient gets smaller in magnitude and in some sense carries less information, so we should be able to compress it. By dynamically re-centering and re-scaling our low-precision numbers, we can lower the quantization noise as the algorithm converges. HALP is provably able to produce arbitrarily accurate solutions at the same linear convergence rate as full-precision SVRG, while using low-precision iterates with a fixed number of bits. This result upends the conventional wisdom about what low-precision training algorithms can accomplish.

Why was low-precision SGD limited?
First, to set the stage: we want to solve training problems of the form\[ \text{minimize } f(w) = \frac{1}{N} \sum_{i=1}^N f_i(w) \text{ over } w \in \mathbb{R}^d. \]This is the classic empirical risk minimization problem used to train many machine learning models, including deep neural networks. One standard way of solving this is with
stochastic gradient descent, which is an iterative algorithm that approaches the optimum by running\[ w_{t+1} = w_t - \alpha \nabla f_{i_t}(w_t) \]where \( i_t \) is an index randomly chosen from \( \{1, \ldots, N\} \) at each iteration. We want to run an algorithm like this, but make the iterates \( w_t \) low-precision. That is, we want them to use fixed-point arithmetic with a small number of bits, typically 8 or 16 bits (this is small compared with the 32-bit or 64-bit floating point numbers that are standard for these algorithms). But when this is done directly to the SGD update rule, we run into a representation problem: the solution to the problem \( w^* \) may not be representable in the chosen fixed-point representation. For example, if we use an 8-bit fixed-point representation that can store the integers \( \{ -128, -127, \ldots, 127 \} \), and the true solution is \( w^* = 100.5 \), then we can't get any closer than a distance of \( 0.5 \) to the solution, since we can't even represent non-integers. Beyond this, the round-off error that results from converting the gradients to fixed-point can slow down convergence. These effects together limit the accuracy of low-precision SGD.

Bit Centering
When we are running SGD, in some sense what we are actually doing is averaging (or summing up) a bunch of gradient samples. The key idea behind bit centering is that as the gradients become smaller, we can average them with less error using the same number of bits. To see why, think about averaging a bunch of numbers in \([-100, 100]\) and compare this to averaging a bunch of numbers in \([-1, 1]\). In the former case, we'd need to choose a fixed-point representation that can cover the entire range \([-100, 100]\) (for example, \( \{ -128, -127, \ldots, 126, 127 \} \)), while in the latter case, we can choose one that covers \([-1, 1]\) (for example, \( \{ -\frac{128}{127}, -\frac{127}{127}, \ldots, \frac{126}{127}, \frac{127}{127} \} \)). This means that with a fixed number of bits, the delta, the difference between adjacent representable numbers, is smaller in the latter case than in the former; as a consequence, the round-off error will also be lower.
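A few lines of plain Python make both effects concrete. The `quantize` helper below is our own illustrative sketch, not the paper's implementation:

```python
import random

def quantize(x, scale, bits=8):
    """Round x onto a signed fixed-point grid: the integers in
    [-2**(bits-1), 2**(bits-1) - 1], each worth `scale` units.
    (Illustrative helper, not the paper's implementation.)"""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = max(lo, min(hi, round(x / scale)))
    return q * scale

# Representation problem: with scale = 1.0 the 8-bit grid holds only the
# integers {-128, ..., 127}, so w* = 100.5 is simply not representable.
print(quantize(100.5, scale=1.0))      # lands on a neighboring integer

# Round-off error shrinks with the range the grid has to cover.
random.seed(0)
wide = [random.uniform(-100, 100) for _ in range(10_000)]
narrow = [w / 100 for w in wide]       # same data, rescaled into [-1, 1]
err_wide = sum(abs(quantize(w, 100 / 127) - w) for w in wide) / len(wide)
err_narrow = sum(abs(quantize(w, 1 / 127) - w) for w in narrow) / len(narrow)
print(err_wide / err_narrow)           # roughly the ratio of the ranges, ~100
```

Same number of bits in both cases; only the scale of the grid changes, and the average round-off error drops in proportion.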
This observation points to the fix: to average the numbers in \([-1, 1]\) with less error than the ones in \([-100, 100]\), we needed to use a different fixed-point representation. The insight is that we should dynamically update the low-precision representation: as the gradients get smaller, we should use fixed-point numbers that have a smaller delta and cover a smaller range.
But how do we know how to update our representation? What range do we need to cover? Well, if our objective is strongly convex with parameter \( \mu \), then whenever we take a full gradient at some point \( w \), we can bound the location of the optimum with\[ \| w - w^* \| \le \frac{1}{\mu} \| \nabla f(w) \|. \] This inequality gives us a range of values in which the solution can be located, and so
whenever we compute a full gradient, we can re-center and re-scale the low-precision representation to cover this range. This process is illustrated in the following figure.
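A minimal sketch of the re-centering step. The function names, and passing \( \mu \) as an explicit argument, are our own illustrative assumptions, not the paper's interface:

```python
import math

def bit_center(w_full, grad_full, mu, bits=8):
    """Pick a new fixed-point grid centered at the current iterate.

    By strong convexity, the optimum lies within ||grad|| / mu of w_full,
    so the grid only needs to cover that ball. (Sketch; `mu` and `bits`
    are illustrative hyperparameters.)"""
    radius = math.sqrt(sum(g * g for g in grad_full)) / mu
    scale = radius / 2 ** (bits - 1)   # delta between representable points
    return list(w_full), scale         # offset and scale define the grid

def to_fixed_point(w, center, scale, bits=8):
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return [max(lo, min(hi, round((wi - ci) / scale)))
            for wi, ci in zip(w, center)]

def from_fixed_point(q, center, scale):
    return [ci + qi * scale for qi, ci in zip(q, center)]
```

After each full-gradient computation, the iterate is stored as small integers on this grid; as the gradient norm shrinks, so does `scale`, and with it the quantization error.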
We call this operation bit centering. Note that even if our objective is not strongly convex, we can still perform bit centering: the parameter \( \mu \) simply becomes a hyperparameter of the algorithm. With periodic bit centering, as the algorithm converges, the quantization error decreases—and it turns out that this can let it converge to arbitrarily accurate solutions.

HALP
HALP is our algorithm: it runs SVRG and uses bit centering, with a full gradient at every epoch, to update the low-precision representation. The full details and algorithm statement are in the paper; here, we'll just present an overview of those results. First, we showed that for strongly convex, Lipschitz smooth functions (this is the standard setting under which the convergence rate of SVRG was originally analyzed), as long as the number of bits \( b \) we use satisfies\[ 2^b > O\left(\kappa \sqrt{d} \right) \]where \( \kappa \) is the
condition number of the problem, then for an appropriate setting of the step size and epoch length (details for how to set these are in the paper), HALP will converge at a linear rate to arbitrarily accurate solutions. More explicitly, for some \( 0 < \gamma < 1 \),\[ \mathbf{E}\left[ f(\tilde w_{K+1}) - f(w^*) \right] \le \gamma^K \left( f(\tilde w_1) - f(w^*) \right) \]where \( \tilde w_{K+1} \) denotes the value of the iterate after the \( K \)-th epoch. We can see this happening in the following figure.
This figure evaluates HALP on linear regression on a synthetic dataset with 100 features and 1000 examples. It compares HALP with baseline full-precision SGD and SVRG, low-precision SGD (LP-SGD), and a low-precision version of SVRG without bit centering (LP-SVRG). Notice that HALP converges to very high-accuracy solutions even with only 8 bits (although it is eventually limited by floating-point error). In this case HALP converges to an even higher-accuracy solution than full-precision SVRG because HALP uses less floating-point arithmetic and therefore is less sensitive to floating-point inaccuracy.
...and there's more!
This was only a selection of results:
there's a lot more in the paper. We showed that HALP matches SVRG's convergence trajectory, even for deep learning models. We implemented HALP efficiently, and showed that it can run up to \( 4 \times \) faster than full-precision SVRG on the CPU. We also implemented HALP in TensorQuant, a deep learning library, and showed that it can exceed the validation performance of plain low-precision SGD on some deep learning tasks.
The obvious but exciting next step is to implement HALP efficiently on low-precision hardware, following up on our work for the next generation of compute architectures (at ISCA 2017).
|
Let $k$ be an algebraically closed field, and $f_0,\dots,f_m \in k[x_0,\dots,x_n]$ be homogeneous polynomials of the same degree. Denote by $I\subset k[x_0,\dots,x_m]$ the kernel of the homomorphism sending $x_i$ to $f_i$. Do we have the following statement?
For any $\xi\in k^{m+1}$, $\xi$ is a common zero of $I$ if and only if $\xi=(f_0(a),\dots,f_m(a))$ for some $a\in k^{n+1}$ ---(I)
I think that the direction $\Leftarrow$ is obviously true, so I would want to prove the other direction.
The statement holds if we assume that $f_0,\dots,f_m$ have no common zeros besides $(0,\dots,0)$, since then we have a regular map $f=(f_0,\dots,f_m)$ from $\mathbb{P}^n$ to $\mathbb{P}^m$, and we have:

The image of $f$ is a closed set in the Zariski topology of $\mathbb{P}^m$ ---(II)
Let $J$ be the ideal of the image of $f$; I want to show that $J=I$.
We have $I\subset J$ since every polynomial in $I$ vanishes on $f(\mathbb{P}^n)$. Conversely, for any $p\in J$, we have $(p\circ f)(a)=p(f(a))=0$ for all $a\in \mathbb{P}^n$, and this is possible only when $p\circ f=0$; thus $p\in I$, hence $J\subset I$.
I don't know how to proceed in the case that $f_0,\dots,f_m$ have a common zero apart from $(0,\dots,0)$. Also, for the case that those $f_i$ do not have a common root in $\mathbb{P}^n$, I want to find a proof not requiring (II), since (I) would imply (II) as well.
Any help or hints will be appreciated.
|
Created in the early 17th century, the gas laws have assisted scientists in finding the volume, amount, pressure and temperature of gases. The gas laws consist of three primary laws: Charles' Law, Boyle's Law and Avogadro's Law (all of which will later combine into the General Gas Equation and Ideal Gas Law).
Introduction
The three fundamental gas laws describe the relationships among pressure, temperature, volume and amount of gas. Boyle's Law tells us that the volume of a gas increases as the pressure decreases. Charles' Law tells us that the volume of a gas increases as the temperature increases. And Avogadro's Law tells us that the volume of a gas increases as the amount of gas increases. The ideal gas law is the combination of the three simple gas laws.
Ideal Gases
An ideal gas, or perfect gas, is the theoretical substance that helps establish the relationship among four gas variables: pressure (P), volume (V), amount of gas (n) and temperature (T). Its characteristics are as follows: the particles of the gas are extremely small, so the particles themselves occupy essentially no volume; the particles are in constant, random, straight-line motion; there are no forces between the particles; and the particles collide only elastically, with each other and with the walls of the container.
A real gas, in contrast, has particles with real volume, and collisions between particles are not elastic, because there are attractive forces between them. As a result, the volume of a real gas is larger than that of an ideal gas, and the pressure of a real gas is lower than that of an ideal gas. All real gases tend toward ideal gas behavior at low pressure and relatively high temperature.
The compressibility factor (Z) tells us how much a real gas deviates from ideal gas behavior.
\[ Z = \dfrac{PV}{nRT} \]
For ideal gases, \( Z = 1 \). For real gases, \( Z\neq 1 \).
Boyle's Law
In 1662, Robert Boyle discovered the correlation between pressure (P) and volume (V) (assuming temperature (T) and amount of gas (n) remain constant):
\[ P\propto \dfrac{1}{V} \rightarrow PV=x \]
where x is a constant depending on amount of gas at a given temperature.
Pressure is inversely proportional to Volume
Another form of the equation (assuming there are 2 sets of conditions, and setting both constants equal to each other) that might help solve problems is:
\[ P_1V_1 = x = P_2V_2 \]
Example 1.1
A 17.50 mL sample of gas is at 4.500 atm. What will be the volume if the pressure becomes 1.500 atm, with a fixed amount of gas and temperature?
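Applying Boyle's Law, the solution works out as:

\[ V_2 = \dfrac{P_1V_1}{P_2} = \dfrac{4.500\,\text{atm} \times 17.50\,\text{mL}}{1.500\,\text{atm}} = 52.50\,\text{mL} \]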
Charles' Law
In 1787, French physicist Jacques Charles discovered the correlation between temperature (T) and volume (V) (assuming pressure (P) and amount of gas (n) remain constant):
\[ V \propto T \rightarrow V=yT \]
where y is a constant depending on the amount of gas and the pressure. Volume is directly proportional to temperature.
Another form of the equation (assuming there are 2 sets of conditions, and setting both constants equal to each other) that might help solve problems is:
\[ \dfrac{V_1}{T_1} = y = \dfrac{V_2}{T_2} \]
Example 1.2
A sample of carbon dioxide in a pump has a volume of 20.5 mL at 40.0 °C. What will its volume be at 60.0 °C, with the pressure and amount of gas held constant?
\[ V_2=\dfrac{V_1 \cdot T_2}{T_1} = \dfrac{20.5\,\text{mL} \times (60.0+273.15)\,\text{K}}{(40.0+273.15)\,\text{K}} = 21.8\,\text{mL} \]
Avogadro's Law
In 1811, Amedeo Avogadro completed Gay-Lussac's work by finding the correlation between the amount of gas (n) and volume (V) (assuming temperature (T) and pressure (P) remain constant):
\[ V \propto n \rightarrow V = zn\]
where z is a constant depending on pressure and temperature.
Volume (V) is directly proportional to the amount of gas (n).
Another form of the equation (assuming there are 2 sets of conditions, and setting both constants equal to each other) that might help solve problems is:
\[ \dfrac{V_1}{n_1} = z= \dfrac{V_2}{n_2}\]
Example 1.3
A pump contains 3.80 g of oxygen gas in a volume of 150 mL. If 1.20 g of oxygen gas is added into the pump, what will be the new volume of the oxygen gas, with temperature and pressure held constant?
\[ n_1= \dfrac{m_1}{M_{\text{O}_2}} = \dfrac{3.80\,\text{g}}{32.0\,\text{g}\cdot\text{mol}^{-1}}, \qquad n_2= \dfrac{m_2}{M_{\text{O}_2}} = \dfrac{5.00\,\text{g}}{32.0\,\text{g}\cdot\text{mol}^{-1}} \]
\[ V_2=\dfrac{V_1 \cdot n_2}{n_1} = \dfrac{150\,\text{mL} \times 5.00\,\text{g}}{3.80\,\text{g}} = 197\,\text{mL} \]
Ideal Gas Law
The ideal gas law is the combination of the three simple gas laws. By setting all three laws directly or inversely proportional to Volume, you get:
\[ V \propto \dfrac{nT}{P}\]
Next, replacing the proportionality sign with a constant (R), you get:
\[ V = \dfrac{RnT}{P}\]
And finally get the equation:
\[ PV = nRT \]
where P = the absolute pressure of the gas, V = the volume of the gas, n = the amount of gas (in moles), T = the absolute temperature, and R = the gas constant.
Here, R is the called the gas constant. The value of R is determined by experimental results. Its numerical value changes with units.
R = gas constant = 8.3145 J·mol⁻¹·K⁻¹ (SI unit)
= 0.082057 L·atm·K⁻¹·mol⁻¹
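A small helper (our own illustrative sketch, not from the text) shows how PV = nRT can be solved for whichever variable is missing:

```python
R_L_ATM = 0.082057  # gas constant in L·atm·K⁻¹·mol⁻¹

def ideal_gas(P=None, V=None, n=None, T=None):
    """Return the one missing quantity among P (atm), V (L), n (mol), T (K)."""
    if P is None:
        return n * R_L_ATM * T / V
    if V is None:
        return n * R_L_ATM * T / P
    if n is None:
        return P * V / (R_L_ATM * T)
    return P * V / (n * R_L_ATM)

# E.g. 0.75 L of gas at 655 mmHg and 25.0 °C (the worked example that follows):
print(round(ideal_gas(P=655 / 760, V=0.75, T=25 + 273.15), 3))  # 0.026 mol
```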
Example 1.4
At 655 mm Hg and 25.0 °C, a sample of gas occupies 0.75 L. How many moles of gas are present?
n = ?
\[ n=\dfrac{PV}{RT} = \dfrac{655\,\text{mm Hg} \times \dfrac{1\,\text{atm}}{760\,\text{mm Hg}} \times 0.75\,\text{L}}{0.082057\,\text{L}\cdot\text{atm}\cdot\text{mol}^{-1}\cdot\text{K}^{-1} \times (25+273.15)\,\text{K}} = 0.026\,\text{mol} \]
Evaluation of the Gas Constant, R
You can get the numerical value of the gas constant, R, from the ideal gas equation, PV = nRT. At standard temperature and pressure, where the temperature is 0 °C (273.15 K), the pressure is 1 atm, and one mole of gas occupies 22.4140 L,
\[ R= \dfrac{PV}{nT} = \dfrac{1\,\text{atm} \times 22.4140\,\text{L}}{1\,\text{mol} \times 273.15\,\text{K}} = 0.082057 \; \text{L}\cdot\text{atm}\cdot\text{mol}^{-1}\,\text{K}^{-1} \]
In SI units, with 1 atm = 101325 Pa:
\[ R= \dfrac{PV}{nT} = \dfrac{101325\,\text{Pa} \times 2.24140 \times 10^{-2}\,\text{m}^3}{1\,\text{mol} \times 273.15\,\text{K}} = 8.3145\; \text{m}^3\,\text{Pa}\cdot\text{mol}^{-1}\cdot\text{K}^{-1} \]
General Gas Equation
In an ideal gas situation, \( \frac{PV}{nRT} = 1 \) (assuming all gases are "ideal" or perfect). In cases where \( \frac{PV}{nRT} \neq 1 \), or if there are multiple sets of conditions (pressure (P), volume (V), amount of gas (n), and temperature (T)), use the General Gas Equation:
Assuming 2 set of conditions:
Initial Case: Final Case:
\[ P_iV_i = n_iRT_i \; \; \; \; \; \; P_fV_f = n_fRT_f \]
Setting both sides to R (which is a constant with the same value in each case), one gets:
\[ R= \dfrac{P_iV_i}{n_iT_i} \; \; \; \; \; \; R= \dfrac{P_fV_f}{n_fT_f} \]
If one substitutes one R for the other, one will get the final equation and the General Gas Equation:
\[ \dfrac{P_iV_i}{n_iT_i} = \dfrac{P_fV_f}{n_fT_f} \]
Standard Conditions
If a variable is not given in a problem, assume it takes its standard value. For constant temperature, pressure and amount, the standard conditions are:
1. Temperature: absolute zero is 0 K = -273.15 °C; T(K) = T(°C) + 273.15 (the temperature must be in kelvin)
2. Pressure: 1 atmosphere (760 mmHg)
3. Amount: 1 mol of gas occupies 22.4 L at STP
4. Gas constant: R = 8.3145 J·mol⁻¹·K⁻¹ = 0.082057 L·atm·K⁻¹·mol⁻¹
The Van der Waals Equation For Real Gases
Dutch physicist Johannes van der Waals developed an equation to describe the deviation of real gases from the ideal gas. Two correction terms are added to the ideal gas equation: the pressure term becomes \( \left(P + a\frac{n^2}{V^2}\right) \) and the volume term becomes \( (V-nb) \).
Since attractive forces between molecules do exist in real gases, the pressure of a real gas is actually lower than the ideal gas equation predicts. This condition is accounted for in the van der Waals equation: the correction term \( a\frac{n^2}{V^2} \) corrects the pressure of a real gas for the effect of attractive forces between gas molecules.
Similarly, because gas molecules themselves have volume, the correction term \( nb \) is subtracted from the total volume to account for the space occupied by the gas molecules.
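Combining both corrections gives the standard form of the van der Waals equation:

\[ \left(P + a\dfrac{n^2}{V^2}\right)(V - nb) = nRT \]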
Practice Problems
1. If 4 L of H₂ gas at 1.43 atm is at standard temperature, and the pressure were to increase by a factor of 2/3, what is the final volume of the H₂ gas? (Hint: Boyle's Law)
2. If 1.25 L of gas exists at 35 °C with a constant pressure of 0.70 atm in a cylindrical block, and the volume were to be multiplied by a factor of 3/5, what is the new temperature of the gas? (Hint: Charles's Law)
3. A balloon with 4.00 g of helium gas has a volume of 500 mL, with temperature and pressure held constant. What will be the new volume of helium in the balloon if another 4.00 g of helium is added? (Hint: Avogadro's Law)
Solutions
1. 2.40 L
To solve this question you need to use Boyle's Law:
\[ P_1V_1 = P_2V_2 \]
Keeping the key variables in mind: temperature and the amount of gas are constant and can therefore be set aside; the only ones necessary are:
Initial pressure: 1.43 atm; initial volume: 4 L; final pressure: 1.43 atm × 5/3 = 2.38 atm; final volume (unknown): V₂
Plugging these values into the equation you get:
V₂ = (1.43 atm × 4 L)/(2.38 atm) = 2.40 L
2. 184.89 K
To solve this question you need to use Charles's Law:
Once again, keep the key variables in mind. The pressure remained constant, and since the amount of gas is not mentioned, we assume it remains constant. The key variables are:
Initial volume: 1.25 L; initial temperature: 35 °C + 273.15 = 308.15 K; final volume: 1.25 L × 3/5 = 0.75 L; final temperature: T₂
Since we need to solve for the final temperature, you can rearrange Charles's Law: \( T_2 = \frac{T_1 V_2}{V_1} \).
Once you plug in the numbers, you get: T₂ = (308.15 K × 0.75 L)/(1.25 L) = 184.89 K
3. 1000 mL or 1 L
Using Avogadro's Law to solve this problem, you can rearrange the equation into \( V_2=\frac{n_2 \cdot V_1}{n_1} \). However, you first need to convert grams of helium gas into moles.
\[ n_1 = \frac{4.00g}{4.00g/mol} = \text{1 mol} \]
Similarly, \( n_2 = \frac{8.00\,\text{g}}{4.00\,\text{g/mol}} = 2\,\text{mol} \).
\[ V_2=\frac{n_2 \cdot V_1}{n_1}\]
\[ =\frac{2\,\text{mol} \cdot 500\,\text{mL}}{1\,\text{mol}}\]
\[ = \text{1000 mL or 1L } \]
|
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE question for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
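Explicitly, the standard relation between phase difference $\Delta\varphi$ and path difference $\Delta x$ is:

$$\Delta\varphi = k\,\Delta x = \frac{2\pi}{\lambda}\,\Delta x$$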
|
This isn't an answer to conjecture 1, just an elaboration of things others mentioned.
There is every reason to think that the answer to conjecture 1 is yes, and even that for fixed $m \gt 1$ and each odd integer $x \geq 3$ there are infinitely many $n$ with $p_{n+m}-p_n-p_m=x.$ We don't know that there is even one; however, we can say with good precision how the number of solutions should be expected to grow as $n \rightarrow \infty.$ Computations always seem to conform to this with good fidelity, as far as checked. I'll explain that a bit and then point out that fixing $m=2$ would likely not be the most fruitful way to find a solution.
So with $m=3$ and $p_3=5,$ solving $p_{n+3}-p_n-5=15$ amounts to finding two primes $p_N$ and $p_n,$ with $N=n+3,$ such that $p_N-p_n=20.$
We do not know that $p_N-p_n=20$ happens infinitely often. For each even integer $g$ define $\pi_g(X)$ to be the number of pairs $(p_N,p_n)$ with $p_N-p_n=g$ and $p_N \leq X.$ Then $\pi_2(X)$ is the number of twin primes up to $X.$ We don't know that this grows without bound, but we can expect that it is asymptotic (in some sense which could be made precise) to $C_2\frac{X}{\ln^2X},$ where the constant is $C_2=\prod_p\left(1-\frac1{(p-1)^2}\right)$ with the product over the odd primes.
There is a similar constant $C_g$ for each even $g.$ Namely $C_g=C_2\prod_p\frac{p-1}{p-2}$ where the product is over odd primes which divide $g.$
So $p_n+20$ is expected to be prime more often than $p_n+2,$ but about as often as $p_n+10.$ I.e. $\pi_{20}(X) \sim \pi_{10}(X)\sim \frac{4}{3}\pi_2(X).$ That is a prediction which holds up quite well as far as checked.
For an amazing exposition of this, read Heuristic Reasoning in the Theory of Numbers by Pólya.
But our goal had an extra condition. We want something like: $p_n,\ p_n+6,\ p_n+14,\ p_n+20$ all prime, but $p_n+h$ composite for $h=2,4,8,10,12,16,18.$ It is possible to similarly predict how often that happens up to $X.$ Summing over a finite number of cases would give a prediction for the number of solutions of the given problem.
$m=2$ is a little easier but I wanted to use a different value.
But where is it most fruitful to look for solutions to $p_{n+m}-p_n-p_m=x$?
Here is a computation for $p_{n+m}-p_n-p_m=101$ with $n+m\lt 200$: I find $168$ solutions. (A graph of the solutions appeared here.)
Of the $168$ solutions, $61$ of them have $m \in\{34,35,36,37,38\},$ and the ratio $\frac{n}{m}$ ranges from $3$ to $5.2.$
Using $p_k \sim k\ln{k},$ one might be able to argue that for fixed $r=\frac{n}{m}$ (really, $r$ in some small range like $[3,5]$) there is a narrow range of $m$ values that would be worth searching first. Perhaps the best $r$ (given $x$) could be estimated. I would have guessed $r \sim 1$ is best, but that seems not to be the case based on this one computation. Perhaps the optimal range is past $m+n=200.$
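This kind of count is easy to reproduce with a short sieve. The script below is my own sketch, not the original computation; whether it reproduces exactly $168$ depends on the indexing conventions used (here primes are 1-indexed, $p_1=2$):

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes returning all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

p = primes_up_to(2000)  # more than the first 200 primes (p_199 = 1217)

# Count pairs (n, m) with n + m < 200 and p_{n+m} - p_n - p_m = 101.
solutions = [
    (n, m)
    for m in range(1, 200)
    for n in range(1, 200 - m)
    if p[n + m - 1] - p[n - 1] - p[m - 1] == 101
]
print(len(solutions))
```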
|
The monstrous moonshine picture is the subgraph of Conway’s big picture consisting of all lattices needed to describe the 171 moonshine groups.
It consists of:
– exactly 218 vertices (that is, lattices), out of which
– 97 are number-lattices (that is of the form $M$ with $M$ a positive integer), and
– 121 are proper number-like lattices (that is of the form $M \frac{g}{h}$ with $M$ a positive integer, $h$ a divisor of $24$ and $1 \leq g \leq h$ with $(g,h)=1$).
The $97$ number lattices are closed under taking divisors, and the corresponding Hasse diagram has the following shape
Here, number-lattices have the same colour if they have the same local structure in the moonshine picture (that is, have a similar neighbourhood of proper number-like lattices).
There are 7 different types of local behaviour:
The white numbered lattices have no proper number-like neighbours in the picture.
The yellow number lattices (2,10,14,18,22,26,32,34,40,68,80,88,90,112,126,144,180,208 = 2M) have local structure
\[
\xymatrix{M \ar@{-}[r] & \color{yellow}{2M} \ar@{-}[r] & M \frac{1}{2}} \]
which involves all $2$-nd (square) roots of unity centered at the lattice.
The green number lattices (3,15,21,39,57,93,96,120 = 3M) have local structure
\[
\xymatrix{& M \ar@[red]@{-}[d] & \\ M \frac{1}{3} \ar@[red]@{-}[r] & \color{green}{3M} \ar@[red]@{-}[r] & M \frac{2}{3}} \]
which involve all $3$-rd roots of unity centered at the lattice.
The blue number lattices (4,16,20,28,36,44,52,56,72,104 = 4M) have as local structure
\[
\xymatrix{M \frac{1}{2} \ar@{-}[d] & & M \frac{1}{4} \ar@{-}[d] \\ 2M \ar@{-}[r] & \color{blue}{4M} \ar@{-}[r] & 2M \frac{1}{2} \ar@{-}[d] \\ M \ar@{-}[u] & & M \frac{3}{4}} \]
and involve the $2$-nd and $4$-th root of unity centered at the lattice.
The purple number lattices (6,30,42,48,60 = 6M) have local structure
\[
\xymatrix{& M \frac{1}{3} \ar@[red]@{-}[d] & 2M \frac{1}{3} & M \frac{1}{6} \ar@[red]@{-}[d] & \\ M \ar@[red]@{-}[r] & 3M \ar@{-}[r] \ar@[red]@{-}[d] & \color{purple}{6M} \ar@{-}[r] \ar@[red]@{-}[u] \ar@[red]@{-}[d] & 3M \frac{1}{2} \ar@[red]@{-}[r] \ar@[red]@{-}[d] & M \frac{5}{6} \\ & M \frac{2}{3} & 2M \frac{2}{3} & M \frac{1}{2} & } \]
and involve all $2$-nd, $3$-rd and $6$-th roots of unity centered at the lattice.
The unique brown number lattice 8 has local structure
\[
\xymatrix{& & 1 \frac{1}{4} \ar@{-}[d] & & 1 \frac{1}{8} \ar@{-}[d] & \\ & 1 \frac{1}{2} \ar@{-}[d] & 2 \frac{1}{2} \ar@{-}[r] \ar@{-}[d] & 1 \frac{3}{4} & 2 \frac{1}{4} \ar@{-}[r] & 1 \frac{5}{8} \\ 1 \ar@{-}[r] & 2 \ar@{-}[r] & 4 \ar@{-}[r] & \color{brown}{8} \ar@{-}[r] & 4 \frac{1}{2} \ar@{-}[d] \ar@{-}[u] & \\ & & & 1 \frac{7}{8} \ar@{-}[r] & 2 \frac{3}{4} \ar@{-}[r] & 1 \frac{3}{8}} \]
which involves all $2$-nd, $4$-th and $8$-th roots of unity centered at $8$.
Finally, the local structure for the central red lattices $12,24 = 12M$ is
\[
\xymatrix{ M \frac{1}{12} \ar@[red]@{-}[dr] & M \frac{5}{12} \ar@[red]@{-}[d] & M \frac{3}{4} \ar@[red]@{-}[dl] & & M \frac{1}{6} \ar@[red]@{-}[dr] & M \frac{1}{2} \ar@[red]@{-}[d] & M \frac{5}{6} \ar@[red]@{-}[dl] \\ & 3M \frac{1}{4} \ar@{-}[dr] & 2M \frac{1}{6} \ar@[red]@{-}[d] & 4M \frac{1}{3} \ar@[red]@{-}[d] & 2M \frac{1}{3} \ar@[red]@{-}[d] & 3M \frac{1}{2} \ar@{-}[dl] & \\ & 2M \frac{1}{2} \ar@[red]@{-}[r] & 6M \frac{1}{2} \ar@{-}[dl] \ar@[red]@{-}[d] \ar@{-}[r] & \color{red}{12M} \ar@[red]@{-}[d] \ar@{-}[r] & 6M \ar@[red]@{-}[d] \ar@{-}[dr] \ar@[red]@{-}[r] & 2M & \\ & 3M \frac{3}{4} \ar@[red]@{-}[dl] \ar@[red]@{-}[d] \ar@[red]@{-}[dr] & 2M \frac{5}{6} & 4M \frac{2}{3} & 2M \frac{2}{3} & 3M \ar@[red]@{-}[dl] \ar@[red]@{-}[d] \ar@[red]@{-}[dr] & \\ M \frac{1}{4} & M \frac{7}{12} & M \frac{11}{12} & & M \frac{1}{3} & M \frac{2}{3} & M} \]
It involves all $2$-nd, $3$-rd, $4$-th, $6$-th and $12$-th roots of unity with center $12M$.
No doubt this will be relevant in connecting moonshine with non-commutative geometry and issues of replicability as in Plazas’ paper Noncommutative Geometry of Groups like $\Gamma_0(N)$.
Another of my pet follow-up projects is to determine whether or not the monster group $\mathbb{M}$ dictates the shape of the moonshine picture.
That is, can one recover the 97 number lattices and their partition in 7 families starting from the set of element orders of $\mathbb{M}$, applying some set of simple rules?
One of these rules will follow from the two equivalent notations for lattices, and the two different sets of roots of unities centered at a given lattice. This will imply that if a number lattice belongs to a given family, certain divisors and multiples of it must belong to related families.
If this works out, it may be a first step towards a possibly new understanding of moonshine.
|
Monotonicity and symmetry of solutions to fractional Laplacian equation
School of Mathematical Sciences, Shanghai Jiaotong University, Shanghai 200240, China
Let $0 < \alpha < 2$ and let $\Omega$ be a bounded domain in $\mathbb{R}^{n}$. Consider the Dirichlet problem
$\begin{equation}\left\{\begin{array}{ll}(-\Delta)^{\alpha/2} u(x)=f(x,u,\nabla{u}),~u(x)>0,&\qquad x\in{\Omega}, \\u(x)\equiv0,&\qquad x\notin{\Omega}.\end{array}\right. \tag{1}\label{p1}\end{equation}$
In this paper, we develop a direct method of moving planes for the fractional Laplacian to obtain the monotonicity and symmetry of the positive solutions of a semi-linear equation involving the fractional Laplacian. By using the integral definition of the fractional Laplacian, we first introduce various maximum principles which play an important role in the process of moving planes. Then we establish the monotonicity and symmetry of positive solutions of the semi-linear equations involving the fractional Laplacian.
Keywords: Monotonicity, symmetry, fractional Laplacian, Dirichlet problem, positive solutions, direct method of moving planes for the fractional Laplacian.
Mathematics Subject Classification: Primary: 35S15, 35B06, 35J61.
Citation: Tingzhi Cheng. Monotonicity and symmetry of solutions to fractional Laplacian equation. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7): 3587-3599. doi: 10.3934/dcds.2017154
show all references
References:
[1] [2]
H. Berestycki, L. Caffarelli and L. Nirenberg, Symmetry for elliptic equations in a halfspace, in
[3]
H. Berestycki, L. Caffarelli and L. Nirenberg,
Monotonicity for elliptic equations in unbounded Lipschitz domains,
[4]
H. Berestycki, L. Caffarelli and L. Nirenberg,
Inequalities for second order elliptic equations with applications to unbounded domains I,
[5] [6]
H. Berestycki and L. Nirenberg,
Monotonicity, symmetry and antisymmetry of solutions of semilinear elliptic equations,
[7]
C. Brandle, E. Colorado, de Pablo A. and U. Sanchez,
A concave-convex elliptic problem involving the fractional Laplacian,
[8]
H. Brezis and L. A. Peletier,
Asymptotics for elliptic equations involving critical growth. Partial differential equations and the calculus of variations, Vol. Ⅰ, Progr,
[9] [10] [11] [12] [13] [14] [15] [16] [17]
C. V. Coffman,
Uniqueness of the ground state solution for $\Delta u-u+u^3$ and a variational characterization of other solutions,
[18] [19] [20] [21]
B. Gidas, W. M. Ni and L. Nirenberg, Symmetry of positive solutions of nonlinear elliptic equations in $\mathbb{R}^n$, in
[22]
H. G. Kaper and M. K. Kwong,
Uniqueness of non-negative solutions of a class of semilinear elliptic equations,
[23] [24]
C. Li,
Monotonicity and symmetry of solutions of fully nonlinear elliptic equations on bounded domains,
[25]
C. Li,
Monotonicity and symmetry of solutions of fully nonlinear elliptic equations on unbounded domains,
[26] [27] [28]
L. Zhang and T. Cheng, Liouville theorems involving the fractional Laplacian on the upper half Euclidean space, submitted to Acta Applicandae Mathematicae.Google Scholar
[1] [2] [3] [4]
Leyun Wu, Pengcheng Niu.
Symmetry and nonexistence of positive solutions to fractional
[5] [6]
Xudong Shang, Jihui Zhang, Yang Yang.
Positive solutions of nonhomogeneous fractional Laplacian problem with critical exponent.
[7]
Tadeusz Kulczycki, Robert Stańczy.
Multiple solutions for Dirichlet nonlinear BVPs involving fractional Laplacian.
[8]
Ran Zhuo, Wenxiong Chen, Xuewei Cui, Zixia Yuan.
Symmetry and non-existence of solutions for a nonlinear system involving the fractional Laplacian.
[9]
Ran Zhuo, Yan Li.
Nonexistence and symmetry of solutions for Schrödinger systems involving fractional Laplacian.
[10]
Selma Yildirim Yolcu, Türkay Yolcu.
Sharper estimates on the eigenvalues of Dirichlet fractional Laplacian.
[11]
Vladimir Georgiev, Koichi Taniguchi.
On fractional Leibniz rule for Dirichlet Laplacian in exterior domain.
[12] [13]
Dengfeng Lü, Shuangjie Peng.
On the positive vector solutions for nonlinear fractional Laplacian systems with linear coupling.
[14]
Rongrong Yang, Zhongxue Lü.
The properties of positive solutions to semilinear equations involving the fractional Laplacian.
[15]
Lishan Lin.
A priori bounds and existence result of positive solutions for fractional Laplacian systems.
[16]
Mikko Kemppainen, Peter Sjögren, José Luis Torrea.
Wave extension problem for the fractional Laplacian.
[17]
Phuong Le.
Symmetry of singular solutions for a weighted Choquard equation involving the fractional $ p $-Laplacian.
[18] [19] [20]
Salvatore A. Marano, Nikolaos S. Papageorgiou.
Positive solutions to a Dirichlet problem with $p$-Laplacian
and concave-convex nonlinearity depending on a parameter.
2018 Impact Factor: 1.143
Tools Metrics Other articles
by authors
[Back to Top]
|
Ltoh is a customizable LaTeX to HTML converter. It handles text, tables, and hypertext links. ltoh is a large Perl script, and hence is (almost completely) platform independent. ltoh is customizable in that you can specify how to translate a given LaTeX2e macro; for macros without a specification, ltoh will give a friendly warning.
See the ltoh web page for documentation, the latest release, and how to contact the author (see the bottom of the web page). Naturally, the HTML version of this document was generated using ltoh, and in my opinion looks better than the LaTeX2e version.
Ltoh has two main restrictions. First, ltoh does not handle math equations, which in general are difficult to display in HTML. [Some have resorted to converting the latex equations into Postscript (PS), converting the PS to a bitmapped figure, and then displaying the figure in HTML. This is all too difficult for me.] Second, ltoh requires LaTeX macro parameters to be delimited by braces; in practice, ltoh might be unsuitable for most existing TeX code.
Surprisingly, I often preview my LaTeX2e documents with ltoh instead of running latex, dvips, and ghostview.
Ltoh is distributed as either a zip file or a gzipped tar file (about 75K).
Both distributions contain the following files.
ltoh.pl        The perl script that does everything
ltoh.specs     The default specifications
readme.html    Generated by ltoh
readme.dvi     LaTeX2e dvi output
readme.ps      Uses Times Roman
readme.txt     Text version (generated from netscape)
README
rq-ltoh.specs  An example of my specifications
rq209.sty      Allow use of new-LaTeX2e-style font macros under old latex
Ltoh version 97e requires Perl. Run

perl -v

to see the version of Perl you have. Additionally, the default ltoh specifications are based on standard new latex macros. Finally, to make full use of HTML tables, future versions of ltoh are likely to support multiple rows in the table packages only found in the new latex.
ltoh relies on unique matching braces to delimit arguments to the latex macros. In particular, the font family and size commands in old latex do not use braces to delimit arguments. Thus, ltoh does not (and probably never will) handle old latex 2.09 font specifications. Instead, you must use the LaTeX2e forms.
(Old latex) Normal but switch \bf to {bold \it then italics, back to} bold \normalfont then normal. (New latex) Normal but switch \textbf{to bold \textit{then italics, back to} bold} then normal.
Produces:
Normal but switch to boldthen normal. then italics, back tobold
Using the old latex syntax, ltoh cannot determine when the bold and italic fonts stop being active.
If you have the new latex on your system, use it. If you must use an old latex file, convert it to look like new latex as much as possible.
In particular, convert old-style font switches of the form {\XYZ ... } into the new-style \textXYZ{ ... }, or follow each font switch with an explicit \normalfont.
To use this file, put

\usepackage{rq209}

in your latex files. The file rq209.sty additionally defines the font size macros \fsizeTiny, ..., \fsizeHuge, which take a single brace-delimited argument. For example, use \fsizesmall{some text} instead of { \small some text }. (This author wrote rq209.sty back in 1994 because the office computer ran the old latex but the home Linux machine ran the new latex.)
Alternatively, write and use your own definitions of the \textXYZ font change macros.
(One final note.) The old latex convention is simply a poor technical choice. The current philosophy for document specifications (and even programming languages) is that parameters/arguments/blocks are clearly delimited syntactically. The use of matching braces by latex2e conforms to the SGML syntax, as does HTML, which ubiquitously uses matching begin and end tags.
To generate the HTML file xyz.html from the latex file xyz.tex, assuming ltoh is in your path, run:

prompt> ltoh xyz.tex

or

prompt> perl fullpath-of-ltoh.pl xyz.tex

(I have not tested ltoh on a Win32 machine, yet...) On a Win32 machine, which cannot automatically start Perl to execute ltoh, you would probably run

prompt> perl ltoh.pl xyz.tex
There are five types of ltoh specifications. Please note the names.

- begin/end (b/e): handles a \begin{XYZ} and the matching \end{XYZ} command.
- command (comm): handles a stand-alone command, such as \par, \item or \hrule.
- simple-macro ({}): handles \simplemacro{ ... }, producing HTML-start ... HTML-end. For example, use a simple-macro specification to translate the latex macro \textbf{ ... } (switch to bold face) into the HTML <strong> ... </strong>.
- arg-macro ({N}): handles a macro taking N brace-delimited arguments. For example, \swallow{arg-1} discards its single (possibly long) argument. In the corresponding HTML, we also ``use'' the argument, by discarding it.
- assignment (:=): assigns a value to an ltoh variable.

The first four specification types are known as translation specifications.
The four types of translation specifications have the same form. Do not use leading whitespace. Here is the general form and an example of each type.
:type :latex-macro-name:HTML-start-code:HTML-end-code:reserved/not-used
:b/e  :\begin{itemize}:<UL>:</>:
:comm :\hrule:<hr>::
+comm +\homepage+http://www.best.com/~quong++
:{}   :\textbf:<STRONG>:</>:
:{2}  :\rqhttp#1#2:<a href="#2"> #1 </a>::
Each specification contains six parts.
The \homepage macro expands to HTML containing a colon, so a colon cannot be the delimiter and I have used a plus. I do not recommend using a space/tab as the delimiter, as multiple spaces/tabs are easy to overlook.
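To make the field layout concrete, here is a hypothetical Python sketch (not ltoh's actual Perl implementation) of splitting one translation spec, using the line's first character as the delimiter as described above:

```python
def parse_spec(line):
    """Split one ltoh translation spec into its first four fields.

    The first character of the line is taken as the field delimiter, so
    '+comm +\\homepage+http://...++' works even though the HTML contains ':'.
    """
    delim = line[0]
    parts = line.split(delim)
    # parts[0] is empty because the line starts with the delimiter itself
    spec_type, macro, html_start, html_end = (p.strip() for p in parts[1:5])
    return spec_type, macro, html_start, html_end

print(parse_spec(r":{} :\textbf:<STRONG>:</>:"))
# -> ('{}', '\\textbf', '<STRONG>', '</>')
```

The same function handles the plus-delimited \homepage example, returning the URL intact as the HTML start code.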
As an example of an optional regular expression, the LaTeX2e \hspace macro takes an optional * argument, and then a required horizontal length argument. In the generated HTML, we want to ignore the entire \hspace macro, and so I use the following ltoh spec.
:comm :\hspace[*]?\{[^\}]+\}:::
In an arg-macro specification, parameters are referenced with the LaTeX2e convention #1, #2, and so on. (To follow a parameter reference immediately with a digit, use braces as in #{1}.) Thus, a macro that swaps the order of its parameters would be written as
:{2} :\swap_two:#2#1::
As another example, the LaTeX2e \makebox command takes an optional alignment parameter (one of [l], [c] or [r]) followed by text to be put into the box. I use the following ltoh spec to ignore the alignment parameter and to print the text out unadorned.
:{1} :\makebox[^{]*#1:#1::
As a convenience, using </> in the HTML end code expands to the end tag(s) in reverse order of the corresponding HTML begin code. For example, I want a LaTeX2e \section to show up as a green <H2> header in HTML, so I specify
:{} :\section:<hr><H2><FONT color=green>:</>:
which is equivalent to
:{} :\section:<hr><H2><FONT color=green>:</FONT></H2></HR>:
The following table summarizes the effects of the various specifications, and the parts of the spefications used.
Type  macro name   HTML start  HTML end  input                    output
comm  \abc         XYZ         not-used  \abc                     XYZ
b/e   \begin{abc}  XYZ         ijk       \begin{abc}...\end{abc}  XYZ ... ijk
{}    \abc         XYZ         ijk       \abc{...}                XYZ ... ijk
{2}   \abc         X#2Y#1Z     not-used  \abc{===}{+++}           X+++Y===Z
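The {2} row can be mimicked with a regular-expression substitution. This is a Python illustration of the idea only (real ltoh matches nested braces, which a single regex cannot), and the macro name \abc and template X#2Y#1Z are just the table's example:

```python
import re

def apply_arg_macro(text):
    # Translate \abc{===}{+++} with template X#2Y#1Z -> X+++Y===Z.
    # [^{}]* deliberately excludes braces, so nested arguments are NOT handled.
    pattern = re.compile(r"\\abc\{([^{}]*)\}\{([^{}]*)\}")
    return pattern.sub(lambda m: f"X{m.group(2)}Y{m.group(1)}Z", text)

print(apply_arg_macro(r"\abc{===}{+++}"))  # X+++Y===Z
```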
As a final example, here's how to generate links in HTML. I define a latex macro \rqhttp and a corresponding ltoh specification. Because the tilde is accessible only in math mode, I have had to define a latex macro (\rqtilde) for it, too.

(latex macros)
\def\rqtilde{\ensuremath{\tilde{\;}}\xspace}
\def\rqhttp#1#2{#1 (\texttt{#2})}

(ltoh specs)
:comm :\rqtilde:~::
:{2} :\rqhttp#1#2:<a href="#2"> #1 </a>::

In LaTeX2e, I use the \rqhttp macro as follows.

See the \rqhttp{\ltoh webpage}{http://www.best.com/\rqtilde{}quong/ltoh}.

The resulting dvi output from latex and the HTML from ltoh look like

(Latex) See the ltoh web page (http://www.best.com/~quong/ltoh).
(HTML) See the <A HREF="http://www.best.com/~quong/ltoh"> ltoh web page </A>
Finally, a good example of using ltoh specifiers is the default ltoh spec file ltoh.specs that comes with this release.
[Aside: Technically, the simple-macro specifier is not needed, as its functionality can be duplicated with an arg-macro. Namely,

:{} :\macro:HTML-begin:HTML-end::

can be duplicated via

:{1} :\macro:HTML-begin#1HTML-end::

Nonetheless, use of a simple-macro ({}) specification is preferred, because its processing is much simpler. With a simple-macro, ltoh does not have to extract and pass the parameter, and hence it is less likely to break than an arg-macro.]
An assignment specification has two nearly identical forms. The double quotes are optional and can be used to imbed leading spaces into the string-value. The whitespace surrounding string-value is removed.
variable-name := string-value variable-name := "string-value"
title := The readme for ltoh
Here are the currently used built-in variables.
variable        default     description
title           none        Title of the resulting HTML file, via the <TITLE> tag. You must define this variable; set title in the latex file itself. (It drives me nuts when web pages don't have titles.)
url             none        URL of the home page of the author.
author          none        Author of the document.
email           none        Email address to which comments should be sent.
htmlfile_spec   $BASE.html  Name of the HTML file generated. The ltoh variable $BASE is the latex file name stripped of the directory and suffix components.
The url, author, and email variables are used in the footer of the generated HTML.
ltoh handles the LaTeX2e tabular and tabularx environments. Column alignments are read and passed on to the corresponding HTML. The known column alignments must be one of ``l c r p X''. If you define your own column alignment, it will not be understood.
ltoh handles the LaTeX2e multicolumn macro reasonably well. The column alignment is read and passed on to the corresponding HTML. I plan to support the \multirow macro soon.
ltoh ignores extraneous LaTeX2e \@ commands, but there is a small chance a complicated multiple column alignment spec will break this code. The generated HTML table has a border if one or more dividing lines appear in the LaTeX2e table.
As of version 97e, ltoh reads specifications from (i) various specification files and (ii) the LaTeX2e file itself.
(In version 97e, you should do one of the following when running ltoh.)

prompt> perl install-dir/ltoh.pl file.tex

Alternatively, define an alias, (csh)

alias ltoh perl install-dir/ltoh

or (bash)

alias ltoh=perl install-dir/ltoh

and then run

prompt> ltoh.pl file.tex
prompt> ltoh file.tex
The reason for this mess is relative symbolic links. Yes. Given an arbitrary invocation of ltoh involving symbolic links, I cannot currently determine where the ltoh.pl script actually resides (the install-dir). Once I implement this code, the setup won't be complicated.
However, if none of the preceding spec files were found, ltoh tries to read /usr/local/bin/ltoh.specs and, if that fails, tries /usr/bin/ltoh.specs. If both of these still fail, ltoh quits.
In the latex file itself, a specification is given on a line of the form

%-ltoh- ltoh-specification

ltoh strips the leading %-ltoh- and processes the remainder of the line.
If nothing else, set the title variable this way. For example, here's how this LaTeX2e file starts.

\documentclass[]{article}
... various latex commands like \usepackage
%-ltoh- title := Ltoh, a customizable LaTeX to HTML converter
%-ltoh- :comm:\ltoh:<font color=green><tt>ltoh</tt></font>::
...
\begin{document}
... the body of the document
It is not difficult to break ltoh, though there are often easy fixes by restructuring your LaTeX2e source. The command line options are:

-o ofile      generate HTML into file ofile
-I specfile   read specifications from specfile
-w N          set the warning level to N (for debugging)

ltoh assumes column specifications appear on the same line as the \begin{tabular} or the owning \multicolumn. This assumption is very reasonable and circumventing this restriction is difficult.
Ltoh first reads the entire LaTeX2e file into memory, so ltoh might run out of memory if processing a huge LaTeX2e file.
June 1996: Version 96a. Preliminary fully hard-coded (not customizable) version. Purely regular expression based. Unable to handle nested braces. Ugly, but it worked. Sort of.

July 1-15 1996: Version 96b. First working version. Able to handle commands with multiple arguments and nested arguments. Took me a lot longer than I had expected to get this working.

Jan 27-29 1997: Version 97a. Stops processing at \end{document}. Converts double backslashes \\ to <br>, which should have been done a long time ago. Fixed bug involving macros with only one parameter.

Feb 1997: Version 97b. Added HTML <p> tags whenever two or more consecutive blank lines are seen.

Mar 11-15 1997: Version 97c. Much improved handling of special characters such as {, }, <, > and @. In particular, bare braces which mean nothing in latex are stripped from the HTML. Improved paragraph detection handling. (OK, OK, "Improved ..." really means "fixed bugs in ...") No longer generates HTML comments for latex comments, by default. Version 97c was meant to be the first public release, but the tables in this readme.tex document broke ltoh badly.

Mar 19-20 1997: Version 97d. Complete rewrite of the table handling code. Latex column alignment specifications are understood and passed on to the HTML. Multiple columns specified via either \multicolumn or \mc (which is my personal abbreviation macro) are handled properly. We try to ignore extraneous LaTeX2e \@, but there is a small chance multiple columns will break this code.

Mar 25-31 1997: Version 97e. Official release. Clean up source a bit for release. Minor improvements on tables (allow end of a row to be on a separate line), paragraphs, specification files and handling special characters (allow for multiple chars on one line).
You may use ltoh freely, under the following conditions, which are covered under a BSD-style license. Here's the official license as of 31 Mar 97.
# Copyright (c) 1996, 1997 Russell W Quong.
#
# In the following, the "author" refers to "Russell Quong."
#
# Permission to use, copy, modify, distribute, and sell this software and
# its documentation for any purpose is hereby granted without fee, provided
# that the following conditions are met:
# 1. Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
# 2. All advertising materials mentioning features or use of this software
#    must display the following acknowledgement:
#    This product includes software developed by Russell Quong.
# 3. All HTML generated by ltoh must retain a visible notice that it
#    was generated by ltoh and contain a link to the ltoh web page
#
# Any or all of these provisions can be waived if you have specific,
# prior permission from the author.
#
# THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
# EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
# WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
#
# IN NO EVENT SHALL RUSSELL QUONG BE LIABLE FOR ANY SPECIAL,
# INCIDENTAL, INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY
# THEORY OF LIABILITY, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
# PERFORMANCE OF THIS SOFTWARE.
(The motivation section belongs right after the introduction, but most people probably just want to get on with using ltoh. So this section has been relegated here. Ah well...)

Although other LaTeX2e-to-HTML converters exist, ... Fundamentally, ltoh is a specialized macro processor that reads macro specifications and generates HTML accordingly. A specification indicates how to convert a specific LaTeX2e construct.

My original goals in writing ltoh were ...

Thanks to VA Research for letting the author work on ltoh.
|
Once we have a one-to-one function, we can evaluate its inverse at specific inputs or, in many cases, construct a complete representation of the inverse function.
Inverting Tabular Functions
Suppose we want to find the inverse of a function represented in table form. Remember that the domain of a function is the range of the inverse and the range of the function is the domain of the inverse. So we need to interchange the domain and range.
Each row (or column) of inputs becomes the row (or column) of outputs for the inverse function. Similarly, each row (or column) of outputs becomes the row (or column) of inputs for the inverse function.
Example 5: Interpreting the Inverse of a Tabular Function
A function [latex]f\left(t\right)[/latex] is given below, showing distance in miles that a car has traveled in [latex]t[/latex] minutes. Find and interpret [latex]{f}^{-1}\left(70\right)[/latex].
[latex]t\text{ (minutes)}[/latex]: 30, 50, 70, 90
[latex]f\left(t\right)\text{ (miles)}[/latex]: 20, 40, 60, 70

Solution
The inverse function takes an output of [latex]f[/latex] and returns an input for [latex]f[/latex]. So in the expression [latex]{f}^{-1}\left(70\right)[/latex], 70 is an output value of the original function, representing 70 miles. The inverse will return the corresponding input of the original function [latex]f[/latex], 90 minutes, so [latex]{f}^{-1}\left(70\right)=90[/latex]. The interpretation of this is that, to drive 70 miles, it took 90 minutes.
Alternatively, recall that the definition of the inverse was that if [latex]f\left(a\right)=b[/latex], then [latex]{f}^{-1}\left(b\right)=a[/latex]. By this definition, if we are given [latex]{f}^{-1}\left(70\right)=a[/latex], then we are looking for a value [latex]a[/latex] so that [latex]f\left(a\right)=70[/latex]. In this case, we are looking for a [latex]t[/latex] so that [latex]f\left(t\right)=70[/latex], which is when [latex]t=90[/latex].
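The table-inversion idea in Example 5 can be mirrored directly in code (a small illustrative sketch, not part of the original text): inverting a one-to-one table just swaps each input-output pair.

```python
# Distance table from Example 5, as a mapping t (minutes) -> f(t) (miles).
f = {30: 20, 50: 40, 70: 60, 90: 70}

# The inverse function swaps inputs and outputs: miles -> minutes.
f_inv = {miles: minutes for minutes, miles in f.items()}

print(f_inv[70])  # 90 -- driving 70 miles took 90 minutes
```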
Try It 5
Using the table below, find and interpret (a) [latex]\text{ }f\left(60\right)[/latex], and (b) [latex]\text{ }{f}^{-1}\left(60\right)[/latex].
[latex]t\text{ (minutes)}[/latex]: 30, 50, 60, 70, 90
[latex]f\left(t\right)\text{ (miles)}[/latex]: 20, 40, 50, 60, 70

Evaluating the Inverse of a Function, Given a Graph of the Original Function
We saw in Functions and Function Notation that the domain of a function can be read by observing the horizontal extent of its graph. We find the domain of the inverse function by observing the
vertical extent of the graph of the original function, because this corresponds to the horizontal extent of the inverse function. Similarly, we find the range of the inverse function by observing the horizontal extent of the graph of the original function, as this is the vertical extent of the inverse function. If we want to evaluate an inverse function, we find its input within its domain, which is all or part of the vertical axis of the original function’s graph.

How To: Given the graph of a function, evaluate its inverse at specific points.

1. Find the desired input on the y-axis of the given graph.
2. Read the inverse function’s output from the x-axis of the given graph.

Example 6: Evaluating a Function and Its Inverse from a Graph at Specific Points
A function [latex]g\left(x\right)[/latex] is given in Figure 5. Find [latex]g\left(3\right)[/latex] and [latex]{g}^{-1}\left(3\right)[/latex].
Solution
To evaluate [latex]g\left(3\right)[/latex], we find 3 on the
x-axis and find the corresponding output value on the y-axis. The point [latex]\left(3,1\right)[/latex] tells us that [latex]g\left(3\right)=1[/latex].
To evaluate [latex]{g}^{-1}\left(3\right)[/latex], recall that by definition [latex]{g}^{-1}\left(3\right)[/latex] means the value of
x for which [latex]g\left(x\right)=3[/latex]. By looking for the output value 3 on the vertical axis, we find the point [latex]\left(5,3\right)[/latex] on the graph, which means [latex]g\left(5\right)=3[/latex], so by definition, [latex]{g}^{-1}\left(3\right)=5[/latex].

Try It 6
Using the graph in Example 6, (a) find [latex]{g}^{-1}\left(1\right)[/latex], and (b) estimate [latex]{g}^{-1}\left(4\right)[/latex].
Finding Inverses of Functions Represented by Formulas
Sometimes we will need to know an inverse function for all elements of its domain, not just a few. If the original function is given as a formula—for example, [latex]y[/latex] as a function of [latex]x[/latex]—we can often find the inverse function by solving to obtain [latex]x[/latex] as a function of [latex]y[/latex].

How To: Given a function represented by a formula, find the inverse.

1. Make sure [latex]f[/latex] is a one-to-one function.
2. Solve for [latex]x[/latex].
3. Interchange [latex]x[/latex] and [latex]y[/latex].

Example 7: Inverting the Fahrenheit-to-Celsius Function
Find a formula for the inverse function that gives Fahrenheit temperature as a function of Celsius temperature.
Solution
[latex]\begin{cases}\hfill{ C }=\frac{5}{9}\left(F - 32\right)\hfill \\ C\cdot \frac{9}{5}=F - 32\hfill \\ F=\frac{9}{5}C+32\hfill \end{cases}[/latex]
By solving in general, we have uncovered the inverse function. If

[latex]C=h\left(F\right)=\frac{5}{9}\left(F - 32\right)[/latex]

then

[latex]F={h}^{-1}\left(C\right)=\frac{9}{5}C+32[/latex]

In this case, we introduced a function [latex]h[/latex] to represent the conversion because the input and output variables are descriptive, and writing [latex]{C}^{-1}[/latex] could get confusing.
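The temperature conversion and its inverse can be checked numerically (an illustrative sketch added here, not part of the original example): applying one function after the other should recover the original input.

```python
def celsius(f_deg):
    # C = 5/9 (F - 32)
    return 5 / 9 * (f_deg - 32)

def fahrenheit(c_deg):
    # Inverse: F = 9/5 C + 32
    return 9 / 5 * c_deg + 32

# Round trip: boiling point of water, 212 F -> 100 C -> 212 F.
print(fahrenheit(celsius(212)))  # 212.0
```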
Try It 7
Solve for [latex]x[/latex] in terms of [latex]y[/latex] given [latex]y=\frac{1}{3}\left(x - 5\right)[/latex].
Example 8: Solving to Find an Inverse Function
Find the inverse of the function [latex]f\left(x\right)=\frac{2}{x - 3}+4[/latex].
Solution
So [latex]{f}^{-1}\left(y\right)=\frac{2}{y - 4}+3[/latex] or [latex]{f}^{-1}\left(x\right)=\frac{2}{x - 4}+3[/latex].
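A quick numeric sanity check of Example 8 (an added sketch, not part of the original solution): composing the function with its claimed inverse should return the input, for any x in the domain (x ≠ 3).

```python
def f(x):
    return 2 / (x - 3) + 4

def f_inv(x):
    return 2 / (x - 4) + 3

# f_inv undoes f at several sample points.
for x in [1, 2, 5, 10]:
    assert abs(f_inv(f(x)) - x) < 1e-9
print("round trip ok")
```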
Example 9: Solving to Find an Inverse with Radicals
Find the inverse of the function [latex]f\left(x\right)=2+\sqrt{x - 4}[/latex].
Solution
So [latex]{f}^{-1}\left(x\right)={\left(x - 2\right)}^{2}+4[/latex].
The domain of [latex]f[/latex] is [latex]\left[4,\infty \right)[/latex]. Notice that the range of [latex]f[/latex] is [latex]\left[2,\infty \right)[/latex], so this means that the domain of the inverse function [latex]{f}^{-1}[/latex] is also [latex]\left[2,\infty \right)[/latex].
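The domain restriction in Example 9 matters in code too. This added sketch checks the round trip only on the stated domains; outside them (e.g. f at x < 4) the expressions are not inverses of each other.

```python
import math

def f(x):
    # Defined for x >= 4; range is [2, infinity).
    return 2 + math.sqrt(x - 4)

def f_inv(x):
    # Defined for x >= 2, matching the range of f.
    return (x - 2) ** 2 + 4

print(f_inv(f(13)))  # 13.0 -- the round trip recovers the input
```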
Try It 8
What is the inverse of the function [latex]f\left(x\right)=2-\sqrt{x}?[/latex] State the domains of both the function and the inverse function.
|
As Paul Garrett intimates in his answer, the notion of cuspform is fundamental, but can be subtle in its meaning and implications. Here are a few different points of view that might help:
From the point of view of someone interested in the relationship between modular forms and algebraic number theory, Hecke eigenforms correspond to (certain) two-dimensional representations of the absolute Galois group $\mathrm{Gal}(\overline{\mathbb Q}/\mathbb Q)$. Under this correspondence, the Eisenstein series correspond to
reducible two-dimensional representations (i.e. those which are the sum of two characters), while the cuspidal eigenforms correspond to irreducible representations.
A different answer, related to quadratic forms:
The $\theta$ series of a positive definite quadratic form is a modular form. If we expand it in terms of a basis of Hecke eigenforms, it will have some Eisenstein contributions, which can be more-or-less computed explicitly, and then some cuspform contributions, which in general aren't describable by an explicit formula.
However, we know that the Hecke eigenvalues, and hence the $q$-expansion coefficients, of a cuspform grow at a much slower rate than those of an Eisenstein series. (There are elementary bounds, due to Hecke, tighter bounds, due to Rankin, and then the ultimate Ramanujan--Petersson bound, proved by Deligne.) Just from this, we can derive asymptotics as $n \to \infty$ for the number of representations of $n$ by a given quadratic form. (See e.g. Serre's
Course in Arithmetic for concrete examples.)
In weight two, as Agol notes in his answer, cuspforms correspond to holomorphic differential forms on the modular curve, while all modular forms correspond to differential forms that are holomorphic except for possible simple poles at the cusps.
Just to tie in a little with Paul Garrett's answer, in the spectral theoretic point of view on automorphic forms (not something you see in the beginning literature on holomorphic modular forms that number theory students tend to read, but something that is usually introduced in books that take a more representation-theoretic viewpoint), cuspforms (on a given group, and on its Levi subgroups) are the basic building blocks in terms of which the rest of the spectral theory is developed.
|
In examples/large_deformation/hyperelastic.py a rotation by displacements is applied. By using a similar function the vectors defining the force couples could be defined for dw_surface_ltr (IMHO). Does it make sense?

r.

----- Reply message -----
From: "Andre Smit" <freev...(a)gmail.com>
To: <sfepy...(a)googlegroups.com>
Subject: Torque
Date: Sat, Dec 18, 2010 05:10

What is the best way to apply a torque load to a model?

--
Andre
I am currently looking for FEM packages to help me solve a system of beams and columns, basically a collection of 1D Bernoulli/Timoshenko line elements. I started reading the SfePy docs and I am getting the idea that doing the above is not really possible here, am I right? Are only 2D area elements permitted in SfePy? Or is there any direct support for solving 1D line elements too?

Cheers
Nimish
Dear SfePy users,

Is it possible to evaluate a solution not only in the FEM mesh nodes, but at any arbitrary point in the domain with given (x, y, z) coordinates? For example, consider the Dirichlet problem for the Poisson equation. We apply essential boundary conditions on the surface nodes and after the problem has been solved we have the solution vector, i.e. the vector of values in the FEM mesh nodes. But I want to know the solution at a point v(x, y, z) that is not a FEM mesh node. What is the best way to obtain the solution at this point v?

Sincerely,
Alec Kalinin
I'm working on modeling a next-generation X-ray mirror for which the shape can be actively controlled by use of many thin piezo-electric actuators mounted on the mirror surface. The mirror is basically a glass conical paraboloid with a 1 meter radius and 200 micron thickness (e.g. http://en.wikipedia.org/wiki/X-ray_optics). Our project is currently using a proprietary FEA package, but the model setup and turnaround time is slow, in part because there is only one part-time engineer who can run it.

SfePy looks like a great package and we're hoping that it could be used to automate running a large number of different cases. I've spent some time reading the documentation but I have a few questions that I hope can be answered before going too much further. I want to apologize in advance if some of my wording is imprecise; I have a physics background but this topic is a bit outside my realm...

- Is SfePy appropriate for this problem?
- If I specify a grid with about 800 x 400 points (azimuthal, axial) and about 10 boundary conditions (corresponding to mount points), what is the rough order of magnitude of time to compute the solution? Is it seconds, minutes, hours, or days?
- The linear elastic examples show a problem with a specified displacement. How do I specify an input force? The piezo essentially provides a tensile force along the surface.
- Is there a way to specify the problem and solve in cylindrical coordinates? This is the natural coordinate system.
- How do I specify 6-DOF constraints which correspond to the mirror mounts?

Thanks in advance for any help!

Tom Aldcroft
Hi all,

I've just discovered SfePy. I'm looking for an example that can be adapted for Darcy flow in porous media:

\vec{q} = - K \nabla \phi

where K is the hydraulic conductivity and \phi the hydraulic head. Could you please point me to some examples or relevant documentation?

Cheers
Hello sfepy users!

I am using sfepy to do thermal simulations of (BIG!) electric resistors. Nothing fancy, but I would like to be able to use temperature-dependent thermal conductivities as my system gets very hot.

At the moment I am employing the Laplace weak term:

int(s * \nabla q * \nabla p)

where s is the thermal conductivity, q is the test field parameter and p is the temperature field. What I want is for s to depend on the temperature. I wonder which strategy to use:

1: Limit myself to a linear s, i.e. s(q) = s0 + \alpha * q. In that case I guess I can do this:

int(s(q) * \nabla q * \nabla p) = int(s0 * \nabla q * \nabla p) + int(\alpha * q * \nabla q * \nabla p)

Unfortunately the second term is not implemented. Looking at the source, this would be some work to implement.

2: Iterate and after each iteration assign new values of s at each point. This would not be a big deal when doing time-dependent simulations, but steady state would be much slower to calculate, I guess? An added plus would be that an arbitrary s(q) could be used.

3: Ask for help here, and go "Ahhh! Of course!", when you state the obvious, easily implementable and fantastic solution.

I have only used sfepy for about a week, and this is the first time I am messing around with weak formulation FEM, so this question might be really silly, but I ask anyway: is the strategy above completely crazy or am I on the right track?

Kind regards
Bjarke Dalslet
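Strategy 2 (re-evaluating the conductivity from the current temperature iterate, i.e. Picard iteration) can be illustrated on a 1-D toy problem in plain Python. This is an added sketch, not sfepy code; the values s0, alpha and the grid size are made up for illustration.

```python
# 1-D steady-state analogue: solve d/dx( s(T) dT/dx ) = 0 on [0, 1]
# with T(0) = 0, T(1) = 100 and s(T) = s0 + alpha*T, by Gauss-Seidel
# sweeps that recompute the conductivity from the latest iterate.
n = 21
T = [100.0 * i / (n - 1) for i in range(n)]   # initial guess: linear profile
s0, alpha = 1.0, 0.01                          # made-up material constants

def cond(t):
    return s0 + alpha * t

for _ in range(500):                 # combined Picard / Gauss-Seidel sweeps
    for i in range(1, n - 1):
        sw = 0.5 * (cond(T[i - 1]) + cond(T[i]))   # face conductivities
        se = 0.5 * (cond(T[i]) + cond(T[i + 1]))
        T[i] = (sw * T[i - 1] + se * T[i + 1]) / (sw + se)

# Where s(T) is larger the gradient is smaller, so the profile bends and
# the midpoint temperature ends up above the linear value of 50 (about 58.1).
print(round(T[n // 2], 1))
```

In an sfepy model the analogous loop would re-set the material coefficient from the previous solution before each solve; the point here is only that the nonlinearity converges by simple repeated linear solves.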
I have responded to the first message (which got posted with some delay, as I have to approve first-time posts of new mailing list members as an anti-spam measure).

r.

On 10/26/2012 11:15 AM, Bjarke Dalslet wrote:
> Hello sfepy users.
>
> I am doing thermal simulations of (very big) electric resistors in sfepy.
> Nothing fancy, but as the resistors get very hot, I would like to use a
> thermal conductivity that depends on the temperature.
>
> At the moment I am using the Laplace term:
> int( c \nabla q \nabla p)
> where c is the thermal conductivity, q is the test (temperature field) and
> p is the temperature field.
>
> I have come up with two possible approaches to make c dependent on q:
>
> 1: Restrict c(q) to a linear function c(q) = c_0 + \alpha q.
> I think this would allow me to do this:
> int( c(q) \nabla q \nabla p) = int( c_0 \nabla q \nabla p) + int( \alpha q \nabla q \nabla p)
> That would allow fast steady state solutions, although the second term
> is not yet implemented (and judging from the laplace source, it seems like
> some work to do it).
>
> 2: Iteratively seek the solution, i.e. adjusting c(x,y,z) after each
> timestep. This would be fine when looking at transients, but for steady
> state it would be much slower than solution 1.
>
> I am new to sfepy and also to weak form FEM, so I ask anybody with
> experience in this: Are the above approaches silly, and does a better one
> exist?
>
> Kind regards
> Bjarke Dalslet
Hello SfePy,

Maybe I am wrong, but it seems that there is an issue with the minus sign before the volume integral in the Poisson problem definition. I am trying to solve the Poisson problem for the simple analytical function $u(x) = x^2$, so $\Delta u = 2$ and the weak form is $\int_\Omega \nabla u \cdot \nabla v \, d\Omega = \int_\Omega 2 v \, d\Omega$.

In SfePy I used the following definition:

dw_laplace.i1.Omega(m.val, v, u) = dw_volume_integrate.i1.Omega(f.val, v)

but I got a quite big relative error: 1e-1. But in case I add the minus sign:

dw_laplace.i1.Omega(m.val, v, u) = -dw_volume_integrate.i1.Omega(f.val, v)

the accuracy of the solution becomes very good, with a relative error of 1e-3.

So my question: why is it necessary to add the minus sign before the volume integral? Yes, I know that the classical Poisson problem is $\Delta u(x) = -f(x)$. Maybe we assume the minus sign implicitly?

The test script demonstrating the problem is in the attachment.

Sincerely,
Alec
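A remark on where that minus sign comes from (a standard integration-by-parts computation, added here for context; it is not taken from the sfepy docs): for a test function $v$ vanishing on $\partial\Omega$,

```latex
\int_\Omega \nabla u \cdot \nabla v \, d\Omega
  = -\int_\Omega (\Delta u)\, v \, d\Omega ,
```

so if $\Delta u = f$, the weak form reads $\int_\Omega \nabla u \cdot \nabla v \, d\Omega = -\int_\Omega f v \, d\Omega$: the Laplace term equals minus the volume integral of $f v$, which is consistent with the sign the poster had to add.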
Hello SfePy!

I am solving the Poisson problem and during my solution I got the warning:

Warning: /usr/local/lib/python2.7/dist-packages/sfepy/fem/fields_base.py:1258: RuntimeWarning: invalid value encountered in divide
  data_vertex /= nod_vol[:,nm.newaxis]

The code for this warning:

def post_process(out, pb, state, extend=False):
    # evaluate gradient in nodes
    grad_data = pb.evaluate('ev_grad.i1.Omega(u)', mode='qp')
    grad_field = H1NodalVolumeField('grad', np.float64, (3,),
                                    pb.domain.regions['Omega'])
    grad_var = FieldVariable('grad', 'parameter', grad_field, 3,
                             primary_var_name='(set-to-None)')
    grad_var.data_from_qp(grad_data, pb.integrals['i1'])

This warning appears only for meshes with superfluous vertices. The code and the meshes are in the attachment. I think this warning may be related to the issue with superfluous vertices; see the earlier discussion [1] and git commit [2].

[1] https://groups.google.com/forum/?fromgroups=#!topic/sfepy-devel/3Qae2M5bBps
[2] https://github.com/sfepy/sfepy/commit/4ef0f5dbc7fac15942ba2031debf1296a35...

Sincerely,
Alec
Hello SfePy users,

I am solving a Poisson equation with a free term $b(x)$: $\Delta u(x) = b(x), \quad x \in \Omega$. I take "diffusion/poisson_functions.py" as the base script for my task, but something in this script is not clear to me.

1. In the script we have two definitions: $p$ is a given function and $f$ is a load parameter. Which function corresponds to the free term $b(x)$?

2. The known function $f$ (named "load" in the code) is defined in materials, but the function $p$, which is also known, is defined in the "variables" section. Usually in the variables section we define the functions to be found during the solution. Why do we define a known function in the variables section?

3. For the evaluation of the function $f$ ("load") we have a Python function "get_pars(ts, coors, mode=None, **kwargs)". In this Python function we evaluate $f$ only if the condition "mode == qp" is true. For the evaluation of the function $p$ we also have a Python function, "get_load_variable(ts, coors, region=None)", but in this function we do not use the "mode" condition. Why are the functions $f$ and $p$ evaluated in different ways?

Thanks,
Alec
|
I have a question. First, I know that convergence in measure of a sequence of functions $f_n$ is different from convergence a.e., which means there are sequences that converge in measure but not a.e. But this exercise got me wondering whether there is some kind of duality between convergence in measure and convergence a.e.
Exercise: Let $f, f_n \in L_1$, and suppose that $f_n \rightarrow f$ a.e. Show that $||f_n -f||_1 \rightarrow 0$ if and only if $||f_n||_1 \rightarrow ||f||_1$. (Note that this result also holds if "a.e." is replaced by "in measure".) My attempt at a proof:
($\Leftarrow$): If $||f_n||_1 \rightarrow ||f||_1$ and $f_n \rightarrow f$ a.e., then $$\lim\int |f_n -f| \leq \lim \int \big(|f_n| + |f|\big) = 2\int|f|$$
Now applying Fatou's Lemma:
$$0 = \int \lim|f_n - f| = \int \underline{\lim}\,|f_n - f| \leq \underline{\lim}\int |f_n - f| \leq \overline{\lim} \int |f_n - f| \leq \int \overline{\lim}\,|f_n - f| = 0$$
($\Rightarrow$): $$0 \leq \lim \int \big||f_n| - |f|\big| \leq \lim\int |f_n - f| \leq \int \lim |f_n - f| = 0,$$ which completes the proof for $f_n \rightarrow f$ a.e.
Now, and about this part I'm not that sure: if $f_n \rightarrow f$ in measure, suppose $\lim \int |f_n - f| \neq 0$, that is, there is some $\delta > 0$ and a subsequence $f_{n_k}$ such that $\int |f_{n_k} - f| \geq \delta$ for all $k$. But we know that every sequence which converges in measure to some $f$ has a subsequence that converges a.e.; in this case there exists a sub-subsequence $f_{n_{k_j}} \rightarrow f$ a.e. But now $0 = \lim_j \int |f_{n_{k_j}} - f| \geq \delta$, which is a contradiction.
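A standard example (my addition, not part of the exercise) showing why the hypothesis $\|f_n\|_1 \to \|f\|_1$ cannot be dropped: on $[0,1]$ with Lebesgue measure take

```latex
f_n = n\,\mathbf{1}_{(0,1/n)}, \qquad f = 0 :
\quad f_n \to 0 \ \text{a.e. and in measure, but}\quad
\|f_n\|_1 = 1 \not\to 0 = \|f\|_1 ,
\quad\text{and indeed}\quad \|f_n - f\|_1 = 1 \not\to 0 .
```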
Can we generalize the process above? The question is:
For example, if P is a property of a sequence of functions $f_n$, can we be sure that a result will still hold if we replace "a.e." by "in measure" in general? Can you provide a counter-example to 2)?
If a result P holds whenever $f_n \rightarrow f$ in measure, then it also holds whenever $f_n \rightarrow f$ a.e., since (at least on a finite measure space) convergence a.e. implies convergence in measure, so P would still be valid.
But if P holds whenever $f_n \rightarrow f$ a.e., can we conclude that it still holds when we replace convergence a.e. by convergence in measure?
|
After the answers by joshphysics and user37496, it seems to me that a last remark remains.
The quantum relevance of the universal covering Lie group in my opinion is (also) due to a fundamental theorem by Nelson. That theorem relates
Lie algebras of symmetric operators with unitary representations of a certain Lie group generated by those operators. The involved Lie group, in this discussion, is always a universal covering.
In quantum theories one often encounters a set of operators $\{A_i\}_{i=1,\ldots, N}$ on a common Hilbert space ${\cal H}$ such that:
(1) They are symmetric (i.e., each $A_i$ is defined on a dense domain $D(A_i)\subset {\cal H}$ where $\langle A_i\psi|\phi\rangle = \langle \psi|A_i\phi\rangle$ for all $\psi, \phi \in D(A_i)$)
and
(2) they enjoy the
commutation relations of some Lie algebra $\ell$:$$[A_i,A_j]= \sum_{k=1}^N iC^k_{ij}A_k$$on a common invariant domain ${\cal D}\subset {\cal H}$.
As is known, given an abstract Lie algebra $\ell$ there is (up to Lie group isomorphisms) a unique
simply connected Lie group ${\cal G}_\ell$ such that its Lie algebra coincides with $\ell$. ${\cal G}_\ell$ turns out to be the universal covering of all the other Lie groups whose Lie algebra is $\ell$ itself.
All those groups are, in a neighbourhood of the identity, isomorphic to a corresponding neighbourhood of the identity of ${\cal G}_\ell$, so that they share the same Lie algebra and are locally identical; differences arise only far from the neutral element. (As an example, consider the simply connected $SU(2)$, which is the universal covering of $SO(3)$.)
If (1) and (2) hold, the natural question is:
Is there a strongly continuous unitary representation ${\cal G} \ni g \mapsto U_g$ of some Lie group $\cal G$ just admitting $\ell$ as its Lie algebra, such that $$U_{g_i(t)} = e^{-it \overline{A_i}}\:\: ?\qquad (3)$$
Where $t\mapsto g_i(t)$ is the one-parameter Lie subgroup of $\cal G$ generated by (the element $a_i$ of $\ell$ corresponding to) $A_i$ and $\overline{A_i}$ is some self-adjoint extension of $A_i$.
If it is the case, $\cal G$ is a continuous symmetry group for the considered physical system, and the self-adjoint operators $\overline{A_i}$ represent physically relevant observables. If time evolution is included in the center of the group (i.e. the Hamiltonian is a linear combination of the $A_i$s and commutes with each of them), all these observables are conserved quantities. Otherwise the situation is a bit more complicated; nevertheless one can define conserved quantities parametrically depending on time and belonging to the Lie algebra of the representation (think of the boost generator when $\cal G$ is $SL(2,\mathbb C)$).
Well, the fundamental theorem by Nelson has the following statement.
THEOREM (Nelson)
Consider a set of operators $\{A_i\}_{i=1,\ldots, N}$ on a common Hilbert space ${\cal H}$ satisfying (1) and (2) above. If ${\cal D}$ in (2) is a dense subspace such that the symmetric operator$$\Delta := \sum_{i=1}^N A_i^2$$is essentially self-adjoint on $\cal D$ (i.e. its adjoint is self-adjoint or, equivalently, $\Delta$ admits a unique self-adjoint extension, or equivalently its closure $\overline{\Delta}$ is self-adjoint), then:
(a) Every $A_i$ is essentially self-adjoint on $\cal D$,
and
(b) there exists a strongly continuous unitary representation on $\cal H$ of the unique simply connected Lie group ${\cal G}_\ell$ admitting $\ell$ as Lie algebra, completely defined by the requirements:$$U_{g_i(t)} = e^{-it \overline{A_i}}\:\:,$$ where $t\mapsto g_i(t)$ is the one-parameter Lie subgroup of ${\cal G}_\ell$ generated by (the element $a_i$ of $\ell$ corresponding to) $A_i$, and $\overline{A_i}$ is the unique self-adjoint extension of $A_i$, coinciding with $A_i^*$ and with the closure of $A_i$.
Notice that the representation is automatically unitary and not projective unitary: No annoying phases appear.
The simplest example is that of the operators $J_x,J_y,J_z$. It is easy to prove that $J^2$ is essentially self-adjoint on the set spanned by the vectors $|j,m, n\rangle$. The point is that one gets this way unitary representations of $SU(2)$ and not $SO(3)$, since the former is the unique simply connected Lie group admitting the algebra of the $J_k$ as its own Lie algebra.
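As a concrete numerical illustration of the $SU(2)$ point (my own sketch, using the spin-$\frac12$ representation $J_k = \frac12\sigma_k$): the $\mathfrak{su}(2)$ commutation relations hold, yet a $2\pi$ rotation is represented by $-I$ rather than $I$, which is exactly why the representation lives on $SU(2)$ and not $SO(3)$.

```python
import numpy as np

# Spin-1/2 generators J_k = sigma_k / 2 (units with hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Jx, Jy, Jz = sx / 2, sy / 2, sz / 2

# su(2) commutation relations: [Jx, Jy] = i Jz (and cyclic permutations).
comm = Jx @ Jy - Jy @ Jx
assert np.allclose(comm, 1j * Jz)

# A rotation by 2*pi about z: U = exp(-i * 2*pi * Jz).
# Jz is diagonal, so the matrix exponential is computed entrywise.
U = np.diag(np.exp(-1j * 2 * np.pi * np.diag(Jz)))
print(np.allclose(U, -np.eye(2)))  # True: a 2*pi rotation gives minus the identity
```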
As another application, consider $X$ and $P$ defined on ${\cal S}(\mathbb R)$ as usual. The three symmetric operators $I,X,P$ enjoy the Lie algebra of the Weyl-Heisenberg Lie group. Moreover $\Delta = X^2+P^2 +I^2$ is essentially self-adjoint on ${\cal S}(\mathbb R)$, because it admits a dense set of analytic vectors (the finite linear combinations of eigenstates of the standard harmonic oscillator). Thus these operators admit unique self-adjoint extensions and are generators of a unitary representation of the (simply connected) Weyl-Heisenberg Lie group. This example also holds replacing $L^2$ with another generic Hilbert space $\cal H$ and $X,P$ with operators verifying the CCR on a dense invariant domain where $X^2+P^2$ (and thus also $X^2+P^2 +I^2$) is essentially self-adjoint. It is possible to prove that the existence of the unitary rep of the Weyl-Heisenberg Lie group, if the space is irreducible, establishes the existence of a unitary operator from ${\cal H}$ to $L^2$ transforming $X$ and $P$ into the standard operators. Following this way one builds up an alternate proof of the Stone-von Neumann theorem.
As a last comment, I stress that usually ${\cal G}_\ell$ is not the group acting in the physical space, and this fact may create some problems: think of $SO(3)$, which is the group of rotations one would like to represent at the quantum level, while he/she ends up with a unitary representation of $SU(2) \neq SO(3)$. Usually nothing too terrible arises this way, since the only consequence is the appearance of annoying phases as explained by Josh, and overall phases do not affect states. Nevertheless sometimes some disaster takes place: for instance, a physical system cannot assume quantum states that are coherent superpositions of both integer and semi-integer spin. Otherwise an internal phase would appear after a $2\pi$ rotation. What is done in these cases is just to forbid these unfortunate superpositions. This is one of the possible ways to realize superselection rules.

This post imported from StackExchange Physics at 2014-04-12 19:04 (UCT), posted by SE-user V. Moretti
|
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need '$\overline F$ is algebraic over $F$'? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition of algebraic closure, do we get a different notion?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or is it obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to the above screenshot, the professor says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$. Then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, and hence summable.
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$, for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$, for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination
of linearly independent functions from the proper function space.
I now obtain an expression that is bilinear in the coefficients.
Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0),
I get a set of $n$ equations, with $n$ the number of coefficients:
a set of $n$ linear homogeneous equations in the $n$ coefficients.
Now, instead of directly attempting to solve the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no non-trivial solution exists.
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz,
avoiding the necessity to solve for the coefficients.
I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional are instead obtained directly by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or, so to say, a more general principle.
If $x$ is a prime number and the number $y$ obtained by reversing the digits of $x$ is also prime, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum($z$) = digitsum($x$).
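The claim is easy to probe on examples. The checker below is my own sketch, not a proof; it assumes "digit reverse" and "digit sum" in the usual base-10 sense, and leaves primality of the inputs to the caller. For instance $x = 13$, $y = 31$ gives $z = 22$, a palindrome with digit sum $4 = 1 + 3$.

```python
def digitsum(n):
    return sum(int(d) for d in str(n))

def check_claim(x):
    """For x whose digit-reverse is y, test whether z = (x + y) / 2 is an
    integer palindrome with digitsum(z) == digitsum(x).  Primality of x
    and y is the caller's responsibility (it is the claim's hypothesis)."""
    y = int(str(x)[::-1])
    z, rem = divmod(x + y, 2)
    if rem:                      # x + y odd: no integer midpoint exists
        return None
    s = str(z)
    return s == s[::-1] and digitsum(z) == digitsum(x)

print(check_claim(13))   # True: z = 22 is a palindrome with digit sum 4
print(check_claim(107))  # True: z = 404 is a palindrome with digit sum 8
```

Checking small cases this way says nothing about large primes, where carries in $x + y$ could conceivably break the pattern.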
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
|
The thermal de Broglie wavelength is roughly the average de Broglie wavelength of the gas particles in an ideal gas at the specified temperature. It is defined as
\[\Lambda= \sqrt{\frac{h^2}{2\pi mk_BT}}\]
where
- \(h\) is the Planck constant,
- \(m\) is the mass,
- \(k_B\) is the Boltzmann constant,
- \(T\) is the temperature.

References

1. Louis-Victor de Broglie, "On the Theory of Quanta", Thesis (1925).

Related reading

- Zijun Yan, "General thermal wavelength and its applications", Eur. J. Phys. 21, pp. 625-631 (2000).
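As a quick numerical sketch of the formula (my own snippet, with rounded SI constants; helium-4 at room temperature is chosen purely as an example):

```python
import math

# Physical constants (SI, rounded)
h   = 6.626e-34   # Planck constant, J s
k_B = 1.381e-23   # Boltzmann constant, J/K

def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength: Lambda = h / sqrt(2 pi m k_B T)."""
    return h / math.sqrt(2 * math.pi * m * k_B * T)

m_he4 = 6.646e-27              # mass of a helium-4 atom, kg
lam = thermal_wavelength(m_he4, 300.0)
print(f"{lam:.2e} m")          # ~5e-11 m, far below the mean interparticle
                               # spacing of He gas at STP, so classical
                               # (Maxwell-Boltzmann) statistics apply
```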
|
We say that $\Omega$ is a star-shaped domain (with respect to the origin) of $\mathbb R ^n$ if :
$$\Omega = \{x\in \mathbb R ^n : \left \| x \right \| < g(\frac{x}{\left \| x \right \|})\}\; \text{and}\;\; \partial \Omega = \{x\in \mathbb R ^n : \left \| x \right \| = g(\frac{x}{\left \| x \right \|})\} $$ with $g$ is a continuous, positive function on the unit sphere S.
I showed that there is a $\mathcal C^1$ diffeomorphism between $\Omega$ and the unit ball $B$ (Euclidean norm $\left \| \cdot \right \|_{2}$): $$\begin{array}{ccccc} \Phi & : & B & \to & \Omega \\ & & y & \mapsto & y\;g(\frac{y}{\left \| y \right \|}) \\ \end{array}$$ $\Phi$ has some properties:
• $\Phi$ is well defined.
• $\Phi(\partial B)=\partial \Omega$.
• $\Phi$ is a bijection.
• $\Phi$ is a smooth function.
Now I would like to show the existence of a Lipschitzian bijection between this domain $\Omega$ and a cube in $\mathbb R ^n$ (norm $\left \| . \right \|_{\infty}$).
I appreciate your answers and your help.
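One natural candidate for the ball-to-cube step (my suggestion, not from the original question) is the radial rescaling between the two norms:

```latex
F(x) =
\begin{cases}
  \dfrac{\|x\|_2}{\|x\|_\infty}\, x , & x \neq 0, \\[6pt]
  0, & x = 0,
\end{cases}
\qquad
F : B \to \{\, x \in \mathbb{R}^n : \|x\|_\infty < 1 \,\}.
```

Since $\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty$, the rescaling factor lies in $[1, \sqrt{n}]$, and one can check that $F$ is a bijection onto the open cube, with inverse obtained by swapping the roles of the two norms, and that both $F$ and $F^{-1}$ are Lipschitz. Composing with $\Phi$ would then give the desired Lipschitz bijection between $\Omega$ and a cube, provided $\Phi$ itself is Lipschitz with Lipschitz inverse (for instance when $g$ is Lipschitz).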
|
Or: “How photons and electrons say hello”
Low energy — Photoelectric effect

This is the first one you learn: a photon knocks an electron out of its atomic orbit. It is most likely to occur at low energies; as you move up in energy, it becomes more likely that the photon will be scattered rather than absorbed.

Medium energy — Compton scattering

While in the photoelectric effect the energy of the incoming photon is absorbed completely by the electron, at higher energies the photon will instead bounce off the electron, leaving some of its energy/momentum behind in the recoil.
Using relativistic energy/momentum formulas, you can derive the wavelength shift \(\lambda' - \lambda = \frac{h}{m_e c}\left(1-\cos\theta\right)\) (higher wavelength ⇒ lower energy).
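Plugging numbers into the shift formula (my own sketch, using rounded SI constants): the scale of the effect is the Compton wavelength \(h/(m_e c) \approx 2.43\) pm, with the maximum shift of twice that at backscatter.

```python
import math

h   = 6.626e-34    # Planck constant, J s (rounded)
m_e = 9.109e-31    # electron mass, kg (rounded)
c   = 2.998e8      # speed of light, m/s (rounded)

def compton_shift(theta):
    """Wavelength shift lambda' - lambda for scattering angle theta (radians)."""
    return (h / (m_e * c)) * (1 - math.cos(theta))

print(compton_shift(math.pi / 2))  # one Compton wavelength, ~2.43e-12 m
print(compton_shift(math.pi))      # maximum shift (backscatter), ~4.85e-12 m
```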
High energy — Pair production

γ → e⁻ + e⁺ looks pretty reasonable, right? If the photon had enough energy, it could account for the mass of the created electron-positron pair, and charge is certainly conserved, so why not? Well, consider this: given the conservation of momentum, the output energy is minimized by having the electron and positron each take half the photon's original momentum. But this gives \(E_\mathrm{out}=2\sqrt{\left(\frac{pc}{2}\right)^2+(m_e c^2)^2}>pc=E_\mathrm{in}\), so even in this best case we don't have enough energy to support the pair's momentum.
All that means is that we need some other ingredient in the mix. One good option is an atomic nucleus… When the photon gets near, it can allow the nucleus to absorb some of its momentum, to make electron-positron pair production possible. This is why pair-production is a form of light-matter interaction, rather than just something light does on its own.
(And no matter what, this is definitely going to be a high-energy interaction: the incoming photon must have AT LEAST \(pc>2m_e c^2\).)
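The kinematic argument above is easy to check numerically. This is my own sketch, in units where \(c = 1\) and the electron mass sets the scale: the minimum pair energy \(2\sqrt{(p/2)^2 + m_e^2} = \sqrt{p^2 + 4 m_e^2}\) strictly exceeds the photon energy \(p\) for every \(p\), which is why a third body (e.g. a nucleus) is needed.

```python
import math

def min_pair_energy(p, m=1.0):
    """Minimum total energy of an e-/e+ pair when each carries momentum p/2
    (units: c = 1, electron mass m).  Equals sqrt(p**2 + 4*m**2)."""
    return 2 * math.sqrt((p / 2) ** 2 + m ** 2)

# The deficit E_out - E_in is always positive, approaching 0 only as p -> infinity,
# so pair production in free space is kinematically forbidden at every energy.
for p in (0.1, 1.0, 10.0, 1000.0):
    assert min_pair_energy(p) > p
print("free-space pair production is kinematically forbidden")
```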
|
In the book
The Geometry of Domains in Spaces by Krantz and Parks, the authors proved the weak $(1,1)$-type estimate of the maximal function $M_\mu f$, where $\mu$ is a Radon measure, using their version of the Besicovitch covering theorem.
Let $d$ be a positive integer. Then there exists a constant $C=C(d)$ such that for any finite collection of balls $\mathcal B = \{B_i\}_{i=1}^m$ in $\Bbb R^d$ with the property that no ball contains the center of any other ball, we can partition the family $\mathcal B$ into $$ \mathcal B = \mathcal B_1 \cup\mathcal B_2 \cup \dots \cup \mathcal B_C, $$ where each subfamily $\mathcal B_j$ consists of disjoint balls.
This version of the covering theorem seems pretty restrictive, especially the requirement that no ball contains the center of any other ball. Indeed, in their proof of the weak $(1,1)$-type estimate, they relied on a certain claim that they did not prove.
Edit: As Skeeve mentioned in the comments, this claim is not explicitly stated in the book but is rather a paraphrasing of the part the authors left out of a proof.
Claim: Let $K\subset \Bbb R^d$ be a compact set such that each $x\in K$ is associated with a real number $r_x>0$. Then $K$ can be covered by a family of balls $$ \mathcal B = \{ B(x_i,r_i) : i=1,\dots,k\ \}, $$ where $r_i := r_{x_i}$, such that for any distinct $i,j \le k$, we have $$ x_i\notin B(x_j,r_j) \quad\text{and}\quad x_j\notin B(x_i,r_i). $$
I don't find this claim to be trivial at all. In fact, I tried many different methods but failed to prove it. Note that the mapping $x\mapsto r_x$ doesn't enjoy any nice property like continuity of any kind.
While the usual version of Besicovitch covering theorem circumvents this problem, I still would like to know how to prove the above claim (or a counter example if it is actually false).
|
Larmor's formula$$P = {2 q^2 \dot{v}^2 \over 3 c^3}$$ states that electromagnetic radiation with power $P$ is produced by accelerating (or decelerating; hence the German name bremsstrahlung, meaning "braking radiation") an electrical charge $q$. Charges can be accelerated by electrostatic or magnetic forces, gravitational acceleration being negligible by comparison. We will consider electrostatic bremsstrahlung first and later its magnetic counterpart magnetobremsstrahlung, or "magnetic braking radiation," synchrotron radiation for example.
The electric force is so much stronger than gravity that ionized interstellar clouds have almost no net charge on large scales; the charges of all free electrons in an ionized cloud are balanced by the charges of positive ions. An electron (charge $-e \approx -4.8 \times 10^{-10}$ statcoulombs) passing by an ion (charge $+e$ for a singly ionized atom, $+Ze$ for an ionized atom with $Z$ electrons removed) is accelerated by their Coulomb attraction$$\dot{v} = {f \over m_{\rm e}} = {-Ze^2 \over m_{\rm e} l^2},$$ where $m_{\rm e} \approx 9.1\times10^{-28}$ g is the electron mass and $l$ is the distance between the electron and the ion. Such radiation is called free-free radiation because an initially free electron is rarely captured by the ion during the interaction. If the ionized interstellar cloud is reasonably dense, the electrons and ions interact frequently enough that they come into local thermodynamic equilibrium (LTE) at some common temperature. Thus the radiation produced by these interactions is sometimes called thermal bremsstrahlung. This section will cover:
(1) astronomical sources of free-free emission
(2) the radio properties (spectrum, power, opacity) of free-free emission
(3) applications: what do we learn from radio observations of free-free sources?
Interstellar gas is primarily hydrogen plus some helium and trace amounts of heavier elements such as carbon, nitrogen, oxygen, iron, ... Astronomers often lump all of the heavier elements into the category of metals, meaning elements that readily form positive ions, even though many are not metallic in the usual sense. Much of the interstellar hydrogen is in the form of neutral atoms (called HI in astronomical terminology) or diatomic molecules (H$_2$), but some is ionized. The singly ionized hydrogen atom H$^+$ is referred to as HII, doubly ionized oxygen O$^{++}$ is called OIII, etc.
In 1939 the astronomer Bengt Strömgren realized that regions of diffuse interstellar gas are either (1) mostly neutral, with nearly all of the HI atoms in their ground electronic state, or (2) almost completely ionized (HII much more abundant than HI), with very thin boundaries separating distinct HI and HII regions. Sometimes the HII regions surrounding stars are called Strömgren spheres after his early theoretical models. What is the microscopic physical basis for these ideas?
The ground electronic state of a hydrogen atom corresponds to an atom with the smallest (and hence most tightly bound) electron orbit around the nuclear proton that is consistent with a stationary electronic wave function, a standing wave. [See Section 13.3 of Rohlfs & Wilson for a brief discussion of the Bohr orbits in Rydberg atoms.] The electronic energy levels permitted by quantum mechanics are characterized by their quantum numbers $n = 1,~2,~3,\dots$, where $n = 1$ corresponds to the ground state. While quantum mechanics forbids an electron in the ground state ($n = 1$) from radiating according to the classical Larmor formula, it does not forbid radiative decay from higher levels ($n = 2,~3,\dots$), and Larmor's equation fairly accurately predicts the radiative lifetimes of excited hydrogen atoms. The orbital radius $a_{\rm n}$ of an electron in the $n$th energy level is $a_{\rm n} = n^2 a_0$, where $a_0 \approx 5.29 \times 10^{-9}$ cm is called the Bohr radius. Applying Larmor's equation, as in problem 2 of problem set 2, shows that the radiative lifetime $\tau$ is proportional to $a_{\rm n}^3$ and hence to $n^6$. Thus we can scale the [incorrect] classical result $\tau \approx 5.5 \times 10^{-11}$ s for $n = 1$ to estimate the radiative lifetimes of excited states. For example, the approximate radiative lifetime of the $n = 2$ state would be $\tau \approx 2^6 \times 5.5 \times 10^{-11} {\rm ~s~} \approx 3.5 \times 10^{-9}$ s, in reasonable agreement with the accurate quantum-mechanical result $\tau \approx 2 \times 10^{-9}$ s. Clearly excited hydrogen atoms will spontaneously decay very quickly to the ground state by emitting radiation. At any one time, almost all neutral atoms are in the ground state.
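The $n^6$ scaling of the radiative lifetime described above can be tabulated directly (a sketch of the scaling estimate, nothing more):

```python
# Scale the classical (admittedly incorrect) n = 1 lifetime by n^6
# to estimate excited-state radiative lifetimes.
tau_1 = 5.5e-11  # s, the classical n = 1 result quoted above

for n in (1, 2, 3):
    tau_n = n**6 * tau_1
    print(f"n = {n}: tau ~ {tau_n:.1e} s")
# n = 2 gives ~3.5e-9 s, within a factor of ~2 of the
# quantum-mechanical result ~2e-9 s.
```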
|
All lattices in the moonshine picture are number-like, that is, of the form $M \frac{g}{h}$ with $M$ a positive integer and $0 \leq g < h$ with $(g,h)=1$. To understand the action of the Bost-Connes algebra on the Big Picture it is sometimes better to view the lattice $M \frac{g}{h}$ as a primitive $h$-th root of unity, centered at $hM$.
The distance from $M$ to any of the lattices $M \frac{g}{h}$ is equal to $2\log(h)$, and the distances from $M$ and $M \frac{g}{h}$ to $hM$ are all equal to $\log(h)$.
For a prime value $h$, these $h$ lattices are among the $h+1$ lattices branching off at $hM$ in the $h$-adic tree (the remaining one being $h^2M$).
For general $h$ the situation is more complex. Here’s the picture for $h=6$ with edges in the $2$-adic tree painted blue, those in the $3$-adic tree red.
\[
\xymatrix{& & M \frac{1}{2} \ar@[blue]@{-}[d] & \\ & M \ar@[blue]@{-}[r] \ar@[red]@{-}[d] & 2M \ar@[red]@{-}[d] & M \frac{1}{6} \ar@[red]@{-}[d] \\ M \frac{1}{3} \ar@[red]@{-}[r] & 3M \ar@[blue]@{-}[r] \ar@[red]@{-}[d] & \boxed{6 M} \ar@[blue]@{-}[r] & 3M \frac{1}{2} \ar@[red]@{-}[d] \\ & M \frac{2}{3} & & M \frac{5}{6}} \]
To describe the moonshine group $(n|h)+e,f,\dots$ (an example was worked out in the tetrahedral snake post), we need to study the action of base-change with the matrix
\[ x = \begin{bmatrix} 1 & \frac{1}{h} \\ 0 & 1 \end{bmatrix} \] which sends a lattice of the form $M \frac{g}{h}$ with $0 \leq g < h$ to $M \frac{g+M}{h}$, so is a rotation over $\frac{2 \pi M}{h}$ around $h M$. But, we also have to describe the base-change action with the matrix \[ y = \begin{bmatrix} 1 & 0 \\ n & 1 \end{bmatrix} \] and for this we better use the second description of the lattice as $M \frac{g}{h}=(\frac{g'}{h},\frac{1}{h^2M})$ with $g'$ the multiplicative inverse of $g$ modulo $h$. Under the action by $y$, the second factor $\frac{1}{h^2M}$ will be fixed, so this time we have to look at all lattices of the form $(\frac{g}{h},\frac{1}{h^2M})$ with $0 \leq g < h$, which again can be considered as another set of $h$-th roots of unity, centered at $hM$. Here's this second interpretation for $h=6$: \[ \xymatrix{M \frac{5}{6} \ar@[red]@{-}[d] & & 4M \frac{1}{3} \ar@[red]@{-}[d] & \\ 3M \frac{1}{2} \ar@[blue]@{-}[r] \ar@[red]@{-}[d] & \boxed{6M} \ar@[blue]@{-}[r] \ar@[red]@{-}[d] & 12 M \ar@[red]@{-}[r] \ar@[red]@{-}[d] & 4 M \frac{2}{3} \\ M \frac{1}{6} & 18 M \ar@[blue]@{-}[r] \ar@[blue]@{-}[d] & 36 M & \\ & 9M \frac{1}{2} & & } \] Under $x$ the first set of $h$-th roots of unity centered at $hM$ is permuted, whereas $y$ permutes the second set of $h$-th roots of unity. These interpretations can be used to spot errors in computing the finite groups $\Gamma_0(n|h)/\Gamma_0(n.h)$.
Here’s part of the calculation of the action of $y$ on the $(360|1)$-snake (which consists of $60$-lattices).
First I got a group of order roughly $600{,}000$. After correcting some erroneous cycles, the order went down to $6912$.
Finally I spotted that I mis-numbered two lattices in the description of $x$ and $y$, and the order went down to $48$ as it should, because I knew it had to be equal to $C_2 \times C_2 \times A_4$.
|
The way the Taylor polynomials of a function of one variable, like $y = \cos x$, progressively converge to the graph of the function is really quite impressive and inherently interesting. We can extend this topic into three dimensions using CalcPlot3D.
As an exercise, I require my students to generate the linear and quadratic Taylor polynomials of a function of two variables using the partial derivatives of the function evaluated at a particular point.
\( \begin{aligned} f(x,y) \approx L(x,y) &= f(a,b) + f_x(a,b)(x-a) + f_y(a,b)(y-b) && (1^{st}\text{-deg. Taylor poly or tangent plane})\\ f(x,y) \approx Q(x,y) &= f(a,b) + f_x(a,b)(x-a) + f_y(a,b)(y-b) \\ &\quad +\frac{f_{xx}(a,b)}{2}(x-a)^2 + f_{xy}(a,b)(x-a)(y-b) + \frac{f_{yy}(a,b)}{2}(y-b)^2 && (2^{nd}\text{-deg. Taylor poly})\end{aligned}\)
Exercise: Determine the 1st and 2nd degree Taylor polynomials in two variables for the given function. Simplify both polynomials. Show all work, including all partial derivatives, and use the formula clearly with functional notation in the first step. Please also provide a printout of the given surface along with each of the Taylor polynomials. (That’s 2 printouts altogether.) Include the point on the surface where the polynomial is tangent to the surface. Use the Format Surfaces option on the View Settings menu so that the Taylor polynomial is reverse color and transparent, making it possible to tell the two surfaces apart. If necessary, zoom out and then rotate to a view that shows the surfaces clearly. Then use the Print Graph option on the File menu of the applet to print the graph.
\(f(x,y) = \sin(2x) + \cos y\) for x,y near (0,0)
Answers:
1st-degree Taylor polynomial of f: \(L(x,y)=1+2x\)
2nd-degree Taylor polynomial of f: \(Q(x,y)=1+2x-\frac{1}{2}y^2\)
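The answers above can be checked symbolically. A minimal sketch using SymPy (the helper `d` below is ours, not part of the activity), applying the formulas for \(L\) and \(Q\) term by term:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(2*x) + sp.cos(y)
a, b = 0, 0  # expansion point

def d(*vars_):
    """Partial derivative of f (or f itself), evaluated at (a, b)."""
    return sp.diff(f, *vars_).subs({x: a, y: b}) if vars_ else f.subs({x: a, y: b})

L = d() + d(x)*(x - a) + d(y)*(y - b)
Q = (L + sp.Rational(1, 2)*d(x, x)*(x - a)**2
       + d(x, y)*(x - a)*(y - b)
       + sp.Rational(1, 2)*d(y, y)*(y - b)**2)

print(sp.expand(L))  # L(x,y) = 1 + 2x
print(sp.expand(Q))  # Q(x,y) = 1 + 2x - y^2/2
```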
There is also a feature of the applet that will allow you to demonstrate higher-degree Taylor polynomials for a function of two variables.
Example: Graph the function \(f(x,y)=\cos(x)\sin(y)\). Then zoom out to -4 to 4 in the x- and y-directions. Now select the View Taylor Polynomials option from the Tools menu at the top of the applet. It will take a few seconds as the computer calculates the partial derivatives and creates the Taylor polynomials. This example is successfully calculated all the way up to the 15th-degree polynomial.

Once it is ready, the original function is graphed as a wireframe and the 1st-degree Taylor polynomial (the tangent plane) is shown. A scrollbar appears along the bottom edge of the 3D plot. Use this scrollbar to scroll through the various Taylor polynomials of this function. Note that only odd degrees add new terms for this particular function. As you increase the degree of the Taylor polynomial, notice how the polynomial of two variables fits the original surface better and better around the origin, until it is a fairly good approximation of the whole visible surface at the 15th degree.

To better view the Taylor polynomial itself (shown in the text window just above the 3D plot), you can click and drag on the equation, dragging it left and right to view all terms. You can also use the Tools menu option Use Factorials in Taylor Polynomials to switch this property on or off. Using factorials makes the form of the terms of the higher-order Taylor polynomials easier to see, and the terms also generally take up less horizontal space. You can also vary the center point for the Taylor expansion using the Tools menu option just below View Taylor Polynomials. The default center point is the origin.

Other nice functions to try centered about the origin include: \(f(x,y)=\cos(x)-\sin(y)\), \(f(x,y)=\sin(2x)-\cos(y)\), \(f(x,y)=\sin(x^2+y^2)\), \( f(x,y)=xe^y+1\), \(f(x,y)=e^{x^2+2x-y}\), \(f(x,y)=\arctan(xy)\), \(f(x,y)=\arctan(x+y)\)
Click here to open the CalcPlot3D applet in a new window.
Click here to open a pdf file which contains the instructions for the activity.
|
Abstract
We give infinite series of groups $\Gamma$ and of compact complex surfaces of general type $S$ with fundamental group $\Gamma$ such that
1) Any surface $S'$ with the same Euler number as $S$, and fundamental group $\Gamma$, is diffeomorphic to $S$. 2) The moduli space of $S$ consists of exactly two connected components, exchanged by complex conjugation.
Whence,
i) On the one hand we give simple counterexamples to the DEF = DIFF question whether deformation type and diffeomorphism type coincide for algebraic surfaces. ii) On the other hand we get examples of moduli spaces without real points. iii) Another interesting corollary is the existence of complex surfaces $S$ whose fundamental group $\Gamma$ cannot be the fundamental group of a real surface.
Our surfaces are surfaces isogenous to a product; i.e., they are quotients $(C_1 \times C_2)/ G $ of a product of curves by the free action of a finite group $G$.
They resemble the classical hyperelliptic surfaces, in that $G$ operates freely on $C_1$, while the second curve is a
triangle curve, meaning that $C_2 / G \equiv \mathbb{P}^1$ and the covering is branched in exactly three points.
|
In a paper by Joos and Zeh, Z. Phys. B 59 (1985) 223, they say: "This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'" (Roughly: "The 'path' only comes into being because we observe it.") Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly $26^7 \approx 8$ billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is: you do the gravitational-wave thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
|
Definition:Symmetric Difference/Definition 2 Definition
The symmetric difference between two sets $S$ and $T$ is written $S * T$ and is defined as:
$S * T = \paren {S \cup T} \setminus \paren {S \cap T}$
where $\cup$ denotes set union, $\cap$ denotes set intersection, and $\setminus$ denotes set difference.
There is no standard symbol for symmetric difference. The one used here, and in general on $\mathsf{Pr} \infty \mathsf{fWiki}$, is $S * T$. The following variants are also often found for $S * T$:
$S \oplus T$
$S + T$
$S \mathop \triangle T$ or $S \mathop \Delta T$
$S \mathop \Theta T$
$S \mathop \triangledown T$
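As a sanity check, this definition agrees with Python's built-in symmetric-difference operator `^` on sets (an illustrative sketch, not part of the page):

```python
# (S ∪ T) \ (S ∩ T) versus the built-in symmetric difference S ^ T.
S = {1, 2, 3, 4}
T = {3, 4, 5}

by_definition = (S | T) - (S & T)
print(by_definition == S ^ T)  # True
print(sorted(by_definition))   # [1, 2, 5]
```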
Also see
Results about symmetric difference can be found here.
Sources
1965: J.A. Green: Sets and Groups: Chapter $1$. Sets: Exercise $7$
1970: B. Hartley and T.O. Hawkes: Rings, Modules and Linear Algebra: $\S 1.2$: Some examples of rings: Ring Example $6$
1986: Geoffrey Grimmett and Dominic Welsh: Probability: An Introduction: $\S 1.2$: Outcomes and events: Exercise $3$
1989: Ephraim J. Borowski and Jonathan M. Borwein: Dictionary of Mathematics: Entry: Symmetric difference
2008: Paul Halmos and Steven Givant: Introduction to Boolean Algebras: Appendix $\text{A}$: Set Theory: Operations on Sets
|
Now showing items 1-10 of 26
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O$^2$ project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
|
I have a short question, related to the ongoing search of mathematics instructors for counter-examples to common undergraduate mistakes.
The classical example of a function that is differentiable everywhere but has discontinuous derivative is\begin{equation} f(x)=\left\{ \begin{array}{cc} x^2\sin(1/x) &(x\neq0), \\ 0 &(x=0), \end{array}\right.\end{equation}which has derivative\begin{equation} f'(x)=\left\{ \begin{array}{cc} 2x\sin(1/x)-\cos(1/x) &(x\neq0), \\ 0 &(x=0). \end{array}\right.\end{equation}$f'$ fails to be continuous at $0$ purely because its left- and right-hand limits
do not even exist at $0$.
However, suppose that we have found a function $g$ whose derivative $g'$ has finite but unequal left- and right-hand limits at some cluster point $x_0$ in its domain. May we conclude that $g$ is not differentiable at $x_0$?
If this is not the case, is there a simple counter-example? (I'm guessing such a counter-example ought to be more complicated than the $f$ I have given above, as $f$ is sometimes claimed to be the simplest example of a differentiable function with discontinuous derivative.)
Thanks in advance!
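A quick numerical look at the example function (a sketch added for illustration): the difference quotient at $0$ tends to $0$, so $f'(0)=0$, while the sampled values of $f'$ near $0$ keep oscillating over roughly $[-1, 1]$, so neither one-sided limit of $f'$ exists there.

```python
import numpy as np

# f(x) = x^2 sin(1/x) with f(0) = 0.
h = 10.0 ** -np.arange(1, 9)  # 0.1, 0.01, ..., 1e-8

# Difference quotient (f(h) - f(0)) / h = h sin(1/h) -> 0, so f'(0) = 0.
dq = h * np.sin(1 / h)
print(np.abs(dq).max())

# f'(x) = 2x sin(1/x) - cos(1/x): the sampled values span more than a unit
# interval, reflecting the oscillation of cos(1/x) near 0.
fp = 2 * h * np.sin(1 / h) - np.cos(1 / h)
print(fp.min(), fp.max())
```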
|
Can someone please verify my answers to the following questions?
Answer true or false to the following questions:
Two elements of a group in the same conjugacy class must have the same order
A group of order 24 can have 5 conjugacy distinct classes of cardinalities 1, 4,4,6, and 12 respectively.
The group $S_3$ has three conjugacy classes, of cardinalities 1, 2, and 3, respectively.
An element is in the center of a group $G$ if and only if its centralizer is all of $G$
Every group has at least one conjugacy class consisting of only one element
If $H$ is a normal subgroup of $G$, then it is stable under the action of conjugation on $G$.
The group $\mathbb{Z}_{17} \times \mathbb{Z}_2$ has 34 distinct conjugacy classes
In any finite group $G$, the order of the centralizer of any element divides $|G|$.
In an abelian group, the centralizer of each element is trivial.
An abelian subgroup of a group is always normal
True. Let $x \in G$ and let $y = gxg^{-1}$ be a conjugate of $x$, where $g \in G$. Let $n$ be the order of $x$. Then $y^n = (gxg^{-1})^n = g x^n g^{-1} = g e g^{-1} = e$, since the intermediate $g^{-1}g$ factors cancel. So we have just shown that the order of $y$ is less than or equal to the order of $x$ whenever $y$ is conjugate to $x$. But this implies that the orders of $x$ and $y$ are equal, since $x$ is conjugate to $y$ if and only if $y$ is conjugate to $x$ (so the inequality holds both ways, and the orders must be equal).
False, since $1+4+4+6+12 \neq 24$
True. The conjugacy classes are $\{ e \}$, $\{ (1 2 3), (1 3 2) \}$, $\{ (1 2), ( 1 3), (2 3) \}$.
True. This is easy to see from the definitions.
True. The identity element comprises a conjugacy class consisting of only one element.
True, since $gHg^{-1} = H$, by definition.
True. The group $\mathbb{Z}_{17} \times \mathbb{Z}_2$ is abelian. Therefore, each element comprises a distinct conjugacy class. Since the order of the group is 34, there are 34 distinct conjugacy classes.
True. The centralizer of any element in a group $G$ forms a subgroup of $G$. Therefore, by Lagrange's theorem, the order of the centralizer divides $|G|$.
False. In an abelian group, the centralizer of each element is the entire group.
False. I can't seem to find a counter-example, but my intuition tells me that the statement is incorrect. Can someone please let me know of a counterexample?
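For that last item, a counterexample can be checked mechanically (a sketch, not part of the original post): the subgroup $\{e, (0\,1)\}$ of $S_3$ is abelian, but conjugating $(0\,1)$ by $(1\,2)$ gives $(0\,2)$, which lies outside the subgroup, so it is not normal.

```python
from sympy.combinatorics import Permutation, PermutationGroup

# H = {e, (0 1)} is abelian (cyclic of order 2) but not normal in S_3.
t = Permutation([1, 0, 2])  # the transposition (0 1)
g = Permutation([0, 2, 1])  # the transposition (1 2)
H = PermutationGroup([t])

print(H.is_abelian)      # True
conj = g * t * ~g        # conjugate of (0 1) by (1 2): the transposition (0 2)
print(conj in H.elements)  # False, so g H g^-1 != H and H is not normal
```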
|
Because the operators \(x\) and \(p\) are not compatible, \([\hat{X},\hat{P}]\neq 0\), there is
no measurement that can precisely determine both \(x\) and \(p\) simultaneously. Hence, there must be an uncertainty relation between them that specifies how uncertain we are about one quantity given a definite precision in the measurement of the other. Presumably, if one can be determined with infinite precision, then there will be an infinite uncertainty in the other. Recall that we had defined the uncertainty in a quantity by
\[\Delta A = \sqrt{\langle A^2 \rangle - \langle A \rangle ^2} \tag{1}\]
Thus, for \(x\) and \(p\), we have
\[ \Delta x = \sqrt{\langle x^2 \rangle - \langle x \rangle ^2} \tag{2a}\]
\[ \Delta p = \sqrt{\langle p^2 \rangle - \langle p \rangle ^2} \tag{2b}\]
These quantities can be expressed explicitly in terms of the wave function \(\Psi (x, t)\) using the fact that
\[\langle x \rangle = \langle \Psi(t)\vert x\vert\Psi(t)\rangle = \int dx\, \langle \Psi (t) \vert x \rangle\, x\, \langle x \vert\Psi(t)\rangle = \int dx\, \Psi^*(x,t)\, x\, \Psi(x,t) \tag{3}\]
and
\[\langle x^2 \rangle = \langle \Psi(t)\vert x^2\vert\Psi(t)\rangle = \int dx\, \Psi^*(x,t)\, x^2\, \Psi(x,t) \tag{4}\]
Similarly,
\[\langle p \rangle = \langle \Psi(t)\vert p \vert\Psi(t)\rangle = \int dx\, \langle \Psi (t) \vert x \rangle \langle x \vert p \vert \Psi (t) \rangle = \int dx\, \Psi^*(x,t){\hbar \over i}{\partial \over \partial x}\Psi(x,t) \tag{5}\]
and
\[\langle p^2 \rangle = \langle \Psi(t)\vert p^2\vert\Psi(t)\rangle = \int dx \Psi ^* (x, t)\left(-\hbar^2{\partial^2 \over \partial x^2}\right)\Psi(x,t) \tag{6}\]
Then, the Heisenberg uncertainty principle states that
\[\Delta x \Delta p \stackrel{>}{\sim} \hbar \tag{7}\]
which essentially states that the greater the certainty with which a measurement of \(x\) or \(p\) can be made, the greater will be the uncertainty in the other.
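As an illustration of (7), a Gaussian wave packet saturates the sharper bound \(\Delta x\,\Delta p = \hbar/2\). Here is a numerical sketch (the grid, width parameter, and the choice \(\hbar = 1\) are ours, for illustration only):

```python
import numpy as np

hbar = 1.0                     # units with hbar = 1 (a choice made here)
s = 0.7                        # Gaussian width parameter
x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet psi(x) ~ exp(-x^2 / (4 s^2)).
psi = np.exp(-x**2 / (4 * s**2))
psi /= np.sqrt(np.sum(psi**2) * dx)

def expval(op_psi):
    """<psi| A |psi> on the grid, given the array A|psi>."""
    return float(np.real(np.sum(np.conj(psi) * op_psi) * dx))

dpsi = np.gradient(psi, dx)    # d psi / dx
d2psi = np.gradient(dpsi, dx)  # d^2 psi / dx^2

delta_x = np.sqrt(expval(x**2 * psi) - expval(x * psi)**2)
delta_p = np.sqrt(expval(-hbar**2 * d2psi) - expval(-1j * hbar * dpsi)**2)

print(delta_x * delta_p)       # close to 0.5 = hbar / 2
```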
|
April 27th, 2016, 01:56 PM
# 1
Newbie
Joined: Apr 2016
From: massachusetts
Posts: 7
Thanks: 1
Drawing a card probability
This is just something I became curious about, not homework or anything, but I don't understand the result I'm getting.
My attempt at a problem statement:
A contestant is presented with a game where the goal is to draw 1 winning card from a collection of cards. Each time they draw, they must place the card back into the pile and the cards are randomized.
The number of times the contestant can attempt to draw the winning card is equal to the total number of cards.
For example, 1,000 tries for a pile of cards containing 999 losing and 1 winning card.
The question is:
how does the probability of winning change as the number of cards increases by multiples of 10, starting at 10 and going to 10^20.
-------
When I graph the result it's very unexpected to me, and I'm asking here in hopes I can understand. At first it holds steady at about a 36% chance of losing for 10^1 through 10^13, but around there it starts to fluctuate wildly, until by the time we're at 10^17 the probability of losing shoots to nearly 1 and stays there (?!)
Is this an error with the calculator?
Here is the equation I used for probability of drawing a losing card every time:
y=[((10^x)-1)/(10^x)]^(10^x), from x=1 to x=20
e.g. (99/100)^100 to represent the probability of drawing a losing card 100 times in a row.
April 27th, 2016, 03:56 PM
# 2
Global Moderator
Joined: May 2007
Posts: 6,823
Thanks: 723
Your analysis is correct. I presume the calculator is limited by precision. For large n, $\displaystyle (1-\frac{1}{n})^n \approx \frac{1}{e}$.
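The limited-precision explanation can be seen directly in double-precision floating point (a sketch, not from the thread): once $1/n$ drops below machine epsilon, $1 - 1/n$ rounds to exactly $1$, so the naive power returns $1$, while a `log1p`-based evaluation stays near $1/e$.

```python
import math

for k in (1, 5, 13, 17, 20):
    n = 10.0 ** k
    naive = (1 - 1 / n) ** n                   # what a calculator effectively does
    stable = math.exp(n * math.log1p(-1 / n))  # precision-safe evaluation
    print(k, naive, stable)
# For k >= 17 the naive column is exactly 1.0 (the "shoots to nearly 1"
# seen on the graph), while the stable column stays near 1/e.

print(1 / math.e)  # 0.36787944117144233
```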
April 27th, 2016, 05:12 PM
# 3
Newbie
Joined: Apr 2016
From: massachusetts
Posts: 7
Thanks: 1
So as the number of cards gets very large, the probability of winning approaches 1 - 1/e. This is a very clean answer and what I was initially curious about, thanks.
I feel like if I understood e better this would be an intuitive result... I'll have to think about this a bit in the future.
Also, looking again, I feel a bit silly for not doubting the calculator more confidently. Thanks again.
|