ISSN:
1078-0947
eISSN:
1553-5231
All Issues
Discrete & Continuous Dynamical Systems - A
July 2001, Volume 7, Issue 3
Abstract:
We consider a special $2 \times 2$ viscous hyperbolic system of conservation laws of the form $u_t + A(u)u_{x} = \varepsilon u_{x x}$, where $A(u) = Df(u)$ is the Jacobian of a flux function $f$. For initial data with small total variation, we prove that the solutions satisfy a uniform BV bound, independent of $\varepsilon$. Letting $\varepsilon \to 0$, we show that solutions of the viscous system converge to the unique entropy weak solutions of the hyperbolic system $u_t + f(u)_{x} = 0$. Within the proof, we introduce two new Lyapunov functionals which control the interaction of viscous waves of the same family. This provides a first example where uniform BV bounds and convergence of vanishing viscosity solutions are obtained for a system with a genuinely nonlinear field where shock and rarefaction curves do not coincide.
Abstract:
We consider data compression algorithms as a tool to get an approximate measure for the quantity of information contained in a string. By this it is possible to give a notion of orbit complexity for topological dynamical systems. In compact ergodic dynamical systems, entropy is almost everywhere equal to orbit complexity. The use of compression algorithms allows a direct estimation of the information content of the orbits.
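The idea of estimating information content by compression is easy to test empirically. The following Python sketch (my own illustration, not from the paper) compares the zlib-compressed size of the binary symbolic orbit of the chaotic logistic map $x \mapsto 4x(1-x)$, under the partition $\{x \le 1/2,\ x > 1/2\}$, with that of a map with trivial dynamics:

```python
import zlib

def symbolic_orbit(f, x0, n):
    """Binary symbolic orbit of the map f: record '1' when x > 1/2, else '0'."""
    bits, x = [], x0
    for _ in range(n):
        bits.append('1' if x > 0.5 else '0')
        x = f(x)
    return ''.join(bits)

def information_estimate(symbols):
    """Approximate information content (in bytes) via compressed size."""
    return len(zlib.compress(symbols.encode(), 9))

chaotic = symbolic_orbit(lambda x: 4 * x * (1 - x), 0.123, 10000)  # logistic map
trivial = symbolic_orbit(lambda x: x, 0.123, 10000)                # identity map

# The chaotic orbit needs far more bytes than the constant one.
print(information_estimate(chaotic), information_estimate(trivial))
```

The compressed size of the chaotic orbit is close to the theoretical minimum of $10000$ bits, while the constant orbit compresses to a few dozen bytes, mirroring the entropy/orbit-complexity correspondence described in the abstract.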
Abstract:
Nonlinear stochastic dynamical systems as ordinary stochastic differential equations and stochastic difference equations are in the center of this presentation in view of the asymptotic behavior of their moments. We study the exponential p-th mean growth behavior of their solutions as integration time tends to infinity. For this purpose, the concepts of attractivity, stability and contractivity exponents for moments are introduced as generalizations of well-known moment Lyapunov exponents of linear systems. Under appropriate monotonicity assumptions we gain uniform estimates of these exponents from above and below. Eventually, these concepts are generalized to describe the exponential growth behavior along certain Lyapunov-type functionals.
Abstract:
We describe a unitary operator $U(\alpha)$ on $L^2(\mathbb T)$, depending on a real parameter $\alpha$, that is a quantization of a simple piecewise holomorphic dynamical system on the cylinder $\mathbf C^* \cong \mathbb T \times \mathbb R$. We give results describing the spectrum of $U(\alpha)$ in terms of the diophantine properties of $\alpha$, and use these results to compare the quantum to the classical dynamics. In particular, we prove that for almost all $\alpha$, the quantum dynamics localizes, whereas the classical dynamics does not. We also give a condition implying that the quantum dynamics does not localize.
Abstract:
In this paper, we study the stability and the instability of standing waves for the nonlinear Schrödinger equation with harmonic potential. We prove the existence of stable or unstable standing waves under certain conditions on the power of the nonlinearity and the frequency of the wave.
Abstract:
For $p>1, $ and $\phi_p (s) = |s| ^{p-2} s,$ we consider the equation
$(\phi_p (x'))' + \alpha \phi_p (x^+ ) - \beta \phi_p (x^- ) = f(t,x),$
where $ x^{+}=\max\{x,0\}$; $x^{-} =\max\{-x,0\},$ in a situation of resonance or near resonance for the period $T,$ i.e. when $\alpha,\beta$ satisfy exactly or approximately the equation
$\frac{\pi_p }{\alpha^{1/p}} + \frac{\pi_p}{\beta^{1/p}} = \frac{T}{n},$
for some integer $n.$ We assume that $f$ is continuous, locally Lipschitzian in $x,$ $T$-periodic in $t,$ bounded on $\mathbf R^2,$ and having limits $f_{\pm}(t)$ for $x \to \pm \infty,$ the limits being uniform in $t.$ Denoting by $v $ a solution of the homogeneous equation
$(\phi_p (x'))' + \alpha \phi_p (x^+ ) - \beta \phi_p (x^- ) = 0,$
we study the existence of $T$-periodic solutions by means of the function
$ Z (\theta) = \int_{\{t\in I | v_{\theta }(t)>0\}} f_{+}(t)v(t + \theta) dt + \int_{\{t\in I | v_{\theta }(t)<0\}} f_-(t) v (t + \theta) dt,$
where $ I \stackrel{def}{=} [0,T].$ In particular, we prove the existence of $T$-periodic solutions at resonance when $Z$ has $2z$ zeros in the interval $[0,T/n),$ all zeros being simple, and $z$ being different from $1.$
Abstract:
The Josephson equation is investigated in detail: the existence and bifurcations of harmonic and subharmonic solutions under small perturbations are obtained by using the second-order averaging method and the subharmonic Melnikov function, and the criterion for the existence of chaos is proved by Melnikov analysis; the bifurcation curves for n-subharmonic and heteroclinic orbits, and the effect of the driving frequency $\omega$ on the forms of the chaotic behavior, are given by numerical simulations.
Abstract:
We study the long-time behavior of solutions of damped nonlinear hyperbolic equations in unbounded domains. It is proved that, under natural assumptions, these equations possess locally compact attractors which may have infinite Hausdorff and fractal dimension; we therefore obtain upper and lower bounds for the Kolmogorov entropy of these attractors.
Moreover, we study particular cases of these equations in which the attractors turn out to be finite dimensional. For such cases we establish that the attractors consist of finite collections of finite-dimensional unstable manifolds, and that every solution stabilizes to one of a finite number of equilibrium points.
Abstract:
Friz and Robinson showed that analytic global attractors consisting of periodic functions can be parametrised using the values of the solution at a finite number of points throughout the domain, a result applicable to the $2$d Navier-Stokes equations with periodic boundary conditions. In this paper we extend the argument to cover any attractor consisting of analytic functions; in particular we are now able to treat the $2$d Navier-Stokes equations with Dirichlet boundary conditions.
|
Difference between revisions of "Geometry and Topology Seminar"
Revision as of 14:58, 31 January 2017

Spring 2017
- Jan 20: Carmen Rovi (University of Indiana Bloomington), "The mod 8 signature of a fiber bundle" (host: Maxim)
- Feb 3: Rafael Montezuma (University of Chicago), "Metrics of positive scalar curvature and unbounded min-max widths" (host: Lu Wang)
- Feb 17: Yair Hartman (Northwestern University), "Intersectional Invariant Random Subgroups and Furstenberg Entropy" (host: Dymarz)
- Feb 24: Lucas Ambrozio (University of Chicago), "TBA" (host: Lu Wang)
- March 3: Mark Powell (Université du Québec à Montréal), "TBA" (host: Kjuchukova)
- March 10: Autumn Kent (Wisconsin), "Analytic functions from hyperbolic manifolds" (local)
- March 24: Spring Break
- March 31: Xiangwen Zhang (University of California-Irvine), "TBA" (host: Lu Wang)
- April 7: reserved (Lu Wang)
- April 14: Xianghong Gong (Wisconsin), "TBA" (local)
- April 21: Joseph Maher (CUNY), "TBA" (host: Dymarz)
- April 28: Bena Tshishiku (Harvard), "TBA" (host: Dymarz)

Fall Abstracts

Ronan Conlon New examples of gradient expanding K\"ahler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).
Jiyuan Han Deformation theory of scalar-flat ALE Kahler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surface, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of $\mathbb{C}^2/\Gamma$, where $\Gamma$ is a finite subgroup of U(2) without complex reflections. This is a joint work with Jeff Viaclovsky.
Sean Howe Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to $\infty$. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in $\mathbb{P}^n$ is $\mathbb{P}^{n-1}$!
Nan Li Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber.
Yu Li
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. As a consequence of the convergence, the mass drops to zero as time tends to infinity. Moreover, in the three-dimensional case, we use Ricci flow with surgery to give an independent proof of the positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.
Peyman Morteza

We develop a procedure to construct Einstein metrics by gluing the Calabi metric to an Einstein orbifold. We show that our gluing problem is obstructed and we calculate the obstruction explicitly. When our obstruction does not vanish, we obtain a non-existence result in the case that the base orbifold is compact. When our obstruction vanishes and the base orbifold is non-degenerate and asymptotically hyperbolic, we prove an existence result. This is a joint work with Jeff Viaclovsky.

Caglar Uyanik Geometry and dynamics of free group automorphisms
A common theme in geometric group theory is to obtain structural results about infinite groups by analyzing their action on metric spaces. In this talk, I will focus on two geometrically significant groups: mapping class groups and outer automorphism groups of free groups. We will describe a particular instance of how the dynamics and geometry of their actions on various spaces provide deeper information about the groups.
Bing Wang The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li.
Ben Weinkove Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti.
Jonathan Zhu Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Colding and Minicozzi, together with Ilmanen and White, conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.
Yu Zeng Short time existence of the Calabi flow with rough initial data
Calabi flow was introduced by Calabi back in the 1950s as a geometric flow approach to the existence of extremal metrics. Analytically it is a fourth-order nonlinear parabolic equation on the Kaehler potentials which deforms the Kaehler potential along its scalar curvature. In this talk, we will show that the Calabi flow admits a short-time solution for any continuous initial Kaehler metric. This is a joint work with Weiyong He.
Spring Abstracts Lucas Ambrozio
"TBA"
Rafael Montezuma
"Metrics of positive scalar curvature and unbounded min-max widths"
In this talk, I will construct a sequence of Riemannian metrics on the three-dimensional sphere with scalar curvature greater than or equal to 6, and arbitrarily large min-max widths. The search for such metrics is motivated by a rigidity result of min-max minimal spheres in three-manifolds obtained by Marques and Neves.
Carmen Rovi The mod 8 signature of a fiber bundle
In this talk we shall be concerned with the residues modulo 4 and modulo 8 of the signature of a 4k-dimensional geometric Poincaré complex. I will explain the relation between the signature modulo 8 and two other invariants: the Brown-Kervaire invariant and the Arf invariant. In my thesis I applied the relation between these invariants to the study of the signature modulo 8 of a fiber bundle. In 1973 Werner Meyer used group cohomology to show that a surface bundle has signature divisible by 4. I will discuss current work with David Benson, Caterina Campagnolo and Andrew Ranicki in which we use group cohomology and representation theory of finite groups to detect non-trivial signatures modulo 8 of surface bundles.
Yair Hartman
"Intersectional Invariant Random Subgroups and Furstenberg Entropy."
In this talk I'll present a joint work with Ariel Yadin, in which we solve the Furstenberg Entropy Realization Problem for finitely supported random walks (finite range jumps) on free groups and lamplighter groups. This generalizes a previous result of Bowen. The proof consists of several reductions which have geometric and probabilistic flavors of independent interests. All notions will be explained in the talk, no prior knowledge of Invariant Random Subgroups or Furstenberg Entropy is assumed.
Bena Tshishiku
"TBA"
Mark Powell Stable classification of 4-manifolds
A stabilisation of a 4-manifold M is a connected sum of M with some number of copies of S^2 x S^2. Two 4-manifolds are said to be stably diffeomorphic if they admit diffeomorphic stabilisations. Since a necessary condition is that the fundamental groups be isomorphic, we study this equivalence relation for a fixed group. I will discuss recent progress in classifying 4-manifolds up to stable diffeomorphism for certain families of groups, arising from work with Daniel Kasprowski, Markus Land and Peter Teichner. As a by-product we also obtained a result on the analogous question with the complex projective plane CP^2 replacing S^2 x S^2.
Autumn Kent Analytic functions from hyperbolic manifolds
At the heart of Thurston's proof of Geometrization for Haken manifolds is a family of analytic functions between Teichmuller spaces called "skinning maps." These maps carry geometric information about their associated hyperbolic manifolds, and I'll discuss what is presently known about their behavior. The ideas involved form a mix of geometry, algebra, and analysis.
Xiangwen Zhang
"TBA"
Archive of past Geometry seminars
2015-2016: Geometry_and_Topology_Seminar_2015-2016
2014-2015: Geometry_and_Topology_Seminar_2014-2015 2013-2014: Geometry_and_Topology_Seminar_2013-2014 2012-2013: Geometry_and_Topology_Seminar_2012-2013 2011-2012: Geometry_and_Topology_Seminar_2011-2012 2010: Fall-2010-Geometry-Topology
|
Because radian measure is the ratio of two lengths, it is a unitless measure. For example, suppose the radius were 2 inches and the distance along the arc were also 2 inches. When we calculate the radian measure of the angle, the “inches” cancel, and we have a result without units.
\( \require{cancel}\)
\[\theta \, \text{radians} = \frac{s}{r} = \frac{2 \cancel{\text{in.}}}{2 \cancel{\text{in.}}} = 1 \]
Therefore, it is not necessary to write the label “radians” after a radian measure, and if we see an angle that is not labeled with “degrees” or the degree symbol, we can assume that it is a radian measure.
Considering the most basic case, the unit circle (a circle with radius 1), we know that 1 rotation equals 360 degrees (360°). We can also track one rotation around a circle by finding the circumference, $C = 2\pi r$, and for the unit circle $C = 2\pi$. These two different ways to measure one rotation around a circle give us a way to convert between degrees and radians.
\[ \begin{align}
1\, \text{rotation} &= 360^\circ = 2\pi \,\text{radians} \\
\frac{1}{2}\, \text{rotation} &= 180^\circ = \pi \,\text{radians} \\
\frac{1}{4}\, \text{rotation} &= 90^\circ = \frac{\pi}{2} \,\text{radians}
\end{align}\]
Identifying Special Angles Measured in Radians
In addition to knowing the measurements in degrees and radians of a quarter revolution, a half revolution, and a full revolution, there are other frequently encountered angles in one revolution of a circle with which we should be familiar. It is common to encounter multiples of 30, 45, 60, and 90 degrees. Memorizing these angles will be very useful as we study the properties associated with angles. Here, we can list the corresponding radian values for the common measures.
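The conversions above follow from the single factor $180^\circ = \pi$ radians, and can be checked numerically. A small Python sketch (illustrative only):

```python
import math

def deg_to_rad(degrees):
    """Convert a degree measure to radians using 180 degrees = pi radians."""
    return degrees * math.pi / 180

def rad_to_deg(radians):
    """Convert a radian measure to degrees."""
    return radians * 180 / math.pi

# the common special angles mentioned in the text
for deg in (30, 45, 60, 90, 180, 360):
    print(deg, "degrees =", deg_to_rad(deg), "radians")
```

For instance, `deg_to_rad(90)` returns $\pi/2 \approx 1.5708$, matching the quarter-rotation row above.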
YouTube Video: Radian Measure
|
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:342-382, 2019.
Abstract
We consider the sorted top-$k$ problem whose goal is to recover the top-$k$ items with the correct order out of $n$ items using pairwise comparisons. In many applications, multiple rounds of interaction can be costly. We restrict our attention to algorithms with a constant number of rounds $r$ and try to minimize the sample complexity, i.e. the number of comparisons. When the comparisons are noiseless, we characterize how the optimal sample complexity depends on the number of rounds (up to a polylogarithmic factor for general $r$ and up to a constant factor for $r=1$ or 2). In particular, the sample complexity is $\Theta(n^2)$ for $r=1$, $\Theta(n\sqrt{k} + n^{4/3})$ for $r=2$ and $\tilde{\Theta}\left(n^{2/r} k^{(r-1)/r} + n\right)$ for $r \geq 3$. We extend our results of sorted top-$k$ to the noisy case where each comparison is correct with probability $2/3$. When $r=1$ or 2, we show that the sample complexity gets an extra $\Theta(\log(k))$ factor when we transition from the noiseless case to the noisy case. We also prove new results for top-$k$ and sorting in the noisy case. We believe our techniques can be generally useful for understanding the trade-off between round complexities and sample complexities of rank aggregation problems.
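For intuition, here is a minimal Python sketch (my own, not from the paper) of the noiseless one-round case: all $\binom{n}{2}$ comparisons are issued in a single batch, which is the $\Theta(n^2)$ regime for $r=1$, and the sorted top-$k$ is read off the win counts.

```python
from itertools import combinations

def sorted_top_k_one_round(items, k, better):
    """Noiseless sorted top-k in one round: ask every pairwise
    comparison up front, then rank items by their number of wins."""
    wins = {x: 0 for x in items}
    for a, b in combinations(items, 2):   # the single batch of queries
        if better(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    # Under a total order the win counts are n-1, n-2, ..., 0, so
    # sorting by wins recovers the full order; keep the first k.
    return sorted(items, key=lambda x: -wins[x])[:k]

print(sorted_top_k_one_round([3, 1, 4, 1.5, 9, 2.6], 3, lambda a, b: a > b))
# → [9, 4, 3]
```

The interesting regimes in the paper are $r \geq 2$, where far fewer than $n^2$ comparisons suffice by spreading adaptivity across rounds.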
Braverman, M., Mao, J. & Peres, Y. (2019). Sorted Top-k in Rounds. Proceedings of the Thirty-Second Conference on Learning Theory, in PMLR 99:342-382.
|
This is a neat question and I've thought about it before. Here's what we came up with:
You run your algorithm $n$ times to get outputs $x_1, \cdots, x_n \in \mathbb{R}^d$ and you know that with high probability a large fraction of the $x_i$s fall into some good set $G$. You don't know what $G$ is, just that it is convex. The good news is that there is a way to get a point in $G$ with no further information about it. Call this point $f(x_1, \cdots, x_n)$.
Theorem. For all natural numbers $n$ and $d$, there exists a function $f : (\mathbb{R}^d)^n \to \mathbb{R}^d$ such that the following holds. Let $x_1 ... x_n \in \mathbb{R}^d$ and let $G \subset \mathbb{R}^d$ be a convex set satisfying $$\frac{1}{n}\left|\left\{ i \in [n] : x_i \in G \right\}\right| > \frac{d}{d+1}.$$ Then $f(x_1, ..., x_n) \in G$. Moreover, $f$ is computable in time polynomial in $n^d$.
Note that, for $d=1$, we can set $f$ to be the median. So this shows how to generalise the median for $d>1$.
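A quick sanity check of the $d=1$ case in Python (my illustration): if an interval contains strictly more than half of the points, it must also contain their median, since fewer than half of the points can lie on either side of the interval.

```python
import statistics

def median_in_interval(xs, lo, hi):
    """Check the d = 1 case: if [lo, hi] holds a strict majority of xs,
    then the median of xs lies in [lo, hi] as well."""
    inside = sum(lo <= x <= hi for x in xs)
    if not inside > len(xs) / 2:
        raise ValueError("interval must contain a strict majority")
    return lo <= statistics.median(xs) <= hi

# one wild outlier cannot pull the median out of the majority interval
print(median_in_interval([0.1, 0.2, 0.25, 0.3, 100.0], 0.0, 0.5))  # → True
```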
Before proving this result, note that it is tight: Let $n=d+1$ and let $x_1, \cdots, x_d$ be the standard basis elements and $x_{d+1}=0$. Any subset of $d$ of the points is contained in an affine space $G$ of dimension $d-1$ (which is uniquely defined by those points). But no point is contained in all of those affine spaces. Hence there is some convex $G$ that contains $n\cdot d/(d+1)=d$ points but doesn't contain $f(x_1, \cdots, x_n)$, whatever value that takes.
Proof. We use the following result.
Helly's Theorem. Let $K_1 ... K_m$ be convex subsets of $\mathbb{R}^d$. Suppose the intersection of any $d+1$ $K_i$s is nonempty. Then the intersection of all $K_i$s is nonempty.
Click here for a proof of Helly's Theorem.
Now to prove our theorem:
Let $k<n/(d+1)$ be an upper bound on the number of points not in $G$. Consider all closed halfspaces $K_1 ... K_m \subset \mathbb{R}^d$ containing at least $n-k$ points with their boundary containing a set of points of maximal rank (this is a finite number of halfspaces as each $K_i$ is defined by $d+1$ points on its boundary).
The complement of each $K_i$ contains at most $k$ points. By a union bound, the intersection of any $d+1$ of the $K_i$s contains at least $n-k(d+1) > 0$ points. By Helly's theorem (since halfspaces are convex), there is a point in the intersection of all the $K_i$s. We let $f$ be a function that computes an arbitrary point in the intersection of the $K_i$s.
All that remains is to show that the intersection of the $K_i$s is contained in $G$.
Without loss of generality, $G$ is the convex hull of a subset of the points with full rank. That is, we can replace $G$ with the convex hull of the points it contains. If this does not have full rank, we can simply apply our theorem in lower dimension.
Each face of $G$ defines a halfspace, where $G$ is the intersection of these halfspaces. Each of these halfspaces contains $G$ and hence contains at least $n-k$ points. The boundary of one of these half spaces contains a face of $G$ and hence contains a set of points of maximal rank. Thus each of these halfspaces is a $K_i$. Thus the intersection of all $K_i$s is contained in $G$, as required.
To compute $f$, set up a linear program where the linear constraints correspond to $K_i$s and a feasible solution corresponds to a point in the intersection of all the $K_i$s.
Q.E.D.
Unfortunately, this result is not very practical in the high-dimensional setting. A good question is whether we can compute $f$ more efficiently:
Open Problem. Prove the above theorem with the additional conclusion that $f$ can be computed in time polynomial in $n$ and $d$.
Aside: We can also change the problem to get an efficient solution: If $x_1, \cdots, x_n$ have the property that strictly more than half of them lie in a ball $B(y,\varepsilon)$, then we can find a point $z$ that lies in $B(y,3\varepsilon)$ in time polynomial in $n$ and $d$. In particular, we can set $z=x_i$ for an arbitrary $i$ such that strictly more than half of the points are in $B(z,2\varepsilon)$.
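The aside above admits a direct implementation (a sketch under the stated assumptions; the function name is mine): scan for any $x_i$ such that strictly more than half of the points lie in $B(x_i, 2\varepsilon)$.

```python
import math

def robust_center(points, eps):
    """Return some x_i with a strict majority of the points in B(x_i, 2*eps).
    If a strict majority of the points lies in some ball B(y, eps), such an
    x_i exists and is guaranteed to lie in B(y, 3*eps); runtime is O(n^2 d)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    n = len(points)
    for z in points:
        if sum(dist(z, p) <= 2 * eps for p in points) > n / 2:
            return z
    return None  # no majority cluster of radius eps exists

pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
print(robust_center(pts, 0.1))  # → (0.0, 0.0)
```

The guarantee follows because any two balls each containing a strict majority of the points must intersect in at least one point.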
|
Physics > Fluid Dynamics
Title: Equilibrium and stability of two-dimensional pinned drops
(Submitted on 21 Aug 2019 (v1), last revised 22 Aug 2019 (this version, v2))
Abstract: Superhydrophobicity relies on the stability of drop interfaces pinned on sharp edges to sustain non-wetting (Cassie-Baxter) equilibrium states. Gibbs already pointed out that equilibrium is possible as long as the pinning angle at the edge falls between the equilibrium contact angles corresponding to the flanks of the edge. However, the lack of stability can further restrict the realizable equilibrium configurations. To find these limits we analyze here the equilibrium and stability of two-dimensional drops bounded by interfaces pinned on mathematically sharp edges. We are specifically interested in how the drop's stability depends on its size, which is measured with the Bond number $Bo = (\mathcal{W}_d/\ell_c)^2$, defined as the squared ratio of the drop's characteristic length scale $\mathcal{W}_d$ to the capillary length $\ell_c = \sqrt{\sigma/\rho g}$. Drops with a fixed volume become more stable as they shrink in size. On the contrary, open drops, i.e. drops capable of exchanging mass with a reservoir, are less stable as their associated Bond number decreases.
Submission history: From José Graña Otero. [v1] Wed, 21 Aug 2019 16:22:20 GMT; [v2] Thu, 22 Aug 2019 19:14:18 GMT.
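As a numerical illustration (the property values below are my own, not the paper's), the Bond number defined in the abstract can be evaluated directly; for a millimetre-scale water drop it is of order 0.1, i.e. surface tension dominates gravity:

```python
import math

def bond_number(w_d, sigma, rho, g=9.81):
    """Bo = (W_d / l_c)^2, with capillary length l_c = sqrt(sigma / (rho * g)).

    w_d   : characteristic drop length scale (m)
    sigma : surface tension (N/m)
    rho   : liquid density (kg/m^3)
    """
    l_c = math.sqrt(sigma / (rho * g))
    return (w_d / l_c) ** 2

# water at room temperature: sigma ~ 0.072 N/m, rho ~ 1000 kg/m^3
print(bond_number(w_d=1e-3, sigma=0.072, rho=1000.0))  # ~ 0.14
```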
|
Pfleiderer, C.; Faißt, A.; von Löhneysen, H.; Hayden, S. M.; Lonzarich, G. G.
Title:
Field Dependence of the Specific Heat of Single-Crystalline ZrZn$_2$
Abstract:
We present measurements of the specific heat $C$ of a single crystal of ZrZn$_2$ in the range 2–30 K, at magnetic fields $B$ up to 14 T. For $B = 0$ and low temperature the specific heat varies as $C \approx \gamma T + \beta T^3$, where $\gamma \approx 45$ mJ/(mol K$^2$) and $\beta$ corresponds to a Debye temperature $\Theta_D \approx 340$ K. Magnetic field reduces $\gamma$ by up to 30% at 14 T. The variation of $\gamma$ with $B$ is compared with predictions of a self-consistent model of the magnetic equation of state, where phenomenological parameters are taken from the DC magnetization and neutron scattering.
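The low-temperature form $C \approx \gamma T + \beta T^3$ is conventionally extracted by a linear fit of $C/T$ against $T^2$. A Python sketch with synthetic data (the $\gamma$ value echoes the abstract; $\beta$ is an arbitrary illustrative number, not the paper's):

```python
def fit_gamma_beta(T, C):
    """Least-squares fit of C/T = gamma + beta * T^2 (pure Python)."""
    x = [t * t for t in T]             # abscissa: T^2
    y = [c / t for c, t in zip(C, T)]  # ordinate: C/T
    n = len(T)
    xbar, ybar = sum(x) / n, sum(y) / n
    beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))
    gamma = ybar - beta * xbar
    return gamma, beta

# synthetic data: gamma = 45 (mJ/mol K^2, as in the abstract), beta arbitrary
T = [2 + 0.5 * i for i in range(20)]
C = [45.0 * t + 0.2 * t ** 3 for t in T]
print(fit_gamma_beta(T, C))  # ≈ (45.0, 0.2)
```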
Keywords:
Weak ferromagnetism, specific heat at low temperature
|
There has been a great deal of excitement among topologists about the proof of the Virtual Haken Theorem, and in fact of the Virtual Fibering Theorem (for closed hyperbolic 3-manifolds, but I'm guessing they will soon be proven in full generality). The proof is lucidly discussed in Danny Calegari's blog. The theorems state that every compact orientable irreducible 3-manifold with infinite fundamental group has a finite cover which is Haken or a surface bundle over a circle, respectively. This implies various good things for a 3-manifold with fundamental group π, including:
- π is large, meaning that π has a finite index subgroup which maps onto a free group with at least 2 generators. In particular, the Betti numbers of finite covers can become arbitrarily large.
- π is linear over $\mathbb{Z}$, i.e. π admits a faithful representation $\pi \to \mathrm{GL}(n,\mathbb{Z})$ for some $n$. (Thurston conjectured that $n\leq 4$ is sufficient.)
- π is virtually biorderable.
Stefan Friedl, from whose comment the above list is an excerpt, summarizes the situation as follows:
It seems like every nice property of fundamental groups which one can possibly ask for either holds for π or a finite index subgroup of π.
All well and good. But how could you `sell' that to somebody who isn't a classically-oriented 3-dimensional topologist? An elevator pitch is defined by Wikipedia as follows:
An elevator pitch is a short summary used to quickly and simply define a product, service, or organization and its value proposition. The name "elevator pitch" reflects the idea that it should be possible to deliver the summary in the time span of an elevator ride, or approximately thirty seconds to two minutes. In The Perfect Elevator Speech, Aileen Pincus states that an elevator speech should "sum up unique aspects of your service or product in a way that excites others."
The Virtual Fibering Conjecture (or the Virtual Haken Conjecture) was
the grand conjecture in 3-manifold topology following Geometrization, and thus must have / should have / ought to have (I believe) a compelling elevator pitch. For contrast, Geometrization is easy to `sell' because it directly applies to the Homeomorphism Problem in 3-manifold topology: Given two 3-manifolds, determine whether or not they are homeomorphic. Geometrization allows you to canonically decompose both manifolds into submanifolds with geometric structure, and then to compare geometric invariants. In terms of "The Goals of Mathematical Research" as given in the introduction to The Princeton Companion to Mathematics, this corresponds to the goal of Classifying. Question: What is a good elevator pitch for Virtual Fibering (or for Virtual Haken), explaining the utility of these results in terms of "the fundamental goals of mathematical research" (Solving Equations, Classifying, Generalizing, Discovering Patterns, Explaining Patterns and Coincidences, Counting and Measuring, and Finding Explicit Algorithms)? The target would be mathematicians who are not 3-dimensional topologists.
Everyone in the approximate vicinity of the field instinctively feels that these are historic results, but I'd like to be able to justify that feeling (in the abovementioned sense) to myself and to others.
|
I want to generate a list of simple trigonometric questions in TeXForm, for example, as follows.
Evaluate the following expressions.
\begin{enumerate}
\item $\sin 30^\circ$
\item $\sec 90^\circ$
% ... others go here ...
\item $\csc 315^\circ$
\end{enumerate}
So I can just copy and paste it into my LaTeX input file to create a problem sheet.
I have a list of arguments args and a list of functions funcs as follows.
args = Array[15 # &, 24, 0];
funcs = {Sin, Cos, Tan, Csc, Sec, Cot};
How can I define a function GenerateProblem[n_], where n represents the number of problems to generate in TeXForm?
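For comparison, here is how the same generation logic might be sketched in Python (the helper name and structure are mine; it emits the enumerate block shown above directly rather than going through Mathematica's TeXForm):

```python
import random

ANGLES = [15 * k for k in range(24)]                    # 0, 15, ..., 345 degrees
FUNCS = [r"\sin", r"\cos", r"\tan", r"\csc", r"\sec", r"\cot"]

def generate_problems(n, seed=None):
    """Return a LaTeX enumerate environment with n random trig evaluations."""
    rng = random.Random(seed)
    items = [rf"\item ${rng.choice(FUNCS)} {rng.choice(ANGLES)}^\circ$"
             for _ in range(n)]
    return "\n".join([r"\begin{enumerate}", *items, r"\end{enumerate}"])

print(generate_problems(3, seed=0))
```

The Mathematica version would be a direct transcription: pick random elements of funcs and args, apply the function symbolically, and wrap the result in ToString[..., TeXForm].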
|
I'm using Mathematica 9 and its Control Systems functionality.
I've searched the references extensively but can't seem to find any examples of state-space model parameter estimation. All examples show models that are already parameterized, or, when they are given symbolic parameters, they are not simulated.
So my question is whether there are built-in functions that would estimate state-space model parameters (including covariance matrices for stochastic inputs and measurements).
If there are none as of now, maybe you could give me some directions on implementing such a procedure myself?
Example problem
Let me provide an example problem, which I will borrow from Zivot et al. (2004):
Harvey (1985) and Clark (1987) provide an alternative to the BN decomposition of an I(1) time series with drift into permanent and transitory components based on unobserved components structural time series models. For example, Clark's model for the natural logarithm of postwar real GDP specifies the trend as a pure random walk, and the cycle as a stationary AR(2) process:
$y_t=\tau_t+c_t$
$\tau_t=\mu+\tau_{t-1}+v_t$
$c_t=\phi_1 c_{t-1}+\phi_2 c_{t-2}+w_t$
Now I want to try and estimate this example in Mathematica.
First I get the data:
gdp = Differences[Log[CountryData["Russia", {"GDP", {1999, 2013}}][[All,2]]]]
Then I setup a state-space model as follows:
eq = {y[t] == c[t] + τ[t], c[t + 1] == α c[t] + β c[t - 1], τ[t + 1] == μ + τ[t]};
StateSpaceModel[eq, {{y[t], 0}, {c[t], 0}, {τ[t], 0}}, {}, {y[t]}, t]
So what do I do next in order to estimate the parameters with data?
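Pending a built-in, one concrete route is to code the Gaussian log-likelihood of the state-space form above with a Kalman filter and hand it to an optimizer (FindMaximum/NMaximize in Mathematica, or scipy.optimize elsewhere). A minimal NumPy sketch of that likelihood (the parameter names are mine, the initialization is a crude diffuse one, and measurement noise is omitted, matching the model above):

```python
import numpy as np

def clark_loglik(y, phi1, phi2, mu, sigv2, sigw2):
    """Kalman-filter log-likelihood of the trend + AR(2)-cycle model.

    State x_t = [tau_t, c_t, c_{t-1}]; observation y_t = tau_t + c_t.
    sigv2, sigw2 are the variances of the trend and cycle shocks.
    """
    T = np.array([[1.0, 0.0,  0.0],
                  [0.0, phi1, phi2],
                  [0.0, 1.0,  0.0]])
    c = np.array([mu, 0.0, 0.0])      # drift enters the trend equation
    Q = np.diag([sigv2, sigw2, 0.0])  # state shock covariance
    Z = np.array([1.0, 1.0, 0.0])     # y_t = tau_t + c_t, no measurement noise
    a = np.array([y[0], 0.0, 0.0])    # crude diffuse initialization
    P = np.eye(3) * 1e6
    ll = 0.0
    for yt in y:
        a = c + T @ a                 # predict state
        P = T @ P @ T.T + Q
        v = yt - Z @ a                # one-step-ahead innovation
        F = Z @ P @ Z                 # innovation variance (scalar)
        ll += -0.5 * (np.log(2 * np.pi) + np.log(F) + v * v / F)
        K = P @ Z / F                 # Kalman gain
        a = a + K * v                 # update
        P = P - np.outer(K, Z @ P)
    return ll

# maximize ll over (phi1, phi2, mu, sigv2, sigw2) with any numerical optimizer
y = np.cumsum(np.full(40, 0.05)) + 0.3 * np.sin(np.arange(40) / 3.0)
print(clark_loglik(y, 0.9, -0.2, 0.05, 0.1, 0.1))
```

In Mathematica the same filter is a direct transcription, and the maximization can be done with NMaximize over the five parameters, with positivity constraints on the two variances and stationarity constraints on (phi1, phi2).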
Zivot, E., Wang, J., Koopman, S.J. (2004) State Space Modeling in Macroeconomics and Finance Using SsfPack for S+FinMetrics
|
Yeah, this software cannot be too easy to install. My installer is very professional looking; it's currently not tied into that code, but it directs the user how to search for their MiKTeX install (or install it) and does a test LaTeX rendering.
Somebody like Zeta (on Code Review) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a review of the code.
he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects.
i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent.
your project is probably too large for an actual question on Code Review, but there is a lot of GitHub activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^\infty(\cos...$
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway here's a food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof (who creates the exam) writes questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
|
Answer
$59$ meters
Work Step by Step
Let $h$ be the height of the building.
$h=\dfrac{50}{\cot 32^{\circ}-\cot 53^{\circ}}$
This gives: $h=\dfrac{50}{1.600334-0.753554}$
$h \approx 59$ meters.
Hence, the height of the building is $59$ meters.
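The arithmetic is easy to check in a few lines (a sketch; the 50 m baseline and the two elevation angles are taken from the worked solution above):

```python
import math

# the two angles of elevation from points 50 m apart
a, b = math.radians(32), math.radians(53)

# h = 50 / (cot 32 - cot 53)
h = 50 / (1 / math.tan(a) - 1 / math.tan(b))
print(round(h))  # 59
```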
|
A typical atom consists of three subatomic particles: protons, neutrons, and electrons (as seen in the helium atom below). Other particles exist as well, such as alpha and beta particles (which are discussed below). The Bohr model shows the three basic subatomic particles in a simple manner. Most of an atom's mass is in the
nucleus—a small, dense area at the center of every atom, composed of nucleons. Nucleons include protons and neutrons. All the positive charge of an atom is contained in the nucleus, and originates from the protons. Neutrons are neutrally charged. Electrons, which are negatively charged, are located outside of the nucleus.

Introduction
The Bohr model is outdated, but it depicts the three basic subatomic particles in a comprehensible way. Electron clouds are more accurate representations of where electrons are found. Darker areas represent where the electrons are more likely to be found, and lighter areas represent where they are less likely to be found.
Particle     Electric Charge (C)   Atomic Charge   Mass (g)           Atomic Mass (amu)   Spin
Proton       +1.6022 x 10^-19      +1              1.6726 x 10^-24    1.0073              1/2
Neutron      0                     0               1.6749 x 10^-24    1.0087              1/2
Electron     -1.6022 x 10^-19      -1              9.1094 x 10^-28    0.00054858          1/2

amu (also written u) is the symbol for the atomic mass unit. The positive charge of the protons cancels the negative charge of the electrons. Neutrons have no charge. With regard to mass, protons and neutrons are very similar, and both have a much greater mass than electrons; compared with protons and neutrons, the mass of an electron is usually negligible. Spin is an intrinsic angular momentum of a particle; protons, neutrons, and electrons each have spin 1/2.

Protons
The proton was identified by Ernest Rutherford in 1919. His earlier gold foil experiment, in which positively charged alpha particles (helium nuclei) projected at gold foil were strongly deflected, had already shown that an atom's positive charge is concentrated in a small nucleus. The atomic number, or proton number, is the number of protons present in an atom; it determines the element (e.g., the element of atomic number 6 is carbon).
Electrons
Electrons were discovered by Sir John Joseph Thomson in 1897. After many experiments involving cathode rays, J.J. Thomson demonstrated the ratio of mass to electric charge of cathode rays. He confirmed that cathode rays are fundamental particles that are negatively-charged; these cathode rays became known as electrons. Robert Millikan, through oil drop experiments, found the value of the electronic charge.
Electrons are located in an electron cloud, the region surrounding the nucleus of the atom. There is usually a higher probability of finding an electron closer to the nucleus of an atom. Electrons can be abbreviated as e⁻. Electrons have a negative charge that is equal in magnitude to the positive charge of the protons. However, their mass is considerably less than that of a proton or neutron (and as such is usually considered negligible). Unequal numbers of protons and electrons create ions: positive cations or negative anions.

Neutrons
Neutrons were discovered by James Chadwick in 1932, when he demonstrated that a penetrating radiation consisted of beams of neutral particles. Neutrons are located in the nucleus with the protons. Along with protons, they make up almost all of the mass of the atom. The number of neutrons is called the neutron number and can be found by subtracting the proton number from the mass number. The neutrons in an element determine the isotope of an atom, and often its stability. The number of neutrons is not necessarily equal to the number of protons.
Identification
Both of the following are appropriate ways of representing the composition of a particular atom:
Often the proton number is not indicated because the elemental symbol conveys the same information.
Consider a neutral atom of carbon: \(\ce{^{12}_{6}C}\). The mass number is 12, the proton number is 6, and there is no charge. In neutral atoms, the charge is omitted.
Above is the atomic symbol for helium from the periodic table, with the atomic number, elemental symbol, and mass indicated.
Every element has a specific number of protons, so the proton number is not always written (as in the second method above).
# Neutrons = Mass Number - Proton Number
Mass number is abbreviated as A. Proton number (or atomic number) is abbreviated as Z.
# Protons = Proton Number (Atomic Number)
In neutral atoms, # Electrons = # Protons
In ions, # Electrons = # Protons - (Charge)
Charge is written with the number before the positive or negative sign, e.g., 1+.
Note: The mass number is not the same as the atomic mass seen on the periodic table.
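The bookkeeping rules above fit in a one-line helper; a sketch in Python (the function name is ours, chosen for illustration):

```python
def particle_counts(Z, A, charge=0):
    """Return (protons, neutrons, electrons) for an atom or ion.

    Z: atomic (proton) number, A: mass number,
    charge: net charge, e.g. -1 for Cl- or +2 for Mg2+.
    """
    return Z, A - Z, Z - charge

# e.g. the chloride ion 35Cl-:
print(particle_counts(17, 35, -1))  # (17, 18, 18)
```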
Other Basic Atomic Particles
Many of these particles (explained in detail below) are emitted through radioactive decay. Also note that many forms of radioactive decay emit gamma rays, which are not particles.
Alpha Particles
Alpha particles can be denoted by He²⁺, α²⁺, or just α. They are helium nuclei, which consist of two protons and two neutrons. The net spin of an alpha particle is zero. They result from large, unstable atoms through a process called alpha decay. Alpha decay is the process by which an atom emits an alpha particle, thereby becoming a new element; this only occurs in elements with large, radioactive nuclei. The lightest element observed to emit alpha particles is element 52, tellurium. Alpha particles are generally not harmful: they can be stopped by a single sheet of paper or by one's skin. However, they can cause considerable damage to the inside of one's body. Alpha decay is used as a safe power source for radioisotope generators used in artificial heart pacemakers and space probes.
Figure: Alpha Decay involves the emission of an alpha particle from the nucleus
Beta Particles
Beta particles (β) are either free electrons or positrons with high energy and high speed; they are emitted in a process called beta decay. Positrons have exactly the same mass as an electron, but are positively charged. There are two forms of beta decay: the emission of electrons, and the emission of positrons. Beta particles, which are about 100 times more penetrating than alpha particles, can be stopped by household items like wood or an aluminum plate or sheet. Beta particles can penetrate living matter and can alter the structure of molecules they strike; the alteration usually amounts to damage, and can cause cancer and death. Despite these harmful effects, beta particles are also used in radiation therapy to treat cancer.
Beta (β⁻) or Electron Emission
Electron emission may result when excess neutrons make the nucleus of an atom unstable. As a result, one of the neutrons decays into a proton, an electron, and an anti-neutrino. The proton remains in the nucleus, and the electron and anti-neutrino are emitted. The electron is called a beta particle. The equation for this process is given below:
\[ ^{1}_{0}\textrm{n} \rightarrow \, ^{1}_{1}\textrm{p}^{+} + \textrm{e}^{-} + \bar{\nu}_{e} \]
n = neutron, p⁺ = proton, e⁻ = electron (the beta particle), ν̄_e = anti-neutrino (Figure: β⁻ decay)
Beta (β⁺) or Positron Emission
Positron emission occurs when an excess of protons makes the atom unstable. In this process, a proton is converted into a neutron, a positron, and a neutrino. While the neutron remains in the nucleus, the positron and the neutrino are emitted. The positron can be called a beta particle in this instance. The equation for this process is given below:
\[ ^{1}_{1}\textrm{p}^{+} \rightarrow \, ^{1}_{0}\textrm{n} + \textrm{e}^{+} + \nu_{e} \]
n = neutron, p⁺ = proton, e⁺ = positron (the beta particle), ν_e = neutrino (Figure: β⁺ decay)
Outside Links
- Basic Sub-Atomic Particles: http://www.youtube.com/watch?v=lP57g...eature=related
- Alpha Particles: http://en.wikipedia.org/wiki/Alpha_decay
- Beta Particles: http://en.wikipedia.org/wiki/Beta_particle
- What are Sub-Atomic Particles?: http://www.youtube.com/watch?v=uXcOqjCQzh8
- Atomic Number and Mass Number: http://www.youtube.com/watch?v=lDo78hPTlgk

Problems
1. Identify the number of protons, electrons, and neutrons in the following atom.
2. Identify the subatomic particles (protons, electrons, neutrons, and positrons) present in the following:
\(\ce{^{14}_6C}\) \(\alpha\) \(\ce{^{35}Cl^-}\) \(\beta^+\) \(\beta^-\) \(\ce{^{24}Mg^{2+}}\) \(\ce{^{60}Co}\) \(\ce{^3H}\) \(\ce{^{40}Ar}\) \(^1_0n\)
3. Given the following, identify the subatomic particles present. (The periodic table is required to solve these problems)
Charge +1, 3 protons, mass number 6. Charge -2, 7 neutrons, mass number 17. 26 protons, 20 neutrons. 28 protons, mass number 62. 5 electrons, mass number 10. Charge -1, 18 electrons, mass number 36.
4. Arrange the following elements in order of increasing (a) number of protons; (b) number of neutrons; (c) mass.
Co (Z = 27) with A = 59; Fe (A = 56) with Z = 26; Na (Z = 11) with A = 23; Br (A = 80) with Z = 35; Cu (Z = 29) with A = 59; Mn (A = 55) with Z = 25
5. Fill in the rest of the table:
Atomic Number   Mass Number   # Protons   # Neutrons   # Electrons
2               ?             ?           2            ?
?               23            11          ?            ?
15              ?             ?           16           ?
?               85            37          ?            ?
53              ?             ?           74           ?

Solutions and Explanations
1. There are 4 protons, 5 neutrons, and 4 electrons. This is a neutral beryllium atom.
2. Identify the subatomic particles present in the following:
- \(\ce{^{14}_6C}\): 6 protons, 8 neutrons, 6 electrons. There are 6 protons in accordance with the proton number in the subscript, 6 electrons because the atom is neutral, and 8 neutrons because 14 - 6 = 8 (14 is the mass number in the superscript).
- \(\alpha\): 2 protons, 2 neutrons, 0 electrons. This is an alpha particle, which can also be written as \(\ce{^4He^{2+}}\). There are two protons because the element is helium, no electrons because 2 - 2 = 0, and 2 neutrons because 4 - 2 = 2.
- \(\ce{^{35}Cl^-}\): 17 protons, 18 neutrons, 18 electrons. This is a chloride ion. According to the periodic table, there are 17 protons because the element is chlorine. There are 18 electrons due to the negative charge: 17 - (-1) = 18. There are 18 neutrons because 35 - 17 = 18.
- \(\beta^+\): 0 protons, 0 neutrons, 0 electrons, 1 positron. This is a beta-plus particle, which can also be written as e⁺; "e" represents an electron, but with a positive charge it is a positron.
- \(\beta^-\): 0 protons, 0 neutrons, 1 electron. This is a beta-minus particle, which can also be written as e⁻; it is a standard electron.
- \(\ce{^{24}Mg^{2+}}\): 12 protons, 12 neutrons, 10 electrons. This is a magnesium ion. There are 12 protons from the magnesium atom, 10 electrons because 12 - 2 = 10, and 12 neutrons because 24 - 12 = 12.
- \(\ce{^{60}Co}\): 27 protons, 33 neutrons, 27 electrons. The cobalt atom has 27 protons, as seen in the periodic table, and 27 electrons because the charge is 0. There are 33 neutrons because 60 - 27 = 33.
- \(\ce{^3H}\): 1 proton, 2 neutrons, 1 electron. There is 1 proton because the element is hydrogen, 1 electron because the atom is neutral, and 2 neutrons because 3 - 1 = 2.
- \(\ce{^{40}Ar}\): 18 protons, 22 neutrons, 18 electrons. There are 18 protons from the argon element, 18 electrons because it is neutral, and 22 neutrons because 40 - 18 = 22.
- \(^1_0n\): 0 protons, 1 neutron, 0 electrons. This is a free neutron, denoted by a lowercase n.
3. Given the following, identify the subatomic particles present. (The periodic table is required to solve these problems)
- Charge +1, 3 protons, mass number 6: 3 protons, 3 neutrons, 2 electrons.
- Charge -2, 7 neutrons, mass number 17: 10 protons, 7 neutrons, 12 electrons.
- 26 protons, 20 neutrons: 26 protons, 20 neutrons, 26 electrons.
- 28 protons, mass number 62: 28 protons, 34 neutrons, 28 electrons.
- 5 electrons, mass number 10: 5 protons, 5 neutrons, 5 electrons.
- Charge -1, 18 electrons, mass number 36: 17 protons, 19 neutrons, 18 electrons.
4. Arrange the following elements in order of increasing (a) number of protons; (b) number of neutrons; (c) atomic mass.
a)
Na, Mn, Fe, Co, Cu, Br. Z = # protons; Na: Z = 11; Mn: Z = 25 (given); Fe: Z = 26 (given); Co: Z = 27; Cu: Z = 29; Br: Z = 35 (given)
b)
Na, Cu, Fe, Mn, Co, Br A=#protons+#neutrons, so #n=A-#protons(Z); Na: #n=23-11=12; Cu: #n=59-29=30; Fe: #n=56-26=30; Mn: #n=55-25=30; Co: #n=59-27=32; Br: #n=80-35=45
Note: Cu, Fe, Mn are all equal in their number of neutrons, which is 30.
c)
Na, Mn, Fe, Co, Cu, Br. Na: 22.9898 amu; Mn: 54.9380 amu; Fe: 55.845 amu; Co: 58.9332 amu; Cu: 63.546 amu; Br: 79.904 amu
Note: This is the same order as for the number of protons; across these elements, atomic mass increases with atomic number (Z), though this trend does not hold for every pair of elements in the periodic table.
5. Fill in the rest of the table:
Atomic Number   Mass Number   # Protons   # Neutrons   # Electrons
2               4             2           2            2
11              23            11          12           11
15              31            15          16           15
37              85            37          48           37
53              127           53          74           53
Note: Atomic Number=Number of Protons=Number of Electrons and Mass Number=Number of Protons+Number of Neutrons
Contributors: Jiaxu (Josh) Wang (UCD)
|
Here we want to give an easy mathematical bootstrap argument why solutions to the time independent 1D Schrödinger equation (TISE) tend to be rather nice. First formally rewrite the differential form$$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \tag{1}$$into the int...
[Some time travel comments] Since in the previous paragraph we explained how travelling to the future will not necessarily result in you arriving in the future that would have occurred had you never time travelled (via the twin paradox), what is the reason that the past you travel back to has to be the past you learnt from historical records :?
@0ßelö7 Well, I'd omit the explanation of the notation on the slide itself, and since there seems to be two pairs of formulae, I'd just put one of the two and then say that there's another one with suitable substitutions.
I mean, "Hey, I bet you've always wondered how to prove X - here it is" is interesting. "Hey, you know that statement everyone knows how to prove but doesn't bother to write down? Here is the proof written down" significantly less so
Sorry, I have a quick question: for questions like this physics.stackexchange.com/questions/356260/… where the accepted answer clearly does not answer the original question, what is the best thing to do: downvote, flag, or just leave it?
So this question says express $u^0$ in terms of $u^j$ where $u$ is the four-velocity and I get what $u^0$ and $u^j$ are but I'm a bit confused how to go about this one? I thought maybe using the space-time interval and evaluating for $\frac{dt}{d\tau}$ but it's not working out for me... :/ Anyone give me a quick starter please? :p
Although a physics question, this is still important to chemistry. The delocalized electric field is related to the force (and therefore the repulsive potential) between two electrons. This in turn is what we need to solve the Schrödinger equation to describe molecules. Short answer: You can calculate the expectation value of the corresponding operator, which comes close to the mentioned superposition. — Feodoran, 13 hours ago
If we take an electron that's delocalised w.r.t position, how can one evaluate the electric field over some space? Is it some superposition or a sort of field with all the charge at the expectation value of the position?
@0ßelö7 I just looked back at chat and noticed Phase's question, I wasn't purposefully ignoring you - do you want me to look over it? Because I don't think I'll gain much personally from reading the slides.
Maybe it's just me having not really done much with Eigenbases but I don't recognise where I "put it in terms of M's eigenbasis". I just wrote it down for some vector v, rather than a space that contains all of the vectors v
Honey, I Shrunk the Kids is a 1989 American comic science fiction film. The directorial debut of Joe Johnston, produced by Walt Disney Pictures, it tells the story of an inventor who accidentally shrinks his and his neighbor's kids to a quarter of an inch with his electromagnetic shrinking machine and then unknowingly throws them out with the trash, leaving them to venture across the backyard to return home while fending off insects and other obstacles. Rick Moranis stars as Wayne Szalinski, the inventor who accidentally shrinks his children, Amy (Amy O'Neill) and Nick (Robert Oliveri). Marcia...
|
I am told that Fourier showed that we can represent an arbitrary continuous function, $f(x)$, as a convergent series in the elementary trigonometric functions
$$f(x) = \sum_{k = 0}^\infty a_k \cos(kx) + b_k \sin(kx)$$
Also, suppose that $\{\phi_n(x)\}^\infty_{n = 0}$ is a set of orthogonal functions with respect to a weight function $w(x)$ on the interval $(a, b)$. And let $f(x)$ be an arbitrary function defined on $(a, b)$. Then the generalised Fourier series is
$$f(x) = \sum_{k = 0}^\infty c_k \phi_k (x)$$
I have the following questions relating to this:
How does a Fourier $\sin$/$\cos$ series arise from a "normal" Fourier series $f(x) = \sum_{k = 0}^\infty a_k \cos(kx) + b_k \sin(kx)$?
How does this relate to the generalised Fourier series $f(x) = \sum_{k = 0}^\infty c_k \phi_k (x)$?
I would greatly appreciate clarification on this.
EDIT: When I say Fourier $\sin$/$\cos$ Series, I'm referring to what is known as "Fourier sine series" and "Fourier cosine series".
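For orientation (a summary sketch, not part of the original question): on $(-\pi,\pi)$ with weight $w(x)=1$, the family $\{1,\cos kx,\sin kx\}$ is orthogonal, so the trigonometric series is exactly the generalised Fourier series for that particular choice of $\phi_k$. The pure sine and cosine series then arise from parity, by extending a function given only on $(0,\pi)$ oddly or evenly:

```latex
% f even on (-\pi,\pi)  =>  every b_k = 0   (cosine series)
% f odd  on (-\pi,\pi)  =>  every a_k = 0   (sine series)
% Extending f from (0,\pi) evenly or oddly, the coefficients collapse to
a_k = \frac{2}{\pi}\int_0^{\pi} f(x)\cos kx \,\mathrm{d}x ,
\qquad
b_k = \frac{2}{\pi}\int_0^{\pi} f(x)\sin kx \,\mathrm{d}x ,
```

up to the usual $a_0/2$ normalisation of the constant term.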
|
The Monster is the largest of the 26 sporadic simple groups and has order
808 017 424 794 512 875 886 459 904 961 710 757 005 754 368 000 000 000
= 2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71.
It is not so much the size of its order that makes it hard to do actual calculations in the monster, but rather the dimensions of its smallest non-trivial irreducible representations (196 883 for the smallest, 21 296 876 for the next one, and so on).
In characteristic two there is an irreducible representation of one dimension less (196 882) which appears to be of great use to obtain information. For example, Robert Wilson used it to prove that The Monster is a Hurwitz group. This means that the Monster is generated by two elements g and h satisfying the relations
$g^2 = h^3 = (gh)^7 = 1 $
Geometrically, this implies that the Monster is the automorphism group of a Riemann surface of genus g satisfying the Hurwitz bound 84(g-1)=#Monster. That is,
g=9619255057077534236743570297163223297687552000000001=42151199 * 293998543 * 776222682603828537142813968452830193
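Both of these numbers are easy to sanity-check; a few lines of Python recover the decimal order from the prime factorization quoted above, and the genus from the Hurwitz relation #Monster = 84(g - 1):

```python
from math import prod

# prime factorization of |M| as quoted above
exponents = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
             17: 1, 19: 1, 23: 1, 29: 1, 31: 1,
             41: 1, 47: 1, 59: 1, 71: 1}
order = prod(p ** e for p, e in exponents.items())

# Hurwitz: |M| = 84(g - 1), so the genus of the monster curve is
genus = order // 84 + 1
print(order)
print(genus)
```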
Or, in analogy with the Klein quartic which can be constructed from 24 heptagons in the tiling of the hyperbolic plane, there is a finite region of the hyperbolic plane, tiled with heptagons, from which we can construct this monster curve by gluing the boundary in a specific way so that we get a Riemann surface with exactly 9619255057077534236743570297163223297687552000000001 holes. This finite part of the hyperbolic tiling (consisting of #Monster/7 heptagons) we’ll call the
empire of the monster and we’d love to describe it in more detail.
Look at the half-edges of all the heptagons in the empire (the picture above shows that every edge is cut in two by a blue geodesic). There are exactly #Monster such half-edges and they form a dessin d’enfant for the monster-curve.
If we label these half-edges by the elements of the Monster, then multiplication by g in the monster interchanges the two half-edges making up a heptagonal edge in the empire and multiplication by h in the monster takes a half-edge to the one encountered first by going counter-clockwise in the vertex of the heptagonal tiling. Because g and h generate the Monster, the dessin of the empire is just a concrete realization of the monster.
Because g is of order two and h is of order three, the two permutations they determine on the dessin give a group epimorphism $C_2 \ast C_3 = PSL_2(\mathbb{Z}) \rightarrow \mathbb{M} $ from the modular group $PSL_2(\mathbb{Z}) $ onto the Monster-group.
In noncommutative geometry, the group-algebra of the modular group $\mathbb{C} PSL_2(\mathbb{Z}) $ can be interpreted as the coordinate ring of a noncommutative manifold (because it is formally smooth in the sense of Kontsevich-Rosenberg or Cuntz-Quillen) and the group-algebra of the Monster $\mathbb{C} \mathbb{M} $ itself corresponds in this picture to a finite collection of ‘points’ on the manifold. Using this geometric viewpoint we can now ask the question
What does the Monster see of the modular group?
To make sense of this question, let us first consider the commutative equivalent : what does a point P see of a commutative variety X?
Evaluation of polynomial functions in P gives us an algebra epimorphism $\mathbb{C}[X] \rightarrow \mathbb{C} $ from the coordinate ring of the variety $\mathbb{C}[X] $ onto $\mathbb{C} $ and the kernel of this map is the maximal ideal $\mathfrak{m}_P $ of
$\mathbb{C}[X] $ consisting of all functions vanishing in P.
Equivalently, we can view the point $P= \mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P $ as the scheme corresponding to the quotient $\mathbb{C}[X]/\mathfrak{m}_P $. Call this the 0-th formal neighborhood of the point P.
This sounds pretty useless, but let us now consider higher-order formal neighborhoods. Call the affine scheme $\mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P^{n+1} $ the n-th formal neighborhood of P. Then the first neighborhood, that is, with coordinate ring $\mathbb{C}[X]/\mathfrak{m}_P^2 $, gives us tangent-information. Alternatively, it gives the best linear approximation of functions near P.
The second neighborhood $\mathbb{C}[X]/\mathfrak{m}_P^3 $ gives us the best quadratic approximation of functions near P, etc.
These successive quotients by powers of the maximal ideal $\mathfrak{m}_P $ form a system of algebra epimorphisms
$\ldots \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n+1}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} \rightarrow \ldots \ldots \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{2}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P} = \mathbb{C} $
and its inverse limit $\underset{\leftarrow}{lim}~\frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} = \hat{\mathcal{O}}_{X,P} $ is the completion of the local ring in P and contains all the infinitesimal information (to any order) of the variety X in a neighborhood of P. That is, this completion $\hat{\mathcal{O}}_{X,P} $ contains
all information that P can see of the variety X.
In case P is a smooth point of X, then X is a manifold in a neighborhood of P and then this completion
$\hat{\mathcal{O}}_{X,P} $ is isomorphic to the algebra of formal power series $\mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ where the $x_i $ form a local system of coordinates for the manifold X near P.
Right, after this lengthy recollection, back to our question
what does the monster see of the modular group? Well, we have an algebra epimorphism
$\pi~:~\mathbb{C} PSL_2(\mathbb{Z}) \rightarrow \mathbb{C} \mathbb{M} $
and in analogy with the commutative case, all information the Monster can gain from the modular group is contained in the $\mathfrak{m} $-adic completion
$\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} = \underset{\leftarrow}{lim}~\frac{\mathbb{C} PSL_2(\mathbb{Z})}{\mathfrak{m}^n} $
where $\mathfrak{m} $ is the kernel of the epimorphism $\pi $ sending the two free generators of the modular group $PSL_2(\mathbb{Z}) = C_2 \ast C_3 $ to the permutations g and h determined by the dessin of the heptagonal tiling of the Monster’s empire.
As it is a hopeless task to determine the Monster-empire explicitly, it seems even more hopeless to determine the kernel $\mathfrak{m} $ let alone the completed algebra… But, (surprise) we can compute $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} $ as explicitly as in the commutative case we have $\hat{\mathcal{O}}_{X,P} \simeq \mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ for a point P on a manifold X.
Here are the details: the quotient $\mathfrak{m}/\mathfrak{m}^2 $ has a natural structure of $\mathbb{C} \mathbb{M} $-bimodule. The group algebra of the Monster is a semi-simple algebra, that is, a direct sum of full matrix algebras of sizes corresponding to the dimensions of the irreducible Monster representations. That is,
$\mathbb{C} \mathbb{M} \simeq \mathbb{C} \oplus M_{196883}(\mathbb{C}) \oplus M_{21296876}(\mathbb{C}) \oplus \ldots \ldots \oplus M_{258823477531055064045234375}(\mathbb{C}) $
with exactly 194 components (the number of irreducible Monster-representations). For any $\mathbb{C} \mathbb{M} $-bimodule $M $ one can form the tensor-algebra
$T_{\mathbb{C} \mathbb{M}}(M) = \mathbb{C} \mathbb{M} \oplus M \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus \ldots \ldots $
and applying the formal neighborhood theorem for formally smooth algebras (such as $\mathbb{C} PSL_2(\mathbb{Z}) $) due to Joachim Cuntz (left) and Daniel Quillen (right) we have an isomorphism of algebras
$\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} \simeq \widehat{T_{\mathbb{C} \mathbb{M}}(\mathfrak{m}/\mathfrak{m}^2)} $
where the right-hand side is the completion of the tensor-algebra (at the unique graded maximal ideal) of the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $, so we’d better describe this bimodule explicitly.
Okay, so what’s a bimodule over a semisimple algebra of the form $S=M_{n_1}(\mathbb{C}) \oplus \ldots \oplus M_{n_k}(\mathbb{C}) $? Well, a
simple S-bimodule must be either (1) a factor $M_{n_i}(\mathbb{C}) $ with all other factors acting trivially or (2) the full space of rectangular matrices $M_{n_i \times n_j}(\mathbb{C}) $ with the factor $M_{n_i}(\mathbb{C}) $ acting on the left, $M_{n_j}(\mathbb{C}) $ acting on the right and all other factors acting trivially.
That is, any S-bimodule can be represented by a quiver (that is a directed graph) on k vertices (the number of matrix components) with a loop in vertex i corresponding to each simple factor of type (1) and a directed arrow from i to j corresponding to every simple factor of type (2).
That is, for the Monster, the bimodule $\mathfrak{m}/\mathfrak{m}^2 $ is represented by a quiver on 194 vertices and now we only have to determine how many loops and arrows there are at or between vertices.
Using Morita equivalences and standard representation theory of quivers it isn’t exactly rocket science to determine that the number of arrows between the vertices corresponding to the irreducible Monster-representations $S_i $ and $S_j $ is equal to
$dim_{\mathbb{C}}~Ext^1_{\mathbb{C} PSL_2(\mathbb{Z})}(S_i,S_j)-\delta_{ij} $
Now, I’ve been wasting a lot of time already here explaining what representations of the modular group have to do with quivers (see for example here or some other posts in the same series) and for quiver-representations we all know how to compute Ext-dimensions in terms of the Euler-form applied to the dimension vectors.
Right, so for every Monster-irreducible $S_i $ we have to determine the corresponding dimension-vector $~(a_1,a_2;b_1,b_2,b_3) $ for the quiver
$\xymatrix{ & & & &
\vtx{b_1} \\ \vtx{a_1} \ar[rrrru]^(.3){B_{11}} \ar[rrrrd]^(.3){B_{21}} \ar[rrrrddd]_(.2){B_{31}} & & & & \\ & & & & \vtx{b_2} \\ \vtx{a_2} \ar[rrrruuu]_(.7){B_{12}} \ar[rrrru]_(.7){B_{22}} \ar[rrrrd]_(.7){B_{23}} & & & & \\ & & & & \vtx{b_3}} $
Now the dimensions $a_i $ are the dimensions of the +/-1 eigenspaces for the order 2 element g in the representation and the $b_i $ are the dimensions of the eigenspaces for the order 3 element h. So, we have to determine to which conjugacy classes g and h belong, and from Wilson’s paper mentioned above these are classes 2B and 3B in standard Atlas notation.
So, for each of the 194 irreducible Monster-representations we look up the character values at 2B and 3B (see below for the first batch of those) and these together with the dimensions determine the dimension vector $~(a_1,a_2;b_1,b_2,b_3) $.
For example take the 196883-dimensional irreducible. Its 2B-character is 275 and the 3B-character is 53. So we are looking for a dimension vector such that $a_1+a_2=196883, a_1-275=a_2 $ and $b_1+b_2+b_3=196883, b_1-53=b_2=b_3 $ giving us for that representation the dimension vector of the quiver above $~(98579,98304,65663,65610,65610) $.
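This bookkeeping is easy to automate. A short Python sketch (the function name is ours; the character values 275 and 53 are the ones quoted above) recovers the dimension vector from the dimension and the 2B- and 3B-character values:

```python
# Recover the quiver dimension vector (a1, a2; b1, b2, b3) of a Monster
# irreducible from its dimension and its character values at 2B and 3B:
#   a1 + a2 = dim,        a1 - a2 = chi(2B)
#   b1 + b2 + b3 = dim,   b1 - b2 = chi(3B),  b2 = b3
def dimension_vector(dim, chi_2B, chi_3B):
    a1 = (dim + chi_2B) // 2      # +1 eigenspace of the order-2 element g
    a2 = (dim - chi_2B) // 2      # -1 eigenspace
    b2 = (dim - chi_3B) // 3      # the two non-trivial eigenspaces of h
    b1 = b2 + chi_3B              # eigenvalue-1 eigenspace of the order-3 element h
    return (a1, a2, b1, b2, b2)

# The 196883-dimensional irreducible: chi(2B) = 275, chi(3B) = 53.
print(dimension_vector(196883, 275, 53))
# (98579, 98304, 65663, 65610, 65610)
```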
Okay, so for each of the 194 irreducibles $S_i $ we have determined a dimension vector $~(a_1(i),a_2(i);b_1(i),b_2(i),b_3(i)) $, then standard quiver-representation theory asserts that the number of loops in the vertex corresponding to $S_i $ is equal to
$dim(S_i)^2 + 1 - a_1(i)^2-a_2(i)^2-b_1(i)^2-b_2(i)^2-b_3(i)^2 $
and that the number of arrows from vertex $S_i $ to vertex $S_j $ is equal to
$dim(S_i)dim(S_j) - a_1(i)a_1(j)-a_2(i)a_2(j)-b_1(i)b_1(j)-b_2(i)b_2(j)-b_3(i)b_3(j) $
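Plugging the dimension vector of the 196883-dimensional example above into these two formulas is a one-liner (a sketch; the function names are ours):

```python
# Count loops and arrows in the Monster quiver from the two formulas above.
def loops(dim, dv):
    a1, a2, b1, b2, b3 = dv
    return dim**2 + 1 - a1**2 - a2**2 - b1**2 - b2**2 - b3**2

def arrows(dim_i, dv_i, dim_j, dv_j):
    return dim_i * dim_j - sum(x * y for x, y in zip(dv_i, dv_j))

# Sanity check: the trivial representation has dimension vector (1, 0; 1, 0, 0)
# and carries no loops.
assert loops(1, (1, 0, 1, 0, 0)) == 0

# The 196883-dimensional irreducible, with the dimension vector from the text.
dv = (98579, 98304, 65663, 65610, 65610)
print(loops(196883, dv))   # 6460446264
```

Note that the arrow formula is symmetric in i and j, which is why the quivers obtained are symmetric.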
This data then determines completely the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $ and hence the structure of the completion $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} $ containing all information the Monster can gain from the modular group.
But then, one doesn’t have to go for the full regular representation of the Monster. Any faithful permutation representation will do, so we might as well go for the one of minimal dimension.
That one is known to correspond to the largest maximal subgroup of the Monster which is known to be a two-fold extension $2.\mathbb{B} $ of the Baby-Monster. The corresponding permutation representation is of dimension 97239461142009186000 and decomposes into Monster-irreducibles
$S_1 \oplus S_2 \oplus S_4 \oplus S_5 \oplus S_9 \oplus S_{14} \oplus S_{21} \oplus S_{34} \oplus S_{35} $
(in standard Atlas-ordering) and hence, repeating the arguments above, we get a quiver on just 9 vertices! The actual numbers of loops and arrows (I forgot to mention this, but the quivers obtained are actually symmetric) were found after laborious computations mentioned in this post and the details I’ll make available here.
Anyone who can spot a relation between the numbers obtained and any other part of mathematics will obtain quantities of genuine (i.e. non-InBev) Belgian beer…
A comprehensive reference of transformation matrices for basic OpenGL work, intended as a quick reference while coding OpenGL without helper libraries.
There are many tutorials and guides online that explain how to use these matrices and the mathematical fundamentals behind them. If you are new to the subject, I recommend reading this article, which is an approachable overview of the whole process.
If you are looking for a deeper explanation of affine/homogeneous mathematics and how these matrices are constructed, I
highly recommend Essential Mathematics for Games and Interactive Applications. Of all the books I’ve used for my personal research, this one stands out as the most complete and comprehensive. For example, many books skip the View transformation, even the OpenGL red book, but this one goes into the nitty-gritty details of everything.

Model Operations
The \(Model \rightarrow World\) matrix is the resulting matrix of an ordered multiplication of transformations (scale first, then rotate, then translate).
\(M_{model \rightarrow world} = T_{ranslate} \cdot R_{otate} \cdot S_{cale}\)
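As a sketch of this composition (plain Python with hand-rolled 4×4 helpers so it stays self-contained; in real code you would use your math library's matrix type):

```python
import math

# Minimal 4x4 helpers, column-vector convention as used in this reference.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rotate_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def scale(x, y, z):
    return [[x, 0, 0, 0], [0, y, 0, 0], [0, 0, z, 0], [0, 0, 0, 1]]

# Scale is applied first, then rotation, then translation:
model = mat_mul(translate(1, 2, 3), mat_mul(rotate_z(math.pi / 2), scale(2, 2, 2)))

# The model-space point (1, 0, 0) scales to (2, 0, 0), rotates to (0, 2, 0),
# then translates to (1, 4, 3).
p = [1, 0, 0, 1]
world = [sum(model[i][j] * p[j] for j in range(4)) for i in range(4)]
print([round(v, 6) for v in world])   # [1.0, 4.0, 3.0, 1.0]
```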
Translation Matrix
\(\begin{aligned}T_{(x,y,z)} =\begin{bmatrix}1 & 0 & 0 & x\\
0 & 1 & 0 & y\\ 0 & 0 & 1 & z\\ 0 & 0 & 0 & 1\\ \end{bmatrix} \end{aligned} \)

Rotation Matrices
\(\begin{aligned}R_x =\begin{bmatrix}1 & 0 & 0 & 0 \\
0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} , R_y = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} , R_z = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)
Multiply all the matrices to apply the general rotation, shown below. \(\alpha\) = x rotation, \(\beta\) = y rotation and \(\gamma\) = z rotation. Note that the rotations are applied in the following order: x first, then y, then z.
\(\begin{aligned}R_z \cdot R_y \cdot R_x =\begin{bmatrix}\cos\beta\cos\gamma & \cos\gamma\sin\alpha\sin\beta - \cos\alpha\sin\gamma & \cos\alpha\cos\gamma\sin\beta + \sin\alpha\sin\gamma & 0 \\
\cos\beta\sin\gamma & \cos\alpha\cos\gamma + \sin\alpha\sin\beta\sin\gamma & -\cos\gamma\sin\alpha + \cos\alpha\sin\beta\sin\gamma & 0 \\ -\sin\beta & \cos\beta\sin\alpha & \cos\alpha\cos\beta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)

Scale Matrix
\(\begin{aligned}S_{(x,y,z)} =\begin{bmatrix}x & 0 & 0 & 0 \\
0 & y & 0 & 0 \\ 0 & 0 & z & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)

Shear Matrix
The following matrix combines all shearing operations; the notation \(Sr_{xy}\) denotes a shear of \(x\) by \(y\). Shearing is rarely used in practice.
\(\begin{aligned}Sr_{(xy,xz,yx,yz,zx,zy)} =\begin{bmatrix}1 & Sr_{yx} & Sr_{zx} & 0 \\
Sr_{xy} & 1 & Sr_{zy} & 0 \\ Sr_{xz} & Sr_{yz} & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)

View Matrices
There is more than one technique to compute the \(World \rightarrow View\) matrix. Covered here are the Inverse model calculations and the Look At calculations.
Matrix Inverse
You can treat the camera as a normal model. Apply initial orientation, then rotation, then translation to obtain \(V\). The inverse of this matrix is your \(World \rightarrow View\) matrix. Calculating the inverse of a 4x4 matrix is costly, so special properties are used to simplify the inverse operation.
\(V_{view \rightarrow world} = T_{ranslate} \cdot R_{otate} \cdot V_{orientation}\)
\(V_{world \rightarrow view} = V_{view \rightarrow world}^{-1}\)
First, we define a starting view orientation. We will use the standard right-handed OpenGL convention: \(\vec{Vside} = (1,0,0)\), \(\vec{Vup} = (0,1,0)\), \(\vec{Vforward} = (0,0,1)\), so that \(-\vec{Vforward} = (0,0,-1)\). This defines a camera facing \(-z\), with \(y\) going up and \(x\) going right.
\(\begin{aligned}V_{orientation} =\begin{bmatrix}Vside_x & Vup_x & -Vforward_x & 0 \\
Vside_y & Vup_y & -Vforward_y & 0 \\ Vside_z & Vup_z & -Vforward_z & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)
Next, we compute our translation and rotation using previous matrices. Multiply the starting orientation with rotation, then translation to obtain \(V\).
\(V_{view \rightarrow world} = T_{ranslate} \cdot R_{otate} \cdot V_{orientation}\)
To invert \(V\), we can use special affine properties of our transformations and invert it with the following calculation.
\(\begin{aligned}V^{-1}_{world \rightarrow view} =\begin{bmatrix}R^{-1} & -(R^{-1} \cdot V_{pos}) \\
0^T & 1 \\ \end{bmatrix} = \begin{bmatrix} R^T & -(R^T \cdot V_{pos}) \\ 0^T & 1 \\ \end{bmatrix} \end{aligned} \)
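A minimal sketch of this shortcut (plain Python, hand-rolled helpers): for an affine matrix whose upper-left 3×3 block is a pure rotation, the inverse is just the transpose of that block plus a rotated, negated translation, with no general 4×4 inversion needed.

```python
import math

def affine_inverse(M):
    # Transpose of the rotation block R (valid because R is orthonormal).
    R_T = [[M[j][i] for j in range(3)] for i in range(3)]
    t = [M[0][3], M[1][3], M[2][3]]
    # Translation part of the inverse: -(R^T . t)
    neg_RTt = [-sum(R_T[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [R_T[0] + [neg_RTt[0]],
            R_T[1] + [neg_RTt[1]],
            R_T[2] + [neg_RTt[2]],
            [0, 0, 0, 1]]

# Example: a rotation about y by 30 degrees plus a translation (4, 5, 6).
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
V = [[c, 0, s, 4], [0, 1, 0, 5], [-s, 0, c, 6], [0, 0, 0, 1]]
Vinv = affine_inverse(V)

# Check that V * Vinv is the identity.
I = [[sum(V[i][k] * Vinv[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(4) for j in range(4))
```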
For example:\(\begin{aligned}V_{view \rightarrow world} =\begin{bmatrix}R_{00} & R_{01} & R_{02} & Vpos_x \\
R_{10} & R_{11} & R_{12} & Vpos_y \\ R_{20} & R_{21} & R_{22} & Vpos_z \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} , V^{-1}_{world \rightarrow view} = \begin{bmatrix} R_{00} & R_{10} & R_{20} & -R_{00}Vpos_x - R_{10}Vpos_y - R_{20}Vpos_z \\ R_{01} & R_{11} & R_{21} & -R_{01}Vpos_x - R_{11}Vpos_y - R_{21}Vpos_z \\ R_{02} & R_{12} & R_{22} & -R_{02}Vpos_x - R_{12}Vpos_y - R_{22}Vpos_z \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)

Look At Operation
Another solution to compute our \(World \rightarrow View\) matrix is using the Look At formula and operations. We define \(eye\) as our camera position, \(target\) as our look-at target point, \(up\) as the upward direction \((0,1,0)\).
First, we compute the rotational portion of our matrix.
\( \hat{Vforward} = \frac{\vec{target} - \vec{eye}}{|\vec{target} - \vec{eye}|} \)
\( \hat{Vside} = \frac{\vec{Vforward} \times \vec{up}}{|\vec{Vforward} \times \vec{up}|} \)
\( \hat{Vup} = \frac{\vec{Vside} \times \vec{Vforward}}{|\vec{Vside} \times \vec{Vforward}|} \)
\(\begin{aligned}V_{rot} =\begin{bmatrix}Vside_x & Vside_y & Vside_z \\
Vup_x & Vup_y & Vup_z \\ -Vforward_x & -Vforward_y & -Vforward_z \\ \end{bmatrix} \end{aligned} \)

Note that the rows (not the columns) are \(\vec{Vside}\), \(\vec{Vup}\) and \(-\vec{Vforward}\): \(V_{rot}\) is the world \(\rightarrow\) view rotation, i.e. the transpose of the camera's orientation matrix.

Next, using the inverse of our camera translation \(-\vec{eye}\), we compute the translation vector \(V_{pos}\).

\(V_{pos} = V_{rot} \cdot -\vec{eye}\)

\(\begin{aligned}V_{pos} =\begin{bmatrix}Vside_x & Vside_y & Vside_z \\ Vup_x & Vup_y & Vup_z \\ -Vforward_x & -Vforward_y & -Vforward_z \\ \end{bmatrix} \cdot \begin{bmatrix} -eye_x \\ -eye_y \\ -eye_z \\ \end{bmatrix} \end{aligned} \)

Finally, we compose our view matrix using \(V_{rot}\) and \(V_{pos}\).

\(\begin{aligned}V^{-1}_{world \rightarrow view} =\begin{bmatrix}Vside_x & Vside_y & Vside_z & Vpos_x \\ Vup_x & Vup_y & Vup_z & Vpos_y \\ -Vforward_x & -Vforward_y & -Vforward_z & Vpos_z \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)

Projection Matrices

Perspective Projection
This matrix assumes a symmetric projection volume. It is equivalent to the
gluPerspective() call. We will need \(\theta_{fov}\) which is your field of view (something close to 90), \(ar\) the aspect ratio of your projection, \(n\) the near clipping plane, \(f\) the far clipping plane.
\( d = \cot(\frac{\theta_{fov}}{2}) \)
\( ar = \frac{width}{height} \)
\(\begin{aligned}P_{persp} =\begin{bmatrix}\frac{d}{ar} & 0 & 0 & 0 \\
0 & d & 0 & 0 \\ 0 & 0 & \frac{n+f}{n-f} & \frac{2nf}{n-f} \\ 0 & 0 & -1 & 0 \\ \end{bmatrix} \end{aligned} \)

Oblique Projection
This projection works for non-symmetric projections and is a generalized version of the perspective projection. It is equivalent to the
glFrustum() call. We will need \(n\) the near clipping plane, \(f\) the far clipping plane, \([l, r]\) the left and right interval along \(x\) on the near clipping plane, \([t, b]\) the top and bottom interval along \(y\) on the near clipping plane.
\(\begin{aligned}P_{obl} =\begin{bmatrix}\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & \frac{n+f}{n-f} & \frac{2nf}{n-f} \\ 0 & 0 & -1 & 0 \\ \end{bmatrix} \end{aligned} \)

Orthographic Projection
This projection does not resize further objects, it is useful for 2D games and special applications. It is equivalent to the
glOrtho() call. We will need \(n\) the near clipping plane, \(f\) the far clipping plane, \([l, r]\) the left and right interval along \(x\) on the near clipping plane, \([t, b]\) the top and bottom interval along \(y\) on the near clipping plane.
\(\begin{aligned}P_{ortho} =\begin{bmatrix}\frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\
0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\ 0 & 0 & -\frac{2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)

Oblique Orthographic Projection
A special case of the orthographic projection, with a shear along the \(z\) axis.
\(\begin{aligned}P_{obl ortho} =\begin{bmatrix}\frac{2}{r-l} & 0 & \frac{1}{r-l} & -\frac{r+l-n}{r-l} \\
0 & \frac{2}{t-b} & \frac{1}{t-b} & -\frac{t+b-n}{t-b} \\ 0 & 0 & -\frac{2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} \)

Final Notes
That should cover most of your needs. If there is a useful transform missing, or you spot a mathematical error, please leave a comment.
Cover image : Snail by iQ.
This set of Advanced Network Theory Questions and Answers focuses on “Advanced Problems on Network Theorems – 2”.
1. A network contains linear resistors and ideal voltage sources. If the values of all the resistors are doubled, then the voltage across each resistor is __________
a) Halved b) Doubled c) Increases by 2 times d) Remains same View Answer
Explanation: By Ohm’s law, I = V/R. If every resistance is doubled, the current through each resistor is halved, so the voltage V = IR across each resistor remains the same.
2. A voltage waveform \(V(t) = 12t^2\) is applied across a 1 H inductor for t ≥ 0, with the initial current through it being zero. The current through the inductor for t ≥ 0 is given by __________
a) \(12t\) b) \(24t\) c) \(12t^3\) d) \(4t^3\) View Answer
Explanation: We know that, I = \(\frac{1}{L} \int_0^t V \,dt\)
= \(\frac{1}{1}\int_0^t 12 t^2 \,dt\)
= \(4t^3\).
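A quick numerical sanity check of this integral (plain Python; the midpoint-rule integrator is ours):

```python
# With L = 1 H and V(t) = 12 t^2, the current i(t) = (1/L) * integral of V
# from 0 to t should equal 4 t^3.
def inductor_current(T, steps=100000):
    L, dt, i = 1.0, T / steps, 0.0
    for k in range(steps):
        t = (k + 0.5) * dt          # midpoint rule
        i += 12 * t * t * dt / L
    return i

# 4 t^3 = 32 at t = 2 s.
assert abs(inductor_current(2.0) - 32.0) < 1e-3
```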
3. The linear circuit element among the following is ___________
a) Capacitor b) Inductor c) Resistor d) Capacitor & Inductor View Answer
Explanation: A linear circuit element does not change its value with the applied voltage or current. Of the elements listed, the resistor is the one whose value does not change with voltage or current.
4. In the circuit shown, V_C is zero at t = 0 s. For t > 0, the capacitor current I_C(t), where t is in seconds, is ___________
a) \(0.50e^{-25t}\) mA b) \(0.25e^{-25t}\) mA c) \(0.50e^{-12.5t}\) mA d) \(0.25e^{-6.25t}\) mA View Answer
Explanation: The capacitor voltage \(V_C(t) = V_C(\infty) - [V_C(\infty)-V_C(0)]e^{-t/RC}\)
R = 20 || 20 = \(\frac{20×20}{20+20} = \frac{400}{40}\) = 10 kΩ
\(V_C(\infty)\) = 10 × \(\frac{20}{20+20}\) = 5 V
Given, \(V_C(0)\) = 0
∴ \(V_C(t) = 5 - (5-0)e^{-t/(10×10^3 × 4×10^{-6})} = 5(1 - e^{-25t})\)
\(I_C(t) = C\frac{dV_C (t)}{dt} = 4×10^{-6} \frac{d}{dt}5(1-e^{-25t}) = 4×10^{-6} × 5 × 25e^{-25t}\)
∴ \(I_C(t) = 0.50e^{-25t}\) mA.
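The time constant and initial current can be double-checked with a few lines of Python (values taken from the explanation above):

```python
import math

# R = 10 kΩ (two 20 kΩ in parallel), C = 4 µF, V_C(∞) = 5 V.
R, C, V_inf = 10e3, 4e-6, 5.0
inv_tau = 1 / (R * C)               # 1/RC = 25 s^-1
assert abs(inv_tau - 25) < 1e-9

def i_C(t):
    # I_C(t) = C dV_C/dt with V_C(t) = V_inf (1 - e^{-t/RC})
    return C * V_inf * inv_tau * math.exp(-inv_tau * t)

# Initial current is 0.5 mA, decaying as e^{-25 t}.
assert abs(i_C(0) * 1e3 - 0.5) < 1e-9
```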
Explanation: Equivalent impedance = (5 + j3) || (5 - j3)
= \(\frac{(5+j3)×(5 - j3)}{(5+j3) + (5 - j3)} \)
= \(\frac{25+9}{10}\) = 3.4 Ω
V_AB = Current × Impedance
= 5∠30° × 3.4
= 17∠30° V.
6. For the circuit given below, the driving point impedance is given by, Z(s) = \(\frac{0.2s}{s^2+0.1s+2} \). The component values are _________
a) L = 5 H, R = 0.5 Ω, C = 0.1 F b) L = 0.1 H, R = 0.5 Ω, C = 5 F c) L = 5 H, R = 2 Ω, C = 0.1 F d) L = 0.1 H, R = 2 Ω, C = 5 F View Answer
Explanation: Driving point impedance = R || sL || \(\frac{1}{sC} \)
= \(\Big\{\frac{(R)(\frac{1}{sC})}{R + \frac{1}{sC}}\Big\}\) || sL
= \(\frac{\frac{R}{1+sRC} \cdot sL}{\frac{R}{1+sRC}+sL} \)
= \(\frac{sRL}{s^2 RLC+sL+R} \)
Given that, Z(s) = \(\frac{0.2s}{s^2+0.1s+2} \)
∴ On comparing, we get L = 0.1 H, R = 2 Ω, C = 5 F.
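The comparison can be verified numerically by evaluating both expressions at a few complex frequencies (plain Python; function names are ours):

```python
# Check that L = 0.1 H, R = 2 Ω, C = 5 F reproduce the given
# driving-point impedance Z(s) = 0.2 s / (s^2 + 0.1 s + 2).
L, R, C = 0.1, 2.0, 5.0

def Z_parallel(s):
    # R || sL || 1/(sC), computed via admittances
    return 1 / (1 / R + 1 / (s * L) + s * C)

def Z_given(s):
    return 0.2 * s / (s**2 + 0.1 * s + 2)

for s in [1j, 2 + 3j, 0.5j, 10 + 1j]:
    assert abs(Z_parallel(s) - Z_given(s)) < 1e-9
```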
Explanation: In mesh aef: 8(I₁ - I₃) + 3(I₁ - I₂) = 15
Or, 11I₁ - 3I₂ - 8I₃ = 15
In mesh efd: 5(I₂ - I₃) + 2I₂ + 3(I₂ - I₁) = 0
Or, -3I₁ + 10I₂ - 5I₃ = 0
In mesh abcde: 10I₃ + 5(I₃ - I₂) + 8(I₃ - I₁) = 0
Or, -8I₁ - 5I₂ + 23I₃ = 0
Solving these three loop equations by Cramer’s rule, I₃ = current through the 10 Ω resistor = 1.23 A
∴ Power loss (P) = \(I_3^2 r\) = (1.23)² × 10 = 15.13 W.
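As a check, the loop equations can be solved with a small Cramer's-rule script (plain Python; only the equations above are assumed). Note that the equations as printed give I₃ = 1425/1168 ≈ 1.22 A and P ≈ 14.88 W, very close to the quoted 1.23 A and 15.13 W; the small difference presumably comes from rounding in the original.

```python
# Cramer's rule on the three loop equations:
#   11 I1 -  3 I2 -  8 I3 = 15
#   -3 I1 + 10 I2 -  5 I3 = 0
#   -8 I1 -  5 I2 + 23 I3 = 0
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[11, -3, -8], [-3, 10, -5], [-8, -5, 23]]
b = [15, 0, 0]
A3 = [[A[r][0], A[r][1], b[r]] for r in range(3)]  # replace 3rd column by b
I3 = det3(A3) / det3(A)        # = 1425 / 1168

print(round(I3, 3), round(I3**2 * 10, 2))   # 1.22 14.88
```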
8. The switch S in the circuit shown in the figure is ideal. If the switch is repeatedly closed for 1 ms and opened for 1 ms, the average value of i(t) is ____________
a) 0.25 mA b) 0.35 mA c) 0.5 mA d) 1 mA View Answer
Explanation: When the switch is closed, i = \(\frac{5}{10 × 10^{3}}\) = 0.5 × 10⁻³ A = 0.5 mA
As the switch is repeatedly closed and opened for equal intervals, i(t) is a square wave with a 50% duty cycle.
So the average value of the current is (\(\frac{0.5}{2}\)) = 0.25 mA.
Explanation: The circuit is as shown in the figure below.
\(R_{eq} = 5 + \frac{10(R_{eq}+5)}{10 + 5 + R_{eq}}\)
Or, \(R_{eq}^2 + 15R_{eq} = 5R_{eq} + 75 + 10R_{eq} + 50\)
Or, \(R_{eq}^2 = 125\), so \(R_{eq} = \sqrt{125}\) = 11.18 Ω.
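The self-referential ladder equation can be checked directly (plain Python):

```python
import math

# R_eq = 5 + 10 (R_eq + 5) / (15 + R_eq) rearranges to R_eq^2 = 125.
R_eq = math.sqrt(125)

# sqrt(125) is indeed a fixed point of the equation, and equals ~11.18 Ω.
assert abs(R_eq - (5 + 10 * (R_eq + 5) / (15 + R_eq))) < 1e-9
assert abs(R_eq - 11.18) < 0.01
```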
10. A particular current is made up of two components: a 10 A DC component and a sine wave of peak value 14.14 A. The average value of the current is __________
a) 0 b) 24.14 A c) 10 A d) 14.14 A View Answer
Explanation: Average dc electric current = 10 A
Average ac electric current = 0 A as it is alternating in nature.
Average electric current = 10 + 0 = 10 A.
11. Given that R₁ = 36 Ω and R₂ = 75 Ω, each having a tolerance of ±5%, are connected in series. The value of the resultant resistance is ___________
a) 111 ± 0 Ω b) 111 ± 2.77 Ω c) 111 ± 5.55 Ω d) 111 ± 7.23 Ω View Answer
Explanation: R₁ = 36 ± 5% = 36 ± 1.8 Ω
R₂ = 75 ± 5% = 75 ± 3.75 Ω
∴ R₁ + R₂ = 111 ± 5.55 Ω.
Explanation: In order for 600 C of charge to be delivered to the 100 V source, the current must flow anti-clockwise.
\(i = \frac{dQ}{dt} = \frac{600}{60}\) = 10 A
Applying KVL we get
V₁ + 60 - 100 = 10 × 20 ⇒ V₁ = 240 V.
13. The energy required to charge a 10 μF capacitor to 100 V is ____________
a) 0.01 J b) 0.05 J c) 5 × 10⁻⁹ J d) 10 × 10⁻⁹ J View Answer
Explanation: E = \(\frac{1}{2} CV^2\)
= 5 × 10⁻⁶ × 100²
= 0.05 J.
14. Among the following, the active element of electrical circuit is ____________
a) Voltage source b) Current source c) Resistance d) Voltage and current source both View Answer
Explanation: We know that active elements are the ones that are used to drive the circuit. They also cause the electric current to flow through the circuit or the voltage drop across the element. Here only the voltage and current source are the ones satisfying the above conditions.
Explanation: In the given figure, converting the delta to a wye gives \(R_X = R_Y = R_Z = \frac{5×5}{5+5+5}\) = 1.67 Ω
Here, R = [(R_X + 2) || (R_Y + 3)] + R_Z
= (3.67 || 4.67) + 1.67
= \(\frac{3.67 × 4.67}{3.67 + 4.67}\) + 1.67
= \(\frac{17.1389}{8.34}\) + 1.67
= 3.725 Ω
∴ I = \(\frac{V}{R} = \frac{2}{3.725}\) = 0.54 A.
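A short Python check of the delta-to-wye conversion and the final current (function names are ours):

```python
# Three equal 5 Ω resistors in delta become three 5*5/15 ≈ 1.67 Ω
# resistors in wye.
def delta_to_wye(r_ab, r_bc, r_ca):
    s = r_ab + r_bc + r_ca
    return (r_ab * r_ca / s, r_ab * r_bc / s, r_bc * r_ca / s)

def parallel(a, b):
    return a * b / (a + b)

r_x, r_y, r_z = delta_to_wye(5, 5, 5)
R = parallel(r_x + 2, r_y + 3) + r_z   # ≈ 3.72 Ω
I = 2 / R                              # ≈ 0.54 A

assert abs(I - 0.54) < 0.01
```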
Sanfoundry Global Education & Learning Series – Network Theory.
To practice advanced questions and answers on all areas of Network Theory,
here is complete set of 1000+ Multiple Choice Questions and Answers.
Radar engineering details

Radar engineering details are technical details pertaining to the components of a radar and their ability to detect the return energy from moving scatterers — determining an object's position or obstruction in the environment.
This includes field of view in terms of solid angle and maximum unambiguous range and velocity, as well as angular, range and velocity resolution.
Applications of radar include adaptive cruise control, autonomous landing guidance, radar altimeter, air traffic management, early-warning radar, fire-control radar, forward warning collision sensing, ground penetrating radar, surveillance, and weather forecasting.
This is done electronically, with a phased array antenna, or mechanically by rotating a physical antenna.
The emitter and the receiver can be in the same place, as with monostatic radars, or be separated, as with bistatic radars.
Figures of merit of an ESA are the bandwidth, the effective isotropically radiated power (EIRP), the G_R/T quotient, and the field of view.
For example, the generation of wideband monopulse receive patterns depends on a feed network which combines two subarrays using a wideband hybrid coupler.
Active versus passive: In an active electronically scanned array (AESA), each antenna is connected to a T/R module featuring solid-state power amplification (SSPA). An AESA has distributed power amplification and offers high performance and reliability, but is expensive. In a passive electronically scanned array (PESA), the array is connected to a single T/R module featuring vacuum electronics devices (VED). A PESA has centralized power amplification and offers cost savings, but requires low-loss phase shifters.
Aperture: The antenna aperture of a radar sensor is real or synthetic. Real-beam radar sensors allow for real-time target sensing. Synthetic aperture radar (SAR) allows for an angular resolution beyond the real beamwidth by moving the aperture over the target and adding the echoes coherently.
Architecture: The field of view is scanned with highly directive frequency-orthogonal (slotted waveguide), spatially orthogonal (switched beamforming networks), or time-orthogonal beams. In the case of time-orthogonal scanning, the beam of an ESA is scanned preferably by applying a progressive time delay Δτ, constant over frequency, instead of a progressive phase shift, constant over frequency. Usage of true-time-delay (TTD) phase shifters avoids beam squinting with frequency. The scanning angle θ is expressed as a function of the phase shift progression β, which is a function of the frequency, and of the progressive time delay Δτ, which is invariant with frequency.
Beam forming: The beam is formed in the digital (digital beamforming (DBF)), intermediate frequency (IF), optical, or radio frequency (RF) domain.
ACMS Abstracts: Spring 2018

Thomas Fai (Harvard) The Lubricated Immersed Boundary Method
Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.
Michael Herty (RWTH-Aachen) Opinion Formation Models and Mean-Field Games Techniques
Mean-Field Games are games with a continuum of players that incorporate the time dimension through a control-theoretic approach. Recently, simpler approaches relying on reply strategies have been proposed. Based on an example in opinion formation modeling we explore the link between differentiability notions and mean-field game approaches. For numerical purposes a model predictive control framework is introduced consistent with the mean-field game setting that allows for efficient simulation. Numerical examples are also presented as well as stability results on the derived control.
Lee Panetta (Texas A&M) Traveling waves and pulsed energy emissions seen in numerical simulations of electromagnetic wave scattering by ice crystals
The numerical simulation of single-particle scattering of electromagnetic energy plays a fundamental role in remote sensing studies of the atmosphere and oceans, and in efforts to model aerosol "radiative forcing" processes in a wide variety of models of atmospheric and climate dynamics. I will briefly explain the main challenges in the numerical simulation of single-particle scattering and describe how work with 3-d simulations of scattering of an incident Gaussian pulse, using a Pseudo-Spectral Time Domain method to numerically solve Maxwell’s Equations, led to an investigation of episodic bursts of energy that were observed at various points in the near field during the decay phase of the simulations. The main focus of the talk will be on simulations in dimensions 1 and 2, simple geometries, and a single refractive index (ice at 550 nanometers). The periodic emission of pulses is easy to understand and predict on the basis of Snell’s law in the 1-d case considered. In the much more interesting 2-d cases, simulations show traveling waves within the crystal that give rise to pulsed emissions of energy when they interact with each other or when they enter regions of high surface curvature. The time-dependent simulations give a more dynamical view of "photonic nanojets" reported earlier in steady-state simulations in other contexts, and of energy release in "morphology-dependent resonances."
Francois Monard (UC Santa Cruz) Inverse problems in integral geometry and Boltzmann transport
The Boltzmann transport (or radiative transfer) equation describes the transport of photons interacting with a medium via attenuation and scattering effects. Such an equation serves as the model for many imaging modalities (e.g., SPECT, Optical Tomography) where one aims at reconstructing the optical parameters (absorption/scattering) or a source term, out of measurements of intensities radiated outside the domain of interest.
In this talk, we will review recent progress on the inversion of some of the inverse problems mentioned above. In particular, we will discuss an interesting connection between the inverse source problem (where the optical parameters are assumed to be known) and a problem from integral geometry, namely the tensor tomography problem (or how to reconstruct a tensor field from knowledge of its integrals along geodesic curves).
Haizhao Yang (National University of Singapore) A Unified Framework for Oscillatory Integral Transform: When to use NUFFT or Butterfly Factorization?
This talk introduces fast algorithms for the matvec $g=Kf$ for $K\in \mathbb{C}^{N\times N}$, which is the discretization of the oscillatory integral transform $g(x) = \int K(x,\xi) f(\xi)d\xi$ with a kernel function $K(x,\xi)=\alpha(x,\xi)e^{2\pi i\Phi(x,\xi)}$, where $\alpha(x,\xi)$ is a smooth amplitude function and $\Phi(x,\xi)$ is a piecewise smooth phase function with $O(1)$ discontinuous points in $x$ and $\xi$. A unified framework is proposed to compute $Kf$ with $O(N\log N)$ time and memory complexity via the non-uniform fast Fourier transform (NUFFT) or the butterfly factorization (BF), together with an $O(N)$ fast algorithm to determine whether NUFFT or BF is more suitable. This framework works for two cases: 1) explicit formulas for the amplitude and phase functions are known; 2) only indirect access to the amplitude and phase functions is available. Especially in the case of indirect access, our main contributions are: 1) an $O(N\log N)$ algorithm for recovering the amplitude and phase functions, based on a new low-rank matrix recovery algorithm; 2) a new stable and nearly optimal BF with amplitude and phase functions in the form of a low-rank factorization (IBF-MAT) to evaluate the matvec $Kf$. Numerical results are provided to demonstrate the effectiveness of the proposed framework.
Eric Keaveny (Imperial College London) Linking the micro- and macro-scales in populations of swimming cells
Swimming cells and microorganisms are as diverse in their collective dynamics as they are in their individual shapes and swimming mechanisms. They are able to propel themselves through simple viscous fluids, as well as through more complex environments where they must interact with other microscopic structures. In this talk, I will describe recent simulations that explore the connection between dynamics at the scale of the cell with that of the population in the case where the cells are sperm. In particular, I will discuss how the motion of the sperm’s flagella can greatly impact the overall dynamics of their suspensions. Additionally, I will discuss how in complex environments, the density and stiffness of structures with which the cells interact impact the effective diffusion of the population.
Anne Gelb (Dartmouth) Reducing the effects of bad data measurements using variance based weighted joint sparsity
We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.
Molei Tao (Georgia Tech) Explicit high-order symplectic integration of nonseparable Hamiltonians: algorithms and long time performance
Symplectic integrators preserve the phase-space volume and have favorable performances in long time simulations. Methods for an explicit symplectic integration have been extensively studied for separable Hamiltonians (i.e., H(q,p)=K(p)+V(q)), and they lead to both accurate and efficient simulations. However, nonseparable Hamiltonians also model important problems, such as non-Newtonian mechanics and nearly integrable systems in action-angle coordinates. Unfortunately, implicit methods had been the only available symplectic approach for general nonseparable systems.
This talk will describe a recent result that constructs explicit and arbitrary high-order symplectic integrators for arbitrary Hamiltonians. Based on a mechanical restraint that binds two copies of phase space together, these integrators have good long time performance. More precisely, based on backward error analysis, KAM theory, and some additional multiscale analysis, a pleasant error bound is established for integrable systems. This bound is then demonstrated on a conceptual example and the Schwarzschild geodesics problem. For nonintegrable systems, some numerical experiments with the nonlinear Schrodinger equation will be discussed.
Boualem Khouider (UVic) Using a stochastic convective parametrization to improve the simulation of tropical modes of variability in a GCM
Convection in the tropics is organized into a hierarchy of scales ranging from the individual cloud of 1 to 10 km to cloud clusters and super-clusters of hundreds and thousands of kilometers, respectively, and their planetary-scale envelopes. These cloud systems are strongly coupled to large-scale dynamics in the form of wave disturbances going by the names of meso-scale systems, convectively coupled equatorial waves (CCEW), and intraseasonal oscillations, including the eastward-propagating Madden-Julian Oscillation (MJO) and poleward-moving monsoon intraseasonal oscillation (MISO). Coarse-resolution climate models (GCMs) have serious difficulties in representing these tropical modes of variability, which are known to impact weather and climate variability both in the tropics and elsewhere on the globe. Atmospheric rivers, for example, such as the Pineapple Express that brings heavy rainfall to the Pacific Northwest, are believed to be directly connected to the MJO.
The deficiency in the GCMs is believed to be rooted in the inadequacy of the underlying cumulus parameterizations, which cannot represent the variability at the multiple spatial and temporal scales of organized convection and the associated two-way interactions between the wave flows and convection; these parameterizations are based on the quasi-equilibrium closure, where convection is essentially slaved to the large-scale dynamics. To overcome this problem, we employ a stochastic multicloud model (SMCM) convective parametrization, which mimics the interactions at sub-grid scales of multiple cloud types, as seen in observations. The new scheme is incorporated into the National Centers for Environmental Prediction (NCEP) Climate Forecast System version 2 (CFSv2) model (CFSsmcm) in lieu of the pre-existing simplified Arakawa-Schubert (SAS) cumulus scheme.
Significant improvements are seen in the simulation of the MJO and CCEWs, as well as the Indian MISO. These improvements appear in the form of improved variability, morphology, and physical features of these wave flows. This confirms the multicloud paradigm of organized tropical convection on which the SMCM design was based, namely, congestus, deep, and stratiform cloud decks that interact with each other to form the building block for multiscale convective systems. An adequate account of the dynamical interactions of this cloud hierarchy thus constitutes an important requirement for cumulus parameterizations to succeed in representing atmospheric tropical variability. SAS fails to fulfill this requirement, as is evident in the unrealistic physical structures of the major intraseasonal modes simulated by the default CFSv2.
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
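A quick brute-force sanity check of the claimed dimension (a sketch over a finite vertex window, not a proof; the function name and window size are my own choices): since $|v_i-v_j| \le n$ for all pairs is equivalent to $\max_i v_i - \min_i v_i \le n$, the largest simplices have exactly $n+1$ vertices.

```python
from itertools import combinations

def max_simplex_dim(n, window=30):
    """Brute-force the largest k such that vertices v_0 < ... < v_k in
    {0, ..., window-1} satisfy |v_i - v_j| <= n for all i, j, which is
    equivalent to max(c) - min(c) <= n."""
    best = 0
    for size in range(2, n + 3):
        if any(max(c) - min(c) <= n for c in combinations(range(window), size)):
            best = size - 1
    return best

print([max_simplex_dim(n) for n in (1, 2, 3)])  # -> [1, 2, 3]
```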
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$, and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ that can be present.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
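That greedy procedure is easy to sketch in code. The rule set below is a toy example (the cyclic group $\Bbb Z/3$ with relator $aaa$, capital letters denoting inverses), not a genuine Dehn presentation of a hyperbolic group, and `free_reduce`/`dehn_reduce` are my own helper names:

```python
def free_reduce(word):
    """Cancel adjacent inverse pairs such as 'aA' or 'Aa' (capital = inverse)."""
    out = []
    for ch in word:
        if out and out[-1] != ch and out[-1].lower() == ch.lower():
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def dehn_reduce(word, rules):
    """Greedy Dehn algorithm: freely reduce, then replace any u_i by the
    strictly shorter v_i, until no rule applies."""
    word = free_reduce(word)
    changed = True
    while changed:
        changed = False
        for u, v in rules.items():
            i = word.find(u)
            if i != -1:
                word = free_reduce(word[:i] + v + word[i + len(u):])
                changed = True
                break
    return word

rules = {"aaa": "", "AAA": ""}           # toy relator a^3 (and its inverse)
print(dehn_reduce("aAaaa", rules))       # -> "" : represents the trivial element
print(dehn_reduce("aa", rules))          # -> "aa": cannot be shortened
```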
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial for $C \neq 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
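The computation above can also be double-checked symbolically; a quick SymPy sketch (the variable names are mine):

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
P = a*x**3 + b*x**2 + c*x + d                      # generic element of R_3[x]
F = sp.expand(x*sp.diff(P, x, 2) + (x + 1)*sp.diff(P, x, 3))

# F(P) must vanish identically, so each coefficient of x^k must be zero
sol = sp.solve([F.coeff(x, k) for k in range(3)], [a, b, c, d], dict=True)
print(sol)  # the only constraints are a = 0 and b = 0, so ker(F) = {c*x + d}
```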
|
Definition:Null Set

Definition
Let $\left({X, \Sigma, \mu}\right)$ be a measure space.
Family of Null Sets
The family of $\mu$-null sets, $\left\{{N \in \Sigma: \mu \left({N}\right) = 0}\right\}$, is denoted $\mathcal N_{\mu}$.

Definition in $\R^n$

A set $E \subseteq \R^n$ is called a null set if for any $\epsilon > 0$ there exists a countable collection $J_i := \left({\mathbf a_i \,.\,.\, \mathbf b_i}\right)$, $i \in \N$, of open $n$-rectangles such that:

$\displaystyle E \subseteq \bigcup_{i \mathop = 1}^\infty J_i$

and

$\displaystyle \sum_{i \mathop = 1}^\infty \operatorname{vol} \left({J_i}\right) \le \epsilon$

Said another way, a null set is a set that can be covered by a countable collection of open $n$-rectangles having total volume as small as we wish. On Equivalence of Definitions of Null Set in Euclidean Space, it is shown that this definition is compatible with that for general measure spaces.

Also known as
Because of the defining equality $\mu \left({N}\right) = 0$, a $\mu$-null set $N$ is also sometimes called a ($\mu$-)measure zero set.

Note
Not to be confused with the empty set.
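The $\R^n$ definition can be illustrated numerically for $n = 1$: any countable set is null, because the $i$-th point can be covered by an open interval of length $\epsilon/2^{i+1}$, giving total length at most $\epsilon/2 + \epsilon/4 + \cdots < \epsilon$. A small sketch with exact rational arithmetic (the function and variable names are mine):

```python
from fractions import Fraction

def cover_countable(points, eps):
    """Cover the i-th point by an open interval of length eps/2**(i+1),
    so the total length is at most eps/2 + eps/4 + ... < eps."""
    eps = Fraction(eps)
    cover = []
    for i, p in enumerate(points):
        half = eps / 2**(i + 2)              # half-length eps/2**(i+2)
        cover.append((Fraction(p) - half, Fraction(p) + half))
    return cover

pts = [Fraction(k, 7) for k in range(50)]    # an enumeration of finitely many points
cover = cover_countable(pts, Fraction(1, 1000))
total = sum(b - a for a, b in cover)
print(total < Fraction(1, 1000))             # -> True: total volume stays below eps
```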
|
Keywords
copositive matrix, extreme ray, zero support set
Abstract
Let $A \in {\cal C}^n$ be an exceptional extremal copositive $n \times n$ matrix with positive diagonal. A zero $u$ of $A$ is a non-zero nonnegative vector such that $u^TAu = 0$. The support of a zero $u$ is the index set of the positive elements of $u$. A zero $u$ is minimal if there is no other zero $v$ such that $\operatorname{supp} v \subset \operatorname{supp} u$ strictly. Let $G$ be the graph on $n$ vertices which has an edge $(i,j)$ if and only if $A$ has a zero with support $\{1,\dots,n\} \setminus \{i,j\}$. In this paper, it is shown that $G$ cannot contain a cycle of length strictly smaller than $n$. As a consequence, if all minimal zeros of $A$ have support of cardinality $n - 2$, then $G$ must be the cycle graph $C_n$.
Recommended Citation
Hildebrand, Roland (2018), "Extremal Copositive Matrices with Zero Supports of Cardinality n-2", Electronic Journal of Linear Algebra, Volume 34, pp. 28-34. DOI: https://doi.org/10.13001/1081-3810.3649
|
I'm trying to understand how a Gaussian Process with a squared exponential covariance function can be obtained from Bayesian Linear Regression with a Gaussian prior $N(0,\sigma_p^2 I)$ on the parameters and an infinite number of basis functions. I'm following the proof in chapter four of
Gaussian Processes for Machine Learning.
Let $\phi_c(x)=\exp(-\frac{(x-c)^2}{2\ell^2})$, where $c$ is the center of the basis function. Then Bayesian Linear Regression is a Gaussian Process with mean function $\mu(x)= 0$ and covariance function $k(x_p,x_q)=\sigma_p^2\sum_{c=1}^N\phi_c(x_p)\phi_c(x_q)$. Here's the part of the proof where I'm confused because it feels a little hand-wavy.
Now, allowing an infinite number of basis functions centered everywhere on an interval (and scaling down the variance of the prior on the weights with the number of basis functions) we obtain the limit $$ \lim_{N\to\infty}\frac{\sigma^2_p}{N}\sum_{c=1}^N\phi_c(x_p)\phi_c(x_q)=\sigma^2_p\int_{c_{\text{min}}}^{c_{\text{max}}}\phi_c(x_p)\phi_c(x_q)dc. $$ Plugging in the Gaussian-shaped basis functions and letting the integration limits go to infinity we obtain $$ \begin{align*} k(x_p, x_q) &= \sigma^2_p\int_{-\infty}^\infty\exp\Big(-\frac{(x_p - c)^2}{2\ell^2}\Big)\exp\Big(-\frac{(x_q - c)^2}{2\ell^2}\Big)dc\\ &=\sqrt{\pi}\ell\sigma^2_p\exp\Big(-\frac{(x_p - x_q)^2}{2(\sqrt{2}\ell)^2}\Big) \end{align*} $$ which we recognize as a squared exponential covariance function with a $\sqrt{2}$ times longer length-scale.
I'm hoping someone could add some further explanation to this proof. Why must the covariance function be scaled by a factor of $1/N?$ Also, it's unclear to me why we can simply change the limits of integration from $c_{\text{min}}$ and $c_{\text{max}}$ to $\pm\infty$. Maybe this is because I'm not seeing how the limit of the covariance function corresponds to having an infinite number of basis functions centered densely in $x$-space.
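Not part of the original question, but both points can be checked numerically: the $1/N$ is (up to the fixed interval length $c_{\text{max}}-c_{\text{min}}$) just the width $\Delta c$ of a Riemann sum, and extending the limits to $\pm\infty$ costs essentially nothing because the Gaussian basis functions decay so fast. A sketch (all names and parameter values are my own choices):

```python
import numpy as np

ell = 0.7                           # length-scale of the Gaussian basis functions
xp_, xq_ = 0.3, -0.5                # two test inputs

def phi(x, c):
    return np.exp(-(x - c)**2 / (2 * ell**2))

# N centers on [-L, L]: the 1/N factor is, up to the fixed interval
# length, the Riemann-sum width dc = (c_max - c_min)/N.
L, N = 8.0, 4000
centers = np.linspace(-L, L, N)
dc = centers[1] - centers[0]
k_sum = dc * np.sum(phi(xp_, centers) * phi(xq_, centers))

# Closed-form limit with the limits extended to +-infinity; the extension
# is harmless because the integrand is negligible a few length-scales out.
k_exact = np.sqrt(np.pi) * ell * np.exp(-(xp_ - xq_)**2 / (4 * ell**2))
print(k_sum, k_exact)               # the two values agree closely
```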
|
How to Generate Random Surfaces in COMSOL Multiphysics®
To easily generate random-looking geometric surfaces, the COMSOL Multiphysics® software provides a powerful set of built-in functions and operators, such as functions for uniform and Gaussian random distributions and a very useful sum operator. In this blog post, we show you how to generate a randomized surface with what amounts to a “one liner” expression with detailed control of the constituent spatial frequency components that determine the nature of the surface’s roughness.
Characterizing Surface Roughness
There are many ways to characterize a rough surface. One way is to use its approximate fractal dimension, which is a value between 2 and 3 for a surface. A surface of fractal dimension 2 is an ordinary, almost everywhere smooth surface; the value 2.5 represents a fairly rugged surface; and values close to 3 represent something that is close to “3D space filling”. Correspondingly, a curve of fractal dimension 1 is smooth almost everywhere, the value 1.5 represents a fairly rugged line, and values close to 2 represent something that is close to “2D space filling”.
The range of fractal dimension values for curves going from 1 (left) to about 1.2 (center) and to 1.6 (right).
Using a fractal dimension measure can be a useful approximation, but we need to remember that real surfaces aren’t fractal in nature over more than a few orders of magnitude of scale. Real surfaces have a spatial frequency “cutoff” due to their finite size and due to the fact that when “zooming in”, you will eventually hit some new type of microstructure behavior.
Another way of characterizing surface roughness is with respect to its spatial frequency content. This can be turned into a constructive method of synthesizing surface data by using a sum of trigonometric functions similar to a Fourier series expansion. Each term in such a sum represents a certain frequency of oscillation through space. This is the method that we will use here. Let’s quickly review the concepts of spatial frequencies and elementary wave shapes before moving on to trigonometric series.
Spatial Frequencies
In physics, the frequency of oscillations over time occurs in mathematical expressions like

\cos(2 \pi f t)

where the unit of the frequency f is 1/s, also known as hertz or Hz.

Oscillations through space have a corresponding spatial frequency, as in the following expression, where we simply have replaced the time variable t by a spatial variable x and the time frequency f with the spatial frequency ν:

\cos(2 \pi \nu x)

where the SI unit of the spatial frequency is 1/m.

Spatial frequencies are commonly represented by a wave number k = 2πν. A related quantity is the wavelength \lambda=\frac{1}{\nu}, which is related to the frequency and wave number as follows:

\lambda = \frac{1}{\nu} = \frac{2\pi}{k}

There may be more than one dimension of space and, accordingly, there may be multiple spatial frequencies. In 2D, using Cartesian coordinates, we have:

\cos(2\pi(\nu_x x + \nu_y y)) = \cos(\bf{k} \cdot \bf{x})

where \bf{k}=(k_x,k_y)=(2\pi \nu_x,2\pi\nu_y) is the wave vector and \bf{x}=(x,y). The wave vector \bf{k} represents the direction of the wave.
Elementary Waves
A rough surface f(x, y) can be seen as composed of many elementary waves of the form

\cos(2\pi(\nu_x x + \nu_y y) + \phi)

where φ is a phase angle. The phase angle also makes it possible to express sine functions due to the relationship sin(\theta)=cos(\pi/2-\theta).

For a completely random surface, it should hold that the phase angle φ can take any value in, say, the interval 0 to π or -π/2 to π/2. When synthesizing elementary waves for a random surface, we can pick φ from a uniform random distribution in such an interval of length π, since we then allow for the expression cos(\phi) to span all possible values between -1 and +1. Note that there may be end-point or wrap-around effects if we choose an interval with a size bigger than π. This is due to the cosine function being its own mirror image in steps of π, according to cos(\pi-\theta)=-cos(\theta).
In order to get an efficient representation that can be used for simulations, we will only allow for a discrete set of spatial frequencies:

\nu_x = m, \quad \nu_y = n

where m and n are integers.

Let’s consider a surface that is composed of elementary waves of the following form:

\cos(2\pi(m x + n y) + \phi(m, n))

By letting m and n take both positive and negative values with equal probabilities, we should be able to get a method of synthesizing a surface with no preferred direction of oscillations.

Note that, in this way, each wave direction is represented twice. For example, the direction (-2,-3) is the same as (2,3); (2,-1) is the same as (-2,1); and so on.

If we allow the spatial frequencies m and n to take values up to maximum integers M and N, respectively, then this corresponds to a high-frequency cutoff at:

\nu_{xmax} = M, \quad \nu_{ymax} = N

Since we also allow for negative values, there are negative cutoffs at:

\nu_{xmin} = -M, \quad \nu_{ymin} = -N

Having a spatial frequency cutoff at \nu_{xmax}=M in the x direction means that the shortest wavelength we can represent is \lambda_{xmin}=\frac{1}{M}, and similarly for the y direction, \lambda_{ymin}=\frac{1}{N}.

Associated Amplitudes for Elementary Waves
Each elementary wave will have an associated amplitude so that each constituent wave component has the following form:

A_{mn} \cos(2\pi(m x + n y) + \phi(m, n))

The final surface will be a sum over such wave components:

f(x, y) = \sum_{m} \sum_{n} A_{mn} \cos(2\pi(m x + n y) + \phi(m, n))

The simplest choice of amplitude would be to choose the coefficients A_{mn} from a uniform or perhaps Gaussian distribution. However, it turns out that this will not generate a particularly natural-looking surface. In nature, different processes, such as wearing and erosion, make it more likely that slow oscillations have a larger amplitude than fast ones. In the discrete case, this corresponds to the amplitudes tapering off according to some distribution:

h(m, n) = (m^2 + n^2)^{-\beta/2}

where the spectral exponent β indicates how quickly higher frequencies are attenuated. Following The Science of Fractal Images (Ref. 1), the spectral exponent can be related to the fractal dimension of a surface, but only for an infinite series of waves covering arbitrarily high frequencies and only for certain ranges of the exponent. In practice, the amplitudes a(m, n) of our synthesized surface will be generated using a limited number of frequencies, multiplied with a random function g(m, n) having a Gaussian distribution:

a(m, n) = g(m, n) h(m, n)

A Gaussian, or normal, distribution is chosen to get a smooth but random variation in amplitudes with no limit on the magnitude.

The phase angles φ will be sampled from a function u with a uniform random distribution between -π/2 and π/2:

φ(m, n) = u(m, n)

Summing It Up
To represent our rough surface, we want to use the following double sum:

f(x, y) = \sum_{m=-M}^{M} \sum_{n=-N}^{N} a(m, n) \cos(2\pi(m x + n y) + \phi(m, n))

where x and y are spatial coordinates; m and n are spatial frequencies; a(m, n) are amplitudes; and φ(m, n) are phase angles. This expression is similar to a truncated Fourier series. Although the series is expressed in terms of cosine functions, the phase angles make it so this sum can express a quite general trigonometric series due to the angle sum rule:

\cos(\theta + \phi) = \cos(\theta)\cos(\phi) - \sin(\theta)\sin(\phi)

Determining Periodicity
Due to its definition, the function f(x, y) will be periodic. In order to get a natural-looking surface, we should “cut out” a suitably small portion by letting x and y vary between some limited values; otherwise, the periodicity of the synthesized data will be apparent. What should these values be?

The overall periodicity will be determined by the slowest oscillations, which correspond to the spatial frequencies m = 1 or n = 1 in the x direction and y direction, respectively. This gives a period length of 1 in each direction.
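The construction described so far (Gaussian amplitudes a(m, n) = g(m, n) h(m, n), uniform phases, and a dropped m = n = 0 term) can be sketched outside COMSOL as well; here is a NumPy version, with the unit periodicity checked numerically (seed and parameter values are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
Ncut, beta = 20, 1.5                     # frequency cutoff N and spectral exponent

freq = np.arange(-Ncut, Ncut + 1)
M, N = np.meshgrid(freq, freq, indexing='ij')
r2 = (M**2 + N**2).astype(float)
r2[Ncut, Ncut] = 1.0                     # placeholder: avoid 0**(-beta/2) at m = n = 0
amp = rng.normal(size=M.shape) * r2**(-beta / 2)   # a(m,n) = g(m,n) * h(m,n)
amp[Ncut, Ncut] = 0.0                    # drop the "DC" term m = n = 0
phase = rng.uniform(-np.pi / 2, np.pi / 2, size=M.shape)

def surface(x, y):
    """f(x,y) = sum_{m,n} a(m,n) * cos(2*pi*(m*x + n*y) + phi(m,n))."""
    return np.sum(amp * np.cos(2 * np.pi * (M * x + N * y) + phase))

xs = np.linspace(0.0, 1.0, 50)
z = np.array([[surface(x, y) for y in xs] for x in xs])
print(z.shape)                           # one height sample per grid point
```

Since every term has integer spatial frequencies, surface(x + 1, y) and surface(x, y + 1) reproduce surface(x, y), which is the period length of 1 derived above.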
We could generate the surface over a rectangle [a, a + 1] × [b, b + 1] or smaller in order to “avoid” the periodicity.

Defining Parameters and Random Functions in COMSOL Multiphysics®
For the COMSOL Multiphysics implementation, start by defining a couple of parameters for the spatial frequency resolution and spectral exponent according to the following figure:
The amplitude generation will require a random function with a Gaussian distribution in two variables. This functionality is available under the Global Definitions node:

Here, the Label and Function name have been changed to Gaussian Random and g1, respectively. In addition, the Number of arguments is set to 2 instead of the default 1 and the Distribution type is set to Normal, which corresponds to a normal or Gaussian distribution.

In a similar way, for the phase angle, we need a uniform random function in the interval between -π/2 and π/2:

The Label is changed to Uniform Random, the Function name to u1, the Number of arguments to 2, and the Range to pi.
You can optionally use random seeds to get the same surface each time you use the same input parameters.
Defining the Parametric Surface
The next step is to add a Parametric Surface node under Geometry using a fairly lengthy z-coordinate expression, as follows:

0.01*sum(sum(if((m!=0)||(n!=0),((m^2+n^2)^(-b/2))*g1(m,n)*cos(2*pi*(m*s1+n*s2)+u1(m,n)),0),m,-N,N),n,-N,N)

where x = s1 and y = s2 vary between 0 and 1.
The factor 0.01 is used to scale the data in the z direction. Alternatively, this scaling factor can be absorbed into the amplitude coefficients.

Note that whenever you update any of the parameters or expressions for the Parametric Surface, you need to click the Rebuild with Updated Functions button in the Advanced Settings section of the Settings window.

This expression is a double sum over the integer parameters m and n, each running from -N to N. If we compare this to the mathematical discussion earlier, we can see that we have set M = N, resulting in a square surface patch. The term where m and n are simultaneously zero corresponds to an unwanted "DC" term and is eliminated from the sum by the if statement.
The syntax for the sum() operator is as follows:

sum(expr,index,lower,upper)

which evaluates a sum of a general expression expr for all indices index from lower to upper.

The syntax for the if() operator is as follows:

if(cond,expr1,expr2)

for which the conditional expression cond evaluates to expr1 or expr2 depending on the value of the condition.
In this example, the resolution of the parametric surface has been increased by setting the Maximum number of knots to 100 (the default is 20). In addition, the Relative tolerance is relaxed to 1e-3 (the default is 1e-6). The underlying representation of the parametric surface is based on nonuniform rational B-splines (NURBS). More knots correspond to a finer resolution of the NURBS representation. The tolerance is relaxed, since we are not overly concerned about the approximation accuracy of the generated surface for this example.
By generating a mesh, we can get a useful visualization of the surface, as seen in the figure below.
A meshed random surface.
Note that N = 20 means that the fastest oscillations have wavelength 1/20 = 0.05 m, assuming SI units. The periodicity in the x and y directions can be seen by following the curves parallel to the y- and x-axes at x = 0, x = 1 and y = 0, y = 1, respectively.
To see the periodicity even more clearly, we can plot the surface on the square [0,2] × [0,2]:
The periodicity of the surface on the square [0,2] × [0,2]. The surface height is represented by color.

Surfaces generated on the square [0,1] × [0,1] by superimposing 20 frequency components with amplitude spectral exponents β = 0.5, β = 1.0, β = 1.5, and β = 1.8, clockwise from the top-left image. The surface height is represented by color.

Using the Surface Data in Analyses
This type of randomly generated surface can, in COMSOL Multiphysics, be used in any kind of physics simulation context, including for electromagnetics, structural mechanics, acoustics, fluid, heat, or chemical analysis. The expression for the double sum is not limited for use in geometry modeling, but can also be used for material data, equation coefficients, boundary conditions, and more. Using methods, a large number of surface realizations can be used in a loop to gather statistics of the results.
By generalizing the double-sum to a triple-sum, you can synthesize 3D inhomogeneous material data. However, you have to be prepared for long and memory-intensive computations when performing triple-sums for 3D simulations.
A fracture flow simulation based on synthetically generated fracture aperture data. The Rock Fracture Flow tutorial model is part of the COMSOL Multiphysics Application Library.

A generic thermal expansion analysis of two 1-centimeter-sized metal blocks with a material interface based on the parametric surface described in this blog post. The bottom material slab is aluminum and the top material slab is steel. The visualization shows the von Mises stress at the material interface and on the surface of the aluminum slab.

Relationship to Discrete Cosine and Fourier Transforms
The sum

f(x, y) = \sum_{m} \sum_{n} a(m, n) \cos(2\pi(m x + n y) + \phi(m, n))

is similar to a discrete cosine transform or to the real part of a discrete Fourier transform:

f(x, y) = \operatorname{Re} \sum_{m} \sum_{n} a_c(m, n) e^{2\pi i(m x + n y)}, \quad a_c(m, n) = a(m, n) e^{i\phi(m, n)}

where the subscript c is used to indicate complex quantities and x and y now take discrete values. Here, the phase angle information is encoded in the complex Fourier coefficients.
Due to the definition of the discrete Fourier transform, we are allowed to perform a shift in index in order to generate the following more familiar form:
or by using discrete values:
More commonly, the discrete Fourier transform is indexed like this:
where
Note that in order to generate real-valued data, the Fourier coefficients need to fulfill conjugate symmetry relationships in order to eliminate the imaginary-valued contributions from sine functions. Using a sum of cosine functions (i.e., a cosine transform) avoids this problem.
A fast way of generating a large number of Fourier coefficients is to use a fast cosine transform (FCT) or fast Fourier transform (FFT). This could be done in another program and then imported to the COMSOL Desktop® user interface as an interpolation table. The trigonometric interpolation method described above is slower, but has the advantage that it can be used directly on an unstructured mesh and is automatically refined by simply refining the mesh in the user interface.
For a description of using the FFT for synthesizing surfaces, see Ref. 1.
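As a concrete illustration of this FFT route, here is a minimal NumPy sketch (not COMSOL's built-in mechanism; the grid size, the Gaussian magnitudes, and the seed are all assumptions for the example) that synthesizes a periodic surface with the same power-law amplitude spectrum by filling Fourier coefficients and applying an inverse real FFT:

```python
import numpy as np

def fractal_surface(n=64, beta=1.5, seed=0):
    """Synthesize a periodic random surface with amplitude spectrum ~ (m^2 + n^2)^(-beta/2)."""
    rng = np.random.default_rng(seed)
    # Integer frequency indices for a real 2D FFT grid
    ky = np.fft.fftfreq(n, d=1.0 / n)        # frequencies m (positive and negative)
    kx = np.fft.rfftfreq(n, d=1.0 / n)       # frequencies n >= 0 (real-FFT half grid)
    M, N = np.meshgrid(ky, kx, indexing="ij")
    k2 = M**2 + N**2
    amp = np.where(k2 > 0, k2, 1.0) ** (-beta / 2)
    amp[0, 0] = 0.0                          # drop the DC term, as in the blog's sums
    # Random phases play the role of u1(m, n); Gaussian magnitudes the role of g1(m, n)
    phase = rng.uniform(0, 2 * np.pi, amp.shape)
    coeff = amp * rng.standard_normal(amp.shape) * np.exp(1j * phase)
    # The inverse real FFT enforces the conjugate-symmetry requirement automatically
    return np.fft.irfft2(coeff, s=(n, n))

z = fractal_surface()
```

The resulting array could then be exported and loaded into the user interface as an interpolation table, as described above.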
1D and Cylindrical Cases
Let’s conclude with a few interesting, special cases of random surface generation in COMSOL Multiphysics, including curves and cylinders.
Random Curve
In a 2D simulation, a random curve can be generated using the following expression:
0.01*sum(if((m!=0),((m^2)^(-b/2))*g1(m)*cos(2*pi*m*s+u1(m)),0),m,-N,N)
where g1 and u1 are 1D random functions.
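For readers who want to experiment outside of COMSOL, the expression translates almost term by term into ordinary code. A hedged Python sketch, where the dictionaries g1 and u1 are stand-ins (my assumption) for the software's Gaussian-random and uniform-random-phase functions:

```python
import math
import random

N, b = 20, 1.0

rnd = random.Random(42)
g1 = {m: rnd.gauss(0.0, 1.0) for m in range(-N, N + 1)}          # stand-in for g1(m)
u1 = {m: rnd.uniform(0.0, 2.0 * math.pi) for m in range(-N, N + 1)}  # stand-in for u1(m)

def curve(s):
    """Mimics 0.01*sum(if((m!=0), ((m^2)^(-b/2))*g1(m)*cos(2*pi*m*s+u1(m)), 0), m, -N, N)."""
    total = 0.0
    for m in range(-N, N + 1):
        if m != 0:
            total += (m * m) ** (-b / 2) * g1[m] * math.cos(2 * math.pi * m * s + u1[m])
    return 0.01 * total

# The curve is periodic with period 1 in the parameter s,
# so curve(0.25) and curve(1.25) give the same point.
```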
Note that when generating a curve, the spectral exponent will have a lower value than that of a surface with the “same level of randomness”.
Random Polar Curve
A randomized curve in polar coordinates representing random deviations from a circle can be generated:
x=cos(2*pi*s)*(1+0.1*sum(if((m!=0),((m^2)^(-b/2))*g1(m)*cos(2*pi*m*s+u1(m)),0),m,-N,N))
y=sin(2*pi*s)*(1+0.1*sum(if((m!=0),((m^2)^(-b/2))*g1(m)*cos(2*pi*m*s+u1(m)),0),m,-N,N))
This corresponds to a parametric curve in 2D polar coordinates:
Random Cylinder
A randomized cylinder in 3D can be generated using a parametric surface with parameters as follows:
x=cos(2*pi*s1)*(1+0.1*sum(sum(if((m!=0)||(n!=0),((m^2+n^2)^(-b/2))*g1(m,n)*cos(2*pi*(m*s1+n*s2)+u1(m,n)),0),m,-N,N),n,-N,N))
y=sin(2*pi*s1)*(1+0.1*sum(sum(if((m!=0)||(n!=0),((m^2+n^2)^(-b/2))*g1(m,n)*cos(2*pi*(m*s1+n*s2)+u1(m,n)),0),m,-N,N),n,-N,N))
z=s2*2*pi
where the parameters s1 and s2 vary between 0 and 1.
This corresponds to a parametric surface in cylindrical coordinates:
Such a single-piece random cylinder represents a type of self-intersecting surface that is not allowed in COMSOL Multiphysics. You can easily get around this by, for example, creating four surface patches corresponding to the parameter
s1 varying from 0 to 0.25, 0.25 to 0.5, 0.5 to 0.75, and 0.75 to 1.0. One such patch corresponds to a polar angle span of size $\frac{\pi}{2}$.

Reference

1. H.-O. Peitgen and D. Saupe (Eds.), The Science of Fractal Images.
|
A Belyi-extender (or dessinflateur) is a rational function $q(t) = \frac{f(t)}{g(t)} \in \mathbb{Q}(t)$ that defines a map
\[ q : \mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} \] unramified outside $\{ 0,1,\infty \}$, and has the property that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$.
An example of such a Belyi-extender is the power map $q(t)=t^n$, which is totally ramified in $0$ and $\infty$ and we clearly have that $q(0)=0,~q(1)=1$ and $q(\infty)=\infty$.
The composition of two Belyi-extenders is again an extender, and we get a rather mysterious monoid $\mathcal{E}$ of all Belyi-extenders.
Very little seems to be known about this monoid. Its units form the symmetric group $S_3$, which is the automorphism group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$, and mapping an extender $q$ to its degree gives a monoid map $\mathcal{E} \rightarrow \mathbb{N}_+^{\times}$ to the multiplicative monoid of positive natural numbers.
If one relaxes the condition of $q(t) \in \mathbb{Q}(t)$ to being defined over its algebraic closure $\overline{\mathbb{Q}}$, then such maps/functions have been known for some time under the name of dynamical Belyi-functions, for example in Zvonkin’s Belyi Functions: Examples, Properties, and Applications (section 6).
Here, one is interested in the complex dynamical system of iterations of $q$, that is, the limit-behaviour of the orbits
\[ \{ z,q(z),q^2(z),q^3(z),… \} \] for all complex numbers $z \in \mathbb{C}$.
In general, the 2-sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ has a finite number of open sets (the Fatou domains) where the limit behaviour of the series is similar, and the union of these open sets is dense in $S^2$. The complement of the Fatou domains is the Julia set of the function, of which we might expect a nice fractal picture.
Let’s take again the power map $q(t)=t^n$. For a complex number $z$ lying outside the unit disc, the orbit $\{ z,z^n,z^{n^2},\ldots \}$ has limit point $\infty$ and for those lying inside the unit disc, this limit is $0$. So, here we have two Fatou domains (the interior and the exterior of the unit circle) and the Julia set of the power map is the (boring?) unit circle.
Fortunately, there are indeed dynamical Belyi-maps having a more pleasant looking Julia set, such as this one
But then, many dynamical Belyi-maps (and Belyi-extenders) are systems of an entirely different nature: they are completely chaotic, meaning that their Julia set is the whole $2$-sphere! Nowhere do we find an open region where points share the same limit behaviour… (the butterfly effect).
There’s a nice sufficient condition for chaotic behaviour, due to Dennis Sullivan, which is pretty easy to check for dynamical Belyi-maps.
A periodic point for $q(t)$ is a point $p \in S^2 = \mathbb{P}^1_{\mathbb{C}}$ such that $p = q^m(p)$ for some $m \geq 1$. A critical point is one such that either $q(p) = \infty$ or $q'(p)=0$.
Sullivan’s result is that $q(t)$ is completely chaotic when all its critical points $p$ become eventually periodic, that is, some $q^k(p)$ is periodic, but $p$ itself is not periodic.
For a Belyi-map $q(t)$ the critical points are either complex numbers mapping to $\infty$ or the inverse images of $0$ or $1$ (that is, the black or white dots in the dessin of $q(t)$) which are not leaf-vertices of the dessin.
Let’s do an example, already used by Sullivan himself:
\[ q(t) = (\frac{t-2}{t})^2 \] This is a Belyi-function, and in fact a Belyi-extender as it is defined over $\mathbb{Q}$ and we have that $q(0)=\infty$, $q(1)=1$ and $q(\infty)=1$. The corresponding dessin is (inverse images of $\infty$ are marked with an $\ast$)
The critical points $0$ and $2$ are not periodic, but they become eventually periodic:
\[ 2 \xrightarrow{q} 0 \xrightarrow{q} \infty \xrightarrow{q} 1 \xrightarrow{q} 1 \] and $1$ is periodic.
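This eventually-periodic orbit can be checked mechanically. A small self-contained sketch, with the point at infinity handled by hand (a simplification for this one map, not a general Riemann-sphere implementation):

```python
INF = float("inf")

def q(t):
    """Sullivan's example q(t) = ((t-2)/t)^2, with the point at infinity handled by hand."""
    if t == INF:
        return 1.0   # ((t-2)/t)^2 -> 1 as t -> infinity
    if t == 0:
        return INF   # pole of order 2 at t = 0
    return ((t - 2) / t) ** 2

# The critical orbit 2 -> 0 -> infinity -> 1 -> 1: eventually periodic, but 2 and 0 are not periodic
orbit = [2.0]
for _ in range(4):
    orbit.append(q(orbit[-1]))
print(orbit)  # [2.0, 0.0, inf, 1.0, 1.0]
```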
For a general Belyi-extender $q$, we have that the image under $q$ of any critical point is among $\{ 0,1,\infty \}$ and because we demand that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$, every critical point of $q$ eventually becomes periodic.
If we want to avoid the corresponding dynamical system to be completely chaotic, we have to ensure that one of the periodic points among $\{ 0,1,\infty \}$ (and there is at least one of those) must be critical.
Let’s consider the very special Belyi-extenders $q$ having the additional property that $q(0)=0$, $q(1)=1$ and $q(\infty)=\infty$, then all three of them are periodic.
So, the system is always completely chaotic unless the black dot at $0$ is not a leaf-vertex of the dessin, or the white dot at $1$ is not a leaf-vertex, or the degree of the region determined by the starred $\infty$ is at least two.
Going back to the mystery Manin-Marcolli sub-monoid of $\mathcal{E}$, it might explain why it is a good idea to restrict to very special Belyi-extenders having associated dessin a $2$-coloured tree, for then the periodic point $\infty$ is critical (the degree of the outside region is at least two), and therefore the conditions of Sullivan’s theorem are not satisfied. So, these Belyi-extenders do not necessarily have to be completely chaotic. (tbc)
|
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
|
The Monster is the largest of the 26 sporadic simple groups and has order
808 017 424 794 512 875 886 459 904 961 710 757 005 754 368 000 000 000
= 2^46 3^20 5^9 7^6 11^2 13^3 17 19 23 29 31 41 47 59 71.
It is not so much the size of its order that makes it hard to do actual calculations in the Monster, but rather the dimensions of its smallest non-trivial irreducible representations (196 883 for the smallest, 21 296 876 for the next one, and so on).
In characteristic two there is an irreducible representation of one dimension less (196 882) which appears to be of great use to obtain information. For example, Robert Wilson used it to prove that The Monster is a Hurwitz group. This means that the Monster is generated by two elements g and h satisfying the relations
$g^2 = h^3 = (gh)^7 = 1 $
Geometrically, this implies that the Monster is the automorphism group of a Riemann surface of genus g satisfying the Hurwitz bound 84(g-1)=#Monster. That is,
g=9619255057077534236743570297163223297687552000000001=42151199 * 293998543 * 776222682603828537142813968452830193
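Since both the Monster's order and the Hurwitz relation $84(g-1)=\#\text{Monster}$ are given above, the genus can be checked with exact integer arithmetic; a minimal sketch:

```python
# Exact integer check of the Hurwitz relation 84*(g - 1) = |Monster|
monster_order = 808017424794512875886459904961710757005754368000000000
assert monster_order % 84 == 0
g = monster_order // 84 + 1
print(g)  # 9619255057077534236743570297163223297687552000000001

# The empire is tiled by |Monster| / 7 heptagons
heptagons = monster_order // 7
```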
Or, in analogy with the Klein quartic which can be constructed from 24 heptagons in the tiling of the hyperbolic plane, there is a finite region of the hyperbolic plane, tiled with heptagons, from which we can construct this monster curve by gluing the boundary in a specific way so that we get a Riemann surface with exactly 9619255057077534236743570297163223297687552000000001 holes. This finite part of the hyperbolic tiling (consisting of #Monster/7 heptagons) we’ll call the empire of the monster, and we’d love to describe it in more detail.
Look at the half-edges of all the heptagons in the empire (the picture above shows that every edge is cut in two by a blue geodesic). There are exactly #Monster such half-edges and they form a dessin d’enfant for the monster-curve.
If we label these half-edges by the elements of the Monster, then multiplication by g in the monster interchanges the two half-edges making up a heptagonal edge in the empire and multiplication by h in the monster takes a half-edge to the one encountered first by going counter-clockwise in the vertex of the heptagonal tiling. Because g and h generate the Monster, the dessin of the empire is just a concrete realization of the monster.
Because g is of order two and h is of order three, the two permutations they determine on the dessin, gives a group epimorphism $C_2 \ast C_3 = PSL_2(\mathbb{Z}) \rightarrow \mathbb{M} $ from the modular group $PSL_2(\mathbb{Z}) $ onto the Monster-group.
In noncommutative geometry, the group-algebra of the modular group $\mathbb{C} PSL_2(\mathbb{Z}) $ can be interpreted as the coordinate ring of a noncommutative manifold (because it is formally smooth in the sense of Kontsevich-Rosenberg or Cuntz-Quillen) and the group-algebra of the Monster $\mathbb{C} \mathbb{M} $ itself corresponds in this picture to a finite collection of ‘points’ on the manifold. Using this geometric viewpoint we can now ask the question
What does the Monster see of the modular group?
To make sense of this question, let us first consider the commutative equivalent : what does a point P see of a commutative variety X?
Evaluation of polynomial functions in P gives us an algebra epimorphism $\mathbb{C}[X] \rightarrow \mathbb{C} $ from the coordinate ring of the variety $\mathbb{C}[X] $ onto $\mathbb{C} $ and the kernel of this map is the maximal ideal $\mathfrak{m}_P $ of
$\mathbb{C}[X] $ consisting of all functions vanishing in P.
Equivalently, we can view the point $P= \mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P $ as the scheme corresponding to the quotient $\mathbb{C}[X]/\mathfrak{m}_P $. Call this the 0-th formal neighborhood of the point P.
This sounds pretty useless, but let us now consider higher-order formal neighborhoods. Call the affine scheme $\mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P^{n+1} $ the n-th formal neighborhood of P. Then the first neighborhood, that is, the one with coordinate ring $\mathbb{C}[X]/\mathfrak{m}_P^2 $, gives us tangent-information; alternatively, it gives the best linear approximation of functions near P.
The second neighborhood $\mathbb{C}[X]/\mathfrak{m}_P^3 $ gives us the best quadratic approximation of functions near P, etc.
These successive quotients by powers of the maximal ideal $\mathfrak{m}_P $ form a system of algebra epimorphisms
$\ldots \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n+1}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} \rightarrow \ldots \ldots \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{2}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P} = \mathbb{C} $
and its inverse limit $\underset{\leftarrow}{lim}~\frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} = \hat{\mathcal{O}}_{X,P} $ is the completion of the local ring in P and contains all the infinitesimal information (to any order) of the variety X in a neighborhood of P. That is, this completion $\hat{\mathcal{O}}_{X,P} $ contains
all information that P can see of the variety X.
In case P is a smooth point of X, then X is a manifold in a neighborhood of P and then this completion
$\hat{\mathcal{O}}_{X,P} $ is isomorphic to the algebra of formal power series $\mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ where the $x_i $ form a local system of coordinates for the manifold X near P.
Right, after this lengthy recollection, back to our question
what does the monster see of the modular group? Well, we have an algebra epimorphism
$\pi~:~\mathbb{C} PSL_2(\mathbb{Z}) \rightarrow \mathbb{C} \mathbb{M} $
and in analogy with the commutative case, all information the Monster can gain from the modular group is contained in the $\mathfrak{m} $-adic completion
$\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} = \underset{\leftarrow}{lim}~\frac{\mathbb{C} PSL_2(\mathbb{Z})}{\mathfrak{m}^n} $
where $\mathfrak{m} $ is the kernel of the epimorphism $\pi $ sending the two free generators of the modular group $PSL_2(\mathbb{Z}) = C_2 \ast C_3 $ to the permutations g and h determined by the dessin of the heptagonal tiling of the Monster’s empire.
As it is a hopeless task to determine the Monster-empire explicitly, it seems even more hopeless to determine the kernel $\mathfrak{m} $ let alone the completed algebra… But, (surprise) we can compute $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} $ as explicitly as in the commutative case we have $\hat{\mathcal{O}}_{X,P} \simeq \mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ for a point P on a manifold X.
Here the details : the quotient $\mathfrak{m}/\mathfrak{m}^2 $ has a natural structure of $\mathbb{C} \mathbb{M} $-bimodule. The group-algebra of the monster is a semi-simple algebra, that is, a direct sum of full matrix-algebras of sizes corresponding to the dimensions of the irreducible monster-representations. That is,
$\mathbb{C} \mathbb{M} \simeq \mathbb{C} \oplus M_{196883}(\mathbb{C}) \oplus M_{21296876}(\mathbb{C}) \oplus \ldots \ldots \oplus M_{258823477531055064045234375}(\mathbb{C}) $
with exactly 194 components (the number of irreducible Monster-representations). For any $\mathbb{C} \mathbb{M} $-bimodule $M $ one can form the tensor-algebra
$T_{\mathbb{C} \mathbb{M}}(M) = \mathbb{C} \mathbb{M} \oplus M \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus \ldots \ldots $
and applying the formal neighborhood theorem for formally smooth algebras (such as $\mathbb{C} PSL_2(\mathbb{Z}) $) due to Joachim Cuntz (left) and Daniel Quillen (right) we have an isomorphism of algebras
$\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} \simeq \widehat{T_{\mathbb{C} \mathbb{M}}(\mathfrak{m}/\mathfrak{m}^2)} $
where the right-hand side is the completion of the tensor-algebra (at the unique graded maximal ideal) of the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $, so we’d better describe this bimodule explicitly.
Okay, so what’s a bimodule over a semisimple algebra of the form $S=M_{n_1}(\mathbb{C}) \oplus \ldots \oplus M_{n_k}(\mathbb{C}) $? Well, a simple S-bimodule must be either (1) a factor $M_{n_i}(\mathbb{C}) $ with all other factors acting trivially or (2) the full space of rectangular matrices $M_{n_i \times n_j}(\mathbb{C}) $ with the factor $M_{n_i}(\mathbb{C}) $ acting on the left, $M_{n_j}(\mathbb{C}) $ acting on the right and all other factors acting trivially.
That is, any S-bimodule can be represented by a quiver (that is a directed graph) on k vertices (the number of matrix components) with a loop in vertex i corresponding to each simple factor of type (1) and a directed arrow from i to j corresponding to every simple factor of type (2).
That is, for the Monster, the bimodule $\mathfrak{m}/\mathfrak{m}^2 $ is represented by a quiver on 194 vertices and now we only have to determine how many loops and arrows there are at or between vertices.
Using Morita equivalences and standard representation theory of quivers it isn’t exactly rocket science to determine that the number of arrows between the vertices corresponding to the irreducible Monster-representations $S_i $ and $S_j $ is equal to
$dim_{\mathbb{C}}~Ext^1_{\mathbb{C} PSL_2(\mathbb{Z})}(S_i,S_j)-\delta_{ij} $
Now, I’ve been wasting a lot of time already here explaining what representations of the modular group have to do with quivers (see for example here or some other posts in the same series) and for quiver-representations we all know how to compute Ext-dimensions in terms of the Euler-form applied to the dimension vectors.
Right, so for every Monster-irreducible $S_i $ we have to determine the corresponding dimension-vector $~(a_1,a_2;b_1,b_2,b_3) $ for the quiver
$\xymatrix{ & & & & \vtx{b_1} \\ \vtx{a_1} \ar[rrrru]^(.3){B_{11}} \ar[rrrrd]^(.3){B_{21}} \ar[rrrrddd]_(.2){B_{31}} & & & & \\ & & & & \vtx{b_2} \\ \vtx{a_2} \ar[rrrruuu]_(.7){B_{12}} \ar[rrrru]_(.7){B_{22}} \ar[rrrrd]_(.7){B_{23}} & & & & \\ & & & & \vtx{b_3}} $
Now the dimensions $a_i $ are the dimensions of the +/-1 eigenspaces for the order 2 element g in the representation and the $b_i $ are the dimensions of the eigenspaces for the order 3 element h. So, we have to determine to which conjugacy classes g and h belong, and from Wilson’s paper mentioned above these are classes 2B and 3B in standard Atlas notation.
So, for each of the 194 irreducible Monster-representations we look up the character values at 2B and 3B (see below for the first batch of those) and these together with the dimensions determine the dimension vector $~(a_1,a_2;b_1,b_2,b_3) $.
For example take the 196883-dimensional irreducible. Its 2B-character is 275 and the 3B-character is 53. So we are looking for a dimension vector such that $a_1+a_2=196883, a_1-275=a_2 $ and $b_1+b_2+b_3=196883, b_1-53=b_2=b_3 $ giving us for that representation the dimension vector of the quiver above $~(98579,98304,65663,65610,65610) $.
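This little linear-algebra step is easy to script. A minimal sketch using only the relations above ($a_1+a_2=\dim$, $a_1-a_2=\chi(2B)$; $b_1+b_2+b_3=\dim$ with $b_2=b_3$ and $b_1-b_2=\chi(3B)$):

```python
def dimension_vector(dim, chi2B, chi3B):
    """Recover (a1, a2; b1, b2, b3) from the dimension and the 2B- and 3B-character values.

    a1 + a2 = dim and a1 - a2 = chi2B (eigenvalues +1/-1 of the order-2 element g);
    b1 + 2*b2 = dim, b2 = b3 and b1 - b2 = chi3B (eigenvalues 1, w, w^2 of the order-3 element h).
    """
    a1 = (dim + chi2B) // 2
    a2 = dim - a1
    b2 = (dim - chi3B) // 3
    b1 = b2 + chi3B
    return (a1, a2, b1, b2, b2)

print(dimension_vector(196883, 275, 53))  # (98579, 98304, 65663, 65610, 65610)
```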
Okay, so for each of the 194 irreducibles $S_i $ we have determined a dimension vector $~(a_1(i),a_2(i);b_1(i),b_2(i),b_3(i)) $, then standard quiver-representation theory asserts that the number of loops in the vertex corresponding to $S_i $ is equal to
$dim(S_i)^2 + 1 – a_1(i)^2-a_2(i)^2-b_1(i)^2-b_2(i)^2-b_3(i)^2 $
and that the number of arrows from vertex $S_i $ to vertex $S_j $ is equal to
$dim(S_i)dim(S_j) – a_1(i)a_1(j)-a_2(i)a_2(j)-b_1(i)b_1(j)-b_2(i)b_2(j)-b_3(i)b_3(j) $
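Both counts are then plain integer arithmetic on the dimension vectors. A sketch of the two formulas (the function names are mine), applied to the vector $(98579,98304,65663,65610,65610)$ found above for the 196883-dimensional irreducible:

```python
def loops(dim, dv):
    """Number of loops at a vertex: dim^2 + 1 minus the sum of squares of the dimension vector."""
    a1, a2, b1, b2, b3 = dv
    return dim**2 + 1 - a1**2 - a2**2 - b1**2 - b2**2 - b3**2

def arrows(dim_i, dv_i, dim_j, dv_j):
    """Number of arrows between distinct vertices: dim_i*dim_j minus the inner product of the vectors."""
    return dim_i * dim_j - sum(x * y for x, y in zip(dv_i, dv_j))

dv = (98579, 98304, 65663, 65610, 65610)  # the vector for the 196883-dimensional irreducible
print(loops(196883, dv))  # 6460446264
```

Reassuringly, the trivial representation, with dimension vector $(1,0;1,0,0)$, gets zero loops.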
This data then determines completely the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $ and hence the structure of the completion $\widehat{\mathbb{C} PSL_2}_{\mathfrak{m}} $ containing all information the Monster can gain from the modular group.
But then, one doesn’t have to go for the full regular representation of the Monster. Any faithful permutation representation will do, so we might as well go for the one of minimal dimension.
That one is known to correspond to the largest maximal subgroup of the Monster which is known to be a two-fold extension $2.\mathbb{B} $ of the Baby-Monster. The corresponding permutation representation is of dimension 97239461142009186000 and decomposes into Monster-irreducibles
$S_1 \oplus S_2 \oplus S_4 \oplus S_5 \oplus S_9 \oplus S_{14} \oplus S_{21} \oplus S_{34} \oplus S_{35} $
(in standard Atlas-ordering) and hence, repeating the arguments above, we get a quiver on just 9 vertices! The actual numbers of loops and arrows (I forgot to mention this, but the quivers obtained are actually symmetric) were found after laborious computations mentioned in this post and the details I’ll make available here.
Anyone who can spot a relation between the numbers obtained and any other part of mathematics will obtain quantities of genuine (i.e., non-Inbev) Belgian beer…
|
On a power-type coupled system of Monge-Ampère equations
DOI: http://dx.doi.org/10.12775/TMNA.2015.064
Abstract

We study the power-type coupled system of Monge-Ampère equations
$$
\begin{cases}
\det D^{2}u_{1}={(-u_{2})}^\alpha & \hbox{in $\Omega,$} \\
\det D^{2}u_{2}={(-u_{1})}^\beta & \hbox{in $\Omega,$} \\
u_{1}<0,\ u_{2}<0& \hbox{in $\Omega,$}\\
u_{1}=u_{2}=0 & \hbox{on $ \partial \Omega,$}
\end{cases}
$$
where $\Omega$ is a smooth, bounded and strictly convex domain in $\mathbb{R}^{N}$, $N\geq2$, $\alpha >0$, $\beta >0$. When $\Omega$ is
the unit ball in $\mathbb{R}^{N}$, we use index theory of fixed
points for completely continuous operators to get existence,
uniqueness results and nonexistence of radial convex solutions under
some corresponding assumptions on $\alpha$, $\beta$. When $\alpha>0$,
$\beta>0$ and $\alpha\beta=N^2$
we also study a~corresponding eigenvalue problem in more general domains.
|
A tetrahedral snake, sometimes called a Steinhaus snake, is a collection of tetrahedra, linked face to face.
Steinhaus showed in 1956 that the last tetrahedron in the snake can never be a translation of the first one. This is a consequence of the fact that the group generated by the four reflexions in the faces of a tetrahedron is the free product $C_2 \ast C_2 \ast C_2 \ast C_2$.
For a proof of this, see Stan Wagon’s book The Banach-Tarski paradox, starting at page 68.
The thread $(3|3)$ is the spine of the $(9|1)$-snake, which involves the following lattices \[ \xymatrix{& & 1 \frac{1}{3} \ar@[red]@{-}[dd] & & \\ & & & & \\ 1 \ar@[red]@{-}[rr] & & 3 \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 1 \frac{2}{3} \\ & & & & \\ & & 9 & &} \] It is best to look at the four extremal lattices as the vertices of a tetrahedron with the lattice $3$ corresponding to its point of gravity.
The congruence subgroup $\Gamma_0(9)$ fixes each of these lattices, and the arithmetic group $\Gamma_0(3|3)$ is the conjugate of $\Gamma_0(1)$
\[ \Gamma_0(3|3) = \{ \begin{bmatrix} \frac{1}{3} & 0 \\ 0 & 1 \end{bmatrix}.\begin{bmatrix} a & b \\ c & d \end{bmatrix}.\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a & \frac{b}{3} \\ 3c & d \end{bmatrix}~|~ad-bc=1 \} \] We know that $\Gamma_0(3|3)$ normalizes the subgroup $\Gamma_0(9)$ and we need to find the moonshine group $(3|3)$ which should have index $3$ in $\Gamma_0(3|3)$ and contain $\Gamma_0(9)$.
So, it is natural to consider the finite group $A=\Gamma_0(3|3)/\Gamma_0(9)$ which is generated by the cosets of
\[ x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix} \qquad \text{and} \qquad y = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} \] To determine this group we look at the action of it on the lattices in the $(9|1)$-snake. It will fix the central lattice $3$ but will move the other lattices.
Recall that it is best to associate to the lattice $M.\frac{g}{h}$ the matrix
\[ \alpha_{M,\frac{g}{h}} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \] and then the action is given by right-multiplication.
\[
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}.x=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \] That is, $x$ corresponds to a $3$-cycle $1 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 1$ and fixes the lattice $9$ (so is rotation around the axis through the vertex $9$).
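One can verify this $3$-cycle with exact rational arithmetic. A small sketch; the mod-$1$ reduction of the upper-right entry is my way of encoding that $\frac{g}{h}$ is only defined up to integers:

```python
from fractions import Fraction as F

def mul(A, B):
    """2x2 matrix product over the rationals."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

x = [[F(1), F(1, 3)], [F(0), F(1)]]

def act(lattice, g):
    """Right action on the matrix of a lattice M, g/h, reducing the upper-right entry mod 1."""
    M = mul(lattice, g)
    return [[M[0][0], M[0][1] % 1], M[1]]

one = [[F(1), F(0)], [F(0), F(1)]]     # the lattice 1
orbit = [one]
for _ in range(3):
    orbit.append(act(orbit[-1], x))
# upper-right entries cycle 0 -> 1/3 -> 2/3 -> 0, the 3-cycle described above
print([m[0][1] for m in orbit])
```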
To compute the action of $y$ it is best to use an alternative description of the lattice, swapping the roles of the base-vectors $\vec{e}_1$ and $\vec{e}_2$. These lattices are projectively equivalent
\[ \mathbb{Z} (M \vec{e}_1 + \frac{g}{h} \vec{e}_2) \oplus \mathbb{Z} \vec{e}_2 \quad \text{and} \quad \mathbb{Z} \vec{e}_1 \oplus \mathbb{Z} (\frac{g'}{h} \vec{e}_1 + \frac{1}{h^2M} \vec{e}_2) \] where $g.g' \equiv 1~(\mathrm{mod}~h)$. So, we have equivalent descriptions of the lattices \[ M,\frac{g}{h} = (\frac{g'}{h},\frac{1}{h^2M}) \quad \text{and} \quad M,0 = (0,\frac{1}{M}) \] and we associate to the lattice in the second normal form the matrix \[ \beta_{M,\frac{g}{h}} = \begin{bmatrix} 1 & 0 \\ \frac{g'}{h} & \frac{1}{h^2M} \end{bmatrix} \] and then the action is again given by right-multiplication.
In the tetrahedral example we have
\[ 1 = (0,\frac{1}{3}), \quad 1\frac{1}{3}=(\frac{1}{3},\frac{1}{9}), \quad 1\frac {2}{3}=(\frac{2}{3},\frac{1}{9}), \quad 9 = (0,\frac{1}{9}) \] and \[ \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix}.y = \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix},\quad \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix}. y = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}. y = \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix} \] That is, $y$ corresponds to the $3$-cycle $9 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 9$ and fixes the lattice $1$ so is a rotation around the axis through $1$.
Clearly, these two rotations generate the full rotation-symmetry group of the tetrahedron
\[ \Gamma_0(3|3)/\Gamma_0(9) \simeq A_4 \] which has a unique subgroup of index $3$, generated by the rotations with angle $180^{\circ}$ around the axes through the midpoints of edges, that is, by $x.y$ and $y.x$.
The moonshine group $(3|3)$ is therefore the subgroup generated by
\[ (3|3) = \langle \Gamma_0(9),\begin{bmatrix} 2 & \frac{1}{3} \\ 3 & 1 \end{bmatrix},\begin{bmatrix} 1 & \frac{1}{3} \\ 3 & 2 \end{bmatrix} \rangle \]
|
This question already has an answer here:
Prove the map has a fixed point (2 answers)
Let $(X,d)$ be a compact metric space and let $f : X \to X$ be a map such that $$d(f(x),f(y))<d(x,y)$$ for all $x, y \in X$ with $x\neq y$. Show that $f$ has a unique fixed point.
Assume there are two fixed points, say $x$ and $y$, with $x \neq y$.
As $f(x)=x$ and $f(y)=y$, we get $$d(x,y)=d(f(x),f(y))<d(x,y),$$ a contradiction, so there is at most one fixed point. For existence, note that the continuous function $x \mapsto d(x,f(x))$ attains its minimum on the compact space $X$; if the minimum were positive at some $x_0$, then $d(f(f(x_0)),f(x_0)) < d(f(x_0),x_0)$ would contradict minimality, so the minimum is $0$ and gives a fixed point.
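As a purely numerical aside (not part of the proof), one can watch such a map contract onto its fixed point on a concrete compact example; here $f(x)=\cos x$ on $[0,1]$, which satisfies the strict inequality since $|\sin x| < 1$ there:

```python
import math

# f(x) = cos(x) maps [0, 1] into itself and satisfies |f(x) - f(y)| < |x - y| for x != y,
# because |f'(x)| = |sin(x)| < 1 on [0, 1]; iterating converges to the unique fixed point.
x = 0.5
for _ in range(200):
    x = math.cos(x)
print(x)  # approximately 0.739085 (the Dottie number)
```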
|
Here we want to give an easy mathematical bootstrap argument why solutions to the time independent 1D Schrödinger equation (TISE) tend to be rather nice. First formally rewrite the differential form$$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \tag{1}$$into the int...
[Some time travel comments] Since the previous paragraph explained that travelling to the future (via the twin paradox) need not land you in the future that would have unfolded had you never time-travelled, why must the past you travel back to be the past you know from historical records?
@0ßelö7 Well, I'd omit the explanation of the notation on the slide itself, and since there seems to be two pairs of formulae, I'd just put one of the two and then say that there's another one with suitable substitutions.
I mean, "Hey, I bet you've always wondered how to prove X - here it is" is interesting. "Hey, you know that statement everyone knows how to prove but doesn't bother to write down? Here is the proof written down" significantly less so
Sorry I have a quick question: For questions like this physics.stackexchange.com/questions/356260/… where the accepted answer clearly does not answer the original question what is the best thing to do; downvote, flag or just leave it?
So this question says express $u^0$ in terms of $u^j$ where $u$ is the four-velocity and I get what $u^0$ and $u^j$ are but I'm a bit confused how to go about this one? I thought maybe using the space-time interval and evaluating for $\frac{dt}{d\tau}$ but it's not workin out for me... :/ Anyone give me a quickie starter please? :p
Although a physics question, this is still important to chemistry. The delocalized electric field is related to the force (and therefore the repulsive potential) between two electrons. This in turn is what we need to solve the Schrödinger equation to describe molecules. Short answer: you can calculate the expectation value of the corresponding operator, which comes close to the mentioned superposition. — Feodoran, 13 hours ago
If we take an electron that's delocalised w.r.t position, how can one evaluate the electric field over some space? Is it some superposition or a sort of field with all the charge at the expectation value of the position?
@0ßelö7 I just looked back at chat and noticed Phase's question, I wasn't purposefully ignoring you - do you want me to look over it? Because I don't think I'll gain much personally from reading the slides.
Maybe it's just me not having really done much with eigenbases, but I don't recognise where I "put it in terms of M's eigenbasis". I just wrote it down for some vector v, rather than for a space that contains all of the vectors v
Honey, I Shrunk the Kids is a 1989 American comic science fiction film. The directorial debut of Joe Johnston, produced by Walt Disney Pictures, it tells the story of an inventor who accidentally shrinks his and his neighbor's kids to a quarter of an inch with his electromagnetic shrinking machine; thrown out into the backyard with the trash, the kids must venture across the yard to return home while fending off insects and other obstacles. Rick Moranis stars as Wayne Szalinski, the inventor who accidentally shrinks his children, Amy (Amy O'Neill) and Nick (Robert Oliveri). Marcia...
|
This is a heuristic explanation of Witten's statement, without going into the subtleties of axiomatic quantum field theory issues, such as vacuum polarization or renormalization.
A particle is characterized by a definite momentum plus possibly other quantum numbers. Thus, one-particle states are by definition states with definite eigenvalues of the momentum operator; they can carry further quantum numbers. These states should exist even in an interacting field theory, describing a single particle away from any interaction. In a local quantum field theory, these states are associated with local field operators: $$| p, \sigma \rangle = \int e^{ipx} \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x$$ where $\psi$ is the field corresponding to the particle and $\sigma$ describes the set of quantum numbers additional to the momentum. A symmetry generator $Q$, being the integral of a charge density according to Noether's theorem, $$Q = \int j_0(x') d^3x'$$ should generate a local field when it acts on a local field: $[Q, \psi_1(x)] = \psi_2(x)$. (In the case of internal symmetries $\psi_2$ depends linearly on the components of $\psi_1(x)$; in the case of space-time symmetries it depends on the derivatives of the components of $\psi_1(x)$.)
Thus in general:
$$[Q, \psi_{\sigma}(x)] = \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}(x)$$
where the dependence of the coefficients $C_{\sigma\sigma'}$ on the momentum operator $\nabla$ allows for the possibility that $Q$ contains a space-time symmetry. Thus, for an operator $Q$ satisfying $Q|0\rangle = 0$, we have $$ Q | p, \sigma \rangle = \int e^{ipx} Q \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x = \int e^{ipx} [Q , \psi_{\sigma}^{\dagger}(x)] |0\rangle d^4x = \int e^{ipx} \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) \int e^{ipx} \psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) | p, \sigma' \rangle $$ Thus the action of the operator $Q$ gives a representation on the one-particle states. The fact that $Q$ commutes with the Hamiltonian is responsible for the energy degeneracy of its action, i.e., the states $| p, \sigma \rangle$ and $Q| p, \sigma \rangle$ have the same energy. This post imported from StackExchange Physics at 2015-06-16 14:50 (UTC), posted by SE-user David Bar Moshe
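As a concrete standard special case (an illustration added here, not part of the original answer): for an internal $U(1)$ symmetry under which the field $\psi_{\sigma}^{\dagger}$ carries charge $q$, the coefficient matrix is momentum-independent, $C_{\sigma\sigma'} = q\,\delta_{\sigma\sigma'}$, so the computation above collapses to

$$[Q, \psi_{\sigma}^{\dagger}(x)] = q\, \psi_{\sigma}^{\dagger}(x) \qquad \Longrightarrow \qquad Q\,| p, \sigma \rangle = q\,| p, \sigma \rangle,$$

i.e., every one-particle state created by the field is a charge eigenstate with the same eigenvalue $q$ (the sign convention for $q$ varies between texts).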
|
Diagonal Sums of Doubly Substochastic Matrices

Keywords: Doubly substochastic matrices, Sub-defect, Maximum diagonal sum
Abstract
Let $\Omega_n$ denote the convex polytope of all $n\times n$ doubly stochastic matrices, and $\omega_{n}$ the convex polytope of all $n\times n$ doubly substochastic matrices. For a matrix $A\in\omega_n$, define the sub-defect of $A$ to be the smallest integer $k$ such that there exists an $(n+k)\times(n+k)$ doubly stochastic matrix containing $A$ as a submatrix. Let $\omega_{n,k}$ denote the subset of $\omega_n$ consisting of all doubly substochastic matrices with sub-defect $k$. For $\pi$ a permutation in the symmetric group of degree $n$, the sequence of elements $a_{1\pi(1)},a_{2\pi(2)}, \ldots, a_{n\pi(n)}$ is called the diagonal of $A$ corresponding to $\pi$. Let $h(A)$ and $l(A)$ denote the maximum and minimum diagonal sums of $A\in \omega_{n,k}$, respectively. In this paper, existing results on the $h$ and $l$ functions are extended from $\Omega_n$ to $\omega_{n,k}$. In addition, an analogue of Sylvester's law for the $h$ function on $\omega_{n,k}$ is proved.
Recommended Citation
Cao, Lei; Chen, Zhi; Duan, Xuefeng; Koyuncu, Selcuk; and Li, Huilan (2019), "Diagonal Sums of Doubly Substochastic Matrices", Electronic Journal of Linear Algebra, Volume 35, pp. 42-52. DOI: https://doi.org/10.13001/1081-3810.3760
|
Mr Yannick Stoll (LAPTh)
18/03/2015, 19:30
Theory
YSF (Young Scientists Forum)
We investigate the low-energy consequences of a new MSSM-SU(5)-induced symmetry relation in the up-squark sector. We show that this relation is not strongly spoiled by the RGE running down to the electroweak scale and remains relatively model-independent. Therefore, it could bring us information on the possibility that an SU(5) symmetry holds at high energies, assuming that the LHC will...
Mr Tom Cornelis (Universiteit Antwerpen)
18/03/2015, 19:37
Experiment
YSF (Young Scientists Forum)
The measurement of the electroweak production cross section of a Z boson in association with two jets in proton-proton collisions by the CMS experiment is presented. The cross section is measured in dielectron and dimuon final states, and the measurement, combining different methods and channels, is in agreement with the theory prediction. The hadronic activity in events with Z-boson production...
Jory Sonneveld (RWTH Aachen)
18/03/2015, 19:44
Theory
YSF (Young Scientists Forum)
With new results and limits on constrained models of supersymmetry (SUSY) from the ATLAS and CMS collaborations at the LHC, questions arise about what these limits imply for more general models of SUSY or other models for physics beyond the Standard Model. Since SUSY has a vast array of parameters, both collaborations also quantify their search results in terms of simplified models, augmenting...
55. Search for new light gauge bosons in Higgs boson decays to four-lepton events in pp collisions at √s = 8 TeV with the ATLAS detector
Dr Daniela Paredes Hernández (Aristotle University of Thessaloniki)
18/03/2015, 19:51
Experiment
YSF (Young Scientists Forum)
Some models beyond the Standard Model (BSM) suggest that the Higgs boson, discovered at the LHC during Run I, can be used as a portal to look for new physics. These models predict new sectors coupled to the SM whose presence can be inferred by observing SM final states. This opens the possibility of processes such as Higgs decays to dark vector bosons in four-lepton events, $H\rightarrow...
Chris Malena Delitzsch (Université de Genève)
18/03/2015, 19:58
Experiment
YSF (Young Scientists Forum)
The high center-of-mass energy of the $pp$ collisions at the LHC enables searches for new particles with masses at the TeV scale. These heavy resonances can decay to final states with high-$p_{\rm T}$ $W$- and $Z$-bosons. The hadronic decay modes of these bosons are of special interest because of the potential increase in sensitivity for measurements and searches. However, the cross-section of...
52. Measurement of the $\phi^*_{\eta}$ distribution of muon pairs with masses between 30 and 500 GeV in 10.4 fb$^{-1}$ of $p\bar{p}$ collisions
Mr Xingguo Li (University of Manchester)
18/03/2015, 20:05
Experiment
YSF (Young Scientists Forum)
We present a measurement of the distribution of the variable $\phi^*_{\eta}$ for muon pairs with masses between 30 and 500 GeV, using the complete Run II data set collected by the D0 detector at the Fermilab Tevatron proton-antiproton collider. This corresponds to an integrated luminosity of 10.4 fb$^{-1}$ at $\sqrt{s} = 1.96$ TeV. The data are corrected for detector effects and presented in bins of...
Mrs Helena Kolesova (Czech Technical University in Prague)
18/03/2015, 20:12
Theory
YSF (Young Scientists Forum)
Since the main experimentally testable prediction of grand unified theories is the instability of the proton, a precise determination of the proton lifetime for each particular model is desirable. Unfortunately, the corresponding computation usually involves theoretical uncertainties coming e.g. from ignorance of the mass spectrum or from Planck-suppressed higher-dimensional operators, which...
|
By "measure of correlation", in this context, we mean the mutual information $$I(X:Y)=H(X)+H(Y)-H(X,Y)=H(X)-H(X|Y).$$
Let us consider a few different classical scenarios and try to work out how this quantity should be computed for quantum states.
Full correlation
In the case in which observing $Y$ gives full information about $X$, we have $H(X|Y)=0$ and therefore the mutual information is maximal and equal to the total amount of information contained in $X$: $I(X:Y)=H(X)$. You can understand this as saying that, in this scenario, knowing $Y$ is the same as knowing $X$.
What is a quantum state which gives you the same effect? An easy example could be something like $$\newcommand{\ket}[1]{\lvert #1\rangle}\newcommand{\proj}[1]{\mathbb P(\,#1\,)}\newcommand{\ketbra}[1]{\lvert #1\rangle\!\langle #1\rvert}\rho=\sum_i p_i \ketbra{i,i}.$$ As is readily verified, for such a state you have $\rho^A=\sum_i p_i\ketbra i$ and thus $S(\rho^A)=H(\boldsymbol p)$, and moreover,
if $B$ measures in the computational basis, knowing the result of a measurement of $B$ fully determines the upcoming results of measuring $A$, so that $S(\rho^A|\rho^B)=0$ (again, in this choice of measurement basis).
Is this consistent with your definition of classical information via $J_{AB}(\rho)$? Yes it is, because with our choice of measurement we minimised the second term $\sum_i p_i S(\rho_i^A)$, by trivially having $S(\rho_i^A)=0$.
Finally, you might notice that taking the corresponding pure state $$\ket\psi=\sum_i \sqrt{p_i}\ket{i,i},$$ we get exactly the same results. Now this might seem strange, as the correlations given by $\rho$ and $\ket\psi$ are clearly very different, but it is consistent with the fact that $J_{AB}$ only measures the classical correlations that these states can provide.

More general scenario
Let us now consider a more general scenario in which $H(X|Y)\neq 0$. This means that knowing $Y$ is not enough to fully determine $X$, and thus $I(X:Y)<H(X)$.
A class of states that reproduces this type of correlation is for example $$\rho=\sum_i p_i \rho_i^A\otimes \ketbra i.$$ Again, $S(\rho^A)=H(\boldsymbol p)$, and $S(\rho^A|\rho^B)=\sum_i p_i S(\rho_i^A)$ can take any value depending on the choice of $\rho_i^A$.
Now, however, another problem arises: does this choice of measurement on $B$ maximise the mutual information between the observations? This is not obvious, as there could be another choice of measurement which makes $A$'s state collapse to a pure state, thus achieving larger correlations. Because we are looking for the maximum amount of classical correlation that can be obtained using the given state, it makes sense to define $J$ via the maximisation.

Sure, but why classical correlations?
Because there is nothing quantum about the mutual information measured this way.
What $J$ quantifies is the amount of correlation between the measurements results of $A$ and $B$,
for a fixed measurement choice of $B$. This is the amount of correlation that can be used to implement a channel between $A$ and $B$ and transmit (classical) information.
If one restricts to this sort of scenario, there is no quantumness to be observed. Sure, we talked about "collapse" when discussing the results of measurements, but this sort of collapse is no different than the "collapse" of the probability distribution over $X$ induced by knowing that $Y=y$.
Saying it in yet another way, all correlations observable in such a scenario can be explained via local hidden variable models, by simply assuming the two parties to share some appropriate amount of (classical) correlation beforehand.
Quantum correlations, on the other hand, arise from the inability to explain the observations by simply assuming pre-shared correlations between the parties, and this can only be observed when different measurement bases are used, as otherwise it is not possible to observe the failure of classical models. The parameter $J_{AB}$ does not take this into account, and therefore cannot detect the "quantumness" of a given state.
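For the fully correlated state discussed above, these statements are easy to verify numerically. A minimal sketch (the two-outcome distribution and function names are illustrative choices, not from the answer itself): it builds $\rho=\sum_i p_i \lvert i,i\rangle\langle i,i\rvert$ for two qubits and checks that the quantum mutual information equals $H(\boldsymbol p)$.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy in bits of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerically-zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

# Classically correlated two-qubit state rho = sum_i p_i |i,i><i,i|
p = np.array([0.5, 0.5])
d = 2
rho = np.zeros((d * d, d * d))
for i, pi in enumerate(p):
    ket = np.zeros(d * d)
    ket[i * d + i] = 1.0                  # the basis vector |i,i>
    rho += pi * np.outer(ket, ket)

# Reduced states via partial trace (indices ordered as A, B, A', B')
rho4 = rho.reshape(d, d, d, d)
rho_A = np.trace(rho4, axis1=1, axis2=3)
rho_B = np.trace(rho4, axis1=0, axis2=2)

# Quantum mutual information I(A:B) = S(rho_A) + S(rho_B) - S(rho_AB)
I = (von_neumann_entropy(rho_A) + von_neumann_entropy(rho_B)
     - von_neumann_entropy(rho))
print(I)  # 1.0 bit, i.e. H(p) for p = (1/2, 1/2)
```

Replacing `rho` by the pure state $\ket\psi=\sum_i\sqrt{p_i}\ket{i,i}$ would instead give $I = 2H(\boldsymbol p)$, illustrating why $J_{AB}$, not $I$, isolates the classical part.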
|
Now, I kinda wonder: if I am going to prove to someone that 1 + 2 + ... + 100 = 5050, how can I do that? Here is one way to do it.
First, line up two copies of the sum, one running from 1 to 100 and the other from 100 down to 1.
  1 +   2 +   3 + ... + 100
100 +  99 +  98 + ... +   1
---------------------------
101 + 101 + 101 + ... + 101 = 101 * 100
Then sum each column. As you can see, the sum in each column equals 101, and there are going to be 100 columns, so the last line, which is the sum of the two lines above it, totals 101 * 100. Because we started with two copies of the sum, \(1 + 2 + 3 + ... + 100\) = \(101 \cdot 100 / 2 = 5050 \).
This way, we can also show that, for any given integer \(n \ge 1\),
$$ \displaystyle\sum_{i=1}^{n} i = 1 + 2 + ... + n = \frac{n(n+1)}{2}$$
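The formula is easy to sanity-check numerically; a quick sketch in Python (the helper names here are just illustrative):

```python
# Sanity-check the closed form n*(n+1)/2 against the direct sum
# 1 + 2 + ... + n for a few values of n.
def direct_sum(n):
    return sum(range(1, n + 1))

def closed_form(n):
    return n * (n + 1) // 2

for n in (1, 2, 10, 100, 1000):
    assert direct_sum(n) == closed_form(n)

print(direct_sum(100))  # 5050
```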
Nice! It probably requires a genius to come up with a proof like this, don't you think?
Another way is to use
mathematical induction to prove this statement. It's more systematic, and can be generalized to prove more complex statements. A proof by mathematical induction has two parts: the basis step and the induction step. Basically, we have to show that the basis is true, then show that if the statement holds for any given number, it also holds for the next one.
To prove this by induction, we start with the basis case: \(n = 1\)
\[ \displaystyle\sum_{i=1}^{1} i = 1 = \frac{1\cdot (1+1)}{2} \]
Then, we assume the statement is true for any given number \(k\),
\( \displaystyle\sum_{i=1}^{k} i = \frac{ k (k + 1) }{2} \)
we have to show that the statement is also true for \(k + 1\), and here it is:
\begin{align*}
\displaystyle\sum_{i=1}^{k + 1} i
&= \displaystyle\sum_{i=1}^{k } i + (k + 1)
\\ &= \frac{k (k+1)}{2} + (k + 1)
\\ &= \frac{k (k+1) + 2 (k + 1)}{2}
\\ &= \frac{(k + 1) ((k + 1) + 1)}{2}
\end{align*}
Compared to the first method, mathematical induction seems more complicated. However, this method can be extended to prove more complex statements about other well-founded structures, such as graphs and trees.
Next time we shall see how we can use mathematical induction on other, more complex problems.
|
Iiris Sundin, Peter Schulam, Eero Siivola, Aki Vehtari, Suchi Saria, Samuel Kaski;
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6046-6055, 2019.
Abstract
Machine learning can help personalized decision support by learning models to predict individual treatment effects (ITE). This work studies the reliability of prediction-based decision-making in a task of deciding which action $a$ to take for a target unit after observing its covariates $\tilde{x}$ and predicted outcomes $\hat{p}(\tilde{y} \mid \tilde{x}, a)$. An example case is personalized medicine and the decision of which treatment to give to a patient. A common problem when learning these models from observational data is imbalance, that is, difference in treated/control covariate distributions, which is known to increase the upper bound of the expected ITE estimation error. We propose to assess the decision-making reliability by estimating the ITE model’s Type S error rate, which is the probability of the model inferring the sign of the treatment effect wrong. Furthermore, we use the estimated reliability as a criterion for active learning, in order to collect new (possibly expensive) observations, instead of making a forced choice based on unreliable predictions. We demonstrate the effectiveness of this decision-making aware active learning in two decision-making tasks: in simulated data with binary outcomes and in a medical dataset with synthetic and continuous treatment outcomes.
@InProceedings{pmlr-v97-sundin19a,
  title     = {Active Learning for Decision-Making from Imbalanced Observational Data},
  author    = {Sundin, Iiris and Schulam, Peter and Siivola, Eero and Vehtari, Aki and Saria, Suchi and Kaski, Samuel},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6046--6055},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  address   = {Long Beach, California, USA},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/sundin19a/sundin19a.pdf},
  url       = {http://proceedings.mlr.press/v97/sundin19a.html}
}
Sundin, I., Schulam, P., Siivola, E., Vehtari, A., Saria, S. & Kaski, S.. (2019). Active Learning for Decision-Making from Imbalanced Observational Data. Proceedings of the 36th International Conference on Machine Learning, in PMLR 97:6046-6055
|
The integral symbol

History

The notation was introduced by the German mathematician Gottfried Wilhelm Leibniz in 1675 in his private writings;[1][2] it first appeared publicly in the article "De Geometria Recondita et analysi indivisibilium atque infinitorum" (On a hidden geometry and analysis of indivisibles and infinites), published in Acta Eruditorum in June 1686.[3][4] The symbol was based on the ſ (long s) character and was chosen because Leibniz thought of the integral as an infinite sum of infinitesimal summands.

Typography in Unicode and LaTeX

Fundamental symbol
The original IBM PC code page 437 character set included a couple of characters ⌠ and ⌡ (codes 244 and 245 respectively) to build the integral symbol. These were deprecated in subsequent MS-DOS code pages, but they still remain in Unicode (U+2320 and U+2321 respectively) for compatibility.
Extensions of the symbol

Meaning | Symbol | Unicode | LaTeX
Double integral | ∬ | U+222C | \iint
Triple integral | ∭ | U+222D | \iiint
Quadruple integral | ⨌ | U+2A0C | \iiiint
Contour integral | ∮ | U+222E | \oint
Clockwise integral | ∱ | U+2231 |
Counterclockwise integral | ⨑ | U+2A11 |
Clockwise contour integral | ∲ | U+2232 | \varointclockwise
Counterclockwise contour integral | ∳ | U+2233 | \ointctrclockwise
Closed surface integral | ∯ | U+222F | \oiint
Closed volume integral | ∰ | U+2230 | \oiiint
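A minimal LaTeX sketch using a few of these commands (the plain multiple-integral variants come with amsmath; the circled variants such as \oiint and \varointclockwise typically require an extra package — esint is used here as one common choice, an assumption rather than a requirement):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{esint} % provides \oiint, \varointclockwise, \ointctrclockwise
\begin{document}
% Double integral over a region D
\[ \iint_D f(x,y)\,dx\,dy \]
% Contour integral around a closed curve C
\[ \oint_C \mathbf{F}\cdot d\mathbf{r} \]
% Closed surface integral (Gauss's-law style)
\[ \oiint_S \mathbf{E}\cdot d\mathbf{A} \]
\end{document}
```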
Typography in other languages[edit]
In other languages, the shape of the integral symbol differs slightly from the shape commonly seen in English-language textbooks. While the English integral symbol leans to the right, the German symbol (used throughout Central Europe) is upright, and the Russian variant leans slightly to the left to occupy less horizontal space.
By contrast, in German and Russian texts, the limits are placed above and below the integral symbol, and, as a result, the notation requires larger line spacing, but is more compact horizontally, especially when longer expressions are used in the limits:
Notes

1. Gottfried Wilhelm Leibniz, Sämtliche Schriften und Briefe, Reihe VII: Mathematische Schriften, vol. 5: Infinitesimalmathematik 1674–1676, Berlin: Akademie Verlag, 2008, pp. 288–295 ("Analyseos tetragonisticae pars secunda", October 29, 1675) and 321–331 ("Methodi tangentium inversae exempla", November 11, 1675).
2. Aldrich, John. "Earliest Uses of Symbols of Calculus". Retrieved 20 April 2017.
3. Swetz, Frank J., Mathematical Treasure: Leibniz's Papers on Calculus – Integral Calculus, Convergence, Mathematical Association of America, retrieved February 11, 2017.
4. Stillwell, John (1989). Mathematics and its History. Springer. p. 110.
5. "Mathematical Operators – Unicode" (PDF). Retrieved 2013-04-26.
6. "Supplemental Mathematical Operators – Unicode" (PDF). Retrieved 2013-05-05.

References

Stewart, James (2003). "Integrals". Single Variable Calculus: Early Transcendentals (5th ed.). Belmont, CA: Brooks/Cole. p. 381. ISBN 0-534-39330-6.
Zaitcev, V.; Janishewsky, A.; Berdnikov, A. (1999), "Russian Typographical Traditions in Mathematical Literature" (PDF), EuroTeX'99 Proceedings.
|
The alternating group $A_5 $ has two conjugacy classes of order 5 elements, both consisting of exactly 12 elements. Fix one of these conjugacy classes, say $C $, and construct a graph with vertices the 12 elements of $C $ and an edge between two elements $u,v \in C $ if and only if the group product $u.v $ still belongs to the same conjugacy class $C $.
Observe that this relation is symmetric as from $u.v = w \in C $ it follows that $v.u=u^{-1}.u.v.u = u^{-1}.w.u \in C $. The graph obtained is the icosahedron, depicted on the right with vertices written as words in two adjacent elements u and v from $C $, as indicated.
Kostant writes : “Normally it is not a common practice in group theory to consider whether or not the product of two elements in a conjugacy class is again an element in that conjugacy class. However such a consideration here turns out to be quite productive.”
Still, similar constructions have been used in other groups as well, in particular in the study of the largest sporadic group, the monster group $\mathbb{M} $.
There is one important catch. Whereas it is quite trivial to multiply two permutations and verify whether the result is among 12 given ones, for most of us mortals it is impossible to do actual calculations in the monster. So, we’d better have an alternative way to get at the icosahedral graph using only $A_5 $-data that is also available for the monster group, such as its character table.
Let $G $ be any finite group and consider three of its conjugacy classes $C(i),C(j) $ and $C(k) $. For any element $w \in C(k) $ we can compute from the character table of $G $ the number of different products $u.v = w $ such that $u \in C(i) $ and $v \in C(j) $. This number is given by the formula
$\frac{|G|}{|C_G(g_i)||C_G(g_j)|} \sum_{\chi} \frac{\chi(g_i) \chi(g_j) \overline{\chi(g_k)}}{\chi(1)} $
where the sum is taken over all irreducible characters $\chi $ and where $g_i \in C(i),g_j \in C(j) $ and $g_k \in C(k) $. Note also that $|C_G(g)| $ is the number of $G $-elements commuting with $g $ and that this number is the order of $G $ divided by the number of elements in the conjugacy class of $g $.
The character table of $A_5 $ is given on the left : the five columns correspond to the different conjugacy classes of elements of order resp. 1,2,3,5 and 5 and the rows are the character functions of the 5 irreducible representations of dimensions 1,3,3,4 and 5.
Let us fix the 4th conjugacy class, that is 5a, as our class $C $. By the general formula, for a fixed $w \in C $ the number of different products $u.v=w $ with $u,v \in C $ is equal to
$\frac{60}{25}\left(\frac{1}{1} + \frac{(\frac{1+\sqrt{5}}{2})^3}{3} + \frac{(\frac{1-\sqrt{5}}{2})^3}{3} - \frac{1}{4} + \frac{0}{5}\right) = \frac{60}{25}\left(1 + \frac{4}{3} - \frac{1}{4}\right) = 5 $
Because for each $x \in C $ also its inverse $x^{-1} \in C $, this can be rephrased by saying that there are exactly 5 different products $w^{-1}.u \in C $, or equivalently, that the valency of every vertex $w^{-1} \in C $ in the graph is exactly 5.
That is, our graph has 12 vertices, each with exactly 5 neighbors, and with a bit of extra work one can show it to be the icosahedral graph.
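The valency computation above is easy to check numerically by plugging the $A_5 $ character values from the table into the product formula. A quick sketch in Python (independent of GAP; the variable names are just illustrative):

```python
import math

# Character values of the five irreducible representations of A5
# (of dimensions 1, 3, 3, 4, 5) on a fixed element of class 5a:
#   1, (1+sqrt 5)/2, (1-sqrt 5)/2, -1, 0.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2
chi_5a = [1.0, phi, psi, -1.0, 0.0]
dims = [1, 3, 3, 4, 5]

order_G = 60        # |A5|
centralizer = 5     # |C_G(g)| for g in 5a: 60 / 12

# Class multiplication coefficient with C(i) = C(j) = C(k) = 5a
# (the characters are real here, so complex conjugation is a no-op):
coeff = order_G / centralizer**2 * sum(c**3 / d for c, d in zip(chi_5a, dims))
print(round(coeff))  # 5: the valency of the icosahedral graph
```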
For the monster group, the Atlas tells us that it has exactly 194 irreducible representations (and hence also 194 conjugacy classes). Of these conjugacy classes, the involutions (that is the elements of order 2) are of particular importance.
There are exactly 2 conjugacy classes of involutions, usually denoted 2A and 2B. Involutions in class 2A are called “Fischer-involutions”, after Bernd Fischer, because their centralizer subgroup is an extension of Fischer’s baby Monster sporadic group.
Likewise, involutions in class 2B are usually called “Conway-involutions” because their centralizer subgroup is an extension of the largest Conway sporadic group.
Let us define the
monster graph to be the graph having as its vertices the Fischer-involutions and with an edge between two of them $u,v \in 2A $ if and only if their product $u.v $ is again a Fischer-involution.
Because the centralizer subgroup is $2.\mathbb{B} $, the number of vertices is equal to $97239461142009186000 = 2^4 * 3^7 * 5^3 * 7^4 * 11 * 13^2 * 29 * 41 * 59 * 71 $.
From the general result recalled before we have that the valency in all vertices is equal and to determine it we have to use the character table of the monster and the formula. Fortunately GAP provides the function ClassMultiplicationCoefficient to do this without making errors.
gap> table:=CharacterTable("M");
CharacterTable( "M" )
gap> ClassMultiplicationCoefficient(table,2,2,2);
27143910000
Perhaps noticeable is the fact that the prime decomposition of the valency $27143910000 = 2^4 * 3^4 * 5^4 * 23 * 31 * 47 $ is symmetric in the three smallest and three largest prime factors of the baby monster order.
Robert Griess proved that one can recover the monster group $\mathbb{M} $ from the monster graph as its automorphism group!
As in the case of the icosahedral graph, the number of vertices and their common valency does not determine the monster graph uniquely. To gain more insight, we would like to know more about the sizes of minimal circuits in the graph, the number of such minimal circuits going through a fixed vertex, and so on.
Such an investigation quickly leads to a careful analysis which other elements can be obtained from products $u.v $ of two Fischer involutions $u,v \in 2A $. We are in for a major surprise, first observed by John McKay:
Printing out the number of products of two Fischer-involutions giving an element in the i-th conjugacy class of the monster,
where i runs over all 194 possible classes, we get the following string of numbers : 97239461142009186000, 27143910000, 196560, 920808, 0, 3, 1104, 4, 0, 0, 5, 0, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
That is, the elements of only 9 conjugacy classes can be written as products of two Fischer-involutions! These classes are :
1A = { 1 }, written in 97239461142009186000 different ways (after all, involutions have order two)
2A, each element of which can be written in exactly 27143910000 different ways (the valency)
2B, each element of which can be written in exactly 196560 different ways. Observe that this is the kissing number of the Leech lattice, leading to a permutation representation of $2.Co_1 $.
3A, each element of which can be written in exactly 920808 ways. Note that this number gives a permutation representation of the maximal monster subgroup $3.Fi_{24}' $.
3C, each element of which can be written in exactly 3 ways.
4A, each element of which can be written in exactly 1104 ways.
4B, each element of which can be written in exactly 4 ways.
5A, each element of which can be written in exactly 5 ways.
6A, each element of which can be written in exactly 6 ways.
Let us forget about the actual numbers for the moment and concentrate on the orders of these 9 conjugacy classes : 1,2,2,3,3,4,4,5,6. These are precisely the components of the fundamental root of the extended Dynkin diagram $\tilde{E_8} $!
This is the content of
John McKay’s E(8)-observation : there should be a precise relation between the nodes of the extended Dynkin diagram and these 9 conjugacy classes in such a way that the order of the class corresponds to the component of the fundamental root. More precisely, one conjectures the following correspondence
This is similar to the classical McKay correspondence between finite subgroups of $SU(2) $ and extended Dynkin diagrams (the binary icosahedral group corresponding to extended E(8)). In that correspondence, the nodes of the Dynkin diagram correspond to irreducible representations of the group and the edges are determined by the decompositions of tensor-products with the fundamental 2-dimensional representation.
Here, however, the nodes have to correspond to conjugacy classes (rather than representations) and we have to look for another procedure to arrive at the required edges! An exciting proposal has been put forward recently by John Duncan in his paper Arithmetic groups and the affine E8 Dynkin diagram.
It will take us a couple of posts to get there, but for now, let’s give the gist of it : monstrous moonshine gives a correspondence between conjugacy classes of the monster and certain arithmetic subgroups of $PSL_2(\mathbb{R}) $ commensurable with the modular group $\Gamma = PSL_2(\mathbb{Z}) $. The edges of the extended Dynkin E(8) diagram are then given by the configuration of the arithmetic groups corresponding to the indicated 9 conjugacy classes! (to be continued…)
|
Perturbation theory is a method for continuously improving a previously obtained approximate solution to a problem, and it is an important and general method for finding approximate solutions to the Schrödinger equation. We discussed a simple application of the perturbation technique previously with the Zeeman effect.
We use perturbation theory to approach the analytically unsolvable helium atom Schrödinger equation by focusing on the Coulomb repulsion term that makes it different from the simplified Schrödinger equation that we have just solved analytically. The electron-electron repulsion term is conceptualized as a correction, or perturbation, to a Hamiltonian that can be solved exactly, which is called the zero-order Hamiltonian. The perturbation term corrects the zero-order Hamiltonian to make it fit the new problem. In this way the Hamiltonian is built as a sum of terms, and each term is given a name. For example, we call the simplified or starting Hamiltonian, \(\hat {H} ^0\), the zero-order term, and the correction term \(\hat {H} ^1\), the first-order term. In the general expression below, there can be an infinite number of correction terms of increasingly higher order,
\[ \hat {H} = \hat {H} ^0 + \hat {H} ^1 + \hat {H} ^2 + \cdots \label {9-17}\]
but usually it is not necessary to have more terms than \(\hat {H} ^0\) and \(\hat {H} ^1\). For the helium atom,
\[\hat {H} ^0 = -\frac {\hbar ^2}{2m} \nabla ^2_1 - \frac {2e^2}{4 \pi \epsilon _0 r_1} - \frac {\hbar ^2}{2m} \nabla ^2_2 - \frac {2e^2}{4 \pi \epsilon _0 r_2} \label {9-18}\]
\[\hat {H} ^1 = \frac {e^2}{4 \pi \epsilon _0 r_{12}} \label {9-19} \]
In the general form of perturbation theory, the wavefunctions are also built as a sum of terms, with the zero-order terms denoting the exact solutions to the zero-order Hamiltonian and the higher-order terms being the corrections.
\[\psi = \psi^0 + \psi ^1 + \psi ^2 + \cdots \label {9-20}\]
Similarly, the energy is written as a sum of terms of increasing order.
\[E = E^0 + E^1 + E^2 + \cdots \label {9-21}\]
To solve a problem using perturbation theory, you start by solving the zero-order equation. This provides an approximate solution consisting of \(E^0\) and \(\psi ^0\). The zero-order perturbation equation for the helium atom is
\[ \hat {H}^0 \psi ^0 = E^0 \psi ^0 \label {9-22}\]
We already solved this equation for the helium atom and found that \(E^0 = -108.8\) eV by using the product of two hydrogen atom wavefunctions for \(\psi ^0\) and omitting the electron-electron interaction from \(\hat {H} ^0\).
The next step is to improve upon the zero-order solution by including \(\hat {H}^1 , \hat {H} ^2\), etc. and finding \(\psi ^1\) and \(E^1\), \(\psi ^2\) and \(E^2\), etc. The solution is improved through the stepwise addition of other functions to the previously found result. These functions are found by solving a series of Schrödinger-like equations, the higher-order perturbation equations.
The first-order perturbation equation includes all the terms in the Schrödinger equation \(\hat {H} \psi = E \psi \) that represent the first order approximations to \(\hat {H} , \psi\) and E. This equation can be obtained by truncating \(\hat {H} , \psi\) and E after the first order terms.
\[ ( \hat {H} ^0 + \hat {H}^1 ) (\psi ^0 + \psi ^1 ) = (E^0 + E^1) (\psi ^0 + \psi ^1 ) \label {9-23}\]
Now clear the parentheses to get
\[\hat {H} ^0 \psi ^0 + \hat {H} ^0 \psi ^1 + \hat {H} ^1 \psi ^0 + \hat {H} ^1 \psi ^1 = E^0 \psi ^0 + E^0 \psi ^1 + E^1 \psi ^0 + E ^1 \psi ^1 \label {9-24}\]
The order of the perturbation equation matches the sum of the superscripts for a given term in the equation above. To form the first-order perturbation equation, we can drop the \(\hat {H} ^0 \psi ^0 \) and \(E^0 \psi ^{0}\) terms because they are zero-order terms and because they cancel each other out, as shown by Equation \(\ref{9-22}\). We can also drop the \(\hat {H}^1 \psi ^1\) and \(E ^1 \psi ^1\) terms because they are second-order corrections formed by a product of two first-order corrections. The first-order perturbation equation thus is
\[\hat {H} ^0 \psi ^1 + \hat {H} ^1 \psi ^0 = E^0 \psi ^1 + E^1 \psi ^0 \label {9-25}\]
To find the first-order correction to the energy, take the first-order perturbation equation, multiply from the left by \(\psi ^{0*}\), and integrate over all the coordinates of the problem at hand.
\[\int \psi ^{0*} \hat {H} ^0 \psi ^1 d\tau + \int \psi ^{0*} \hat {H} ^1 \psi ^0 d\tau = E^0 \int \psi ^{0*} \psi ^1 d\tau + E^1\int \psi ^{0*} \psi ^0 d\tau \label {9-26} \]
The integral in the last term on the right hand side of Equation \(\ref{9-26}\) is equal to one because the wavefunctions are normalized. Because \(\hat {H} ^0\) is Hermitian, the first integral in Equation \(\ref{9-26}\) can be rewritten to make use of Equation \(\ref{9-22}\),
\[ \int \psi ^{0*} \hat {H} ^0 \psi ^1 d\tau = \int (\hat {H} ^{0} \psi ^{0} )^* \psi ^1 d\tau = E^0 \int \psi ^{0*} \psi ^1 d\tau \label {9-27} \]
which is the same as and therefore cancels the first integral on the right-hand side. Thus we are left with an expression for the first-order correction to the energy
\[ E^1 = \int \psi ^{0*} \hat {H} ^1 \psi ^0 d\tau \label {9-28}\]
Since the derivation above was completely general, Equation \(\ref{9-28}\) is a general expression for the first-order perturbation energy, which provides an improvement or correction to the zero-order energy we already obtained. The integral on the right is in fact an expectation value integral in which the zero-order wavefunctions are operated on by \(\hat {H} ^1\), the first-order perturbation term in the Hamiltonian, to calculate the expectation value for the first-order energy. This derivation justifies, for example, the method we used for the Zeeman effect to approximate the energies of the hydrogen atom orbitals in a magnetic field. Recall that we calculated the expectation value for the interaction energy (the first-order correction to the energy) using the exact hydrogen atom wavefunctions (the zero-order wavefunctions) and a Hamiltonian operator representing the magnetic field perturbation (the first-order Hamiltonian term.)
Exercise 9.7
Without using mathematical expressions, explain how you would solve Equation \(\ref{9-28}\) for the first-order energy.
For the helium atom, the integral in Equation \(\ref{9-28}\) is
\[ E^1 = \int \int \varphi _{1s} (r_1) \varphi _{1s} (r_2) \frac {e^2}{4 \pi \epsilon _0 r_{12}} \varphi _{1s} (r_1) \varphi _{1s} (r_2) d\tau _1 d\tau _2 \label {9-29}\]
where the double integration symbol represents integration over all the spherical polar coordinates of both electrons \(r_1, \theta _1, \varphi _1 , r_2 , \theta _2 , \varphi _2\). The evaluation of these six integrals is lengthy. When the integrals are done, the result is \(E^1\) = +34.0 eV so that the total energy calculated using our second approximation method, first-order perturbation theory, is
\[ E_{approx2} = E^0 + E^1 = - 74.8 \,eV \label {9-30}\]
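The arithmetic behind these estimates can be sketched in a few lines. This assumes the standard hydrogenic closed forms (Rydberg energy Ry = 13.6 eV, zero-order energy \(E^0 = -2Z^2\,Ry\), and the known evaluated repulsion integral \(E^1 = \tfrac{5}{4}Z\,Ry\)), which are not derived in this section:

```python
# Zero- and first-order perturbation energies for helium (Z = 2), in eV.
Z, Ry = 2, 13.6                   # nuclear charge; Rydberg energy in eV

E0 = -2 * Z**2 * Ry               # two non-interacting 1s electrons
E1 = 5 * Z * Ry / 4               # evaluated six-fold repulsion integral
print(E0, E1, round(E0 + E1, 1))  # -108.8 34.0 -74.8
```

Reassuringly, the closed-form numbers reproduce the +34.0 eV correction and the -74.8 eV total quoted above.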
\(E^1\) is the average interaction energy of the two electrons calculated using wavefunctions that assume there is no interaction.

The new approximate value for the binding energy represents a substantial (~30%) improvement over the zero-order energy, so the interaction of the two electrons is an important part of the total energy of the helium atom. We can continue with perturbation theory and find the additional corrections, \(E^2\), \(E^3\), etc. For example, \(E^0 + E^1 + E^2 = -79.2\) eV. So with two corrections to the energy, the calculated result is within 0.3% of the experimental value of -79.00 eV. It takes thirteenth-order perturbation theory (adding \(E^1\) through \(E^{13}\) to \(E^0\)) to compute an energy for helium that agrees with experiment to within the experimental uncertainty.
Interestingly, while we have improved the calculated energy so that it is much closer to the experimental value, we learn nothing new about the helium atom wavefunction by applying the first-order perturbation theory because we are left with the original zero-order wavefunctions. In the next section we will employ an approximation that modifies zero-order wavefunctions in order to address one of the ways that electrons are expected to interact with each other.
Contributors: adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, and Theresa Julia Zielinski.
|
Angular resolution or spatial resolution describes the ability of any image-forming device, such as an optical or radio telescope, a microscope, a camera, or an eye, to distinguish small details of an object, thereby making it a major determinant of image resolution.
The term resolution or minimum resolvable distance is the minimum distance between distinguishable objects in an image, although the term is loosely used by many users of microscopes and telescopes to describe resolving power.
This standard for separation is also known as the Rayleigh criterion.
The angular resolution of a dish antenna is determined by the ratio of the diameter of the dish to the wavelength of the radio waves being observed.
It is analogous to angular resolution, but differs in definition: instead of separation ability between point-light sources it refers to the physical area that can be resolved.
In that case, the angular resolution of an optical system can be estimated (from the diameter of the aperture and the wavelength of the light) by the Rayleigh criterion defined by Lord Rayleigh: two point sources are regarded as just resolved when the principal diffraction maximum of one image coincides with the first minimum of the other.
In optics, Rayleigh proposed a well known criterion for angular resolution.
The factor in the Rayleigh criterion is more precisely 1.21966989..., the first zero of the order-one Bessel function of the first kind J_{1}(x) divided by π.
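A small numerical illustration of the criterion (a sketch; the 550 nm wavelength and 10 cm aperture are arbitrary example values, not from the text):

```python
# Diffraction-limited angular resolution via the Rayleigh criterion,
# theta ≈ 1.22 * lambda / D (small-angle approximation, result in radians).
import math

def rayleigh_resolution(wavelength_m, aperture_m):
    """Angular separation at which two point sources are just resolved."""
    return 1.22 * wavelength_m / aperture_m

theta = rayleigh_resolution(550e-9, 0.10)   # green light, 10 cm aperture
arcsec = math.degrees(theta) * 3600         # convert radians to arcseconds
print(f"{theta:.2e} rad ~ {arcsec:.2f} arcsec")
```

For these values the result is about 6.7 microradians, or roughly 1.4 arcseconds, a typical figure for a small amateur telescope.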
The imaging system's resolution can be limited either by aberration or by diffraction causing blurring of the image.
In object space, the corresponding angular resolution is
Light passing through the lens interferes with itself creating a ring-shape diffraction pattern, known as the Airy pattern, if the wavefront of the transmitted light is taken to be spherical or plane over the exit aperture.
The Rayleigh criterion for barely resolving two objects that are point sources of light, such as stars seen through a telescope, is that the center of the Airy disk for the first object occurs at the first minimum of the Airy disk of the second.
In order to perform aperture synthesis imaging, a large number of telescopes are required laid out in a 2-dimensional arrangement with a dimensional precision better than a fraction (0.25x) of the required image resolution.
Aperture synthesis or synthesis imaging is a type of interferometry that mixes signals from a collection of telescopes to produce images having the same angular resolution as an instrument the size of the entire collection.
The highest angular resolutions can be achieved by arrays of telescopes called astronomical interferometers: These instruments can achieve angular resolutions of 0.001 arcsecond at optical wavelengths, and much higher resolutions at x-ray wavelengths.
The advantage of this technique is that it can theoretically produce images with the angular resolution of a huge telescope with an aperture equal to the separation between the component telescopes.
A single optical telescope may have an angular resolution less than one arcsecond, but astronomical seeing and other atmospheric effects make attaining this very hard.
The FWHM of the point spread function (loosely called seeing disc diameter or "seeing") is the best possible angular resolution that can be achieved by an optical telescope in a long-exposure image, and corresponds to the FWHM of the fuzzy blob seen when observing a point-like source (such as a star) through the atmosphere.
The lens' circular aperture is analogous to a two-dimensional version of the single-slit experiment.
Diffraction is the fundamental limitation on the resolving power of optical instruments, such as telescopes (including radiotelescopes) and microscopes.
Here NA is the numerical aperture, \theta is half the included angle \alpha of the lens, which depends on the diameter of the lens and its focal length, n is the refractive index of the medium between the lens and the specimen, and \lambda is the wavelength of light illuminating or emanating from (in the case of fluorescence microscopy) the sample.
In microscopy, NA is important because it indicates the resolving power of a lens.
The diffraction-limited angular resolution of a telescopic instrument is proportional to the wavelength of the light being observed, and inversely proportional to the diameter of its objective's entrance aperture.
Dawes' limit is a formula to express the maximum resolving power of a microscope or telescope. The result, θ = 4.56/D, with D in inches and θ in arcseconds, is slightly narrower than the separation calculated with the Rayleigh criterion: a calculation using Airy discs as the point spread function shows that at Dawes' limit there is a 5% dip between the two maxima, whereas at Rayleigh's criterion there is a 26.3% dip.
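The two formulas can be compared numerically. This sketch assumes a visual wavelength of 550 nm for the Rayleigh side (Dawes' formula has no explicit wavelength) and apertures quoted in inches:

```python
# Compare Dawes' limit with the Rayleigh criterion, both in arcseconds.
RAD_TO_ARCSEC = 206264.806    # arcseconds per radian
INCH_M = 0.0254               # metres per inch

def dawes_arcsec(d_inches):
    return 4.56 / d_inches    # Dawes' empirical limit

def rayleigh_arcsec(d_inches, wavelength_m=550e-9):
    return 1.22 * wavelength_m / (d_inches * INCH_M) * RAD_TO_ARCSEC

for d in (4, 8, 16):          # example apertures, in inches
    print(d, round(dawes_arcsec(d), 2), round(rayleigh_arcsec(d), 2))
# Dawes' figure comes out consistently a bit tighter than Rayleigh's.
```

At 550 nm the Rayleigh criterion works out to roughly 5.45/D arcseconds for D in inches, so Dawes' 4.56/D is indeed "slightly narrower", as stated above.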
Techniques for exceeding this limit include optical near-fields (the near-field scanning optical microscope) and a diffraction technique called 4Pi STED microscopy.
The minimum resolution (d) for the optical component are thus limited by its aperture size, and expressed by the Rayleigh criterion:
For a microscope, that distance is close to the focal length f of the objective.
A telescope's light-gathering power and angular resolution are both directly related to the diameter (or "aperture") of its objective lens or mirror.
Sparrow's Resolution Limit is an estimate of the angular resolution limit of an optical instrument.
Visual acuity
At a distance of 1 km, the human eye can typically resolve details 30 to 60 cm apart. This gives an angular resolution of between 0.02 and 0.03 degrees, which is roughly 1.2–1.8 arc minutes per line pair and implies a pixel spacing of 0.6–0.9 arc minutes.
|
Li-Yorke chaos for dendrite maps with zero topological entropy and ω-limit sets
University of Carthage, Faculty of Sciences of Bizerte, Department of Mathematics, Jarzouna, 7021, Tunisia
Let X be a dendrite whose set of endpoints $E(X)$ is closed, and let $f:~X \to X$ be a continuous map with zero topological entropy. Let $P(f)$ be the set of periodic points of f and let L be an ω-limit set of f. We prove that if L is infinite, then $L\cap P(f)\subset E(X)^{\prime}$, where $E(X)^{\prime}$ is the set of all accumulation points of $E(X)$. Furthermore, if $E(X)$ is countable and L is uncountable, then $L\cap P(f)=\emptyset$. We also show that if $E(X)^{\prime}$ is finite and L is uncountable, then there is a sequence of subdendrites $(D_k)_{k ≥ 1}$ of X and a sequence of integers $n_k ≥ 2$ satisfying the following properties for all $k ≥ 1$:
1. $f^{α_k}(D_k)=D_k$, where $α_k=n_1 n_2 \dots n_k$;
2. $\cup_{k=0}^{n_j -1}f^{k α_{j-1}}(D_{j}) \subset D_{j-1}$ for all $j ≥ 2$;
3. $L \subset \cup_{i=0}^{α_k -1}f^{i}(D_k)$;
4. $f(L \cap f^{i}(D_k))=L\cap f^{i+1}(D_k)$ for any $0 ≤ i ≤ α_{k}-1$; in particular, $L \cap f^{i}(D_k) ≠ \emptyset$;
5. $f^{i}(D_k)\cap f^{j}(D_k)$ has empty interior for any $0 ≤ i ≠ j < α_k$.
As a consequence, if f has a Li-Yorke pair $(x,y)$ with $ω_f(x)$ or $ω_f(y)$ uncountable, then f is Li-Yorke chaotic.
Mathematics Subject Classification: Primary: 37B45; Secondary: 37B99.
Citation: Ghassen Askri. Li-Yorke chaos for dendrite maps with zero topological entropy and ω-limit sets. Discrete & Continuous Dynamical Systems - A, 2017, 37 (6) : 2957-2976. doi: 10.3934/dcds.2017127
|
For the better part of the 1930s, Ernst Witt (1) hung out with the rest of the ‘Noetherknaben’, the group of young mathematicians around Emmy Noether (3) in Göttingen.
In 1934 Witt became Helmut Hasse‘s assistant in Göttingen, where he qualified as a university lecturer in 1936. By 1938 he had made enough of a name for himself to be offered a lecturer position in Hamburg, and he soon became an associate professor, the down-graded position held by Emil Artin (2) until he was forced to emigrate in 1937.
A former fellow student of his in Göttingen, Erna Bannow (4), had gone earlier to Hamburg to work with Artin. She continued her studies with Witt and finished her Ph.D. in 1939. In 1940 Erna Bannow and Witt married.
So, life was smiling on Ernst Witt that Sunday, January 28th 1940, both professionally and personally. There was just one cloud on the horizon, and a rather menacing one: he had been called up by the Wehrmacht and knew he had to enter service in February. For all he knew, he was spending the last week-end with his future wife… (later, in February 1940, Blaschke helped him defer his military service by one year).
Still, he desperately wanted to finish his paper before entering the army, so he spent most of that week-end going through the final version and submitted it on Monday, as the published paper shows.
In the 1970s, Witt suddenly claimed he had discovered the Leech lattice $ {\Lambda} $ that Sunday. Last time we saw that the only written evidence for Witt’s claim is one sentence in his 1941 paper Eine Identität zwischen Modulformen zweiten Grades. “Bei dem Versuch, eine Form aus einer solchen Klassen wirklich anzugeben, fand ich mehr als 10 verschiedene Klassen in $ {\Gamma_{24}} $.” (In trying to actually exhibit a form from such a class, I found more than 10 different classes in $ {\Gamma_{24}} $.)
But then, why didn’t Witt include more details of this sensational lattice in his paper?
Ina Kersten recalls on page 328 of Witt’s collected papers : “In his colloquium talk “Gitter und Mathieu-Gruppen” in Hamburg on January 27, 1970, Witt said that in 1938, he had found nine lattices in $ {\Gamma_{24}} $ and that later on January 28, 1940, while studying the Steiner system $ {S(5,8,24)} $, he had found two additional lattices $ {M} $ and $ {\Lambda} $ in $ {\Gamma_{24}} $. He continued saying that he had then given up the tedious investigation of $ {\Gamma_{24}} $ because of the surprisingly low contribution
$ \displaystyle | Aut(\Lambda) |^{-1} < 10^{-18} $
to the Minkowski density and that he had contented himself with a short note on page 324 in his 1941 paper.”
In the last sentence he refers to the fact that the sum of the inverse orders of the automorphism groups of all even unimodular lattices of a given dimension is a fixed rational number, the Minkowski-Siegel mass constant. In dimension 24 this constant is
$ \displaystyle \sum_{L} \frac{1}{| Aut(L) |} = \frac {1027637932586061520960267}{129477933340026851560636148613120000000} \approx 7.937 \times 10^{-15} $
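Both the quoted value and Witt's "surprisingly low contribution" estimate can be checked with exact rational arithmetic. The sketch below assumes the standard value $|Aut(\Lambda)| = |Co_0| = 8315553613086720000$:

```python
from fractions import Fraction

# Minkowski-Siegel mass constant in dimension 24, exactly as quoted above.
mass = Fraction(1027637932586061520960267,
                129477933340026851560636148613120000000)
print(f"{float(mass):.3e}")                 # 7.937e-15

# Leech lattice contribution 1/|Aut(Lambda)| = 1/|Co_0|.
leech = Fraction(1, 8315553613086720000)
print(leech < Fraction(1, 10**18))          # True: below 10^-18, as Witt noted
```

So the Leech lattice accounts for only about one part in sixty thousand of the total mass, which makes Witt's stated disappointment at least arithmetically plausible.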
That is, Witt was disappointed by the low contribution of the Leech lattice to the total constant, concluded that there might be thousands of new even 24-dimensional unimodular lattices out there, and dropped the problem.
If true, the story gets even better : not only does Witt claim to have found the lattices $ {A_1^{24}=M} $ and $ {\Lambda} $, but also enough information on the Leech lattice to compute the order of its automorphism group $ {Aut(\Lambda)} $, aka the Conway group $ {Co_0 = .0} $, the dotto group!
Is this possible? Well, fortunately, the difficulty one encounters when trying to compute the order of the automorphism group of the Leech lattice from scratch is one of the better documented mathematical stories around.
The books From Error-Correcting Codes through Sphere Packings to Simple Groups by Thomas Thompson, Symmetry and the monster by Mark Ronan, and Finding moonshine by Marcus du Sautoy tell the story in minute detail.
It took John Conway 12 hours, on a Saturday in 1968 in Cambridge, to compute the order of the dotto group, using the knowledge of Leech and McKay on the properties of the Leech lattice and with considerable help offered by John Thompson via telephone.
But then, John Conway is one of the fastest mathematicians the world has known. The prologue of his book On numbers and games begins with : “Just over a quarter of a century ago, for seven consecutive days I sat down and typed from 8:30 am until midnight, with just an hour for lunch, and ever since have described this book as “having been written in a week”.”
Conway may have written a book in one week, Ernst Witt did complete his entire Ph.D. in just one week! In a letter of August 1933, his sister told her parents : “He did not have a thesis topic until July 1, and the thesis was to be submitted by July 7. He did not want to have a topic assigned to him, and when he finally had the idea, he started working day and night, and eventually managed to finish in time.”
So, if anyone might have beaten John Conway in fast-computing the dotto’s order, it may very well have been Witt. Sadly enough, there is a lot of circumstantial evidence making Witt’s claim highly unlikely.
For starters, psychology. Would you spend your last week-end together with your wife-to-be before going to war performing a horrendous calculation?
Secondly, mathematical breakthroughs often arise from newly found insight. At that time, Witt was also working on his paper on root lattices “Spiegelungsgruppen und Aufzählung halbeinfacher Liescher Ringe”, which he eventually submitted in January 1941. Contained in that paper is what we know as Witt’s lemma, which tells us that for any integral lattice the sublattice generated by vectors of norms 1 and 2 is a direct sum of root lattices.
This leads to the trick of trying to construct unimodular lattices by starting with a direct sum of root lattices and ‘adding glue’. Although this gluing-method was introduced by Kneser as late as 1967, Witt must have been aware of it as his 16-dimensional lattice $ {D_{16}^+} $ is constructed this way.
If Witt wanted to construct new 24-dimensional even unimodular lattices in 1940, it would be natural for him to start off with direct sums of root lattices and trying to add vectors to them until he got what he was after. Now, all of the Niemeier-lattices are constructed this way, except for the Leech lattice!
I’m far from an expert on the Niemeier lattices, but I would say that Witt definitely knew of the existence of $ {D_{24}^+} $, $ {E_8^3} $ and $ {A_{24}^+} $, and that it is quite likely he also constructed $ {(D_{16}E_8)^+, (D_{12}^2)^+, (A_{12}^2)^+, (D_8^3)^+} $ and possibly $ {(A_{17}E_7)^+} $ and $ {(A_{15}D_9)^+} $. I’d rate it far more likely that Witt constructed another two such lattices on Sunday, January 28th 1940 than that he discovered the Leech lattice.
Finally, wouldn’t it be natural for him to include a remark, in his 1941 paper on root lattices, that not every even unimodular lattices can be obtained from sums of root lattices by adding glue, the Leech lattice being the minimal counter-example?
If it is true that he was playing around with the Steiner systems that Sunday, it would still be a pretty good story if he discovered the lattices $ {(A_2^{12})^+} $ and $ {(A_1^{24})^+} $, for this would mean he discovered the Golay codes in the process!
Which brings us to our next question : who discovered the Golay code?
|
print("Python (and any language-specific) code still works as expected")
As does non-language code.
You may notice that there's a sidebar to the right (if your screen is wide enough). These entries are automatically generated from the headers that are present in your page. The sidebar will automatically capture all 2nd and 3rd level section headers. The best way to designate these headers is with # characters at the beginning of a line.
This section is here purely to demonstrate the third-level header of the rendered page in the sidebar!
Jupyter Book uses the excellent MathJax library, along with the default Jupyter Notebook configuration, for rendering mathematics from LaTeX-style syntax. For example, here's a mathematical expression rendered with MathJax:

\begin{align*}
P(A_1 \cup A_2 \cup A_3) ~ = ~ P(B \cup A_3) &= ~ P(B) + P(A_3) - P(BA_3) \\
&= ~ P(A_1) + P(A_2) - P(A_1A_2) + P(A_3) - P(A_1A_3 \cup A_2A_3)\\
&= ~ \sum_{i=1}^3 P(A_i) - \mathop{\sum \sum}_{1 \le i < j \le 3} P(A_iA_j) + P(A_1A_2A_3)
\end{align*}
And here is the code that was used to generate it:
\begin{align*}
P(A_1 \cup A_2 \cup A_3) ~ = ~ P(B \cup A_3) &= ~ P(B) + P(A_3) - P(BA_3) \\
&= ~ P(A_1) + P(A_2) - P(A_1A_2) + P(A_3) - P(A_1A_3 \cup A_2A_3)\\
&= ~ \sum_{i=1}^3 P(A_i) - \mathop{\sum \sum}_{1 \le i < j \le 3} P(A_iA_j) + P(A_1A_2A_3)
\end{align*}
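The inclusion-exclusion identity being rendered is easy to verify on a toy discrete probability space. The events below are arbitrary subsets of a hypothetical 12-point uniform space, chosen just for this check:

```python
from fractions import Fraction

omega = set(range(12))                              # uniform sample space
A1, A2, A3 = {0, 1, 2, 3, 4}, {3, 4, 5, 6}, {6, 7, 8}

def P(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event), len(omega))

lhs = P(A1 | A2 | A3)
rhs = (P(A1) + P(A2) + P(A3)
       - P(A1 & A2) - P(A1 & A3) - P(A2 & A3)
       + P(A1 & A2 & A3))
print(lhs == rhs, lhs)  # True 3/4
```

Using `Fraction` keeps the probabilities exact, so the comparison is a true equality rather than a floating-point approximation.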
Note: If you print your page (using the print button), then mathematics may not show up properly in an output PDF. This is because MathJax isn't able to render the math before converting to PDF. If you have a good idea for how to get around this, please do open an issue!
You can reference external media like images from your markdown file. If you use relative paths, then they will continue to work when the markdown files are copied over, so long as they point to a file that's inside of the repository.
Here's an image relative to the book content root
You can even embed references to movies on the web! For example, here's a little gif for you!
This will be included in your book when it is built.
✨ experimental ✨
While interactivity is nice, sometimes you need a static version of your book that's suitable for printing. Currently, Jupyter Book uses a tool called PrintJS to create rendered PDF versions of your book's content.
In the top of each page you'll find a "download" button. Hovering over this button gives the reader an option to print to PDF. When clicked, PrintJS will convert
only the book's content (so no sidebar or in-page navigation) to PDF, and trigger a print action. Note that results for this vary between devices or browsers, and PRs that improve this functionality are welcome!
|
I am working through Milnor's Characteristic classes and am currently working problems on the topic of oriented bundles and euler class. I am having trouble computing the euler class of the tangent bundle of a sphere. (The actual question is in the third paragraph)
The first part which worked out fine for me was to show that the total space of the tangent bundle is equal to $S^{n} \times S^{n} \setminus A$ where $A$ is the anti-diagonal subset of the product space and furthermore that $H^{*}(E,E_{0};Z) \cong H^{*}(S^{n} \times S^{n},A;Z)$ (Here $(E,E_{0})$ is the pair (total space, total space - zero section), as used in the book). This follows by excision and arguing that $S^{n}\times S^{n} \setminus (Diagonal) \sim A$.
The part I am having trouble with is showing that, in the $n$ even case, the euler class corresponds to twice a generator of $H^{n}(S^{n};Z)$. It seems obvious to me that the fundamental class, $u$, is a generator of $Z = H^{n}(E,E_{0})$. Next, since $-\cup u$ is an isomorphism, $u \cup u$ is a generator of $Z = H^{2n}(E,E_{0})$ and finally, since the Thom isomorphism is in fact an isomorphism, this seems to suggest that the euler class should in fact be a generator of $H^{n}(S^{n};Z)$.
For one thing, I haven't used the fact that $n$ is even, though I don't see where that would be relevant. I also am not 100% sure about my computation of the fundamental class or the cohomology groups. For reference I will state my computation of the cohomology below.
To start, $H^{i}(S^{n}\times S^{n};Z)$ is $Z$ for $i = 0$, $Z \oplus Z$ for $i = n$, and $Z$ for $i = 2n$, as proved with the Künneth formula. Next, using the long exact sequence of the triple $(S^{n}\times S^{n},A,\emptyset)$, we get that $H^{i}(S^{n} \times S^{n},A;Z) = Z$ for $i = 0, n, 2n$ and $0$ otherwise.
|
This set of Network Theory Multiple Choice Questions & Answers (MCQs) focuses on “Advanced Problems on Network Theorems – 1”.
1. The temperature coefficient of a metal as the temperature increases will ____________
a) Decreases b) Increases c) Remains unchanged d) Increases and remains same View Answer
Explanation: We know that the temperature coefficient is,
Given by, α = \(\frac{α_0}{1 + α_0 t}\)
Since the temperature t appears in the denominator, the denominator increases with increasing temperature and hence the fraction decreases.
So, temperature coefficient decreases.
2. Given a wire of resistance R Ω. The resistance of a wire of the same material and same weight and double the diameter is ___________
a) 0.5 R b) 0.25 R c) 0.125 R d) 0.0625 R View Answer
Explanation: Since the diameter is doubled, the area of cross-section is four times and, for the same weight, the length is one-fourth.
It can be verified by the following equation:
\(R_2 = \frac{ρ(l/4)}{4A} = \frac{ρl}{16A} = \frac{R}{16}\) = 0.0625 R.
3. The star equivalent resistance of 3 resistors having each resistance = 5 Ω is ____________
a) 1.5 Ω b) 1.67 Ω c) 3 Ω d) 4.5 Ω View Answer
Explanation: We know that for star connection, \(R_{EQ} = \frac{R × R}{R+R+R}\)
Given R = 5 Ω
So, \(R_{EQ} = \frac{5 × 5}{5+5+5} = \frac{25}{15}\) = 1.67 Ω.
4. The charge associated with a bulb rated as 20 W, 200 V and used for 10 minutes is ____________
a) 36 C b) 60 C c) 72 C d) 50 C View Answer
Explanation: Charge Q = It
Given I = \(\frac{20}{200}\) = 0.1 A, t = 10 × 60 s = 600 s
So, Q = 0.1 × 600 = 60 C.
5. For a series RL circuit having L = 5 H, current = 1 A (at an instant). The energy stored in magnetic field is ___________
a) 3.6 J b) 2.5 J c) 1.5 J d) 3 J View Answer
Explanation: We know that the stored energy is E = 0.5 LI²
Or, E = 0.5 × 5 × 1² = 2.5 J.
6. For a practical voltage source, the terminal voltage ____________
a) Cannot be less than source voltage b) Cannot be higher than source voltage c) Is always less than source voltage d) Is always equal to source voltage View Answer
Explanation: A practical voltage source has some resistance. Because of this resistance, some amount of voltage drop occurs across this resistance. Hence, the terminal voltage cannot be higher than source voltage. However, if current is zero, then terminal voltage and source voltage are equal.
7. Consider three earthing plates having resistances of 10 Ω, 20 Ω and 30 Ω respectively, connected in parallel. The percentage of energy dissipated by the 10 Ω earthing plate is ____________
a) More than 50% of total energy b) Less than 50% of total energy c) Depends on the materials of the three plates d) May be more or less than 50% of total energy View Answer
Explanation: The parallel combination of 30 Ω and 20 Ω is 12 Ω. Since 12 Ω and 10 Ω are in parallel, the 10 ohm plate draws more than 50% current and dissipates more than 50% energy.
8. Consider a resistive network circuit, having 3 sources of 18 W, 50 W and 98 W respectively and a resistance R. When all source act together the maximum and minimum power is ____________
a) 98 W, 18 W b) 166 W, 18 W c) 450 W, 2 W d) 166 W, 2 W View Answer
Explanation: Let us suppose R = 1 Ω.
Then, \(I_1 = 3\sqrt{2}\) A, \(I_2 = 5\sqrt{2}\) A and \(I_3 = 7\sqrt{2}\) A.
Maximum power = \((I_1 + I_2 + I_3)^2 R = (15\sqrt{2})^2 R\) = 450 W.
Minimum power = \((I_3 - I_2 - I_1)^2 R = (\sqrt{2})^2 R\) = 2 W.
9. A current waveform is of the shape of a right angled triangle with period of t = 1 sec. Given a resistance R = 1 Ω. The average power is __________
a) 1 W b) 0.5 W c) 0.333 W d) 0.111 W View Answer
Explanation: The current rises linearly from 0 to 1 A over the period, i.e. i(t) = t. The RMS current is
\(I_{rms} = \sqrt{\int_0^1 (1 \cdot t)^2 \,dt} = \sqrt{\frac{1}{3}}\) A
Now, power P = \(I_{rms}^2\) R
= \(\frac{1}{3}\) × 1
= 0.333 W.
10. Given two voltages, \(V_1\) = sin (ωt + 30°) and \(V_2\) = cos (ωt). Which of the following is correct?
a) \(V_1\) is leading \(V_2\) by 15° b) \(V_1\) is leading \(V_2\) by 30° c) \(V_2\) is leading \(V_1\) by 60° d) \(V_2\) is leading \(V_1\) by 30° View Answer
Explanation: Given that, \(V_1\) = sin (ωt + 30°) and \(V_2\) = cos (ωt).
Now, \(V_2\) can be written as \(V_2\) = sin (ωt + 90°).
Hence, \(V_2\) is leading \(V_1\) by (90 – 30)° = 60°.
11. Given two mutually coupled coils have a total inductance of 1500 mH, the self-inductance of each coils if the coefficient of coupling is 0.2 is ____________
a) 325 mH b) 255 mH c) 625 mH d) 550 mH View Answer
Explanation: We know that, M = k\(\sqrt{L_1 L_2}\)
Given that, total inductance \(L_{EQ}\) = 1500 mH and k = 0.2
Again, total inductance = \(L_1 + L_2 + 2M\)
With \(L_1 = L_2 = L\): 2L + 2kL = 1500 mH
Or, L (2 + 2 × 0.2) = 1500
Or, L = 625 mH.
12. For a series RL circuit, the impedance Z = 10 Ω at a frequency of 50 Hz. At 100 Hz the impedance is ___________
a) 10 Ω b) 20 Ω c) 1 Ω d) More than 10 Ω but less than 20 Ω View Answer
Explanation: We know that the impedance is Z = \(\sqrt{R^2 + ω^2L^2}\).
Since the frequency is doubled, the reactance ωL doubles but R remains the same. Thus the impedance cannot be exactly determined, but we can infer that it is more than 10 Ω but less than 20 Ω.
13. Consider the self-inductances of two coils as 12 H and 5 H. 50 % of one flux links the other. The mutual inductance is ___________
a) 30 H b) 24 H c) 9 H d) 4.5 H View Answer
Explanation: We know that, M = k\(\sqrt{L_1 L_2}\)
Given that, \(L_1\) = 12 H, \(L_2\) = 5 H and k = 0.5
So, M = 0.5\(\sqrt{12 × 5}\)
= 0.5\(\sqrt{60}\)
= 0.5 × 7.75 = 3.875 H.
Explanation: For maximum power transfer to the load resistor \(R_L\), \(R_L\) must be equal to 100 Ω.
∴ Maximum power = \(\frac{V^2}{4R_L}\)
= \(\frac{10^2}{4×100} = \frac{100}{400}\) = 0.25 W.
Explanation: Using Y-∆ transformation,
\(R_{AB}\) = (9 + 9 || 6) || (9 || 6)
= (9 + 3.6) || 3.6
= 12.6 || 3.6
= \(\frac{12.6×3.6}{12.6+3.6}\) = 2.8 Ω.
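The parallel-combination arithmetic in the answers above is easy to sanity-check numerically. A quick sketch (the helper name `par` is our own, not part of the original material):

```python
# Hypothetical helper: equivalent resistance of two resistors in parallel.
def par(a, b):
    return a * b / (a + b)

# R_AB from the Y-Delta answer above: (9 + 9||6) || (9||6)
r_ab = par(9 + par(9, 6), par(9, 6))
print(round(r_ab, 1))        # 2.8 (ohms)

# Maximum power delivered to a matched load: V^2 / (4 R_L)
v, r_l = 10, 100
print(v**2 / (4 * r_l))      # 0.25 (watts)
```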
Sanfoundry Global Education & Learning Series – Network Theory.
|
Student Algebraic Geometry Seminar: Geometry of the moduli space of curves of genus \(g\)
Seminar | April 17 | 4-5 p.m. | 891 Evans Hall
Ritvik Ramkumar, UC Berkeley
Given a variety \(X\), it's a basic question to ask if it's rational, i.e. admits a birational map \( P^n \to X \) for some \(n\). For curves and surfaces there are explicit criteria that determine when a variety is rational. Answering this question in higher dimensions is much more difficult. In light of this, it's easier to ask a weaker question: does there exist a dominant rational map \(P^n\to X\)? In this case, \(X\) is said to be unirational. Over the complex numbers, every unirational curve or surface is rational. For curves this is a consequence of the Riemann–Hurwitz formula.
A very important geometric object is \(M_g\), the moduli space of curves of genus \(g\). We can ask for what \(g\), if any, is \(M_g\) rational or unirational. In 1915, Severi proved that \(M_g\) is unirational for \(g\leq 10\) and conjectured that it's unirational for all \(g\). However, in 1987, it was shown that \(M_g\) is of general type if \(g\geq 24\). In this talk I will review classical examples of unirational varieties, outline a modern proof of Severi's result and describe related conjectures and results. I will finish by describing how the geometry of \(M_g\) changes when \(g\geq 11\).
|
We have associated to a subgroup of the modular group $PSL_2(\mathbb{Z}) $ a
quiver (that is, an oriented graph). For example, one verifies that the fundamental domain of the subgroup $\Gamma_0(2) $ (an index 3 subgroup) is depicted on the right by the region between the thick lines with the identification of edges as indicated. The associated quiver is then
\[
\xymatrix{i \ar[rr]^a \ar[dd]^b & & 1 \ar@/^/[ld]^h \ar@/_/[ld]_i \\ & \rho \ar@/^/[lu]^d \ar@/_/[lu]_e \ar[rd]^f & \\ 0 \ar[ru]^g & & i+1 \ar[uu]^c} \]
The corresponding “dessin d’enfant” are the green edges in the picture. But, the red dot on the left boundary is identified with the red dot on the lower circular boundary, so the dessin of the modular subgroup $\Gamma_0(2) $ is
\[
\xymatrix{| \ar@{-}[r] & \bullet \ar@{-}@/^8ex/[r] \ar@{-}@/_8ex/[r] & -} \]
Here, the three red dots (all of them even points in the Dedekind tessellation) give (after the identification) the two points indicated by a $\mid $ whereas the blue dot (an odd point in the tessellation) is depicted by a $\bullet $. There is another ‘quiver-like’ picture associated to this dessin, a quilt of the modular subgroup $\Gamma_0(2) $ as studied by John Conway and Tim Hsu.
On the left, a quilt-diagram copied from Hsu’s book Quilts : central extensions, braid actions, and finite groups, exercise 3.3.9. This ‘quiver’ has also 5 vertices and 7 arrows as our quiver above, so is there a connection?
A quilt is a gadget to study transitive permutation representations of the braid group $B_3 $ (rather than its quotient, the modular group $PSL_2(\mathbb{Z}) = B_3/\langle Z \rangle $, where $\langle Z \rangle $ is the cyclic center of $B_3 $). The $Z $-stabilizer subgroup of all elements in a transitive permutation representation of $B_3 $ is the same and hence of the form $\langle Z^M \rangle $, where $M$ is called the modulus of the representation. The arrow-data of a quilt, that is the direction of certain edges and their labeling with numbers from $\mathbb{Z}/M \mathbb{Z} $ (which have to satisfy some requirements, the flow rules, but more about that another time), encodes the $Z $-action on the permutation representation. The dimension of the representation is $M \times k $ where $k $ is the number of half-edges in the dessin. In the above example, the modulus is 5 and the dessin has 3 (half)edges, so it depicts a 15-dimensional permutation representation of $B_3 $.
If we forget the Z-action (that is, the arrow information), we get a permutation representation of the modular group (that is a dessin). So, if we delete the labels and directions on the edges we get what Hsu calls a modular quilt, that is, a picture consisting of thick edges (the dessin) together with dotted edges which are called the seams of the modular quilt. The modular quilt is merely another way to depict a fundamental domain of the corresponding subgroup of the modular group. For the above example, we have the indicated correspondences between the fundamental domain of $\Gamma_0(2) $ in the upper half-plane (on the left) and as a modular quilt (on the right)
That is, we can also get our quiver (or its opposite quiver) from the modular quilt by fixing the orientation of one 2-cell. For example, if we fix the orientation of the 2-cell $\vec{fch} $ we get our quiver back from the modular quilt
\[ \xymatrix{i \ar[rr]^a \ar[dd]^b & & 1 \ar@/^/[ld]^h \ar@/_/[ld]_i \\ & \rho \ar@/^/[lu]^d \ar@/_/[lu]_e \ar[rd]^f & \\ 0 \ar[ru]^g & & i+1 \ar[uu]^c} \]
This shows that the quiver (or its opposite) associated to a (conjugacy class of a) subgroup of $PSL_2(\mathbb{Z}) $ does not depend on the choice of embedding of the dessin (or associated cuboid tree diagram) in the upper half-plane. For, one can get the modular quilt from the dessin by adding one extra vertex for every connected component of the complement of the dessin (in the example, the two vertices corresponding to 0 and 1) and drawing a triangulation from them (the dotted lines or ‘seams’).
|
Currently I am going through the proof of the Stein-Weiss interpolation theorem for a seminar paper and in particular through the proof of Lemma 1 which begins at page 320 (I will use a particular function for $\Phi$). First I will establish the setting:
Let $S := \{z \in \mathbb{C} : 0 < \text{Re }z < 1\}$ and $F$ be holomorphic in $S$ and continuous on $\overline{S}$. Further define, for $z$ in the closed unit disc with $z \neq \pm 1$, $$h(z) := \frac{1}{\pi i}\log \left( i\frac{1 + z}{1 - z}\right)$$ Then $h$ is holomorphic, $\log|F(h(z))|$ is subharmonic in the open unit disc and upper semicontinuous on the closed unit disc with $z \neq \pm 1$.
Now on p. 320, bottom (eq. 2.4), it is written that by the subharmonic character of $\log|F(h(z))|$ we have for $0 \leq \rho < R <1$ $$\log|F(h(\rho e^{i\theta}))| \leq \frac{1}{2\pi}\int_{-\pi}^\pi \log\vert F(h(Re^{it}))\vert\frac{R^2 - \rho^2}{R^2 - 2R\rho\cos\left( \theta - t \right) + \rho^2}dt$$ I wanted to know exactly why this is true. So my idea was to define a function $$ \begin{aligned} H(\rho e^{i\theta}) := \begin{cases} \displaystyle \log\vert F(h(Re^{i\theta}))\vert & \rho = R\\ \displaystyle \frac{1}{2\pi}\int_{-\pi}^\pi \log\vert F(h(Re^{it}))\vert\frac{R^2 - \rho^2}{R^2 - 2R\rho\cos\left( \theta - t \right) + \rho^2}dt & 0 \leq \rho < R \end{cases} \end{aligned} $$ and using the maximum modulus principle for subharmonic functions (see here p. 336) since $H$ is harmonic and continuous. But my problem is now, that I am not sure if $\log|F(h(z))|$ is continuous when $|z| = R$ (and this would make my arguing useless). So my question:
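(For intuition, here is a quick numerical experiment of my own, not from the paper: if $F$ is holomorphic and zero-free near the closed disc of radius $R$, then $\log|F|$ is harmonic there, so the Poisson integral over $|z| = R$ should reproduce it exactly, i.e. the inequality above holds with equality.)

```python
import numpy as np

# F chosen zero-free on |z| <= R, so log|F| is harmonic on that disc.
F = lambda z: z - 2.0
R, rho, theta = 0.9, 0.4, 0.7

# periodic trapezoid rule over the boundary circle |z| = R
t = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
boundary = np.log(np.abs(F(R * np.exp(1j * t))))
kernel = (R**2 - rho**2) / (R**2 - 2*R*rho*np.cos(theta - t) + rho**2)
poisson = np.mean(boundary * kernel)   # (1/2pi) * integral over [-pi, pi]

direct = np.log(np.abs(F(rho * np.exp(1j * theta))))
print(abs(poisson - direct))           # ~0: equality for harmonic log|F|
```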
Is $\log|F(h(z))|$ continuous for $|z| = R$? Is there any other approach to show the inequality above?
Thanks.
|
Part 4 in the quest for the hydrogen molecule
In this part I would like to focus on the energy of a particle. A particle can have kinetic and potential energy. Suppose a particle is in a state \(|\psi\rangle\) in which it has energy \(E\); what does that state look like? Is there only one such state or are there many? What are the position or momentum properties of such a state?
We have already seen that the position and the momentum of a particle are one and the same property. Interestingly, the energy of a particle will typically depend on its position and momentum. So the goal is to find any states that describe the position and the momentum properties of the particle so that it has energy \(E\).
It turns out to be very useful to be able to write down in a formula the statement that “the value of property Q of state \(|\psi\rangle\) is equal to q”. It is written like this:
$$\hat Q |\psi\rangle = q |\psi\rangle$$
We have put a little hat above the \( \hat Q \) to denote that it measures the value of the property \(Q\) of the state that is to the right of it, and we call \( \hat Q \) an operator. Note that since this is quantum mechanics, not all properties of a state are necessarily well-defined. For example, the momentum property \(\hat p \) of a state \(|p\rangle\) is well known:
$$\hat p |p\rangle = p |p\rangle,$$
but the position property \(\hat x \) of that state:
$$\hat x |p\rangle = ???$$
is not really clear (at the moment).
Let’s get started with some actual physical results! To start things simple, we will take the simplest case of a free particle in one dimension. For a free particle, its energy is given by the kinetic energy:
$$E=\frac{1}{2}mv^2 = \frac{p^2}{2m},$$
where \(p\) is the momentum of the particle. As required, the energy of \(|\psi\rangle\) must be \(E\); we will put this down in an equation we will call the energy equation:
$$\hat E |\psi\rangle = E |\psi\rangle.$$
To move further, we use the fact that the energy is actually the kinetic energy, so we replace \(\hat E\) by the free-particle energy operator \(\frac{\hat p^2}{2m}\) that measures its energy in terms of momentum.
$$\frac{\hat p^2}{2m} |\psi\rangle= E |\psi\rangle.$$
Now we just apply the procedure we have seen before: we know that \(|\psi\rangle\) can be written as a superposition of position states \(|x\rangle\) or of momentum states \(|p\rangle\). We choose the latter:
$$| \psi \rangle = \int_{-\infty}^{+\infty} \psi(p) |p \rangle dp,$$
So the question now becomes, what is the value of \(\psi(p)\)? We break down both the left and right side in the energy equation as a superposition of momentum states:
$$\frac{\hat p^2}{2m} \int_{-\infty}^{+\infty} \psi(p) |p \rangle dp = E \int_{-\infty}^{+\infty} \psi(p) |p \rangle dp.$$
Now we can finally get rid of the “hats” since we know that \(\hat p |p\rangle= p |p\rangle\) and therefore also \(\hat{p}^2 |p\rangle= \hat{p}\hat{p} |p\rangle= \hat{p}p |p\rangle=p\hat{p} |p\rangle= pp |p\rangle= p^2|p\rangle\).
$$\frac{\hat p^2}{2m} \int_{-\infty}^{+\infty} \psi(p) |p \rangle dp =\\
\int_{-\infty}^{+\infty} \psi(p) \frac{\hat p^2}{2m} |p \rangle dp =\\ \int_{-\infty}^{+\infty} \psi(p) \frac{ p^2}{2m} |p \rangle dp.$$
Taking the last line and putting it back into the left side of the energy equation,
$$\int_{-\infty}^{+\infty} \frac{ p^2}{2m} \psi(p) |p \rangle dp=
\int_{-\infty}^{+\infty} E \psi(p) |p \rangle dp,$$
we can conclude that, in order for the left and right sides to be equal, the integrand has to be equal for every value of \(p\):
$$ \frac{ p^2}{2m}\psi(p) = E \psi(p).$$
We find that \(\psi(p)\) has to be \(0\) whenever \(p^2 \neq 2mE\). On the other hand, for \(p_-=-\sqrt{2mE}\) and \(p_+=\sqrt{2mE}\), \(\psi(p)\) can take any value we like. Let’s call those values \(A\) and \(B\), so that we conclude that the states
$$|\psi\rangle=A|p_-\rangle+B|p_+\rangle$$
are the states with energy \(E\). This may not seem too impressive to you, since what we find is basically what we expected: a state of a particle with momentum \(p_-=-\sqrt{2mE}\), or a state of a particle with momentum \(p_+=\sqrt{2mE}\), or any combination thereof, are the states with energy \(E\). But we have done a proper quantum-mechanical derivation of this result, and that procedure will come in handy in the next installment.
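The conclusion can be checked numerically in momentum space: represent \(\psi(p)\) on a grid, let the energy operator act by multiplication with \(p^2/2m\), and verify the eigenvalue equation. This is a sketch of our own (with units chosen so that \(m = 1\), and the two allowed momenta modelled as spikes on the grid):

```python
import numpy as np

m, E = 1.0, 2.0                      # assumption for illustration: m = 1, E = 2
p = np.linspace(-5, 5, 2001)         # momentum grid, chosen so ±sqrt(2mE) lie on it

# psi(p) vanishes except at p = ±sqrt(2mE); A and B are arbitrary amplitudes
A, B = 0.3, 0.7
psi = np.zeros_like(p, dtype=complex)
psi[np.argmin(np.abs(p + np.sqrt(2*m*E)))] = A
psi[np.argmin(np.abs(p - np.sqrt(2*m*E)))] = B

# the free-particle energy operator acts on psi(p) by multiplication with p^2/2m
H_psi = (p**2 / (2*m)) * psi

residual = np.max(np.abs(H_psi - E * psi))
print(residual)                      # ~0 (machine precision): eigenstate with eigenvalue E
```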
|
Compression after Lempel and Ziv (LZ78)
When reading about sequence learning and the prediction of elements in a time series you inevitably cross paths with the area of sequence compression at some point, and so did I. In the present post I’d like to outline a classical compression algorithm, the 1978 version of the sequence compression algorithm after Lempel and Ziv [LZ78].
Let $X$ be a finite alphabet and let $X^\ast$ denote the set of all possible strings over $X$. Let $\epsilon$ denote the empty string. The aim is to encode (compress) a sequence of inputs $d = x_1\ldots x_l \in X^\ast$. The approach of Lempel–Ziv splits into two phases: parsing and encoding. In the parsing phase the algorithm splits the sequence $d$ into adjacent non-empty words $ w_1,\ldots,w_p \in X^\ast$, i.e.
\[
d = w_1 \ldots w_p, \] such that the following conditions hold:
1. Except for the last word, the $w_i$ are mutually distinct, i.e. for $i\neq j$ with $i,j < p$ we have $w_i \neq w_j$.
2. Each word is either just one element, or obtained by concatenating one of its predecessors in the sequence with an element of $X$, i.e. for each $j$ there is an $x \in X$ such that either $w_j = x$ or there is an index $i < j$ with $w_j = w_i x$.
Such a decomposition can be obtained by forming a dictionary whose keys (ordered by insertion) define the desired decomposition as follows:
At each step take the longest prefix of the sequence that is not yet contained (as a key) in the dictionary and add it to the dictionary. Continue with the remaining sequence (i.e. after removing the prefix) until the whole sequence is consumed. The last prefix of the sequence which contains the last element will either be a new word or it will already be in the dictionary (cf. the first condition in the above list).
With $w_0 := \epsilon$, where $\epsilon$ denotes the empty string, the conditions in the above list imply that each word $w$ in $\{w_1,\ldots,w_p\}$ can be uniquely expressed in the form $w = w_i x$, hence we may define
\[ \texttt{tail-index}\thinspace w := i \ \ \ \text{ and } \ \ \ \texttt{head} \thinspace w := x. \] The compressed version of the sequence $d$ is a sequence in $\{0,\ldots,p\} \times X$ and is defined by \[ \texttt{LZ78}(d) := \big[ \big( \texttt{tail-index} \thinspace w_1 ,\texttt{head} \thinspace w_1 \big), \ldots , \big( \texttt{tail-index} \thinspace w_p ,\texttt{head} \thinspace w_p \big) \big]. \] The sequence can be decoded by forming a trie $T$ whose nodes are indexed by $i=1,\ldots,p$ and which store their associated elements $\texttt{head} \thinspace w_i$. We further add a node indexed by $0$ storing the empty string $\epsilon$; this will be the root of the tree. The parent of a node is given by its associated tail-index, i.e. two nodes $i<j$ are connected by an edge if $\texttt{tail-index} \thinspace w_j = i$. Each node has up to $| X |$ children, and no two children store the same element, with potentially one exception: the node indexed by $p$ (recall that the last word is the only one that might have already appeared in the list). In case of non-uniqueness, we identify the node indexed by $p$ with the sibling that stores the same element. The $i$th word $w_i$ can now easily be recovered from $T$ by traversing the path from the root (node $0$) down to node $i$, and concatenating the associated elements (of each traversed node). Example.
Suppose we want to compress $d = “abracadabrarabarbar”$. This sequence decomposes as:
Number the words from 1 to 11 going from left to right. The colored part corresponds to the part that can be found earlier in the dictionary. Replace the colored part by the corresponding position assigned earlier, where zero corresponds to the empty string, e.g. the tail ‘ra’ of the 8th word ‘rab’ can be found at position 7, and hence translates to ‘7b’. The resulting sequence reads as
0a | 0b | 0r | 1c | 1d | 1b | 3a | 7b | 1r | 2a | 3.
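The parsing and encoding described above fits in a few lines of Python; here is a minimal sketch (function names are our own), which reproduces the decomposition of the example, emitting the last word as a bare index when it already occurs in the dictionary:

```python
def lz78_compress(s):
    """Parse s into LZ78 words and emit (tail-index, head) pairs."""
    words = {"": 0}            # dictionary: word -> index, 0 is the empty string
    out, w = [], ""
    for ch in s:
        if w + ch in words:    # extend the current prefix while it is known
            w += ch
        else:                  # longest unseen prefix found: record it
            words[w + ch] = len(words)
            out.append((words[w], ch))
            w = ""
    if w:                      # the final word may already be in the dictionary
        out.append((words[w], ""))
    return out

def lz78_decompress(pairs):
    """Rebuild the string by replaying the dictionary (the implicit trie)."""
    words = [""]
    for idx, ch in pairs:
        words.append(words[idx] + ch)
    return "".join(words[1:])

d = "abracadabrarabarbar"
code = lz78_compress(d)
print(code)   # (0,'a'), (0,'b'), (0,'r'), (1,'c'), ... matching 0a|0b|0r|1c|...
assert lz78_decompress(code) == d
```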
References [1] Abraham Lempel and Jacob Ziv, Compression of individual sequences via variable-rate coding, IEEE Transactions of Information Theory (1978).
|
The term ``blue moon'' in the present context describes rare events,
i.e. events that happen once in a blue moon. The blue moon ensemble approach was introduced by Ciccotti and coworkers as a technique for computing the free energy profile along a reaction coordinate direction characterized by one or more barriers high enough that they would not likely be crossed in a normal thermostatted molecular dynamics calculation.
Suppose a process of interest can be monitored by a single reaction coordinate \( q_1=f_1({\bf r}_1,...,{\bf r}_N) \) so that eqns. (29) and (30) reduce to
\[
P(s) = {C_N \over Q(N,V,T)}\int d^N{\bf p}\;d^N{\bf r}\;e^{-\beta H({\bf p},{\bf r})}\,\delta(f_1({\bf r}_1,...,{\bf r}_N)-s) = {1 \over N!\lambda^{3N} Q(N,V,T)}\int d^N{\bf r}\;e^{-\beta U({\bf r})}\,\delta(f_1({\bf r}_1,...,{\bf r}_N)-s)
\]
\[ A(s) = -kT\ln P(s) \] (31)
The ``1'' subscript on the value \(s\) of \(q_1\) is superfluous and will be dropped throughout this discussion. In the second line, the integration over the momenta has been performed, giving the thermal prefactor \(\lambda^{3N}\). In the blue moon ensemble approach, a holonomic constraint \( \sigma({\bf r}_1,...,{\bf r}_N) = f_1({\bf r}_1,...,{\bf r}_N)-s \) is introduced in a molecular dynamics calculation as a means of ``driving'' the reaction coordinate from an initial value \(s_i\) to a final value \(s_f\) via a set of intermediate points \(s_1,...,s_n\) between \(s_i\) and \(s_f\). Unfortunately, the introduction of a holonomic constraint does not yield the single \(\delta\)-function condition \( \delta(\sigma({\bf r})) = \delta(f_1({\bf r})-s) \), where \({\bf r}\equiv {\bf r}_1,...,{\bf r}_N\), required by eqn. (31), but rather the product of \(\delta\)-functions \(\delta(\sigma({\bf r}))\,\delta(\dot{\sigma}({\bf r},{\bf p}))\), since both the constraint and its first time derivative are imposed in a constrained dynamics calculation. We will return to this point a bit later in this section. In addition to this, the blue moon ensemble approach does not yield \(A(s)\) directly but rather the derivative
\[ {dA \over ds} = -{kT \over P(s)}{dP \over ds} \] (32)
from which the free energy profile \(A(q)\) along the reaction coordinate and the free energy difference \(\Delta A = A(s_f)-A(s_i)\) are given by the integrals
\[ A(q) = A(s_i) + \int_{s_i}^q ds {dA \over ds}\;\;\;\;\;\;\;\;\;\;\Delta A = \int_{s_i}^{s_f} ds {dA \over ds} \] (33)
In the free-energy profile expression, \(A(s_i)\) is just an additive constant that can be left off. The values \(s_1,...,s_n\) at which the reaction coordinate \(q = f_1({\bf r})\) is constrained can be chosen at equally-spaced intervals between \(s_i\) and \(s_f\), in which case a standard numerical quadrature can be applied for evaluating the integrals in eqn. (33), or they can be chosen according to a more sophisticated quadrature scheme.
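Numerically, eqn. (33) amounts to a cumulative quadrature of the measured mean force over the constrained points. A minimal sketch of our own, with a synthetic mean force standing in for the conditional averages one would actually measure in the constrained simulations:

```python
import numpy as np

# Constrained reaction-coordinate values s_1, ..., s_n between s_i and s_f.
s = np.linspace(-1.5, 1.5, 31)

# Synthetic stand-in for dA/ds measured at each constrained point; the exact
# profile here is A(s) = s^4 - 2 s^2 (a double well), so dA/ds = 4 s^3 - 4 s.
dA_ds = 4*s**3 - 4*s

# Cumulative trapezoid rule for A(q) = A(s_i) + int_{s_i}^q (dA/ds) ds, A(s_i) = 0
A = np.concatenate(([0.0],
                    np.cumsum(0.5 * (dA_ds[1:] + dA_ds[:-1]) * np.diff(s))))

A_exact = s**4 - 2*s**2
A_exact -= A_exact[0]                # match the convention A(s_i) = 0
print(np.max(np.abs(A - A_exact)))   # small discretization error
```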
We next turn to the evaluation of the derivative in eqn. (32). Noting that \( P(s) = \langle \delta(f_1({\bf r})-s)\rangle \), the derivative can be written as
\[ {1 \over P(s)}{dP \over ds} = {C_N \over Q(N,V,T)}{\int d^N{\bf p}\;d^N{\bf r}\;e^{-\beta H({\bf p},{\bf r})}\,{\partial \over \partial s}\delta(f_1({\bf r})-s)\over \langle \delta(f_1({\bf r})-s)\rangle} \] (34)
In order to avoid evaluating the derivative of the \(\delta \)-function, an integration by parts can be used. First, we introduce a complete set of \(3N \) generalized coordinates:
\[ q_{\alpha} = f_{\alpha}({\bf r}_1,...,{\bf r}_N) \]
(35)
and their conjugate momenta \(p_{\alpha}\). Such a transformation has a unit Jacobian so that \( d^N{\bf p}\;d^N{\bf r} = d^{3N}p\;d^{3N}q \). Denoting the transformed Hamiltonian by \(\tilde{H}(p,q)\), eqn. (34) becomes
\[ {1 \over P(s)}{dP \over ds} = {C_N \over Q(N,V,T)}{\int d^{3N}p\;d^{3N}q\;e^{-\beta \tilde{H}(p,q)}\,{\partial \over \partial s}\delta(q_1-s)\over \langle \delta(q_1-s)\rangle} \] (36)
Changing the derivative in front of the \(\delta \)-function from \(\partial/\partial s \) to \(\partial/\partial q_1\), which introduces an overall minus sign, and then integrating by parts yields
\[
\begin{aligned}
{1 \over P(s)}{dP \over ds} &= {C_N \over Q(N,V,T)}{\int d^{3N}p\;d^{3N}q\;\left[{\partial \over \partial q_1}e^{-\beta \tilde{H}(p,q)}\right]\delta(q_1-s)\over \langle \delta(q_1-s)\rangle} \\
&= -{\beta C_N \over Q(N,V,T)}{\int d^{3N}p\;d^{3N}q\;{\partial \tilde{H} \over \partial q_1}\,e^{-\beta \tilde{H}(p,q)}\,\delta(q_1-s)\over \langle \delta(q_1-s)\rangle} \\
&= -\beta\,{\left<\left({\partial \tilde{H} \over \partial q_1}\right)\delta(q_1-s)\right>\over \langle \delta(q_1-s)\rangle}
\end{aligned}
\] (37)
The last line defines a new ensemble average, specifically an average subject to the condition (not constraint) that the coordinate \(q_1\) have the particular value \(s\). This average will be denoted \(\langle\cdots\rangle^{\rm cond}_{s}\). Thus, the derivative becomes
\[ {1 \over P(s)}{dP \over ds} =-\beta\left<{\partial \tilde{H} \over \partial q_1}\right>^{\rm cond}_s \] (38)
Substituting eqn. (38) yields a free energy profile of the form
\[ A(q) = A(s_i) + \int_{s_i}^q\;ds\;\left<{\partial \tilde{H} \over \partial q_1}\right>^{\rm cond}_s \] (39)
from which \(\Delta A\) can be computed by letting \(q = s_f\). Given that \(-\langle \partial \tilde{H}/\partial q_1\rangle^{\rm cond}_s\) is the expression for the average of the generalized force on \(q_1\) when \(q_1 = s\), the integral represents the work done on the system, i.e. the negative of the work done by the system, in moving from \(s_i\) to an arbitrary final point \(q\). Since the conditional average implies a full simulation at each fixed value of \(q_1\), the thermodynamic transformation is certainly carried out reversibly, so that eqn. (39) is consistent with the Clausius inequality.
Although eqn. (39) provides a very useful insight into the underlying statistical mechanical expression for the free energy, technically the need for a full canonical transformation of both coordinates and momenta is inconvenient since, from the chain rule,
\[ {\partial \tilde{H} \over \partial q_1} =\sum_{i=1}^N\left [ {\partial H \over \partial {\bf p} _i } \cdot {\partial {\bf p}_i \over \partial q_1} + {\partial H \over \partial {\bf r}_i }\cdot{\partial {\bf r}_i \over \partial q_1}\right] \] (40)
A more useful expression results if the momenta integrations are performed before introducing the transformation to generalized coordinates. Starting again with eqn. (34), we carry out the momentum integrations, yielding
\[ {1 \over P(s)}{dP \over ds} = {1 \over N!\lambda^{3N}Q(N,V,T)}{\int d^N{\bf r}\;e^{-\beta U({\bf r})}\,{\partial \over \partial s}\delta(f_1({\bf r})-s)\over \langle \delta(f_1({\bf r})-s)\rangle} \]
(41)
Now, we introduce
only the transformation of the coordinates to generalized coordinates \( q_{\alpha} = f_{\alpha}({\bf r}_1,...,{\bf r}_N) \). However, because there is no corresponding momentum transformation, the Jacobian of the transformation is not unity. Let \( J(q)\equiv J(q_1,...,q_{3N}) =\partial ({\bf r}_1,...,{\bf r}_N)/\partial (q_1,...,q_{3N}) \) denote the Jacobian of the transformation. Then, eqn. (41) becomes
\({1 \over P(s)}{dP \over ds} \)
\(\underline { {1 \over N!\lambda^{3N}Q(N,V,T)}{\int\;d^{3N}q\;J(q)e^{-\beta {\tilde U} (q)}{\partial \over \partial s}\delta(q_1-s)\over \langle \delta(q_1-s)\rangle}} \)
\({1 \over N!\lambda^{3N}Q(N,V,T)}{\int\;d^{3N}q\;e^{-\beta \left({\tilde U}(q) - kT\ln J(q) \right )}{\partial \over \partial s}\delta(q_1-s)\over \langle \delta(q_1-s)\rangle}\)
(42) where, in the last line, the Jacobian has been exponentiated. Changing the derivative \(\partial/\partial s \) to \( \partial/\partial q_1 \) and performing the integration by parts as was done in eqn. (37), we obtain
\({1 \over P(s)}{dP \over ds}\)
\({1 \over N!\lambda^{3N}Q(N,V,T)}{\int\;d^{3N}q\;{\partial \over \partial q_1}e^{-\beta \left (\tilde{U}(q)-kT\ln J(q)\right)}\delta(q_1-s)\over \langle \delta(q_1-s)\rangle}\)
\(-{\beta \over N!\lambda^{3N}Q(N,V,T)}{\int\;d^{3N}q\;\left[{\partial \tilde {U} \over \partial q_1} -kT{\partial \over \partial q_1} \ln J (q) \right]e^{-\beta \left (\tilde{U}(q)-kT\ln J(q)\right)}\delta(q_1-s)\over \langle \delta(q_1-s)\rangle}\)
\(\underline {-\beta\left<\left[{\partial\tilde{U} \over \partial q_1}-kT{\partial \over \partial q_1}\ln J(q)\right]\right>^{\rm cond}_s }\)
(43) Therefore, the free energy profile becomes
\[ A(q) = A(s_i) +\int_{s_i}^q\;ds\;\left<\left[{\partial\tilde {U} \over \partial q_1} - kT {\partial \over \partial q_1}\ln J(q)\right]\right>^{\rm cond}_s\] (44)
Again, the derivative of \(\underline {\tilde U} \), the transformed potential, can be computed from the untransformed potential via the chain rule
\[ {\partial \tilde{U} \over \partial q_1} =\sum_{i=1}^N {\partial U \over \partial {\bf r}_i}\cdot {\partial {\bf r}_i \over \partial q_1} \]
(45)
Eqn. (44) is useful for simple reaction coordinates in which the full transformation to generalized coordinates is known. We will see shortly how the expression for \(A (q) \) can be further simplified in a way that does not require knowledge of the transformation at all. First, however, we must tackle the problem alluded to earlier of computing the conditional ensemble averages from the constrained dynamics employed by the blue moon ensemble method.
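In practice, the integral in eqn. (44) is evaluated numerically from mean-force values collected at a grid of fixed reaction-coordinate values. The following sketch (the function name, the grid, and the toy harmonic mean force are illustrative assumptions, not part of any particular simulation package) applies the trapezoidal rule to such data:

```python
import numpy as np

def free_energy_profile(s_grid, mean_force, A_i=0.0):
    """Integrate the conditionally averaged generalized force (eqn. 44)
    along the reaction coordinate with the trapezoidal rule.

    s_grid     : increasing values of the reaction coordinate q_1
    mean_force : <dU~/dq_1 - kT d(ln J)/dq_1>_s sampled at each grid point
    A_i        : free energy at the initial point s_i
    """
    # cumulative trapezoidal integral gives A(q) - A(s_i) at each grid point
    dA = np.concatenate(([0.0],
        np.cumsum(0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(s_grid))))
    return A_i + dA

# toy check: for a linear mean force 2*s the profile must come out as A(q) = q^2
s = np.linspace(0.0, 1.0, 101)
A = free_energy_profile(s, 2.0 * s)
```

Because the trapezoidal rule is exact for a linear integrand, the toy profile reproduces \(A(q) = q^2 \) to machine precision; with real simulation data, the accuracy is instead limited by the statistical error of the conditional averages.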
|
Polygon is a word derived from the Greek language, where poly means many and gonia means angle. So we can say that a closed figure in a plane with many angles is called a polygon. The given diagram shows what a polygon looks like:
A polygon has many associated properties, such as sides, diagonals, area, and angles. Let's see how to find them using these polygon formulas.
Polygon formula to find area:
\[\large Area\;of\;a\;regular\;polygon=\frac{1}{2}n\; \sin\left(\frac{360^{\circ}}{n}\right)s^{2}\]
Polygon formula to find interior angles:
\[\large Sum\;of\;interior\;angles\;of\;a\;polygon=\left(n-2\right)180^{\circ}\]
Polygon formula to find the triangles:
\[\large Number\;of\;triangles\;in\;a\;polygon=\left(n-2\right)\]
Where,
n is the number of sides and s is the length from center to corner.

Solved Example

Question: A polygon is an octagon and the length from its center to a corner is 5 cm. Calculate its area.

Solution:
Given :
The polygon is an octagon. Hence, n = 8.

Area of a regular polygon = $\frac{1}{2}$ n sin $\left(\frac{360^o}{n}\right)$ s$^{2}$

Where,

s is the length from center to corner.

Area of an octagon = $\frac{1}{2}$ $\times$ 8 $\times$ sin $\left(\frac{360^o}{8}\right)$ $\times$ 5$^{2}$

= 0.5 $\times$ 8 $\times$ 0.707 $\times$ 25

= 70.71 cm$^{2}$
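The area formula above is easy to evaluate in code; this small function (the names are our own) plugs in n and the center-to-corner length s, keeping all three factors n, sin(360°/n), and s²:

```python
import math

def regular_polygon_area(n, s):
    """Area of a regular n-gon with circumradius s
    (s = distance from center to a corner), per the formula above."""
    return 0.5 * n * math.sin(2 * math.pi / n) * s ** 2

# octagon with center-to-corner length 5 cm
area = regular_polygon_area(8, 5.0)  # ~70.71 cm^2
```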
|
Last time we saw that one can represent (the conjugacy class of) a finite index subgroup of the modular group $\Gamma = PSL_2(\mathbb{Z}) $ by a
Farey symbol or by a dessin or by its fundamental domain. Today we will associate a quiver to it.
For example, the modular group itself is represented by the Farey symbol
[tex]\xymatrix{\infty \ar@{-}[r]_{\circ} & 0 \ar@{-}[r]_{\bullet} & \infty}[/tex] or by its dessin (the green circle-edge) or by its fundamental domain which is the region of the upper halfplane bounded by the red and blue vertical boundaries. Both the red and blue boundary consist of TWO edges which are identified with each other and are therefore called a and b. These edges carry a natural orientation given by circling counter-clockwise along the boundary of the marked triangle (or clockwise along the boundary of the upper unmarked triangle having $\infty $ as its third vertex). That is, the edge a is oriented from $i $ to $0 $ (or from $i $ to $\infty $) and the edge b is oriented from $0 $ to $\rho $ (or from $\infty $ to $\rho $) and the green edge c (which is an inner edge so carries no identifications) from $\rho $ to $i $. That is, the fundamental region consists of two triangles, glued together along their boundary which is the oriented cycle $\vec{abc} $ consistent with the fact that the compactification of $\mathcal{H}/\Gamma $ is the 2-sphere $S^2 = \mathbb{P}^1_{\mathbb{C}} $. Under this identification the triangle-boundary abc can be seen to circle the equator whereas the top triangle gives the upper half sphere and the lower triangle the lower half sphere. Emphasizing the orientation we can depict the triangle-boundary as the quiver
[tex]\xymatrix{i \ar[rd]_a & & \rho \ar[ll]_c \\ & 0 \ar[ru]_b}[/tex]
embedded in the 2-sphere. Note that
quiver is just a fancy name for an oriented graph…
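Since a quiver is nothing but an oriented graph, it can be stored as a plain list of arrows. As a toy illustration (the representation and names are our own choice, not any standard library), the triangle-boundary $\vec{abc} $ becomes:

```python
# the oriented triangle-boundary of the modular fundamental domain:
# a : i -> 0,  b : 0 -> rho,  c : rho -> i  (labels as in the text)
quiver = [("i", "0", "a"), ("0", "rho", "b"), ("rho", "i", "c")]

def is_oriented_cycle(arrows):
    """Check that the arrows form a single oriented cycle
    (the head of each arrow is the tail of the next)."""
    return all(arrows[k][1] == arrows[(k + 1) % len(arrows)][0]
               for k in range(len(arrows)))
```

The same list-of-arrows encoding works verbatim for the larger quivers of the index 2 and index 3 subgroups below.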
Okay, let’s look at the next case, that of the unique index 2 subgroup $\Gamma_2 $ represented by the Farey symbol [tex]\xymatrix{\infty \ar@{-}[r]_{\bullet} & 0 \ar@{-}[r]_{\bullet} & \infty}[/tex] or the dessin (the two green edges) or by its fundamental domain consisting of the 4 triangles where again the left and right vertical boundaries are to be identified in parts.
That is, we have 6 edges on the 2-sphere $\mathcal{H}/\Gamma_2 = S^2 $, all of them oriented by the above rule. So, for example, the lower-right triangle is oriented as $\vec{cfb} $. To see how this oriented graph (the quiver) is embedded in $S^2 $, view the big lower region (cdab) as the lower hemisphere and the big upper region (abcd) as the upper hemisphere. So, the two green edges together with a and b form the equator and the remaining two yellow edges form the two parts of a big circle connecting the north and south pole. That is, the graph consists of the cut-lines if we cut the sphere in 4 equal parts. The corresponding quiver-picture is
[tex]\xymatrix{& i \ar@/^/[dd]^f \ar@/_/[dd]_e & \\
\rho^2 \ar[ru]^d & & \rho \ar[lu]_c \\ & 0 \ar[lu]^a \ar[ru]_b &}[/tex]
As a mental check, verify that the index 3 subgroup determined by the Farey symbol [tex]\xymatrix{\infty \ar@{-}[r]_{\circ} & 0 \ar@{-}[r]_{\circ} & 1 \ar@{-}[r]_{\circ} & \infty}[/tex] , whose fundamental domain with identifications is given on the left, has as its associated quiver picture
[tex]\xymatrix{& & \rho \ar[lld]_d \ar[ld]^f \ar[rd]^e & \\
i \ar[rrd]_a & i+1 \ar[rd]^b & & \omega \ar[ld]^c \\ & & 0 \ar[uu]^h \ar@/^/[uu]^g \ar@/_/[uu]_i &}[/tex]
whereas the index 3 subgroup determined by the Farey symbol [tex]\xymatrix{\infty \ar@{-}[r]_{1} & 0 \ar@{-}[r]_{1} & 1 \ar@{-}[r]_{\circ} & \infty}[/tex], whose fundamental domain with identifications is depicted on the right, has as its associated quiver
[tex]\xymatrix{i \ar[rr]^a \ar[dd]^b & & 1 \ar@/^/[ld]^h \ar@/_/[ld]_i \\
& \rho \ar@/^/[lu]^d \ar@/_/[lu]_e \ar[rd]^f & \\ 0 \ar[ru]^g & & i+1 \ar[uu]^c}[/tex]
Next time, we will use these quivers to define superpotentials…
|
NTS Abstracts Spring 2019
Jan 23
Yunqing Tang

Reductions of abelian surfaces over global function fields

Abstract: For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.

Jan 24
Hassan-Mao-Smith--Zhu

The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$

Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and }4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial-time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.

March 28
Shamgar Gurevitch

Harmonic Analysis on GLn over finite fields

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$trace (\rho(g))/dim (\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).
|
The particle-in-a-box model is used to approximate the Hamiltonian operator for the \(\pi\) electrons because the full Hamiltonian is quite complex. The full Hamiltonian operator for each electron consists of the kinetic energy term \(\dfrac {-\hbar ^2}{2m} \dfrac {d^2}{dx^2}\) and the sum of the Coulomb potential energy terms \(\dfrac {q_1q_2}{4\pi \epsilon _0 r_{12}}\) for the interaction of each electron with all the other electrons and with the nuclei (\(q\) is the charge on each particle and \(r\) is the distance between them). Considering these interactions, the Hamiltonian for electron \(i\) is given below.
\[ \hat {H} _i = \dfrac {- \hbar ^2}{2m} \dfrac {d^2}{dx^2} + \underset{\text{sum over electrons}}{ \sum _{j } \dfrac {e^2}{4 \pi \epsilon _0 r_{i, j}}} - \underset{\text{sum over nuclei}}{ \sum _{n} \dfrac {e^2 Z_n}{4 \pi \epsilon _0 r_{i,n}} } \label {4-1}\]
The Schrödinger equation obtained with this Hamiltonian cannot be solved analytically by anyone because of the electron-electron interaction terms. Some approximations for the potential energy must be made.
We want a model for the dye molecules that has a particularly simple potential energy function because we want to be able to solve the corresponding Schrödinger equation easily. The particle-in-a-box model has the necessary simple form. It also permits us to get directly at understanding the most interesting feature of these molecules, their absorption spectra.
Figure \(\PageIndex{1}\): A diagram of the particle-in-a-box potential energy superimposed on a somewhat more realistic potential. The bond length is given by β, the overshoot by δ, and the length of the box by L = bβ + 2δ, where b is the number of bonds.
As mentioned in the previous section, we assume that the π-electron motion is restricted to left and right along the chain in one dimension. The average potential energy due to the interaction with the other electrons and with the nuclei is taken to be a constant except at the ends of the molecule. At the ends, the potential energy increases abruptly to a large value; this increase in the potential energy keeps the electrons bound within the conjugated part of the molecule. Figure \(\PageIndex{1}\) shows the classical particle-in-a-box potential function and the more realistic potential energy function. We have defined the constant potential energy for the electrons within the molecule as the zero of energy. One end of the molecule is set at \(x = 0\), the other at \(x = L\), and the potential energy goes to infinity at these points.
For one electron located within the box, i.e. between \(x = 0\) and \(x = L\), the Hamiltonian is
\[\hat {H} = \dfrac {-\hbar ^2}{2m} \dfrac {d^2}{dx^2}\]
because \(V =0\), and the (time-independent) Schrödinger equation that needs to be solved is then
\[\dfrac {- \hbar ^2}{2m} \dfrac {d^2}{dx^2} \psi (x) = E \psi (x) \label {4-2}\]
We need to solve this differential equation to find the wavefunction and the energy. In general, differential equations have multiple solutions (solutions that are families of functions), so actually by solving this equation, we will find all the wavefunctions and all the energies for the particle-in-a-box. There are many ways of solving differential equations, and you will see some of them illustrated here and in subsequent chapters. One way is to recognize functions that might satisfy the equation. This equation says that differentiating the function twice produces the function times a constant. What kinds of functions have you seen that regenerate the function after differentiating twice? Exponential functions and sine and cosine functions come to mind.
Example \(\PageIndex{1}\)
Use \(\sin(kx)\), \(\cos(kx)\), and \(e^{ikx}\) for the possible wavefunctions in Equation \(\ref{4-2}\) and differentiate twice to demonstrate that each of these functions satisfies the Schrödinger equation for the particle-in-a-box.
Exercise \(\PageIndex{1}\) leads you to the following three equations.
\[\dfrac {\hbar ^2 k^2}{2m} \sin (kx) = E \sin (kx) \label {4-3}\]
\[\dfrac {\hbar ^2 k^2}{2m} \cos (kx) = E \cos (kx) \label {4-4}\]
\[\dfrac {\hbar ^2 k^2}{2m} e^{ikx} = E e^{ikx} \label {4-5}\]
For the equalities expressed by these equations to hold, \(E\) must be given by
\[E = \dfrac {\hbar ^2 k^2}{2m} \label {4-6}\]
Kinetic energy is the momentum squared divided by twice the mass \(p^2/2m\), so we conclude from Equation \(\ref{4-6}\) that \(ħ^2k^2 = p^2\).
Solutions to differential equations that describe the real world also must satisfy conditions imposed by the physical situation. These conditions are called boundary conditions.
For the particle-in-a-box, the particle is restricted to the region of space occupied by the conjugated portion of the molecule, between \(x = 0\) and \(x = L\). If we make the large potential energy at the ends of the molecule infinite, then the wavefunctions must be zero at \(x = 0\) and \(x = L\) because the probability of finding a particle with an infinite energy should be zero. Otherwise, the world would not have an energy resource problem. This boundary condition therefore requires that \(ψ(0) = ψ(L) = 0\).
Example \(\PageIndex{2}\)
Which of the functions \(\sin(kx)\), \(\cos(kx)\), or \(e^{ikx}\) is 0 when \(x = 0\)?
As you discovered in Exercise \(\PageIndex{2}\) for these three functions, only \(\sin(kx) = 0\) when \(x = 0\). Consequently only \(\sin(kx)\) is a physically acceptable solution to the Schrödinger equation.
The boundary condition described above also requires us to set \(ψ(L) = 0\).
\[ψ(L) = \sin(kL) = 0 \label {4.7}\]
The sine function will be zero if \(kL = nπ\) with \(n = 1,2,3, \cdots\). In other words,
\[ k = \dfrac {n \pi}{L} \label {4-8}\]
with \(n = 1, 2, 3 \cdots\)
Note that \(n = 0\) is
not acceptable here because this makes the wave vector zero \(k = 0\), so \(\sin(kx) = 0\), and thus \(ψ(x)\) is zero everywhere. If the wavefunction were zero everywhere, it means that the probability of finding the electron is zero. This clearly is not acceptable because it means there is no electron.
Example \(\PageIndex{3}\)
Show that \(\sin(kx) = 0\) at \(x = L\) if \(k = nπ/L\) and \(n\) is an integer.
Negative Quantum Numbers
It appears that a negative integer also would work for \(n\) because
\[\sin \left ( \dfrac {-n \pi}{L} x \right ) = - \sin \left ( \dfrac {n \pi}{L} x \right ) \label {4.9}\]
which also satisfies the boundary condition at \(x = L\). The reason negative integers are not used is a bit subtle. Changing \(n\) to \(–n\) just changes the sign (also called the phase) of the wavefunction from + to -, and does not produce a function describing a new state of the particle. Note that the probability density for the particle is the absolute square of the function, and the energies are the same for \(n\) and \(–n\). Also, since the wave vector k is associated with the momentum (p = ħk), n > 0 means k > 0 corresponding to momentum in the positive direction, and \(n < 0\) means \(k < 0\) corresponding to momentum in the negative direction. By using Euler’s formula one can show that the sine function incorporates both \(k\) and \(–k\) since
\[ \sin (kx) = \dfrac {1}{2i} ( e^{ikx} - e^{-ikx} ) \label {4-10}\]
so changing \(n\) to \(–n\) and \(k\) to \(–k\) does not produce a function describing new state, because both momentum states already are included in the sine function.
The set of wavefunctions that satisfies both boundary conditions is given by
\[ \psi _n (x) = N \sin \left ( \dfrac {n \pi}{L} x \right ) \text {with } n = 1, 2, 3, \cdots \label {4-11}\]
The normalization constant N is introduced and evaluated to satisfy the normalization requirement.
\[ \int \limits _0^L \psi ^* (x) \psi (x) dx = 1 \label {4- 12}\]
\[N^2 \int \limits _0^L \sin ^2 \left ( \dfrac {n \pi x}{L} \right ) dx = 1 \label {4-13}\]
\[N = \sqrt{ \dfrac{1}{\int \limits _0^L \sin ^2 \dfrac {n\pi x}{L} dx} } \label {4-14}\]
\[ N = \sqrt{ \dfrac {2}{L}} \label {4-15}\]
Finally we write the wavefunction:
\[ \psi _n (x) = \sqrt{ \dfrac {2}{L} } \sin \left ( \dfrac {n \pi}{L} x \right ) \label {4-16}\]
Example \(\PageIndex{4}\)
Evaluate the integral in Equation \(\ref{4-13}\) and show that \(N = (2/L)^{1/2}\).
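If you prefer a numerical sanity check before doing the integral by hand, a short script (the function name and step count are arbitrary choices of ours) confirms that the integral in Equation \(\ref{4-13}\) equals \(L/2\), so that \(N = (2/L)^{1/2}\), for any \(n\) and \(L\):

```python
import math

def norm_constant(L, n, steps=10000):
    """Numerically evaluate the normalization integral of Equation 4-13
    with the midpoint rule and return N = 1/sqrt(integral)."""
    dx = L / steps
    integral = sum(math.sin(n * math.pi * (i + 0.5) * dx / L) ** 2
                   for i in range(steps)) * dx
    return 1.0 / math.sqrt(integral)

# agrees with sqrt(2/L) for any quantum number n and box length L
```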
By finding the solutions to the Schrödinger equation and imposing boundary conditions, we have found a whole set of wavefunctions and corresponding energies for the particle-in-a-box. The wavefunctions and energies depend upon the number n, which is called a quantum number. In fact there are an infinite number of wavefunctions and energy levels, corresponding to the infinite number of values for \(n\) \((n = 1 \rightarrow \infty)\). The wavefunctions are given by Equation \(\ref{4-16}\) and the energies by Equation \(\ref{4-6}\). If we substitute the expression for k from Equation \(\ref{4-8}\) into Equation \(\ref{4-6}\), we obtain the equation for the energies \(E_n\)
\[ E_n = \dfrac {n^2 \pi ^2 \hbar ^2}{2mL^2} = n^2 \left (\dfrac {h^2}{8mL^2} \right ) \label {4-17}\]
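To get a feel for the magnitudes in Equation \(\ref{4-17}\), here is a small sketch (the constants are the standard values rounded to four figures; the function name is our own) that evaluates \(E_n\) for an electron in a 1 nm box:

```python
# standard physical constants (SI), rounded to four significant figures
H = 6.626e-34    # Planck's constant, J s
M_E = 9.109e-31  # electron mass, kg

def E_n(n, L, m=M_E):
    """Particle-in-a-box energies, Equation 4-17: E_n = n^2 h^2 / (8 m L^2)."""
    return n ** 2 * H ** 2 / (8 * m * L ** 2)

# ground state of an electron in a 1 nm box: about 6.0e-20 J (~0.38 eV);
# the n = 2 level is 4 times higher, the n = 3 level 9 times higher, and so on
E1 = E_n(1, 1.0e-9)
```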
Example \(\PageIndex{4}\)
Substitute the wavefunction, Equation \(\ref{4-16}\), into Equation \(\ref{4-2}\) and differentiate twice to obtain the expression for the energy given by Equation \(\ref{4-17}\).
From Equation \(\ref{4-17}\) we see that the energy is quantized in units of \(\dfrac {h^2}{8mL^2}\); i.e. only certain values for the energy of the particle are possible. This quantization, the dependence of the energy on integer values for n, results from the boundary conditions requiring that the wavefunction be zero at certain points. We will see in other chapters that quantization generally is produced by boundary conditions and the presence of Planck’s constant in the equations.
The lowest-energy state of a system is called the ground state. Note that the ground state (\(n = 1\)) energy of the particle-in-a-box is not zero. This energy is called the zero-point energy.
Example \(\PageIndex{5}\)
Here is a neat way to deduce or remember the expression for the particle-in-a-box energies. The momentum of a particle has been shown to be equal to \(ħk\). Show that this momentum, with \(k\) constrained to be equal to \(nπ/L\), combined with the classical expression for the kinetic energy in terms of the momentum \((p^2/2m)\) produces Equation \(\ref{4.17}\). Determine the units for \(\dfrac {h^2}{8mL^2}\) from the units for \(h\), \(m\), and \(L\).
Example \(\PageIndex{6}\)
Why must the wavefunction for the particle-in-a-box be normalized? Show that φ(x) in Equation \(\ref{4-16}\) is normalized.
Example \(\PageIndex{6}\)
Use a spreadsheet program, Mathcad, or other suitable software to construct an accurate energy level diagram and to plot the wavefunctions and probability densities for a particle-in-a-box with \(n = 1\) to \(6\). You can make your graphs universal, i.e. apply to any particle in any box, by using the quantity \((h^2/8mL^2)\) as your unit of energy and \(L\) as your unit of length. To make these universal graphs, plot \(n^2\) on the y-axis of the energy-level diagram, and plot \(x/L\) from \(0\) to \(1\) on the x-axis of your wavefunction and probability density graphs.
Example \(\PageIndex{7}\)
How does the energy of the electron depend on the size of the box and the quantum number n? What is the significance of these variations with respect to the spectra of cyanine dye molecules with different numbers of carbon atoms and pi electrons? Plot \(E(n_2)\), \(E(L_2)\), and \(E(n)\) on the same figure and comment on the shape of each curve.
The quantum number serves as an index to specify the energy and wavefunction or state. Note that \(E_n\) for the particle-in-a-box varies as \(n^2\) and as \(1/L^2\), which means that as \(n\) increases the energies of the states get further apart, and as \(L\) increases the energies get closer together. How the energy varies with increasing quantum number depends on the nature of the particular system being studied; be sure to take note of the relationship for each case that is discussed in subsequent chapters.
Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
|
Hitchhiker Server Sky Test Orbits
Testing the first server sky thinsat arrays may be challenging - they don't belong in a common orbit, and they won't last long below 1000 km altitude due to ram drag. It would be nice to deploy the first tests at M288, but that will require a custom launch.
Instead, let's dispense as a co-payload from a GTO transfer orbit. We will test in a highly inclined orbit, in five steps:
1) Deploy: use a cold gas thruster to slightly lower apogee and significantly raise perigee. This will be the orbit for our experiment. The inclination of this orbit will be a function of launch latitude, and the ascending node will be on the equatorial plane.

2) Dispense: release approximately 100 thinsats near apogee. The subsequent fate of the dispenser is undefined; perhaps we can use it later for collision experiments.

3) Experiment: the thinsats maneuver into arrays and begin the test. The perigee drag is high enough that the apogee will decay significantly with each orbit. Thinsats will orient for minimum drag, but because they are curved the drag will be significant.

4) Deorbit: if damage has not set the thinsats tumbling, lock the optical thrusters to produce a rapid yaw tumble. This will greatly increase perigee drag.

5) Reentry: after apogee decays to perigee, the orbit will rapidly spiral in. When tumbling thinsats reach ISS altitude, they will reenter one or two orbits later.
The starting GTO transfer orbit will have a perigee radius of r_{P-GTO} (perhaps 7000 km) and an apogee radius of r_{A-GTO} (perhaps 42,100 km ) one or two orbits after release from the GTO upper stage. Using these numbers and the gravitational coefficient \mu , we can compute the perigee and apogee velocities v_{P-GTO} and v_{A-GTO} , the semimajor axis a , the eccentricity \epsilon , and the characteristic velocity v_0 .
Computing the experimental orbits will require numerical integration, with estimated perturbations telling us how difficult it will be to keep arrays together. We can approximate all thinsats in an array with one trajectory in two regimes: (1) highly elliptical, dominated by perigee drag on a thinsat oriented for minimum drag, and (2) a spiral, with decay increasing rapidly as thermospheric density increases exponentially and the thinsats tumble. It may be possible to arrange and observe a collision between one or two thinsats and the heavier, more slowly decaying dispenser, which will still be in a high velocity, high eccentricity orbit.
Gas density will be low during quiet sun periods around 2016, and maximum about 5 years later in 2021. If possible, we should schedule the test so that all probably-disabled thinsats will reenter during that solar maximum.
Computing the Experimental orbit
The initial orbit
Starting with r_{A-GTO} and r_{P-GTO} , we can compute the semimajor axis a and other parameters:
\large a ~=~ 0.5 * ( r_{A-GTO} + r_{P-GTO} ) ~~~~~~~ semimajor axis
\large \epsilon ~ = ~ ( r_{A-GTO} - r_{P-GTO} ) / ( r_{A-GTO} + r_{P-GTO} ) ~~~~~ eccentricity
\large T ~=~ 2\pi \sqrt{ a^3/\mu } ~~~~~ orbital period (sidereal)
\large v_0 ~=~ \sqrt{ \mu / ( a (1 - {\epsilon}^2) ) } ~~~~~ characteristic velocity
apogee velocity: ~ \large v_a ~=~ v_0 ( 1 - {\epsilon} ) ~~~~~~~~ perigee velocity: ~ \large v_p ~=~ v_0 ( 1 + {\epsilon} )
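These relations are straightforward to evaluate; a minimal sketch (function and variable names are ours, with Earth's gravitational parameter in SI units):

```python
import math

MU = 3.986e14  # Earth's gravitational parameter mu, m^3/s^2

def orbit_elements(r_apogee, r_perigee):
    """Semimajor axis, eccentricity, period, and apsidal speeds of an
    elliptical orbit, per the formulas above (all SI units)."""
    a = 0.5 * (r_apogee + r_perigee)                  # semimajor axis
    ecc = (r_apogee - r_perigee) / (r_apogee + r_perigee)
    T = 2 * math.pi * math.sqrt(a ** 3 / MU)          # sidereal period
    v0 = math.sqrt(MU / (a * (1 - ecc ** 2)))         # characteristic velocity
    return a, ecc, T, v0 * (1 - ecc), v0 * (1 + ecc)  # ..., v_a, v_p

# the GTO example from the text: r_P = 7000 km, r_A = 42,100 km
a, ecc, T, v_a, v_p = orbit_elements(42.1e6, 7.0e6)
```

For these numbers the perigee speed comes out near 9.9 km/s and the period near 10.6 hours, which can be cross-checked against the vis-viva equation.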
For the first approximation, ignore J_2 and other complexities in the gravitational field, but include them in the numerical model. They will cause the elliptical orbit to precess towards the east.
Find \epsilon , a , and T from r_p , v_p and the constant \mu :
\large r_p ~=~ ( 1 - \epsilon ) a ~~~~~~~~ \large ( 1 - \epsilon ) ~=~ \Large { r_p \over a } ~~~
\large v_p ~=~ \LARGE \sqrt{{{2\mu}\over{r_p}}-{{\mu}\over{a}}} ~~~~~~~~ \Large { v_p^2 \over \mu } ~=~ { 2 \over r_p } - { 1 \over a } ~~~~~~~~ \Large { 1 \over a } ~=~ { 2 \over r_p } - { v_p^2 \over \mu } ~~~~~~~~ { r_p \over a } ~=~ 2 - { { r_p v_p^2 } \over \mu } ~~~~~~~~ \large ( 1 - \epsilon ) ~=~ 2 - \Large { { r_p v_p^2 } \over \mu }
Atmospheric Drag
The atmosphere becomes exponentially thinner with altitude until it is as low as the density of interplanetary space. The density can be approximated by
\large \rho ( r ) ~ = ~ \rho_0 ~ e^{ - ( r - r_0 ) / H }
where {\rho}_0 is the density at altitude r_0 , a function of temperature, solar activity, and longitude, which vary with time. H is the scale height, the distance over which the atmosphere density decays by a factor of e = 2.718 . Solar activity can triple temperature, and greatly increase the density of the atmosphere at very high altitude. Most of the gas at these high altitudes is hydrogen and helium. Because the sun heats the atmosphere during the day, the temperature and density peak at the afternoon 1400 hours position. The air is coldest before dawn, at the 0400 hours position. The mean free path of the air molecules above 1000 km is larger than the scale height, so the atoms rarely collide.
When an object passes through this thin gas at very high orbital speeds, it collides with the gas molecules, and experiences a force proportional to the gas density \rho times the area A times the velocity squared . The gas is thickest at perigee, a density of \rho (r_p) , where the scale height (which increases with altitude) is H_p . The modified density equation is:
\large \rho ( r ) ~ = ~ \rho (r_p) ~ e^{ - ( r - r_p ) / H(r_p) } ~ ~ ~ ~ density near perigee
A small change in radius makes a large change in drag, and the velocity is inversely proportional to radius. We will assume the velocity near perigee is a constant v_p . So, the drag force for area A (the area perpendicular to the velocity) is
\large F ~ = ~ {1\over 2} \rho (r) A { v_p}^2 ~ ~ ~ ~ drag force
The acceleration for a thinsat with mass M is:
\large acc ~ = ~{ \Large { \rho (r) A } \over { 2 M } } { v_p}^2 ~ ~ ~ ~ acceleration
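Combining the exponential density model and the drag expression, the acceleration near perigee can be sketched as follows (names are ours; all inputs in SI units):

```python
import math

def drag_acceleration(r, r_p, rho_p, H, v_p, A, M):
    """Drag deceleration near perigee: exponential density falloff above
    r_p (the density equation above) times A v_p^2 / (2 M)."""
    rho = rho_p * math.exp(-(r - r_p) / H)  # density at radius r
    return rho * A * v_p ** 2 / (2 * M)
```

One scale height above perigee the acceleration drops by a factor of e, which is why the decay can be approximated as happening almost entirely near perigee.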
Experiment and Deorbit : Decaying Orbit from Elliptical to Circular
Assuming about 1 cm of curvature on a 20 cm wide thinsat, and a noontime or midnight perigee, the drag area is about 15 cm^2 or 1.5e-3 m^2. During intentional deorbiting, a thinsat is tumbled around an axis perpendicular to the orbit, increasing drag area to 160 cm^2 or 1.6e-2 m^2. Intentional deorbiting after the experiment is complete will reduce the time in orbit and potential space debris hazards to other satellites.
When r_a - r_p is much greater than the scale height H_p , we can crudely approximate the orbital decay as if it all occurs near perigee.
The radial velocity v_r is zero at perigee, and increases in magnitude at angles to the side:
\large v_r ~ \approx ~ \epsilon v_0 \sin( \theta ) ~ ~ ~ ~ radial velocity at orbit angle \theta
v_0 = v_p / ( 1 + \epsilon )
\large v_r ~ = ~ { \Large \epsilon \over { ( 1 + \epsilon ) } } v_p \theta ~ ~ ~ ~ radial velocity near perigee for small \theta
\Large { { d \theta } \over { d t } } ~ \large \approx ~ \Large { { v_p } \over { r_p } } ~ ~ ~ ~ angular velocity near perigee
Define y = r - r_p
\Large { dy \over dt } \large ~ = ~ \Large { dr \over dt } \large ~ = ~ v_r \approx ~ { \Large \epsilon \over { ( 1 + \epsilon ) } } v_p \theta ~ ~ ~ ~ radial velocity
Assuming angular velocity is approximately constant, and starting with t = 0 at perigee:
\theta ~ \approx ~ { v_p \over r_p } t
so \Large { dy \over dt } \large ~ \approx ~ { \Large \epsilon \over { ( 1 + \epsilon ) } } { {v_p}^2 \over { r_p } } ~ t
Integrating: \large y ~ \approx ~ { \Large \epsilon \over { 2 ( 1 + \epsilon ) } } { {v_p}^2 \over { r_p } } ~ t^2
Solving for t as a function of y: ~~~ \large t ~ \approx ~ { \Large \sqrt{ { { 2 ( 1 + \epsilon ) } \over \epsilon } ~ { { r_p } \over {v_p}^2 } ~ \large y } }
derivative: \large dt ~ \approx ~ { \Large \sqrt{ \left( 1 + { 1 \over \epsilon } \right) { { r_p } \over { {v_p}^2 y } } } } ~ ~ dy
The decrease of v_p during one perigee pass is the integral of the acceleration over time :
\large \Delta v_p ~ = ~ { \LARGE \int_{-\infty}^{\infty} } acc ~ dt ~ ~ ~ ~ Equation S0
From symmetry: \large \Delta v_p ~ = ~ 2 { \LARGE \int_0^{\infty} } acc ~ dt ~ = ~ 2 { \LARGE \int_0^{\infty} } { { \rho(r_p) A } \over { 2 M } } {v_p}^2 e^{-y / H } \large ~ dt
Recasting as an integral over y:
\large \Delta v_p ~ = ~ { \LARGE \int_0^{\infty} } { { \rho(r_p) A } \over M } {v_p}^2 e^{-y / H } ~ { \sqrt{ \left( 1 + { 1 \over \epsilon } \right) { { r_p } \over {v_p}^2 y } } } \large ~ dy
Moving and combining constants out of the integral:
\large \Delta v_p ~ = ~ { { \rho (r_p) ~ A ~ v_p } \over M } \sqrt{\left( 1 + { 1 \over \epsilon } \right) r_p } { \LARGE \int_0^{\infty} } e^{-y / H } \sqrt{ 1 / y } ~ dy ~ ~ ~
The integral can be solved with a change in variables, let ~ y = H b^2 ~ so that ~ dy = 2 H b ~ db ~ and ~~~ \sqrt{ 1 / y } ~~ dy ~ = ~ 2 \sqrt{ H } ~~ db
{ \LARGE \int_0^{\infty} } e^{-y / H } \sqrt{ 1 / y } ~ dy ~ = ~ 2 \sqrt{ H } { \LARGE \int_0^{\infty} } e^{-b^2} ~ db ~ = ~ 2 \sqrt{ H } \sqrt{ \pi }/ 2 ~ = ~ \sqrt{ \pi H } ~~~
\large \Delta v_p ~ = ~ { { \rho (r_p) ~ A ~ v_p } \over M } \sqrt{ \left( 1 + { 1 \over \epsilon } \right) \pi ~ r_p ~ H } ~~~ Equation S1
resulting in:
Equation S3: \large \Delta v_p = \Large { { \rho (r_p) ~ A ~ v_p } \over M } \sqrt{ { \pi ~ r_p ~ H } \over { 1 - \mu / r_p v_p^2 } } ~~~ Apogee Radius: \large r_a = \Large { r_p \over { { { 2 \mu } \over { r_p { v_p }^2 } } - 1 } } ~~~ Orbit Period: \large T = \Large { 2 \pi \mu \left( r_p \over { 2 \mu - r_p { v_p }^2 } \right)^{3/2} }
Solve equation S3 repeatedly, decrementing v_p , until the apogee drops below a threshold and the assumptions become invalid. If the eccentricity drops to zero (a circular orbit, with r_p = r_a ), the denominator of the square root in S3 also drops to 0, and \Delta v_p goes asymptotic.
We will arbitrarily assume this happens when r_a - r_p = 2 H , that is, below a velocity threshold v_{pt} :
\large a_t ~=~ r_p + H ~~~ semimajor axis at threshold
Equation S4: \large v_{pt} ~=~ \Large \sqrt{ { {2 \mu} \over {r_p} } - { \mu \over {r_p + H } } }
If v_p < v_{pt} , the orbit has decayed to a circular orbit, the experimental and deorbit phases are over, and it is time to begin the re-entry phase, to clean the debris of our experiment out of orbit. By tumbling the thinsats to begin accelerated deorbiting, the effective area A is increased.
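A minimal sketch of this per-orbit decrement (Equation S3 with the stopping threshold of Equation S4; the density, scale height, area, and mass inputs below are placeholder values to be replaced with nrlmsise-00 numbers):

```python
import math

MU = 3.986e14  # Earth's gravitational parameter mu, m^3/s^2

def decay_to_circular(r_p, v_p, rho_p, H, A, M):
    """Decrement v_p by Equation S3 once per perigee pass until v_p falls
    below the threshold v_pt of Equation S4 (apogee within 2H of perigee).
    Returns the final perigee velocity and the number of passes."""
    v_pt = math.sqrt(2 * MU / r_p - MU / (r_p + H))  # Equation S4
    passes = 0
    while v_p > v_pt:
        dv = (rho_p * A * v_p / M) * math.sqrt(
            math.pi * r_p * H / (1 - MU / (r_p * v_p ** 2)))  # Equation S3
        v_p -= dv
        passes += 1
    return v_p, passes

# placeholder inputs: 200 km perigee, 40 km scale height, 3e-10 kg/m^3,
# 15 cm^2 minimum-drag area, 5 gram thinsat
v_final, n_passes = decay_to_circular(6.578e6, 10000.0, 3e-10, 4e4, 1.5e-3, 0.005)
```

Note that v_pt is above circular velocity at r_p , so the loop always terminates before the square-root denominator reaches zero.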
Re-Entry: Decaying Circular Orbit
In this phase, we begin with r_p = r_a = r_{p-EXP} , the experimental perigee radius, and v_p = v_a = \sqrt{ \mu / r_{p-EXP} } , the circular orbital velocity at that radius. Atmospheric density is not symmetric with longitude, with maximum density occurring 2 hours after noon and minimum density occurring 4 hours after midnight; the average density is sufficient to estimate re-entry time though not the exact trajectory. Assume that the mean free path is longer than the width of the thinsat - this is true down to 110 km altitude.
Drag acceleration reduces energy, which lowers the orbit. Solving for dr / dt :
acceleration: a = \frac{ \rho(r) A }{ 2 M } v^2 \qquad velocity: v = \sqrt{ \frac{\mu}{r} } \qquad specific energy: E = \frac{ - \mu }{ 2 r } \qquad \frac{dE}{dr} = \frac{ \mu }{ 2 r^2 } \qquad \frac{dr}{dE} = \frac{ 2 r^2 }{ \mu }
\frac{dE}{dt} = - a v = \frac{ - \rho(r) A v^3 }{ 2 M } = \frac{ - \rho(r) }{ 2 } \frac{A}{M} \left( \frac{\mu}{r} \right)^{3/2}
\frac{dr}{dt} = \frac{dE}{dt} \frac{dr}{dE} = \frac{ - \rho(r) }{ 2 } \frac{A}{M} \left( \frac{\mu}{r} \right)^{3/2} \frac{ 2 r^2 }{ \mu } = - \rho (r) \frac{A}{M} \sqrt{ \mu r }
Equation S5: d t = \frac{ -1 }{ \rho(r) \frac{A}{M} \sqrt{ \mu r } } \, d r \qquad NOTE: preliminary numerical simulations suggest an additional factor of 1.5 - the time is 50% longer, or equivalently, dr/dt is 2/3 as large. I suspect the problem is a missing 2 \dot{ r } \dot{ \theta } term in the horizontal acceleration.
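Equation S5 can be integrated numerically with a simple stepping loop. The sketch below uses a toy exponential atmosphere with made-up constants purely for illustration (the real calculation uses NRLMSISE-00, as described below), and does not include the suspected factor of 1.5:

```python
import math

MU = 3.986004418e14   # Earth's GM, m^3/s^2
R_E = 6.371e6         # Earth radius, m

def rho(r):
    """Toy exponential atmosphere (illustrative constants only --
    the real calculation uses the NRLMSISE-00 model)."""
    rho0, h0, H = 3.8e-12, 400e3, 60e3   # density at 400 km, scale height
    return rho0 * math.exp(-((r - R_E) - h0) / H)

def decay_time(r_start, r_end, A_over_M, steps=10_000):
    """Integrate Equation S5, dt = -dr / (rho(r) (A/M) sqrt(mu r)),
    stepping from r_start down to r_end."""
    dr = (r_start - r_end) / steps
    t, r = 0.0, r_start
    for _ in range(steps):
        t += dr / (rho(r) * A_over_M * math.sqrt(MU * r))
        r -= dr
    return t   # seconds

print(decay_time(R_E + 420e3, R_E + 300e3, 2.5))  # sail ratio 2.5 m^2/kg
```

Since the density model is independent of the sail ratio, decay time scales inversely with A/M: doubling the sail ratio halves the re-entry time.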
Computer Models and Graphs
We will use the NRLMSISE-00 atmosphere model and this program to make these plots:
(Figures: exp.png, decay.png)
Numerical Simulation of Atmospheric Entry from Circular Orbit
Summary: Depending on sail ratio and solar activity, descending thinsats pass through ISS altitude as slowly as 100 meters per hour and as fast as 100 meters per minute; they will likely linger at higher altitudes during low solar activity, so the flux of descending thinsats that can collide with ISS will be low. Depending on sail ratio, they heat up to between 1100K and 1300K; given turbulence and gee forces, the aluminum substrate will melt and oxidize, and the memory chips will be damaged and erased. The chance of damaging impact on the ground is zero, and the chance of sensitive data surviving reentry is minuscule.
Atmospheric entry was simulated with three levels of solar activity ( F107 = 70, 150, and 250 ) and two sail ratios ( 2.5 and 5.0 square meters per kilogram ). The sail ratios are hundreds of times greater than those of a cubesat or other small satellite, so descent is much faster, and collisions will deposit much less energy per area on a target. Collision closing velocities with ISS will be around 6000 m/s, lower than collisions with eccentric and inclined objects, and much slower than meteors. About 30 tons of meteoritic material falls to earth at ISS-intersecting latitudes every day, at typical speeds of 30 km/sec. The energy-time flux of a re-entering 1 kilogram thinsat experiment is 18 Megajoule-hours; the energy-time flux of rapidly descending meteors is 125 Megajoules per day, so the experimental risk to ISS is about that of 3.5 hours of meteorite flux - a cost (mostly to the ISS solar panels), but not a big one.
Avoiding ISS
The International Space Station is in a 51.6° inclination orbit, crossing the equator twice per T = 5570 second orbit at an altitude ranging from 400 km to 420 km. Thinsats are small and light - nevertheless, we should minimize the chances of hitting it, so our experimental perigee should be above 450 kilometers or so, and the decay rate through the ISS altitude should be fast. Assuming a launch from India's Satish Dhawan or ESA's Kourou launch site, the experimental orbit will be close to equatorial. Closing velocity will be the orbital velocity 7.66 km/s times sin(51.6°), or 6 km/s. The orbit circumference C is 42650 kilometers. The ISS solar panels are 2500 m^2; assume the whole station area is A = 10000 m^2. For N (est. 200) pointlike thinsats descending at velocity V ( 0.1 m/s at solar minimum, 1.0 m/s at solar maximum ), the chance of one collision P is
P = N \frac{ 2 A }{ C T V } = 200 \times 2 \times 10000 / ( 42650000 \times 5570 \times 7660 ) \approx 2E-9
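The arithmetic is trivial to reproduce. Note that the substitution in the text puts the 7660 m/s closing velocity in the V slot, even though V was introduced above as the descent velocity; the sketch below simply mirrors the numbers as written:

```python
def collision_probability(N, A, C, T, V):
    """P = N * 2A / (C T V): N pointlike descending thinsats versus a
    target of area A on an orbit of circumference C and period T."""
    return N * 2.0 * A / (C * T * V)

# the numbers substituted in the text (V taken as 7660 m/s, as written)
P = collision_probability(200, 10000.0, 42_650_000.0, 5570.0, 7660.0)
print(P)  # about 2e-9
```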
Since thinsats are gossamer, the collision energy is smaller than that of a normal satellite, and spread out (1.5 MJ over 20 square centimeters), so even in the very unlikely event of a collision it will not penetrate the hull; it might damage a solar cell. We can expect tumbling thinsats to spread around a band of latitudes as they pass through ISS orbit. This doesn't change the probabilities, but it does eliminate the 1E-11 chance of all of them hitting as a group.
Both the ISS orbit and the array orbit will precess with J_2 oblateness perturbations, at different rates because of the different orbits.
Experiment Time
An experimental perigee of 450 km results in an experiment duration of 20 years - which is way too long. I need to check the math again.
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish among 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
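That greedy rewriting procedure can be sketched in a few lines. The rule set below is toy free-group cancellation, not an actual Dehn presentation, purely to show the loop; termination is guaranteed because each replacement strictly shortens the word:

```python
def dehn_reduce(w, rules):
    """Greedy Dehn-algorithm sketch: rules is a list of pairs (u, v)
    with len(u) > len(v); replace an occurrence of some u by v until
    no u occurs. Each replacement shortens w, so the loop terminates."""
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = w.find(u)
            if i >= 0:
                w = w[:i] + v + w[i + len(u):]
                changed = True
                break
    return w

# toy rule set: free-group cancellation (NOT a real Dehn presentation)
rules = [("aA", ""), ("Aa", ""), ("bB", ""), ("Bb", "")]
print(dehn_reduce("abBA", rules))  # reduces to the empty word
```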
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.== Branches of algebraic graph theory ===== Using linear algebra ===The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$ which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
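A quick coefficient-level check of that kernel claim (coefficients $[a_0,a_1,a_2,a_3]$ for $P = a_0 + a_1 x + a_2 x^2 + a_3 x^3$; the expansion of $F$ below is worked out by hand from the definition):

```python
def F(p):
    """F(P) = x P'' + (x+1) P''' on R_3[x], acting on coefficient lists.
    With P = a0 + a1 x + a2 x^2 + a3 x^3: P'' = 2 a2 + 6 a3 x and
    P''' = 6 a3, so F(P) = 6 a3 + (2 a2 + 6 a3) x + 6 a3 x^2."""
    a0, a1, a2, a3 = p
    return [6 * a3, 2 * a2 + 6 * a3, 6 * a3, 0]

# polynomials of degree <= 1 are killed; degrees 2 and 3 are not
assert F([5, 7, 0, 0]) == [0, 0, 0, 0]
assert F([0, 0, 1, 0]) != [0, 0, 0, 0]
assert F([0, 0, 0, 1]) != [0, 0, 0, 0]
```

The zero rows force $a_2 = a_3 = 0$, confirming $\ker F = \{ax + b\}$.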
|
Consider the scattering of a quantum particle in one dimension, caused by a step in the potential (this appears in many undergrad level QM books):
$$ V(x) = \begin{cases} V_1 & x<0 \\ V_2 & x>0\end{cases}. $$
The particle is incident from the left, so its wavefunction is:
$$ \psi(x) = \begin{cases} e^{i k_1 x} + r e^{-i k_1 x} & x<0 \\ t e^{i k_2 x} & x>0\end{cases}, $$
where $k_i =\sqrt{2m(E-V_i)}/\hbar$.
Matching the wavefunction and its derivative at $x=0$ gives:
$$ r = \frac{k_1-k_2}{k_1+k_2} ~~~;~~~ t = \frac{2 k_1}{k_1+k_2}.$$
Now we put another step in the potential at some distance $L$, which makes it a box potential:
$$ V(x) = \begin{cases} V_1 & x<0 \\ V_2 & 0<x<L \\ V_1 & L<x\end{cases}. $$
We solve this in a similar manner as before, with the wavefunction:
$$ \psi(x) = \begin{cases} e^{i k_1 x} + r e^{-i k_1 x} & x<0 \\ a e^{i k_2 x} + b e^{-i k_2 x} & 0<x<L \\ t e^{i k_1 x} & L<x \end{cases}. $$
Matching the wavefunction and its derivative at $x=0,L$ gives:
$$ r = \frac{k_1^2-k_2^2}{k_1^2+k_2^2+2 i k_1 k_2 \cot{(k_2 L)} } ~~~;~~~ t = \text{(something)}.$$
How come the second scattering problem doesn't reproduce the first scattering problem in the limit $L \rightarrow \infty$? I'm looking only at the value of $r$. I send a particle in, it scatters, and I get something back with an amplitude $r$. It seems unphysical that if the potential changed at $x=L$, it changes the scattering at $x=0$, no matter how far $L$ is.
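One way to see the puzzle concretely: the step value of $r$ is real and $L$-independent, while the box value oscillates in $L$ through the $\cot(k_2 L)$ term, so it never settles to the step value. A quick numeric check, with arbitrary illustrative $k_1$, $k_2$ (in units where they are order one):

```python
import math

def r_step(k1, k2):
    """Reflection amplitude for the single step."""
    return (k1 - k2) / (k1 + k2)

def r_box(k1, k2, L):
    """Reflection amplitude for the box of width L (formula from the text)."""
    cot = 1.0 / math.tan(k2 * L)
    return (k1**2 - k2**2) / (k1**2 + k2**2 + 2j * k1 * k2 * cot)

k1, k2 = 1.0, 0.5
print(r_step(k1, k2))             # 1/3, independent of L
print(abs(r_box(k1, k2, 1.0)))    # |r| depends on L ...
print(abs(r_box(k1, k2, 2.0)))    # ... and keeps oscillating as L grows
```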
|
A tetrahedral snake, sometimes called a Steinhaus snake, is a collection of tetrahedra, linked face to face.
Steinhaus showed in 1956 that the last tetrahedron in the snake can never be a translation of the first one. This is a consequence of the fact that the group generated by the four reflexions in the faces of a tetrahedron forms the free product $C_2 \ast C_2 \ast C_2 \ast C_2$.
For a proof of this, see Stan Wagon’s book The Banach-Tarski paradox, starting at page 68.
The thread $(3|3)$ is the spine of the $(9|1)$-snake, which involves the following lattices \[ \xymatrix{& & 1 \frac{1}{3} \ar@[red]@{-}[dd] & & \\ & & & & \\ 1 \ar@[red]@{-}[rr] & & 3 \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 1 \frac{2}{3} \\ & & & & \\ & & 9 & &} \] It is best to look at the four extremal lattices as the vertices of a tetrahedron with the lattice $3$ corresponding to its centre of gravity.
The congruence subgroup $\Gamma_0(9)$ fixes each of these lattices, and the arithmetic group $\Gamma_0(3|3)$ is the conjugate of $\Gamma_0(1)$
\[ \Gamma_0(3|3) = \{ \begin{bmatrix} \frac{1}{3} & 0 \\ 0 & 1 \end{bmatrix}.\begin{bmatrix} a & b \\ c & d \end{bmatrix}.\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a & \frac{b}{3} \\ 3c & d \end{bmatrix}~|~ad-bc=1 \} \] We know that $\Gamma_0(3|3)$ normalizes the subgroup $\Gamma_0(9)$ and we need to find the moonshine group $(3|3)$ which should have index $3$ in $\Gamma_0(3|3)$ and contain $\Gamma_0(9)$.
So, it is natural to consider the finite group $A=\Gamma_0(3|3)/\Gamma_0(9)$ which is generated by the cosets of
\[ x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix} \qquad \text{and} \qquad y = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} \] To determine this group we look at its action on the lattices in the $(9|1)$-snake. It will fix the central lattice $3$ but will move the other lattices.
Recall that it is best to associate to the lattice $M.\frac{g}{h}$ the matrix
\[ \alpha_{M,\frac{g}{h}} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \] and then the action is given by right-multiplication.
\[
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}.x=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \] That is, $x$ corresponds to a $3$-cycle $1 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 1$ and fixes the lattice $9$ (so is rotation around the axis through the vertex $9$).
To compute the action of $y$ it is best to use an alternative description of the lattice, swapping the roles of the base-vectors $\vec{e}_1$ and $\vec{e}_2$. These lattices are projectively equivalent
\[ \mathbb{Z} (M \vec{e}_1 + \frac{g}{h} \vec{e}_2) \oplus \mathbb{Z} \vec{e}_2 \quad \text{and} \quad \mathbb{Z} \vec{e}_1 \oplus \mathbb{Z} (\frac{g’}{h} \vec{e}_1 + \frac{1}{h^2M} \vec{e}_2) \] where $g.g’ \equiv~1~(mod~h)$. So, we have equivalent descriptions of the lattices \[ M,\frac{g}{h} = (\frac{g’}{h},\frac{1}{h^2M}) \quad \text{and} \quad M,0 = (0,\frac{1}{M}) \] and we associate to the lattice in the second normal form the matrix \[ \beta_{M,\frac{g}{h}} = \begin{bmatrix} 1 & 0 \\ \frac{g’}{h} & \frac{1}{h^2M} \end{bmatrix} \] and then the action is again given by right-multiplication.
In the tetrahedral example we have
\[ 1 = (0,\frac{1}{3}), \quad 1\frac{1}{3}=(\frac{1}{3},\frac{1}{9}), \quad 1\frac {2}{3}=(\frac{2}{3},\frac{1}{9}), \quad 9 = (0,\frac{1}{9}) \] and \[ \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix}.y = \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix},\quad \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix}. y = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}. y = \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix} \] That is, $y$ corresponds to the $3$-cycle $9 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 9$ and fixes the lattice $1$ so is a rotation around the axis through $1$.
Clearly, these two rotations generate the full rotation-symmetry group of the tetrahedron
\[ \Gamma_0(3|3)/\Gamma_0(9) \simeq A_4 \] which has a unique subgroup of index $3$, generated by $x.y$ and $y.x$: the rotations with angle $180^o$ around the axes through the midpoints of opposite edges.
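The claim about $x.y$ can be checked directly on the four lattices, treating $x$ and $y$ as the permutations computed above (a small Python sketch; composition is "apply left, then right"):

```python
def compose(p, q):
    """Apply permutation p, then q (both given as dicts)."""
    return {k: q[p[k]] for k in p}

labels = ["1", "1 1/3", "1 2/3", "9"]
# x is the 3-cycle 1 -> 1 1/3 -> 1 2/3 -> 1, fixing the lattice 9
x = {"1": "1 1/3", "1 1/3": "1 2/3", "1 2/3": "1", "9": "9"}
# y is the 3-cycle 9 -> 1 1/3 -> 1 2/3 -> 9, fixing the lattice 1
y = {"9": "1 1/3", "1 1/3": "1 2/3", "1 2/3": "9", "1": "1"}

xy = compose(x, y)
identity = {k: k for k in labels}
# x.y is a double transposition: applying it twice gives the identity
assert compose(xy, xy) == identity and xy != identity
```

So $x.y$ has order $2$, as a half-turn of the tetrahedron should.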
The moonshine group $(3|3)$ is therefore the subgroup generated by
\[ (3|3) = \langle \Gamma_0(9),\begin{bmatrix} 2 & \frac{1}{3} \\ 3 & 1 \end{bmatrix},\begin{bmatrix} 1 & \frac{1}{3} \\ 3 & 2 \end{bmatrix} \rangle \]
|
In the microcanonical ensemble, the entropy \(S\) is a natural function of \(N\),\(V\) and \(E\), i.e., \(S=S(N,V,E)\). This can be inverted to give the energy as a function of \(N\), \(V\), and \(S\), i.e., \(E=E(N,V,S)\). Consider using Legendre transformation to change from \(S\) to \(T\) using the fact that
\[T= \left(\frac {\partial E}{\partial S}\right)_{N,V}\]
The Legendre transform \(\tilde{E}\) of \(E(N,V,S)\) is
\[ \tilde {E} (N, V, T ) = E (N,V,S(T)) - S \frac {\partial E}{\partial S}\]
\[ = E(N,V,S(T)) - TS \]
The quantity \(\tilde{E}(N,V,T)\) is called the Helmholtz free energy, is given the symbol \(A(N,V,T)\), and is the fundamental energy in the canonical ensemble. From \(A = E - TS \), the differential of \(A\) is
\[ dA = dE - TdS - SdT \]
From the first law, \(dE\) is given by
\[ dE = TdS - PdV + \mu dN \]
Thus,
\[ dA = - PdV - S dT + \mu dN \]
Comparing the two expressions, we see that the thermodynamic relations are
\[ S = -\left(\frac {\partial A}{\partial T}\right)_{N,V}\]
\[ P = -\left(\frac {\partial A}{\partial V}\right)_{N,T}\]
\[ \mu = \left(\frac {\partial A}{\partial N}\right)_{V,T}\]
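As a sanity check on the derivation above, here is a minimal numerical sketch of the Legendre transform, using the toy (non-physical) relation \(E(S)=S^2\) so everything is computable in closed form; it verifies \(S=-(\partial A/\partial T)\) by finite differences.

```python
# Toy check of the Legendre transform A(T) = E - TS and of S = -dA/dT.
# The energy function E(S) = S**2 is an illustrative placeholder, not a
# physical equation of state.  Here T = dE/dS = 2S, so S(T) = T/2 and
# A(T) = (T/2)**2 - T*(T/2) = -T**2/4.

def E(S):
    return S ** 2

def A(T):
    S = T / 2.0          # invert T = dE/dS = 2S
    return E(S) - T * S  # Legendre transform E - TS

def S_from_A(T, h=1e-6):
    # thermodynamic relation S = -(dA/dT), via a central difference
    return -(A(T + h) - A(T - h)) / (2.0 * h)

T = 3.0
print(A(T))         # -T**2/4 = -2.25
print(S_from_A(T))  # recovers S = T/2, i.e. about 1.5
```

The same check works for any invertible \(E(S)\): the Legendre transform trades the slope variable for the coordinate variable without losing information.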
|
I decided to start working on my own answer after I asked the question, so here's the result of a few days' work.
An excellent (and very recent) paper regarding accretion is Debnath (2015), which should be applicable at least to material gathered onto the surface of the red dwarf.
Debnath assumes a static
1, spherically symmetric metric:$$ds^2=-A(r)dt^2+\frac{1}{B(r)}dr^2+r^2(d \theta^2+ \sin^2 \theta \, d \phi^2) \tag{1}$$which uses the (-,+,+,+) sign convention. For now, we can leave $A$ and $B$ undetermined functions of $r$. We have to treat the surrounding matter as a perfect fluid, with a stress-energy tensor of$$T_{\mu \nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu \nu} \tag{2}$$with $\rho$ and $p$ being the density and pressure, respectively. 2 $u_{\alpha}$ is the four-velocity, with the condition that $u_{\alpha}u^{\alpha}=-1$. For this fluid, though,$$u^{\alpha}=(u^0,u^1,0,0)$$We can then re-write the earlier condition as$$g_{00}u^0u^0+g_{11}u^1u^1=-1$$Substituting in that $g_{00}=g_{tt}=-A(r)$ and $g_{11}=g_{rr}=\frac{1}{B(r)}$, as well as writing (for simplicity) $u^1=u$, we get$$\left(u^0\right)^2=\frac{u^2+B}{AB} \to u_0=g_{00}u^0=-\sqrt{\frac{A(u^2+B)}{B}}$$We can also calculate $\sqrt{-g}=\sqrt{\frac{A}{B}}r^2 \sin \theta$.
The law of conservation of energy states that $u_{\mu}T^{\mu \nu}{}_{;\nu}=0$.
3 Putting this together with $(2)$ gives$$u^{\mu} \rho_{, \mu}+(\rho+p)u_{; \mu}^{\mu}=0$$Doing it out, we find that$$C=-ur^2M^{-2}\sqrt{\frac{A}{B}} \exp \left[\int_{\rho_{\infty}}^{\rho_R} \frac{1}{\rho + p(\rho)} d \rho\right] \tag{3}$$where $\rho_R$ is the density at the radius of the red dwarf and $C$ is a constant (which will be used later).
The rate of change of mass of the black hole, $\dot{M}$ (the negative rate of change of the mass of the fluid) is expressed as
4$$\dot{M}=\int T_0^1dS \tag{4}$$where$$dS=\sqrt{-g}d \theta d \phi$$From $(2)$, we get$$\dot{M}=4 \pi CM^2(\rho + p) \tag{5}$$
Abramowicz & Fragile (2013) give a slightly different expression in place of $(4)$ (Equation 125):$$\dot{M} = \int \sqrt{-g} \rho u^r d \theta d \phi \tag{6}$$and use $(5)$ for the energy flux. Both expressions are applied to jets on black hole accretion disks.
Working off of Debnath, the total mass transferred to the red dwarf's surface is$$M=\int_{t_0}^{t_f} \left[\int \sqrt{-g} \rho u^r d \theta d \phi \right] dt \tag{7}$$where $t_0$ and $t_f$ are the initial and final times during which the red dwarf accretes material.
I haven't quite figured out the full Roche lobe calculations just yet, but I was able to find some of the major equations. Paczynski (1971) mentions that the radius of the Roche lobe of the red dwarf is$$r_1=\left[\frac{2}{3^{4/3}} \left(\frac{M_1}{M_1+M_2} \right)^{1/3} \right]A \tag{8}$$where $M_1$ is the mass of the red dwarf and $M_2$ is the mass of the B-type star, and $A$ is the distance between them.
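Equation $(8)$ is straightforward to evaluate directly. This is only an illustrative sketch of mine; the masses and separation in the example call are placeholders, not values from the problem:

```python
# Paczynski's Roche-lobe approximation, eq. (8):
#   r1 = (2 / 3**(4/3)) * (M1 / (M1 + M2))**(1/3) * A
# Any consistent mass units work; A sets the length unit of the result.

def roche_lobe_radius(M1, M2, A):
    return (2.0 / 3.0 ** (4.0 / 3.0)) * (M1 / (M1 + M2)) ** (1.0 / 3.0) * A

AU = 1.496e11  # metres; placeholder separation
r1 = roche_lobe_radius(M1=0.2, M2=10.0, A=AU)  # 0.2 Msun dwarf, 10 Msun B star
```

The prefactor $2/3^{4/3} \approx 0.462$ is why this formula is often quoted as $r_1 \approx 0.462\,(M_1/(M_1+M_2))^{1/3}A$.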
The issue is that this is typically applied in binary systems, while, presumably, the red dwarf is traveling at a speed greater than the B-type star's escape velocity. It is, therefore, not orbiting it. So I'm not sure if the formula is valid.
Let's say that the civilization places a planet-sized object into the disk, inserting it at an orbital velocity $V_0$. It would then undergo Bondi accretion, as shown in Bondi (1951).
5 In that paper, he goes from the expressions derived by Hoyle & Littleton and Bondi & Hoyle to get the accretion rate of$$\dot{M} = 2 \pi (GM)^2 (v^2 + c_s^2)^{-3/2} \rho \tag{9}$$where $v$ is relative to the fluid. Taking the limit as $v \to 0$ gives the approximation shown on Wikipedia, although it differs by a factor of 2.
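Equation $(9)$ is simple enough to evaluate directly. A minimal sketch in SI units; the values in the example call (a solar-mass accretor in cold, thin gas) are placeholders of mine, not numbers from the post:

```python
# Bondi-Hoyle-Lyttleton rate, eq. (9):
#   Mdot = 2*pi*(G*M)**2 * (v**2 + cs**2)**(-3/2) * rho
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bondi_hoyle_rate(M, v, cs, rho):
    """Accretion rate (kg/s) onto mass M moving at speed v through gas
    of sound speed cs and ambient density rho."""
    return 2.0 * math.pi * (G * M) ** 2 * (v ** 2 + cs ** 2) ** -1.5 * rho

M_sun = 1.989e30
rate = bondi_hoyle_rate(M=M_sun, v=1.0e4, cs=1.0e4, rho=1.0e-20)
```

Increasing $v$ with everything else fixed lowers the rate, as the $(v^2+c_s^2)^{-3/2}$ factor dictates.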
We can't just use this, though, because there are other things to consider. First, all the gas and dust in the disk is orbiting at the same rate as this object, so $V_0 \neq v$. Second, the conditions change. For each orbit the object makes, the density of the matter in the path through the disk changes, because it has been swept up. Finally, the object may be severely affected by Stokes drag.
The density issue can be dealt with by simply assigning the object a number of orbits $n$ at time $t$, and saying that during each orbit, it accretes $x$ percent of the gas and dust in its way. Once this is known, an expression can be derived for the accretion during each orbit.
The Stokes drag is slightly more interesting. As shown in a derivation by Gavnholt et al. (2004), the formula is$$D=6 \pi \mu U a \left(1+\frac{3Re}{8} \right) \tag{10}$$where $U=v$, $a$ is the object's radius, $\mu$ is the viscosity and $Re$ is the Reynolds number. This means that$$\frac{dv}{dt} \propto v$$Knowing that, and placing the object in a circular orbit such that$$F_g=F_c \to G\frac{M_sm_o}{r^2}=\frac{m_ov^2}{r}$$where $M_s$ is the mass of the B-type star, $m_o$ is the mass of the object and $r$ is the distance between them, we can write $v$ as a function of time and then solve for $r$ as a function of $v$, eventually witnessing orbital decay. Also, if $\rho$ is a function of $r$, we can further complicate everything. This also goes for the accretion experienced by the red dwarf.
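Both relations in the paragraph above are easy to evaluate numerically. This is only an illustrative sketch (SI units; the function names are my own), not code from any of the cited papers:

```python
# Circular-orbit speed from F_g = F_c, and the Oseen-corrected Stokes
# drag of eq. (10).  SI units throughout; illustrative sketch only.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def orbital_speed(M_s, r):
    # G*M_s*m_o/r**2 = m_o*v**2/r  =>  v = sqrt(G*M_s/r)
    return math.sqrt(G * M_s / r)

def stokes_drag(mu, U, a, Re):
    # D = 6*pi*mu*U*a*(1 + 3*Re/8)
    return 6.0 * math.pi * mu * U * a * (1.0 + 3.0 * Re / 8.0)
```

For a solar-mass star at 1 AU, `orbital_speed(1.989e30, 1.496e11)` comes out near the familiar 30 km/s, and the drag is linear in $U$ at low Reynolds number, which is what makes $dv/dt \propto v$.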
I feel bad about not doing any actual calculations (i.e. with actual numbers), so I'll discuss a special case here: A dust disk surrounding a spherically symmetric body.
In a dust solution, $p=0$, so our generic equation of state $p=p(\rho)$ vanishes. Assuming additionally that $A=B$ (as in a Schwarzschild-type metric), we have $\sqrt{A/B}=1$. Accounting for all this turns $(3)$ into$$C=-r^2uM^{-2} \exp \left[ \int_{\rho_{\infty}}^{\rho_R} \frac{d \rho}{\rho} \right]$$$$=-r^2uM^{-2} \exp \left[ \ln \frac{\rho_R}{\rho_{\infty}} \right]$$$$=-r^2uM^{-2} \frac{\rho_R}{\rho_{\infty}} $$
Plugging this into $(5)$ gives us$$\dot{M}=-4 \pi r^2u \frac{\rho_R}{\rho_{\infty}} \rho$$Just take a given $\rho$, pick a velocity, and solve for $\dot{M}$.
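As a quick sanity check on that last expression, here is a direct numerical evaluation; the input values are illustrative placeholders of mine, and note that infall corresponds to $u<0$, which makes $\dot{M}$ positive:

```python
# Dust-case accretion rate derived above:
#   Mdot = -4*pi*r**2 * u * (rho_R / rho_inf) * rho
# Placeholder values; infall means u < 0, so Mdot > 0.
import math

def mdot_dust(r, u, rho_R, rho_inf, rho):
    return -4.0 * math.pi * r ** 2 * u * (rho_R / rho_inf) * rho

rate = mdot_dust(r=1.0e8, u=-1.0e3, rho_R=1.0e-12, rho_inf=1.0e-13, rho=1.0e-12)
```

The $r^2$ scaling is the area of the accretion sphere, so doubling $r$ quadruples the rate, all else fixed.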
I still haven't put in any numbers, but it's at the point where you don't have to do much to find the result.
Accretion by planetary-mass objects in debris disks has been observed, such as in Epsilon Eridani's debris disk (Greaves et al. (2005); explored also by Backman et al. (2008)). A good overview of the process is given by Janson et al. (2013), while it was simulated by Stark & Kuchner (2009) and Nesvold & Kuchner (2014). The only issue now is to establish whether or not a Type II civilization could build such an object.
Footnotes
1 This means we have to neglect rotation, which could be a problem.
2 Were we to assume vanishing pressure, as in a true dust solution, things could get simpler (and, perhaps, more interesting). For now, though, we'll treat it as a perfect fluid, and treat it as homogeneous.
3 I'm using the convention in which a comma indicates a partial derivative and a semicolon indicates a covariant derivative.
4 Raising and lowering indices via the metric tensor.
5 In "thin disk" scenarios, the red dwarf might not undergo spherical accretion.
References Abramowicz, M. A. and Fragile, P. C. "Foundations of Black Hole Accretion Disk Theory" (2013)
Backman, D. et al. "Epsilon Eridani’s Planetary Debris Disk: Structure and Dynamics based on Spitzer and CSO Observations" (2008)
Bondi, H. "On Spherically Symmetric Accretion" (1951)
Debnath, U. "Accretion and Evaporation of Modified Hayward Black Hole" (2015)
Gavnholt, J. et al. "Calculations of the Flow Around a Sphere in a Fluid" (2004)
Janson, M. et al. "The SEEDS Direct Imaging Survey for Planets and Scattered Dust Emission in Debris Disk Systems" (2013)
Nesvold, E. R. and Kuchner, M. J. "Gap Clearing by Planets in a Collisional Debris Disk" (2014)
Paczynski, B. "Evolutionary Processes in Close Binary Systems" (1971)
Stark, C. C. and Kuchner, M. J. "A New Algorithm for Self-Consistent 3-D Modeling of Collisions in Dusty Debris Disks" (2009)
|
The answer to my question is as follows. The Hamiltonian given by
$$H^{can} = \lim_{\rho \to \pi/2} (\cos\rho)^{2-d}\int d^{d-1}\Omega \, \dfrac{h_{tt}}{16\pi G_N}$$
is a surface term, which is characteristic of gravity. In general, the Hamiltonian can have a volume term, with a contribution from the Hamiltonian density, and a surface term, with a contribution from the surface integral. However, if one does an ADM decomposition of the Einstein-Hilbert action, one finds that the Hamiltonian density is a constraint which is set to zero. Therefore there is no volume contribution to the Hamiltonian.
So one is left with the surface term only. It was shown by [Regge and Teitelboim][1] that, in order to properly implement the variational principle for the Hamiltonian so as to get the equations of motion for pure gravity, a surface term of the above form is necessary. In the linked paper this was shown not only for asymptotically $AdS$ spacetimes, for which we have the above Hamiltonian, but for any spacetime with a boundary.
[1]: https://www.sciencedirect.com/science/article/pii/0003491674904047
|
\[ E[X] = \sum_{i=1}^{\infty} x_i p(x_i) \]
The standard deviation is the square root of the variance, and the variance is given in terms of the expected value.
\[ Var(X) = E[X^2] - (E[X])^2 \]
Except that $$E[X^2]$$ is of course completely different from $$(E[X])^2$$, but it gets worse, because $$E[X^2]$$ makes
no notational sense whatsoever. For any other function in math, writing $$f(x^2)$$ means going through and substituting $$x$$ with $$x^2$$. In this case, however, $$E[X]$$ actually doesn't have anything to do with the resulting equation, because $$X \neq x_i$$, and as a result, the equation for $$E[X^2]$$ is this:
\[ E[X^2] = \sum_i x_i^2 p(x_i) \]
Only the first $$x_i$$ is squared. $$p(x_i)$$ isn't, because it doesn't make any sense in the first place. It should really be just $$P_{Xi}$$ or something, because it's a
discrete value, not a function! It would also explain why the $$x_i$$ inside $$p()$$ isn't squared - because it doesn't even exist, it's just a gross abuse of notation. This situation is so bloody confusing I even explicitly laid out the equation for $$E[X^2]$$ in my own notes, presumably to prevent me from trying to figure out what the hell was going on in the middle of my final.
That, however, was only the beginning. Another question required me to find the covariance between two separate discrete distributions, $$X$$ and $$Y$$. I have never actually done covariance, so my notes were of no help here, and I was forced to return to Wikipedia, which gives this helpful equation.
\[ cov(X,Y) = E[XY] - E[X]E[Y] \]
Oh shit. I've already established that $$E[X^2]$$ is impossible to determine because the notation doesn't rely on any obvious rules, which means that $$E[XY]$$ could evaluate to
god knows what. Luckily, Wikipedia has an alternative calculation method:
\[ cov(X,Y) = \frac{1}{n}\sum_{i=1}^{n} (x_i - E(X))(y_i - E(Y)) \]
This almost works, except for two problems. One, $$\frac{1}{n}$$ doesn't actually work because we have a nonuniform discrete probability distribution, so we have to multiply by the probability mass function $$p(x_i,y_i)$$ instead. Two, Wikipedia refers to $$E(X)$$ and $$E(Y)$$ as the
means, not the expected value. This gets even more confusing because, at the beginning of the Wikipedia article, it used brackets ($$E[X]$$), and now it's using parentheses ($$E(X)$$). Is that the same value? Is it something completely different? Calling it the mean would be confusing because the average of a given data set isn't necessarily the same as the average expected value of a probability distribution, which is why we call it the expected value. But naturally, I quickly discovered that yes, the mean and the average and the expected value are all exactly the same thing! Also, I still don't know why Wikipedia suddenly switched to $$E(X)$$ instead of $$E[X]$$, because it still means the exact same goddamn thing.
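To pin the notation down, here is everything above written out for a concrete finite distribution (all numbers made up for illustration); note that in $$E[X^2]$$ only the outcome $$x_i$$ is squared, exactly as complained about:

```python
# The formulas above, for a concrete finite discrete distribution:
#   E[X]     = sum_i x_i * p(x_i)
#   E[X^2]   = sum_i x_i**2 * p(x_i)      (only x_i is squared!)
#   Var(X)   = E[X^2] - (E[X])**2
#   cov(X,Y) = sum p(x,y) * (x - E[X]) * (y - E[Y])

def expectation(values, probs, f=lambda x: x):
    return sum(f(x) * p for x, p in zip(values, probs))

xs = [1, 2, 3]
px = [0.2, 0.5, 0.3]

EX  = expectation(xs, px)                    # about 2.1
EX2 = expectation(xs, px, f=lambda x: x**2)  # about 4.9
var = EX2 - EX ** 2                          # about 0.49

# Joint pmf with Y = 10*X, so cov should be 10 * Var(X)
joint = {(1, 10): 0.2, (2, 20): 0.5, (3, 30): 0.3}
EY  = sum(p * y for (x, y), p in joint.items())
cov = sum(p * (x - EX) * (y - EY) for (x, y), p in joint.items())
```

Whatever you call $$E[\cdot]$$ or $$E(\cdot)$$, it is the same weighted sum; the brackets-vs-parentheses choice changes nothing.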
We're up to, what, five different ways of saying
the same thing? At least, I'm assuming it's the same thing, but there could be some incredibly subtle distinction between the two that nobody ever explains anywhere except in some academic paper locked up behind a paywall that was published 30 years ago, because apparently mathematicians are okay with this.
Even then, this is just
one instance where the ambiguity and redundancy in our mathematical notation has caused enormous confusion. I find it particularly telling that the most difficult part about figuring out any mathematical equation for me is usually simply figuring out what all the goddamn notation even means, because usually most of it isn't explained at all. Do you know how many ways we have of taking the derivative of something?
$$f'(x)$$ is the same as $$\frac{dy}{dx}$$ or $$\frac{df}{dx}$$ or even $$\frac{d}{dx}f(x)$$, which is the same as $$\dot x$$, which is the same as $$Df$$, which is technically the same as $$D_xf(x)$$ and also $$D_xy$$, which is also the same as $$f_x(x)$$ provided x is the only variable, because taking the partial derivative of a function with only one variable is the exact same as taking the derivative in the first place, and I've actually seen math papers abuse this fact instead of using some other sane notation for the derivative. And that's just for the derivative!
Don't even get me started on multiplication, where we use $$2 \times 2$$ in elementary school, $$*$$ on computers, but use $$\cdot$$ or simply stick two things next to each other in traditional mathematics. Not only is using $$\times$$ confusing as a multiplicative operator when you have $$x$$ floating around, but it's a
real operator! It means cross product in vector analysis. Of course, the $$\cdot$$ also doubles as meaning the Dot Product, which is at least nominally acceptable since a dot product does reduce to a simple multiplication of scalar values. The Outer Product is generally given as $$\otimes$$, unless you're in Geometric Algebra, in which case it's given by $$\wedge$$, which of course means AND in binary logic. Geometric Algebra then re-uses the cross product symbol $$\times$$ to instead mean commutator product, and also defines the regressive product as the dual of the outer product, which uses $$\nabla$$. This conflicts with the gradient operator in multivariable calculus, which uses the exact same symbol in a totally different context, and just for fun it also defined $$*$$ as the "scalar" product, just to make sure every possible operator has been violently hijacked to mean something completely unexpected.
This is just
one area of mathematics - it is common for many different subfields of math to redefine operators into their own meaning, and god forbid any of these fields actually come into contact with each other, because then no one knows what the hell is going on. Math is a language that is about as consistent as English, and that's on a good day.
I am sick and tired of people complaining that nobody likes math when they refuse to admit that mathematical notation sucks, and is a major roadblock for many students. It is useful only for advanced mathematics that take place in university graduate programs and research laboratories. It's hard enough to teach people calculus, let alone expose them to something useful like statistical analysis or matrix algebra that is relevant in our modern world when the notation looks like Greek and makes about as much sense as the English pronunciation rules. We simply cannot introduce people to advanced math by writing a bunch of incoherent equations on a whiteboard. We need to find a way to separate the underlying mathematical
concepts from the arcane scribbles we force students to deal with.
Personally, I understand most of higher math by reformulating it in terms of lambda calculus and type theory, because they map to real world programs I can write and investigate and
explore. Interpreting mathematical concepts in terms of computer programs is just one way to make math more tangible. There must be other ways we can explain math without having to explain the extraordinarily dense, outdated notation that we use.
|
Definition:Null Relation

Definition

$\mathcal R \subseteq S \times T: \mathcal R = \O$

That is, no element of $S$ relates to any element in $T$:

$\mathcal R: S \times T: \forall \tuple {s, t} \in S \times T: \neg s \mathrel {\mathcal R} t$

Also known as

This is also sometimes referred to as a trivial relation by some authors, but to save confusion it is better to reserve that term for its own specific meaning.

Other sources prefer to call it the empty relation.

Also see

Results about the null relation can be found here.

Sources

1960: Paul R. Halmos: Naive Set Theory ... (previous) ... (next): $\S 7$: Relations
1975: T.S. Blyth: Set Theory and Abstract Algebra ... (previous) ... (next): $\S 4$. Relations; functional relations; mappings: Example $4.3$
1977: Gary Chartrand: Introductory Graph Theory ... (previous) ... (next): Appendix $\text{A}.2$: Cartesian Products and Relations: Problem Set $\text{A}.2$: $11$
|
Keywords
Inverse eigenvalue problem, nonnegative matrix, prescribed diagonal entries
Abstract
The problem of the existence and construction of nonnegative matrices with prescribed eigenvalues and diagonal entries is an important inverse problem, interesting by itself, but also necessary to apply a perturbation result, which has played an important role in the study of certain nonnegative inverse spectral problems. A number of partial results about the problem have been published by several authors, mainly by H. \v{S}migoc. In this paper, the relevance of a result of Brauer, and its implication for the nonnegative inverse eigenvalue problem with prescribed diagonal entries, is emphasized. As a consequence, given a list of complex numbers of \v{S}migoc type, or a list $\Lambda = \left\{\lambda _{1},\ldots ,\lambda _{n} \right \}$ with $\operatorname{Re}\lambda _{i}\leq 0,$ $\lambda _{1}\geq -\sum\limits_{i=2}^{n}\lambda _{i}$, and $\left\{-\sum\limits_{i=2}^{n}\lambda _{i},\lambda _{2},\ldots ,\lambda _{n} \right\}$ being realizable; and given a list of nonnegative real numbers $\Gamma = \left\{\gamma _{1},\ldots ,\gamma _{n} \right\}$, the remarkably simple condition $\gamma _{1}+\cdots +\gamma _{n} = \lambda _{1}+\cdots +\lambda _{n}$ is necessary and sufficient for the existence and construction of a realizing matrix with diagonal entries $\Gamma$. Conditions for more general lists of complex numbers are also given.
Recommended Citation
Soto, Ricardo L.; Julio, Ana I.; and Collao, Macarena A. (2019), "Brauer's theorem and nonnegative matrices with prescribed diagonal entries",
Electronic Journal of Linear Algebra, Volume 35, pp. 53-64. DOI: https://doi.org/10.13001/1081-3810.3886
|
A cone is a three-dimensional structure with a circular base, where a set of line segments connects all of the points on the base to a common point called the apex. A cone can be seen as a set of non-congruent circular discs stacked on one another such that the ratio of the radii of adjacent discs remains constant. You can think of a cone as a right triangle being rotated about one of its legs. There is a predefined set of formulas for the calculation of the curved surface area and total surface area of a cone, collectively called the cone formulas.
Curved surface area of a cone = \(\pi rl\) Total surface area of a cone = \(\pi r\left ( l+r \right )\) \(l=\sqrt{h^{2}+r^{2}}\)
Where, r is the base radius, h is the height and
l is the slant height of the cone. Derivation:
In order to calculate the curved surface area and total surface area of a cone, we divide it into a circular base and the top slanted part. The area of the slanted part gives you the curved surface area. Total surface area is the sum of this circular base and curved surface areas.
Area of the circular base:
The base is a simple circle and we know that area of a circle is given as:
Area of a circle=πr²
where r is the base radius of the cone.
Area of the curved surface:
Now, if we open up the curved surface and cut it into small pieces, each cut portion is approximately a small triangle whose height is the slant height
l of the cone.
Now the area of each triangle =1/2× base of each triangle ×
l.
∴Area of the curved surface = sum of the areas of all the triangles
\(=\frac{1}{2}\times b_{1}\times l+ \frac{1}{2}\times b_{2}\times l+\frac{1}{2}\times b_{3}\times l+……… +\frac{1}{2}\times b_{n}\times l\)
\(=\frac{1}{2}l\left ( b_{1}+b_{2}+b_{3}+……+b_{n} \right )\)
\(=\frac{1}{2}l\left ( \text{perimeter of the base} \right )\)
From the figure, we know that the sum of the bases of all the small triangles is equivalent to the perimeter of the base of the cone.
The circumference of the base of the cone = \(2\pi r\)
∴ Area of the curved surface = \(\frac{1}{2}\times l\times 2\pi r\)
Area of the curved surface= πrl
Total Surface Area of a Cone = Area of the circular base + Area of the curved surface
Total Surface Area of a Cone = \(\pi r^{2}+\pi rl\)
Total surface area of a cone = πr (l + r)
Solved Examples:
Question 1: Find the total surface area of a cone, if the radius is 8.2 cm and the height is 16 cm.
Solution: Given,
Radius r= 8.2 cm
Height h= 16 cm
Slant height, \(l=\sqrt{h^{2}+r^{2}}\)
\(l=\sqrt{16^{2}+8.2^{2}}\)
\(l=17.98cm\)
Total surface area of a cone = \(\pi r\left ( l+r \right )\)
Total surface area of a cone = π × 8.2 × (17.98 + 8.2) ≈ 674.4 cm²
Question 2: Calculate the curved surface of a cone having base radius 5 cm and a slant height of 20 cm. (Take \(\pi =\frac{22}{7}\) ) Solution: Given,
Radius r= 5 cm
Slant height l= 20 cm
Using the formula of curved surface area of a cone,
Area of the curved surface = πrl
Area of the curved surface = \(\frac{22}{7}\times 5\times 20=314.29cm^{2}\)
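The formulas above reduce to a few lines of code. This sketch is my own, using Question 1's numbers to reproduce the worked answer:

```python
# Cone formulas from above:
#   l = sqrt(h**2 + r**2),  CSA = pi*r*l,  TSA = pi*r*(l + r)
import math

def cone_areas(r, h):
    l = math.sqrt(h ** 2 + r ** 2)   # slant height
    csa = math.pi * r * l            # curved surface area
    tsa = math.pi * r * (l + r)      # total surface area
    return l, csa, tsa

# Question 1: r = 8.2 cm, h = 16 cm
l, csa, tsa = cone_areas(8.2, 16)
print(round(tsa, 1))  # 674.4
```

For Question 2 the slant height is given directly, so only the curved surface formula πrl is needed.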
To solve more problems on the topic, download Byju’s -The Learning App.
|
ISSN:
1551-0018
eISSN:
1547-1063
All Issues
Mathematical Biosciences & Engineering
2006 , Volume 3 , Issue 1
Select all articles
Export/Reference:
Abstract:
Zhien Ma's love for mathematics has strongly shaped his educational pursuits. He received formal training from the strong Chinese School of Dynamical Systems over a period of two years at Peking University and during his later visit to Nanjing University, where the internationally renowned professor Yanqian Ye mentored him. Yet, it is well known that Zhien's curiosity and love of challenges have made him his own best teacher. Hence, it is not surprising to see his shift from an outstanding contributor to the field of dynamical systems to a pioneer in the field of mathematical biology. Zhien's vision and courage became evident when he abandoned a promising career in pure mathematics and enthusiastically embraced a career in the field of mathematical biology soon after his first visit to the United States in 1985. His rapid rise to his current role as an international leader and premier mentor to 11 Ph.D. students has facilitated the placement of Chinese scientists and scholars at the forefront of research in the fields of mathematical, theoretical and computational biology.
For the full paper, please click the "Full Text" button above.
Abstract:
The SARS epidemic of 2002-3 led to the study of epidemic models including management measures and other generalizations of the original 1927 epidemic model of Kermack and McKendrick. We consider some natural extensions of the Kermack-McKendrick model and show that they share the main properties of the original model.
Abstract:
In this paper we consider the bifurcations that occur at the trivial equilibrium of a general class of nonlinear Leslie matrix models for the dynamics of a structured population in which only the oldest class is reproductive. Using the inherent net reproductive number n as a parameter, we show that a global branch of positive equilibria bifurcates from the trivial equilibrium at $n=1$ despite the fact that the bifurcation is nongeneric. The bifurcation can be either supercritical or subcritical, but unlike the case of a generic transcritical bifurcation in iteroparous models, the stability of the bifurcating positive equilibria is not determined by the direction of bifurcation. In addition we show that a branch of single-class cycles also bifurcates from the trivial equilibrium at $n=1$. In the case of two population classes, either the bifurcating equilibria or the bifurcating cycles are stable (but not both) depending on the relative strengths of the inter- and intra-class competition. Strong inter-class competition leads to stable cycles in which the two population classes are temporally separated. In the case of three or more classes the bifurcating cycles often lie on a bifurcating invariant loop whose structure is that of a cycle chain consisting of the different phases of a periodic cycle connected by heteroclinic orbits. Under certain circumstances, these bifurcating loops are attractors.
Abstract:
The dynamics of a differential functional equation system representing an allelopathic competition is analyzed. The delayed allelochemical production process is represented by means of a distributed delay term in a linear quorum-sensing model. Sufficient conditions for local asymptotic stability properties of biologically meaningful steady-state solutions are given in terms of the parameters of the system. A global asymptotic stability result is also proved by constructing a suitable Lyapunov functional. Some simulations confirm the analytical results.
Abstract:
A heterogenous environment usually impacts, and sometimes determines, the structure and function of organisms in a population. We simulate the effects of a chemical on a population in a spatially heterogeneous environment to determine perceived stressor and spatial effects on dynamic behavior of the population. The population is assumed to be physiologically structured and composed of individuals having both sessile and mobile life history stages, who utilize energetically-controlled, resource-directed, chemical-avoidance advective movements and are subjected to random or density dependent diffusion. From a modeling perspective, the presence of a chemical in the environment requires introduction of both an exposure model and an effects module. The spatial location of the chemical stressor determines the exposure levels and ultimately the effects on the population while the relative location of the resource and organism determines growth. We develop a mathematical model, the numerical analysis for this model, and the simulation techniques necessary to solve the problem of population dynamics in an environment where heterogeneity is generated by resource and chemical stressor. In the simulations, the chemical is assumed to be a nonpolar narcotic and the individuals respond to the chemical via both physiological response and by physical movement. In the absence of a chemical stressor, simulation experiments indicate that despite a propensity to move to regions of higher resource density, organisms need not concentrate in the vicinity of high levels of resource. We focus on the dynamical variations due to advection induced by the toxicant. It is demonstrated that the relationship between resource levels and toxicant concentrations is crucial in determining persistence or extinction of the population.
Abstract:
In this paper we outline some methods of finding limit cycles for planar autonomous systems with small parameter perturbations. Three ways of studying Hopf bifurcations and the method of Melnikov functions in studying Poincaré bifurcations are introduced briefly. A new method of stability-changing in studying homoclinic bifurcation is described along with some interesting applications to polynomial systems.
Abstract:
For a reaction-diffusion model of microbial flow reactor with two competing populations, we show the coexistence of weakly coupled traveling wave solutions in the sense that one organism undergoes a population growth while another organism remains in a very low population density in the first half interval of the space line; the population densities then exchange the position in the next half interval. This type of traveling wave can occur only if the input nutrient slightly exceeds the maximum carrying capacity for these two populations. This means, lacking an adequate nutrient, two competing organisms will manage to survive in a more economical way.
Abstract:
We formulate differential susceptibility and differential infectivity models for disease transmission in this paper. The susceptibles are divided into $n$ groups based on their susceptibilities, and the infectives are divided into $m$ groups according to their infectivities. Both the standard incidence and the bilinear incidence are considered for different diseases. We obtain explicit formulas for the reproductive number. We define the reproductive number for each subgroup. Then the reproductive number for the entire population is a weighted average of those reproductive numbers for the subgroups. The formulas for the reproductive number are derived from the local stability of the infection-free equilibrium. We show that the infection-free equilibrium is globally stable as the reproductive number is less than one for the models with the bilinear incidence or with the standard incidence but no disease-induced death. We then show that if the reproductive number is greater than one, there exists a unique endemic equilibrium for these models. For the general cases of the models with the standard incidence and death, conditions are derived to ensure the uniqueness of the endemic equilibrium. We also provide numerical examples to demonstrate that the unique endemic equilibrium is asymptotically stable if it exists.
Abstract:
In this paper, an SIR epidemic model for the spread of an infectious disease transmitted by direct contact among humans and vectors (mosquitoes) which have an incubation time to become infectious is formulated. It is shown that a disease-free equilibrium point is globally stable if no endemic equilibrium point exists. Further, the endemic equilibrium point (if it exists) is globally stable with respect to a ''weak delay''. Some known results are generalized.
Abstract:
A competition model of the chemostat with an external inhibitor is considered. This inhibitor is lethal to one competitor and results in the decrease of growth rate of this competitor. The existence and stability of the extinction equilibria are discussed by using Liapunov function. The necessary and sufficient condition guaranteeing the existence of the interior equilibrium is given. It is found by numerical simulation that the system may be globally stable or have a stable limit cycle if the interior equilibrium exists.
Abstract:
By applying the theory of planar dynamical systems to the travelling wave equations of higher order nonlinear wave equations of KdV type, the existence of smooth solitary wave, kink wave and anti-kink wave solutions and of uncountably infinitely many smooth and non-smooth periodic wave solutions is proved. In different regions of the parametric space, sufficient conditions guaranteeing the existence of the above solutions are given. Under some conditions, exact explicit parametric representations of these waves are obtained.
Abstract:
The permanence of the following Lotka-Volterra system with time delays
$\dot{x}_1(t) = x_1(t)[r_1 - a_1x_1(t) + a_{11}x_1(t - \tau_{11}) + a_{12}x_2(t - \tau_{12})]$,
$\dot{x}_2(t) = x_2(t)[r_2 - a_2x_2(t) + a_{21}x_1(t - \tau_{21}) + a_{22}x_2(t - \tau_{22})]$,
is considered. With intraspecific competition, it is proved that in competitive case, the system is permanent if and only if the interaction matrix of the system satisfies condition (C1) and in cooperative case it is proved that condition (C2) is sufficient for the permanence of the system.
Abstract:
A patchy model for the spatial spread of West Nile virus is formulated and analyzed. The basic reproduction number is calculated and compared for different long-range dispersal patterns of birds, and simulations are carried out to demonstrate discontinuous or jump spatial spread of the virus when the birds' long-range dispersal dominates the nearest neighborhood interaction and diffusion of mosquitoes and birds.
Abstract:
In this paper we derive threshold conditions for eradication of diseases that can be described by seasonally forced susceptible-exposed-infectious-recovered (SEIR) models or their variants. For autonomous models, the basic reproduction number $\mathcal{R}_0 < 1$ is usually both necessary and sufficient for the extinction of diseases. For seasonally forced models, $\mathcal{R}_0$ is a function of time $t$. We find that for models without recruitment of susceptible individuals (via births or loss of immunity), max$_t{\mathcal{R}_0(t)} < 1$ is required to prevent outbreaks no matter when and how the disease is introduced. For models with recruitment, if the latent period can be neglected, the disease goes extinct if and only if the basic reproduction number $\bar{\mathcal{R}}$ of the time-average systems (the autonomous systems obtained by replacing the time-varying parameters with their long-term time averages) is less than 1. Otherwise, $\bar{\mathcal{R}} < 1$ is sufficient but not necessary for extinction. Thus, reducing $\bar{\mathcal{R}}$ of the average system to less than 1 is sufficient to prevent or curtail the spread of an endemic disease.
Abstract:
We consider the following Lotka-Volterra predator-prey system with two delays:
$x'(t) = x(t) [r_1 - ax(t- \tau_1) - by(t)]$
$y'(t) = y(t) [-r_2 + cx(t) - dy(t- \tau_2)]$ (E)
We show that a positive equilibrium of system (E) is globally asymptotically stable for small delays. Critical values of the time delay through which system (E) undergoes a Hopf bifurcation are analytically determined. Some numerical simulations suggest the existence of a subcritical Hopf bifurcation near the critical values of the time delay. Further, system (E) exhibits some chaotic behavior when $\tau_2$ becomes large.
Abstract:
The nonlinear $L^2$-stability (instability) of the equilibrium states of two-species population dynamics with dispersal is studied. The obtained results are based on (i) the rigorous reduction of the $L^2$-nonlinear stability to the stability of the zero solution of a linear binary system of ODEs and (ii) the introduction of a particular Liapunov functional V such that the sign of $\frac{dV}{dt}$ along the solutions is linked directly to the eigenvalues of the linear problem.
Abstract:
The goal of this paper is to study the global spread of SARS. We propose a multiregional compartmental model using medical geography theory (central place theory) and regarding each outbreak zone (such as Hong Kong, Singapore, Toronto, and Beijing) as one region. We then study the effect of the travel of individuals (especially the infected and exposed ones) between regions on the global spread of the disease.
Abstract:
The frequency-dependent (standard) form of the incidence is used for the transmission dynamics of an infectious disease in a competing species model. In the global analysis of the SIS model with the birth rate independent of the population size, a modified reproduction number $\mathbf{R}_1$ determines the asymptotic behavior, so that the disease dies out if $\mathbf{R}_1 \leq 1$ and approaches a globally attractive endemic equilibrium if $\mathbf{R}_1 > 1$. Because the disease-reduced reproduction and disease-related death rates are often different in two competing species, a shared disease can change the outcome of the competition. Models of SIR and SIRS type are also considered. A key result in all of these models with the frequency-dependent incidence is that the disease must either die out in both species or remain endemic in both species.
Abstract:
Based on some important experimental data, in this paper we introduce time delays into Mehrs's non-linear differential system model, which is used to describe the proliferation, differentiation and death of T cells in the thymus (see, for example, [3], [6], [7] and [9]), and give a revised nonlinear differential system model with time delays. By using some classical analysis techniques of functional differential equations, we also consider local and global asymptotic stability of the equilibrium and the permanence of the model.
Abstract:
Ecstasy has gained popularity among young adults who frequent raves and nightclubs. The Drug Enforcement Administration reported a 500 percent increase in the use of ecstasy between 1993 and 1998. The number of ecstasy users kept growing until 2002, years after a national public education initiative against ecstasy use was launched. In this study, a system of differential equations is used to model the peer-driven dynamics of ecstasy use. It is found that backward bifurcations describe situations when sufficient peer pressure can cause an epidemic of ecstasy use. Furthermore, factors that have the greatest influence on ecstasy use as predicted by the model are highlighted. The effect of education is also explored, and the results of simulations are shown to illustrate some possible outcomes.
Abstract:
Epidemic models with behavior changes are studied to consider effects of protection measures and intervention policies. It is found that intervention strategies decrease endemic levels and tend to make the dynamical behavior of a disease evolution simpler. For a saturated infection force, the model may admit a stable disease-free equilibrium and a stable endemic equilibrium at the same time. If we vary a recovery rate, numerical simulations show that the boundaries of the region for the persistence of the disease undergo the changes from the separatrix of a saddle to an unstable limit cycle. If the inhibition effect from behavior changes is weak, we find two limit cycles and obtain bifurcations of the model as the population size changes. We also find that the disease may die out although there are two endemic equilibria.
|
Following along the same lines as benguin, let $m$ denote the betting amount and let $A,B$ be multiples of $m$ for simplicity. WLOG assume $m=1$ and adjust $A,B$ accordingly so they simply denote a number of steps to either win or lose. This much simpler setup can be transformed back to cover all cases anyway.
Even cases where $A,B$ are not exact multiples of $m$ essentially have specific multiples of $m$ as either winning or losing, so the same probabilities as for some exact-multiple situation will come into play.
Let $p$ denote $P(T)$ for brevity and similarly let $q$ denote $1-P(T)$. Also let $p_x$ denote the probability of winning given we have started with $x$ dollars. Then$$\begin{align}p_B&=1\\p_x&=p\cdot p_{x+1}+q\cdot p_{x-1},\quad\text{for }0<x<B\\p_0&=0\end{align}$$We quickly see that$$p_1=p\cdot p_2$$and it turns out that in general we can write $p_x$ in terms of $p_{x+1}$ in the following way$$\alpha_x p_x=\beta_x p\cdot p_{x+1}$$Using the relation $p_x=p\cdot p_{x+1}+q\cdot p_{x-1}$ from before but for $p_{x+1}$ and multiplying by $\alpha_x$ on both sides we obtain:$$\begin{align}\alpha_x p_{x+1}&=\alpha_x(p\cdot p_{x+2}+q\cdot p_x)\\&=\alpha_x p\cdot p_{x+2}+q\cdot\alpha_x p_x\\&=\alpha_x p\cdot p_{x+2}+q\cdot\beta_x p\cdot p_{x+1}\\&\Updownarrow\\\underbrace{(\alpha_x-\beta_x pq)}_{\alpha_{x+1}}p_{x+1}&=\underbrace{\alpha_x}_{\beta_{x+1}}p\cdot p_{x+2}\end{align}$$So we find that $\beta_{x+1}=\alpha_x$ or equivalently $\beta_x=\alpha_{x-1}$ and thus$$\begin{align}\alpha_{x+1}&=\alpha_x-\beta_x pq\\&=\alpha_x-\alpha_{x-1}pq\end{align}$$Or we could express this in terms of $\beta$'s, namely$$\beta_x=\beta_{x-1}-\beta_{x-2}pq$$This recurrence relation with initial conditions $\beta_0=0$ and $\beta_1=1$ can be solved to have:$$\beta_x=\frac{(1+\sqrt{1-4pq})^x-(1-\sqrt{1-4pq})^x}{2^x\sqrt{1-4pq}}$$which by applying the binomial theorem can be rewritten as$$\beta_x=\frac{2}{2^x}\sum_{i=0}^{\lfloor x/2\rfloor}\binom x{2i+1} (1-4pq)^i$$
This is all very interesting since we have$$p_x=\frac{\beta_x p}{\alpha_x}\cdot p_{x+1}$$so if we can just work out the quotients $\gamma_x=\beta_x p/\alpha_x$ we can work our way all the way from $p_B=1$ down to $p_A$ by simply forming the quotient $\gamma_{B-1}\cdot\gamma_{B-2}\cdots\gamma_A$ which will then in fact equal $p_A$.
As far as I can see, the most practical form of $\beta_x$ to use when computing $\gamma_x$ appears to be the first one, and since $\alpha_x=\beta_{x+1}$ we have$$\gamma_x=p\cdot\frac{\beta_x}{\beta_{x+1}}=2p\cdot\frac{(1+\sqrt{1-4pq})^x-(1-\sqrt{1-4pq})^x}{(1+\sqrt{1-4pq})^{x+1}-(1-\sqrt{1-4pq})^{x+1}}$$which can easily be used, at least numerically, to determine the $\gamma_x$'s and thereby $p_A$ for specific parameter settings. This is where I give up chasing a closed form. But at least we see that each $\gamma_x$ must be less than $2p$. It follows that$$p_x<(2p)^{B-x}$$
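The product of the $\gamma_x$'s is easy to evaluate numerically. Here is a minimal sketch (function names are mine) that also cross-checks the product against the classical gambler's-ruin formula $p_x=\frac{1-(q/p)^x}{1-(q/p)^B}$ for $p\neq 1/2$:

```python
from math import sqrt

def win_prob_product(x, B, p):
    """P(reach B before 0 | start at x), computed as the product
    gamma_x * gamma_{x+1} * ... * gamma_{B-1} with gamma_k = p*beta_k/beta_{k+1}."""
    q = 1.0 - p
    s = sqrt(1.0 - 4.0 * p * q)  # equals |p - q|, nonzero for p != 1/2

    def beta(k):
        return ((1.0 + s) ** k - (1.0 - s) ** k) / (2.0 ** k * s)

    prob = 1.0
    for k in range(x, B):
        prob *= p * beta(k) / beta(k + 1)
    return prob

def win_prob_classical(x, B, p):
    """Standard gambler's-ruin formula, for cross-checking."""
    r = (1.0 - p) / p
    return (1.0 - r ** x) / (1.0 - r ** B)

# e.g. for p = 0.49, x = 5, B = 8 both give roughly 0.5871,
# the value quoted later in this answer.
```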
I ran a couple of Monte Carlo trials where I computed $p_A$ numerically using this formula for $\gamma_x$, and it appears to be legit.
A slightly tighter bound than $\gamma_x<2p$ happens to be:$$\gamma_x<\gamma_{\max}:=2p\cdot\frac{1}{1+\sqrt{1-4pq}}=\frac{1-\sqrt{1-4pq}}{2q}$$If $p$ is not too close to zero, this bound is fairly tight even for relatively small values of $x$. Notably, $\gamma_x$ converges to this bound as $x$ tends to infinity. This finally leads to
An approximate answer to the original question
Given $p<0.5$, a strategy defined by values of $A,B$ is said to be less than optimal if we have$$p_A<p$$Let $n:=B-A$ denote the number of steps to win. Then we can find an estimate for $n$ rendering $A,B$ less than optimal by considering$$p_A\approx(\gamma_{\max})^n=p\iff n=\frac{\log(p)}{\log(\gamma_{\max})}$$Here is a plot of this estimate for $n$ as a function of $p$:
For many parameter settings I have tested, this estimate is reasonably close.
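The estimate itself is a one-liner to evaluate; a small sketch (the helper name is mine):

```python
from math import log, sqrt

def breakeven_steps(p):
    """Estimate of n = B - A at which p_A drops to p, via
    n = log(p) / log(gamma_max), gamma_max = 2p / (1 + sqrt(1 - 4pq))."""
    q = 1.0 - p
    gamma_max = 2.0 * p / (1.0 + sqrt(1.0 - 4.0 * p * q))
    return log(p) / log(gamma_max)

n = breakeven_steps(0.49)  # roughly 18 steps for p = 0.49
```

As expected, the threshold shrinks quickly as $p$ moves away from $1/2$.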
I also searched some specific cases for an optimal strategy. For instance if $p=0.49$ and $A=200,B=300$ the optimal strategy turns out to be to bet $m=38$ so that the strategy essentially becomes $A'=\lfloor A/m\rfloor=5$ and $B'=\lfloor B/m\rfloor=8$ in which case we have $p_{A'}=0.5871$. The smaller the value of $p$ the more likely it becomes that you should bet somewhere around $m\approx B-A$ to finish the game as quickly as possible.
But contrary to what you thought, betting $m=A$ is almost never a good strategy, since that leaves you only a single step to lose and thus no hope for recovery.
|
Self-Consistent Schrödinger-Poisson Results for a Nanowire Benchmark
The Schrödinger-Poisson Equation multiphysics interface simulates systems with quantum-confined charge carriers, such as quantum wells, wires, and dots. Here, we examine a benchmark model of a GaAs nanowire to demonstrate how to use this feature in the Semiconductor Module, an add-on product to the COMSOL Multiphysics® software.
The Schrödinger-Poisson Equation Multiphysics Interface
The Schrödinger-Poisson Equation multiphysics interface, available as of COMSOL Multiphysics® version 5.4, creates a bidirectional coupling between the Electrostatics interface and the Schrödinger Equation interface to model charge carriers in quantum-confined systems. The electric potential from the electrostatics contributes to the potential energy term in the Schrödinger equation. A statistically weighted sum of the probability densities from the eigenstates of the Schrödinger equation contributes to the space charge density in the electrostatics. All spatial dimensions (1D, 1D axial symmetry, 2D, 2D axial symmetry, and 3D) are supported.
Solving the Schrödinger-Poisson System
The Schrödinger-Poisson system is special in that a stationary study is necessary for the electrostatics, and an eigenvalue study is necessary for the Schrödinger equation. To solve the two-way coupled system, the Schrödinger equation and Poisson's equation are solved iteratively until a self-consistent solution is obtained. The iterative procedure consists of the following steps:
Step 1
To provide a good initial condition for the iterations, we solve Poisson’s equation
$$-\nabla\cdot\left(\epsilon\,\nabla V\right)=\rho \qquad (1)$$
for the electric potential, $V$, in which $\epsilon$ is the permittivity and $\rho$ is the space charge density.
In this initialization step, $\rho$ is given by the best initial estimate from physical arguments; for example, using the Thomas-Fermi approximation.
Step 2
The electric potential, $V$, from the previous step contributes to the potential energy term, $V_e$, in the Schrödinger equation:
$$V_e = qV \qquad (2)$$
where $q$ is the charge of the carrier particle, which is given by
$$q = z_q e \qquad (3)$$
where $z_q$ is the charge number and $e$ is the elementary charge.
Step 3
With the updated potential energy term given by Eq. 2, the Schrödinger equation is solved, producing a set of eigenenergies, $E_i$, and a corresponding set of normalized wave functions, $\Psi_i$.
Step 4
The particle density profile, $n_\mathrm{sum}$, is computed using a statistically weighted sum of the probability densities
$$n_\mathrm{sum} = \sum_i N_i\,|\Psi_i|^2 \qquad (4)$$
where the weight, $N_i$, is given by integrating the Fermi-Dirac distribution for the out-of-plane continuum states (thus depending on the spatial dimension of the model).
(5)
(6)
(7)
where $g_i$ is the valley degeneracy factor, $E_f$ is the Fermi level, $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, $m_d$ is the density-of-states effective mass, and $F_0$ and $F_{-1/2}$ are Fermi-Dirac integrals.
For simplicity, the weighted sum in Eq. 4 shows only one index, i, for the summation. There can be, of course, more than one index in the summation. For example, in the nanowire model discussed here, the summation is over both the azimuthal quantum number and the eigenenergy levels (for each azimuthal quantum number).
Step 5
Given the particle density profile, $n_\mathrm{sum}$, we reestimate the space charge density, $\rho$, and then re-solve Poisson's equation to obtain a new electric potential profile, $V$. The straightforward formula for the new space charge density,
$$\rho = q\,n_\mathrm{sum} \qquad (8)$$
almost always leads to divergence of the iterations. A much better estimate is given by
$$\rho = q\,n_\mathrm{sum}\,\exp\!\left(-\frac{q\,(V-V_\mathrm{old})}{(1+\alpha)\,k_B T}\right) \qquad (9)$$
where $V_\mathrm{old}$ is the electric potential from the previous iteration and $\alpha$ is an additional tuning parameter.
The formula is motivated by the observation that the particle density, $n_\mathrm{sum}$, is the result from $V_\mathrm{old}$ and would change once Poisson's equation is re-solved to obtain a new $V$. In other words, Eq. 8 can be written more explicitly as
$$\rho = q\,n_\mathrm{sum}(V_\mathrm{old}) \qquad (10)$$
since $n_\mathrm{sum}$ is the result from $V_\mathrm{old}$, and $\rho$ is used to re-solve Poisson's equation to get a new $V$.
To achieve a self-consistent solution, a better formula would be
$$\rho = q\,n_\mathrm{sum,new}(V) \qquad (11)$$
At this point, $n_\mathrm{sum,new}$ is unknown to us, since it comes from the solution to the Schrödinger equation in the next iteration. However, we can formulate a prediction for it using Boltzmann statistics, which provides a simple exponential relation between the potential energy, $V_e=qV$, and the particle density, $n_\mathrm{sum}$:
$$n_\mathrm{sum,new}(V) \approx n_\mathrm{sum}(V_\mathrm{old})\,\exp\!\left(-\frac{q\,(V-V_\mathrm{old})}{k_B T}\right) \qquad (12)$$
This leads to Eq. 9 for the case of $\alpha=0$. This works well at high temperatures, where Boltzmann statistics is a good approximation. At lower temperatures, setting $\alpha$ to a positive number helps accelerate convergence.
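As a sketch, the Boltzmann-predictor update can be written as a one-liner. Note that the exact placement of the tuning parameter alpha in the exponent here is my assumption of how the damping enters, not a quote of the software's internal formula:

```python
import math

def predicted_rho(q, n_sum, V, V_old, kT, alpha=0.0):
    """Predictor update for the space charge density: rescale the old carrier
    density by the Boltzmann factor implied by the change in potential,
    damped by alpha (alpha = 0 recovers the plain Boltzmann prediction)."""
    return q * n_sum * math.exp(-q * (V - V_old) / ((1.0 + alpha) * kT))

# With V == V_old the predictor reduces to the straightforward rho = q * n_sum.
```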
Step 6
Once a new electric potential profile, $V$, is obtained by re-solving Poisson's equation, compare it with the electric potential from the previous iteration, $V_\mathrm{old}$. If the two profiles agree within the desired tolerance, then self-consistency is achieved; otherwise, go to step 2 to continue the iteration.
A dedicated Schrödinger-Poisson study type is available to automatically generate the steps outlined above in the solver sequence.
Benchmark Example: The Nanowire Model
The GaAs nanowire tutorial model is based on a paper by J.H. Luscombe, A.M. Bouchard, and M. Luban titled “Electron confinement in quantum nanostructures: Self-consistent Poisson-Schrödinger theory”.
Given the assumption of an infinite length and cylindrical symmetry, we choose the 1D axisymmetric space dimension. We then select the Schrödinger-Poisson Equation multiphysics interface under the Semiconductor branch, which adds the Schrödinger Equation and Electrostatics interfaces together with the Schrödinger-Poisson Coupling multiphysics coupling in the Model Builder.
Selecting the Schrödinger-Poisson Equation interface for the nanowire model.
Following the description in the paper, the radius of the nanowire is set to 50 nm. The electron effective mass is set to 0.067 times the free electron mass (as suggested by the Fermi-temperature result in the paper), and the dielectric constant is assumed to be 12.9. The Fermi energy level in the model is set to 0 V and the electric potential at the wall to −0.7 V in order to match the Fermi-level-pinning boundary condition described by the researchers. We model the case of $2\times10^{18}\ \mathrm{cm}^{-3}$ uniform ionized dopants at a temperature of 10 K to compare with Figures 2 and 3 in the paper. The numbers above are entered as global parameters in the model.
Global parameters for the nanowire model.
Following the approach of the paper, we first solve for the Thomas-Fermi approximate solution, then use it as the initial condition for the fully coupled Schrödinger-Poisson equation. The formulas for the Thomas-Fermi approximation are entered as local variables in the model.
Local variables for the nanowire model.
With the global parameters and local variables defined, it is straightforward to use them to fill the various input fields in the geometry, material, and physics nodes in the Model Builder. Here are a few things to note:
- The azimuthal quantum number m is parameterized to allow sweeping and summing over its values, as mentioned above, and is entered in the Settings window of the Schrödinger Equation physics node.
- Recall from a previous blog post on computing the band gap for superlattices that the eigenvalue scale λ_scale works as a multiplication factor for the dimensionless eigenvalue λ to produce the eigenenergy, $E_i$ ($E_i$ = λ_scale·λ). For instance, if λ_scale equals 1 eV, then an eigenvalue of 1.23 indicates an eigenenergy of 1.23 eV.
- For the Electrostatics interface, an Electric Potential boundary condition is added to set the value at the wall of the nanowire, as mentioned above. In addition, two Space Charge Density domain conditions are added, one for the ionized dopants and the other for the Thomas-Fermi approximation (the latter should be turned off for the Schrödinger-Poisson study).
Setting Up the Schrödinger-Poisson Multiphysics Coupling
In the Settings window for the Schrödinger-Poisson Coupling multiphysics node, expand the Equation section to see the equations implemented in this node — they should look familiar if you've read the Solving the Schrödinger-Poisson System section above. The Coupled Interfaces section in the settings allows the selection of the two coupled physics interfaces. The Model Input section sets the temperature of the system, as shown in the screenshot below:
Upper part of the Settings window for the Schrödinger-Poisson Coupling node.
The Particle Density Computation section (screenshot below) specifies the statistically weighted sum of the probability densities, as described in Eq. 4. If the default option of Fermi-Dirac statistics, parabolic band is selected, then Eq. 5–Eq. 7 are used to compute the weights, $N_i$. A user-defined option is also available for entering different expressions for the weights.
To take into account the pairs of degenerate azimuthal quantum numbers (m = ±1, ±2, etc.), we use the formula 1+(m>0) for the Degeneracy factor, $g_i$, which evaluates to 1 for m = 0 and 2 for m > 0.
Lower part of the Settings window for the Schrödinger-Poisson Coupling node.
The Charge Density Computation section (screenshot above) takes the input for the Charge number, $z_q$, for Eq. 3. If the default option of Modified Gummel iteration is selected, then Eq. 9 is used to compute the new space charge density, $\rho$. Other options are also available, including a user-defined option where you can enter your own mathematical expressions.
The default expression for the Global error variable, (schrp1.max(abs(V-schrp1.V_old)))/1[V], computes the maximum difference between the electric potential fields from the two most recent iterations, in the unit of V. Note that the prefix schrp1 should match the Name input field of the Schrödinger-Poisson Coupling node, and the variable name V should match the dependent variable name for the Electrostatics interface. These names may change from the default in a more complicated model, and the expression will turn yellow if the names do not match. In this case, some manual editing is needed.
Setting Up the Schrödinger-Poisson Study Step
The dedicated Schrödinger-Poisson study step under Study 2 automatically generates the self-consistent iterations in the solver sequence. The iteration scheme is outlined in Solving the Schrödinger-Poisson System above.
If we are dealing with a completely new problem, then for the Eigenfrequency search method menu under the Study Settings section, it is often necessary to use the default Manual search option to find the range of the eigenenergies. Once the range is found, we can switch to the Region search option with appropriate settings for the range and number of eigenvalues in order to ensure that all significant eigenstates are found by the solver. For this tutorial, the estimated energy range is between -0.15 and 0.05 eV. This corresponds to -0.15 and 0.05 for the unitless eigenvalue, as discussed earlier.
The real and imaginary parts of the input fields refer to the real and imaginary parts of the eigenvalue, respectively. To look for the eigenenergies of bound states, we set the input fields for the real parts to the expected energy range and set the input fields for the imaginary parts to a small range around 0 to capture numerical noise or slightly leaky quasibound states, as shown below:
Upper part of the Settings window for the Schrödinger-Poisson study step.
As we have pointed out earlier, the second Space Charge Density domain condition is only used for the Thomas-Fermi approximation solution in Study 1. It is thus disabled under the Physics and Variables Selection section, as shown in the screenshot above.
Under the Iterations section, the default option for the Termination method drop-down menu is Minimization of global variable, which automatically updates a result table that displays the history of the global error variable after each iteration during the solution process. The built-in global error variable schrp1.global_err computes the maximum difference between the electric potential fields from the two most recent iterations, in the unit of V, as already configured in the Schrödinger-Poisson Coupling multiphysics node. (Note that the prefix schrp1 should match the Name input field of the Schrödinger-Poisson Coupling node.) Setting the tolerance to 1E-6 thus means that the iteration ends once the maximum difference is less than 1 μV. See the screenshot below for these settings:
Lower part of the Settings window for the Schrödinger-Poisson study step.
Under the Values of Dependent Variables section, we select the Thomas-Fermi approximate solution from Study 1 as the initial condition for this study. We then use the Auxiliary sweep functionality to solve for a list of nonnegative azimuthal quantum numbers m. The negative ones are taken into account using the formula 1+(m>0) for the degeneracy factor, $g_i$, as discussed earlier. The dedicated solver sequence automatically performs the statistically weighted sum of the probability densities for all of the eigenstates.
Examining the Self-Consistent Results
The solver converges in eight iterations thanks to the good initial condition provided by the Thomas-Fermi approximation and the good forward estimate of the space charge density given by Eq. 9. The plots of the electron density, potential energy, and partial orbital contributions agree well with the figure published in the reference paper.
Comparison of the electron density, potential energy, and partial orbital contributions with the figure published in the reference paper.
The plot below shows the Friedel-type spatial oscillations present in both the electron density and the potential energy profiles.
Zoomed-in plot of the Friedel-type spatial oscillations in the electron density and potential energy profiles.
Next Step
In this blog post, we have demonstrated that the Schrödinger-Poisson Equation interface and the Schrödinger-Poisson study type make it simple to set up and solve a Schrödinger-Poisson system, using the Self-Consistent Schrödinger-Poisson Results for a GaAs Nanowire benchmark model as an example. To try this model yourself, click the button below to go to the Application Gallery, where you can download the documentation and, with a valid software license, the MPH-file for this tutorial.
We hope you find these new features useful and we would love to hear how you apply them to your research.
Reference
J.H. Luscombe, A.M. Bouchard, and M. Luban, “Electron confinement in quantum nanostructures: Self-consistent Poisson-Schrödinger theory,” Phys. Rev. B, vol. 46, no. 16, p. 10262, 1992.
|
The other answers here, describing oxygen toxicity, are telling what can go wrong if you have too much oxygen, but they are not describing two important concepts that should appear with their descriptions. Also, there is a basic safety issue with handling pressure tanks of high oxygen fraction. An important property of breathed oxygen is its partial pressure....
It does. You would find the average percentage of the atmosphere that is argon is very slightly higher at the floor of valleys. However, bear in mind first of all it wouldn't be anywhere near a complete stratification -- a layer of pure argon, then another of pure N2, and so on. A mixture of nearly ideal gases doesn't do that, at least at equilibrium, ...
The common saying is a holdover from when STP was defined to be $\pu{273.15 K}$ and $\pu{1 atm}$. However, IUPAC changed the definition in 1982 so that $\pu{1 atm}$ became $\pu{1 bar}$. I think the main issue is that a lot of educators didn't get the memo and went right along either teaching STP as $\pu{1 atm}$ or continuing with the line they were taught ("$\pu{...
The ideal gas law is a very good approximation of how gases behave most of the time. There is no logical flaw in the laws. Most gases most of the time behave in a way that is close to the ideal gas equation. And, as long as you recognise the times they don't, the equation is a good description of the way they behave. The ideal gas equations assume that the ...
Our body is used to the environment around us. Once you change part of the environment, you have to be ready for the consequences. Inhaling pure oxygen is the cause for what is known as oxygen toxicity. Oxygen toxicity is a condition resulting from the harmful effects of breathing molecular oxygen $\ce{(O2)}$ at increased partial pressures. High ...
Essentially, because the carbon dioxide sublimates from solid (dry ice) to gas at a very low temperature (roughly −78 °C at 1 atm), it causes water vapour in the air to condense, causing a visible fog. Thus what you are seeing is not carbon dioxide, but rather water.When we exhale and it is reasonably warm, the carbon dioxide expelled is roughly body ...
It's been known since 1941 that the answer to your question is in the negative, i.e. that there will never be a closed form equation of state for a nonideal gas. In 1941 Mayer and Montroll developed what is now known as the cluster expansion for the partition function of a nonideal gas whose particles have pairwise interactions. This cluster expansion ...
At the end of the tunnel, you're still trying to approximate the statistical average of interactions between individual molecules using macroscopic quantities. The refinements add more parameters because you're trying to parametrise the overall effect of those individual interactions for every property that is involved for each molecule. You're never going ...
The differences in acceleration due to gravity are not the main factor in comparing how accurate the approximation is for each planet. The main factor is the mass of gas each planet's atmosphere contains. Mercury has almost no atmosphere. The total mass of all gas in Mercury's atmosphere is only 10000 kg! The pressure is less than $10^{-14}$ bar. The ...
A big point of confusion is that it is still taught (at least in the mid-2000's) that STP is defined with respect to $\pu{273 K}$ and $\pu{1 atm}$ of pressure, or $\pu{1.01325 bar}$ of pressure, even though IUPAC changed their definition to be with respect to $\pu{1 bar}$ of pressure. By using the ideal gas law on the old STP definition, you get that the ...
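The arithmetic behind the two molar volumes is a one-line ideal-gas calculation, $V = RT/P$:

```python
# Molar volume of an ideal gas under the old (1 atm) and current (1 bar)
# IUPAC STP definitions.
R = 8.314462618   # J/(mol K)
T = 273.15        # K

V_atm = R * T / 101325.0   # old STP: 1 atm
V_bar = R * T / 100000.0   # current STP: 1 bar

print(round(V_atm * 1000, 2))  # → 22.41 (litres)
print(round(V_bar * 1000, 2))  # → 22.71 (litres)
```

The roughly 0.3 L difference is exactly the discrepancy that keeps the "22.4 L at STP" saying alive.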
You must consider this: The question whether a physical system follows a particular law is not a "yes or no" question. There is always an error when you compare what you measure with what the law predicts. The error can be at the 17th digit, but it's still there. Let me quote a very insightful passage by H. Jeffreys about this: It is not true that ...
I didn't know that balloons expand during flight because of thermodynamics, and I didn't know how high they can fly, but a quick search shows that a partially filled regular balloon can fly up to an altitude of around $\pu{25 km}$. Now, $\pu{25 km}$ means that it reaches the first part of the stratosphere, with temperatures of $\pu{-60 ^\circ C}$, that ...
While most everything the previous answer states is correct, I would point out that taking four times the volume of a single particle has nothing to do with experiment and arises mathematically. In deriving the VDW equation, the particles are still assumed to be hard spheres, but this assumption is corrected for with the parameter $a$. The hard sphere ...
Preliminaries
Consider $U = U(V,T, p)$. However, assuming that it is possible to write an equation of state of the form $p = f(V,T)$, I don't have to explicitly address the $p$ dependence of $U$, and I can write the following differential:$$\mathrm{d}U = \underbrace{\left ( \frac{\partial U}{\partial V} \right)_T}_{\pi_T} \mathrm{d}V + \underbrace{\...
If the balloon is closed, then yes, both volume and pressure will increase when the gas inside is heated. Let's look at two simpler cases first. If the gas were completely free to expand against ambient pressure (say, inside of a container sealed with a freely moving piston, with no friction), then the heated gas would expand until it created as much force ...
The heat capacities are defined as$$C_p = \left(\frac{\partial H}{\partial T}\right)_{\!p} \qquad \qquad C_V = \left(\frac{\partial U}{\partial T}\right)_{\!V} \tag{1}$$and since $H = U + pV$, we have$$\begin{align}C_p - C_V &= \left(\frac{\partial H}{\partial T}\right)_{\!p} - \left(\frac{\partial U}{\partial T}\right)_{\!V} \tag{2} \\&= \...
As a certified SCUBA diver, I learned that breathing pressurized pure oxygen leads to oxygen toxicity, which can be fatal. However, I'm not anywhere near an expert on the mechanism of oxygen toxicity, but I believe it has to do with resulting in a lot more reactive oxygen species which can cause oxidative stress and lipid peroxidation. I'm not really ...
Does this mean that both (1) 1 mole of $\ce O$ would occupy $22.4~\mathrm L$ (or if this doesn't usually occur in nature, say 1 mole of $\ce{He}$ or another monoatomic gas), and (2) 1 mole of $\ce{O2}$ would occupy $22.4~\mathrm L$? Yes, it means exactly that. And you're right, a stable gas of $\ce O$ atoms is a pretty exotic thing, so $\ce{He}$ is a much ...
That's because of two reasons. One is entropy, the ultimate force of chaos and disorder. Sure, gases would like to be arranged according to their density, but even above that, they would like to be mixed, because mixing creates a great deal of entropy. If you prevent the mixing, then they would behave just as you expected. Indeed, a balloon filled with $\ce{...
You may recall the ideal gas law: $$PV = nRT.$$Here, $P$ is pressure, $V$ is volume, $n$ is the amount of gas present (in moles), $R$ is the ideal gas constant, and $T$ is temperature.In an enclosed system, with no gas flowing in or out, $n$ is constant (as is also, obviously, $R$). We can rearrange the equation above to pull all the constant terms to ...
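The rearrangement can be illustrated numerically for the special case of a rigid, sealed container (fixed $V$ and $n$), where $P/T = nR/V$ stays constant (Gay-Lussac's law). The numbers below are made up for illustration:

```python
# For fixed V and n, the ideal gas law gives P/T = nR/V = constant,
# so heating from t1 to t2 (in kelvin) scales the pressure by t2/t1.
def pressure_after_heating(p1, t1, t2):
    """New pressure of a fixed-volume gas sample heated from t1 to t2 (K)."""
    return p1 * t2 / t1

# heat a sample at 1 atm from 20 degrees C to 70 degrees C:
p2 = pressure_after_heating(101325.0, 293.15, 343.15)   # ~118,607 Pa
```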
This is merely a shard of a fact which does not make much sense in and by itself. After all, in systems with gas/liquid equilibrium there is nothing really special about $\left(\dfrac{\partial\mathfrak p}{\partial V}\right)_T=0$. On the contrary, this is pretty typical. See all those points where the blue lines (isotherms) are horizontal? They make up a ...
The van der Waals equation can't be derived from first principles. It is an ad-hoc formula. There is a "derivation" in statistical mechanics from a partition function that is engineered to give the right answer. It also cannot be derived from first principles.A gas is a collection of molecules that do not cohere strongly enough to form a liquid or a ...
I edited the first van der Waals equation in your question, because it was incorrect. First, the volume available to the gas is pretty much what you think: it's the space left for it to occupy, i.e. the volume delimited by the container. If you think of a gas tank, it's the interior volume of the tank. For systems of macroscopic dimensions, there is no real ...
Carbon dioxide (CO2) readily dissolves in water and forms carbonic acid (H2CO3 (aq)). This is the formation of bonds. The carbonic acid then dissociates in water as follows. Water thereby gains H+ ions, which is what makes it acidic. The following shows the dissociation of carbonic acid (H2CO3 (aq)) more clearly. Carbon monoxide (CO) does not ...
You're actually on the right track. Looking at the percent composition, you've correctly identified that the ratio of $\ce{C}$ to $\ce{F}$ atoms is 1:1. However, you cannot assume that the formula is just $\ce{CF}$ (which isn't a known compound); it could be any compound with that ratio: $\ce{C2F2}$, $\ce{C3F3}$, $\ce{C4F4}$, etc. The way to narrow it down ...
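The narrowing-down procedure can be sketched in a few lines of Python. The percentages and the 62 g/mol molar mass below are hypothetical illustration values, not figures from the question:

```python
# Percent composition -> empirical formula -> molecular formula.
# Assumed inputs: 38.7% C / 61.3% F by mass, measured molar mass ~62 g/mol.
MASS = {"C": 12.011, "F": 18.998}   # atomic masses, g/mol

def empirical_ratio(percent):
    moles = {el: pct / MASS[el] for el, pct in percent.items()}
    smallest = min(moles.values())
    # crude rounding; real ratios like 1.5 would need multiplying up first
    return {el: round(n / smallest) for el, n in moles.items()}

ratio = empirical_ratio({"C": 38.7, "F": 61.3})               # 1:1, i.e. "CF"
empirical_mass = sum(MASS[el] * n for el, n in ratio.items())  # ~31 g/mol
multiplier = round(62.0 / empirical_mass)                      # 2, so C2F2
```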
There is a liquid state for carbon dioxide. Borrowing the $\ce{CO2}$ phase diagram from Wikipedia, we can see that $\ce{CO2}$ will condense at a few atmospheres, dependent on temperature. At still higher pressures, the liquid will solidify. Below the triple point temperature, the gas will transition directly to solid.
General estimates have placed a can of Coca-Cola at 2.2 grams of $\ce{CO2}$ in a single can. As a can is around 12 fluid ounces, or 355 ml, the amount of $\ce{CO2}$ in a can is:$$\text{2.2 g} \ \ce{CO2} \times \frac{\text{1 mol} \ \ce{CO2}}{\text{44 g} \ \ce{CO2} } = 0.05 \ \text{mol}$$$$ \text{355 mL} \times \frac{\text{1 L}}{\text{1000 mL}} = 0.355 \ \text{...
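The same arithmetic as a small Python check:

```python
# 2.2 g of CO2 in a 355 mL can: moles and molar concentration.
grams_co2 = 2.2
molar_mass_co2 = 44.0          # g/mol
volume_l = 355 / 1000          # mL -> L

moles_co2 = grams_co2 / molar_mass_co2     # 0.05 mol
concentration = moles_co2 / volume_l       # ~0.141 mol/L
```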
If one rearranges the ideal gas law equation, one can obtain the following (assuming $n$ and $T$ are non-zero):$$\frac{PV}{nT} = R$$$R$ is a constant, and there are in fact infinitely many possible sets of values $(P, V, n, T)$ that satisfy the equation. Let $(P_1, V_1, n_1, T_1)$ denote one such set, and let $(P_2, V_2, n_2, T_2)$ denote a second one. ...
TL;DR: Spray cans don't actually get colder when shaken. However, shaking a can does increase heat conduction from your hand to the can, making it feel colder.Humans don't actually sense external temperature directly; our thermoreceptors are located under the skin, and thus effectively measure the rate at which body heat is lost through the skin. This is ...
Avogadro's law, which can be written as $V \propto n$, where $V$ is the volume of the gas and $n$ is the amount of substance of the gas (measured in moles), can be thought of as just another manifestation of the ideal gas law rewritten as follows,$$V = (RT/p) n \, .$$Consequently, strictly speaking, Avogadro's law is applicable only for a hypothetical ...
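As a quick numeric sanity check of $V = (RT/p)\,n$, one mole of ideal gas at $0~^\circ\mathrm{C}$ and $1~\mathrm{atm}$ comes out to about 22.4 L:

```python
# Molar volume of an ideal gas at 0 deg C and 1 atm, via V = (RT/p) n.
R = 8.314        # gas constant, J/(mol K)
T = 273.15       # K
p = 101325.0     # Pa

molar_volume_l = (R * T / p) * 1000   # m^3 -> L, for n = 1 mol; ~22.4 L
```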
|
In economics, the consumption function describes a relationship between consumption and disposable income. [1] Algebraically, this means $C = f(Y_d)$, where $f \colon \mathbb{R} \to \mathbb{R}$ is a function that maps levels of disposable income $Y_d$ (income after government intervention, such as taxes or transfer payments) into levels of consumption $C$. The concept is believed to have been introduced into macroeconomics by John Maynard Keynes in 1936, who used it to develop the notion of a government spending multiplier. [2] Its simplest form is the linear consumption function used frequently in simple Keynesian models: [3]
$$C = a + b \times Y_{d}$$
where $a$ is the autonomous consumption that is independent of disposable income; in other words, consumption when income is zero. The term $b \times Y_{d}$ is the induced consumption that is influenced by the economy's income level. The parameter $b$ is known as the marginal propensity to consume, i.e. the increase in consumption due to an incremental increase in disposable income, since $\partial C / \partial Y_{d} = b$. Geometrically, $b$ is the slope of the consumption function. One of the key assumptions of Keynesian economics is that this parameter is positive but smaller than one, i.e. $b \in (0,1)$.
[4]
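The linear consumption function and the textbook expenditure multiplier $1/(1-b)$ that a marginal propensity to consume $b \in (0,1)$ gives rise to can be sketched as follows; the parameter values ($a = 200$, $b = 0.8$) are illustrative, not from the article:

```python
# Linear Keynesian consumption function C = a + b*Yd, with illustrative
# parameters: autonomous consumption a = 200, marginal propensity b = 0.8.
def consumption(yd, a=200.0, b=0.8):
    return a + b * yd

def multiplier(b=0.8):
    # standard simple-model spending multiplier 1/(1 - b)
    return 1 / (1 - b)

c = consumption(1000.0)   # 200 + 0.8 * 1000 = 1000.0
m = multiplier()          # 1 / (1 - 0.8) = 5.0
```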
Criticism of the simplicity and unrealism of this assumption led to the development of Milton Friedman's permanent income hypothesis and of Richard Brumberg and Franco Modigliani's life-cycle hypothesis, but neither produced a definitive consumption function. Friedman, although he received the Nobel prize for his book A Theory of the Consumption Function (1957), presented several different definitions of permanent income in his approach, making it impossible to develop a more sophisticated function. Modigliani and Brumberg tried to develop a better consumption function using the income earned over consumers' whole lifetimes, but they and their followers ended up with a formulation lacking economic theory and therefore full of proxies that do not account for the complex changes of today's economic systems.
Until recently, the three main existing theories, building on the income-dependent consumption expenditure function highlighted by Keynes in 1936, were Duesenberry's (1949) relative consumption expenditure, [5] Modigliani and Brumberg's (1954) life-cycle income, and Friedman's (1957) permanent income. [6]
Some newer theoretical works, following Duesenberry's, are based on behavioral economics and suggest that a number of behavioral principles can be taken as microeconomic foundations for a behaviorally based aggregate consumption function.
[7]
|
A group $G$ is said to have a factorization if there exist proper subgroups $A$ and $B$ such that $G = AB = \{ ab \mid a \in A,\ b \in B \}$.
The paper Factorisations of sporadic simple groups (by Michael Giudici) provides the classification of all the factorizations of the sporadic simple groups. We observe there that only $11$ sporadic simple groups (among $26$) admit a factorization. So the remaining $15$ (which are $J_1$, $M_{22}$, $J_3$, $McL$, $O'N$, $Co_3$, $Co_2$, $F_5$, $Ly$, $F_3$, $Fi_{23}$, $J_4$, $F_{3+}$, $F_2$, $F_1$) admit no factorization.
Obviously, a cyclic group admits no factorization if and only if it is of prime power order.
Question: What are all the (other) finite groups without factorization? (Is there an official name for such a group?)

Proposition: Let $G$ be a group without factorization. If the intersection of all the maximal subgroups of $G$ is the trivial subgroup, then $G$ is simple.

Proof: Assume $G$ is non-simple and let $N$ be a non-trivial proper normal subgroup of $G$. If every maximal subgroup of $G$ contained $N$, then so would their intersection, which contradicts the assumption. So there is a maximal subgroup $M$ of $G$ not containing $N$. It follows that $NM = G$ with $N$ and $M$ proper, contradicting that $G$ has no factorization. $\square$

Sub-question: must a non-simple finite group without factorization be cyclic (of prime power order)?

Lemma: A finite group $G$ admits a unique maximal subgroup $M$ iff it is cyclic of prime power order.

Proof: Let $g \in G \setminus M$ and $H = \langle g \rangle$. If $H \neq G$ then there must exist a maximal subgroup $M'$ of $G$ containing $H$; but $M = M'$ by assumption, so $g \in M$, a contradiction. So $G = H$ is cyclic. It is moreover of prime power order by the Chinese Remainder Theorem. $\square$

Bonus question: What do we know about the infinite groups without factorization?
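For cyclic groups, the prime-power observation can be checked by brute force; the following sketch enumerates the subgroups of $\mathbb{Z}_n$ (written additively, so a factorization is $\mathbb{Z}_n = A + B$) and tests every pair:

```python
# Brute-force factorization check for cyclic groups Z_n: the claim above
# predicts a factorization exists iff n is NOT a prime power.
def has_factorization(n):
    # the proper subgroups of Z_n are d*Z_n for divisors d of n with d > 1
    divisors = [d for d in range(2, n + 1) if n % d == 0]
    subgroups = [set(range(0, n, d)) for d in divisors]
    full = set(range(n))
    return any({(a + b) % n for a in A for b in B} == full
               for A in subgroups for B in subgroups)

def is_prime_power(n):
    for p in range(2, n + 1):
        if n % p == 0:          # p is the smallest prime factor of n
            while n % p == 0:
                n //= p
            return n == 1       # prime power iff nothing else remains
    return False

has_factorization(6)   # True: {0,2,4} + {0,3} = Z_6
has_factorization(8)   # False: the subgroups of Z_8 form a chain
```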
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There, the order of $H$ should be $qr$, and it is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure that $H$ can have.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
Now suppose $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
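The rewriting procedure just described is easy to transcribe. The rules below (free cancellation on one generator, writing A for a^-1) are only a toy illustration, not a genuine Dehn presentation of a hyperbolic group:

```python
# Greedy Dehn-style rewriting: scan for some u_i occurring as a subword and
# replace it by the strictly shorter v_i, until no rule applies. For a real
# Dehn presentation, ending at the empty word is equivalent to the input
# representing the trivial element.
def dehn_reduce(word, rules):
    changed = True
    while changed:            # terminates: each replacement shortens the word
        changed = False
        for u, v in rules:
            if u in word:
                word = word.replace(u, v, 1)
                changed = True
                break
    return word

rules = [("aA", ""), ("Aa", "")]
dehn_reduce("aaAAaA", rules)   # -> "" : the word represents the identity
```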
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk; it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
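Yes; and since the space is only four-dimensional, one can also just check the images of the basis monomials numerically. A sketch, with polynomials as coefficient lists (lowest degree first):

```python
# For P in R_3[x] (degree <= 3), F(P) = x P'' + (x+1) P''' should vanish
# exactly on the polynomials of degree <= 1.
def deriv(c):
    return [i * a for i, a in enumerate(c)][1:] or [0]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def F(p):
    p2 = deriv(deriv(p))           # P''
    p3 = deriv(p2)                 # P'''
    return poly_add(poly_mul([0, 1], p2), poly_mul([1, 1], p3))

def is_zero(p):
    return all(c == 0 for c in p)

# images of the basis 1, x, x^2, x^3:
[is_zero(F(p)) for p in ([1], [0, 1], [0, 0, 1], [0, 0, 0, 1])]
# -> [True, True, False, False], so ker F = {ax + b}
```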
|
Yeah, this software cannot be too easy to install. My installer is very professional looking; it's currently not tied into that code, but it directs the user how to search for their MiKTeX and/or install it, and does a test LaTeX rendering
Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code.
he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects.
i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent.
your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^{\infty}(\cos...$
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
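That definition can be sanity-checked by Monte Carlo simulation: the sum is symmetric about $1/2$, so the estimate at $x = 1/2$ should be close to $0.5$. A sketch (truncating the series at 30 terms, not an efficient evaluator):

```python
# Monte Carlo estimate of Fab(x) = P(sum_{n>=1} 2^-n * zeta_n < x),
# with zeta_n i.i.d. uniform on [0,1] and the series truncated at 30 terms.
import random

def fabius_mc(x, trials=100_000, terms=30, seed=0):
    rng = random.Random(seed)   # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() / 2 ** n for n in range(1, terms + 1))
        hits += s < x
    return hits / trials

fabius_mc(0.5)   # close to 0.5, by symmetry of the sum about 1/2
```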
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway, here's some food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
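As a concrete illustration of how few correction terms are needed, here is a sketch of the Euler-Maclaurin evaluation with the Dirichlet series truncated at $N$ and $q$ Bernoulli-number corrections (even Bernoulli numbers hardcoded; the index $q$ plays the role of the upper summation index mentioned above):

```python
# Euler-Maclaurin approximation of zeta(s):
#   zeta(s) ~ sum_{n<N} n^-s + N^(1-s)/(s-1) + N^-s/2
#             + sum_{k=1..q} B_{2k}/(2k)! * s(s+1)...(s+2k-2) * N^(-s-2k+1)
import math

B = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30}   # even Bernoulli numbers

def zeta_em(s, N=10, q=3):
    total = sum(n ** -s for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + N ** -s / 2
    for k in range(1, q + 1):
        rising = math.prod(s + j for j in range(2 * k - 1))  # s(s+1)...(s+2k-2)
        total += B[2 * k] / math.factorial(2 * k) * rising * N ** (-s - 2 * k + 1)
    return total

zeta_em(2)   # ~ pi^2 / 6 = 1.6449340668...
```

Even with $N = 10$ and $q = 3$ the result already agrees with $\pi^2/6$ to several digits.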
|
In The Da Vinci Code, Dan Brown feels he needs to bring in a French cryptologist, Sophie Neveu, to explain the mystery behind this series of numbers:
13 – 3 – 2 – 21 – 1 – 1 – 8 – 5
The Fibonacci sequence, 1-1-2-3-5-8-13-21-34-55-89-144-… is such that any number in it is the sum of the two previous numbers.
It is the most famous of all
integral linear recursive sequences, that is, a sequence of integers
\[
a = (a_0,a_1,a_2,a_3,\dots) \]
such that there is a monic polynomial with integral coefficients of a certain degree $n$
\[
f(x) = x^n + b_1 x^{n-1} + b_2 x^{n-2} + \dots + b_{n-1} x + b_n \]
such that for every integer $m$ we have that
\[
a_{m+n} + b_1 a_{m+n-1} + b_2 a_{m+n-2} + \dots + b_{n-1} a_{m+1} + b_n a_m = 0 \]
For the Fibonacci series $F=(F_0,F_1,F_2,\dots)$, this polynomial can be taken to be $x^2-x-1$ because
\[ F_{m+2} = F_{m+1}+F_m \]
The set of
all integral linear recursive sequences, let’s call it $\Re(\mathbb{Z})$, is a beautiful object of great complexity.
For starters, it is a
ring. That is, we can add and multiply such sequences. If
\[
a=(a_0,a_1,a_2,\dots),~\quad \text{and}~\quad a'=(a'_0,a'_1,a'_2,\dots)~\quad \in \Re(\mathbb{Z}) \]
then the sequences
\[
a+a' = (a_0+a'_0,a_1+a'_1,a_2+a'_2,\dots) \quad \text{and} \quad a \times a' = (a_0 a'_0, a_1 a'_1, a_2 a'_2, \dots) \]
are again linear recursive. The zero and unit in this ring are the constant sequences $0=(0,0,\dots)$ and $1=(1,1,\dots)$.
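Closure under the Hadamard product can be tested empirically. For instance, the squares of the Fibonacci numbers satisfy the order-3 recurrence $a_{m+3} = 2a_{m+2} + 2a_{m+1} - a_m$, a standard fact that the following sketch only verifies numerically:

```python
# The Hadamard square F x F of the Fibonacci sequence (F_0 = F_1 = 1, as in
# this post) is again linearly recursive: its characteristic polynomial
# (x^2 - 3x + 1)(x + 1) has roots phi^2, psi^2 and phi*psi = -1.
F = [1, 1]
for _ in range(20):
    F.append(F[-1] + F[-2])

sq = [f * f for f in F]             # the Hadamard product F x F
all(sq[m + 3] == 2 * sq[m + 2] + 2 * sq[m + 1] - sq[m]
    for m in range(len(sq) - 3))    # -> True
```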
So far, nothing terribly difficult or exciting.
It follows that $\Re(\mathbb{Z})$ has a
co-unit, that is, a ring morphism
\[
\epsilon~:~\Re(\mathbb{Z}) \rightarrow \mathbb{Z} \]
sending a sequence $a = (a_0,a_1,\dots)$ to its first entry $a_0$.
It’s a bit more difficult to see that $\Re(\mathbb{Z})$ also has a
co-multiplication
\[
\Delta~:~\Re(\mathbb{Z}) \rightarrow \Re(\mathbb{Z}) \otimes_{\mathbb{Z}} \Re(\mathbb{Z}) \] with properties dual to those of usual multiplication.
To describe this co-multiplication in general will have to await another post. For now, we will describe it on the easier ring $\Re(\mathbb{Q})$ of all
rational linear recursive sequences.
For such a sequence $q = (q_0,q_1,q_2,\dots) \in \Re(\mathbb{Q})$ we consider its Hankel matrix. From the sequence $q$ we can form symmetric $k \times k$ matrices whose $(i+1)$-th anti-diagonal consists of entries all equal to $q_i$
\[ H_k(q) = \begin{bmatrix} q_0 & q_1 & q_2 & \dots & q_{k-1} \\ q_1 & q_2 & & & q_k \\ q_2 & & & & q_{k+1} \\ \vdots & & & & \vdots \\ q_{k-1} & q_k & q_{k+1} & \dots & q_{2k-2} \end{bmatrix} \] The Hankel matrix of $q$, $H(q)$ is $H_k(q)$ where $k$ is maximal such that $det~H_k(q) \not= 0$, that is, $H_k(q) \in GL_k(\mathbb{Q})$.
Let $S(q)=(s_{ij})$ be the inverse of $H(q)$, then the co-multiplication map
\[ \Delta~:~\Re(\mathbb{Q}) \rightarrow \Re(\mathbb{Q}) \otimes \Re(\mathbb{Q}) \] sends the sequence $q = (q_0,q_1,\dots)$ to \[ \Delta(q) = \sum_{i,j=0}^{k-1} s_{ij} (D^i q) \otimes (D^j q) \] where $D$ is the shift operator on sequence \[ D(a_0,a_1,a_2,\dots) = (a_1,a_2,\dots) \]
If $a \in \Re(\mathbb{Z})$ is such that $H(a) \in GL_k(\mathbb{Z})$ then the same formula gives $\Delta(a)$ in $\Re(\mathbb{Z})$.
For the Fibonacci sequences $F$ the Hankel matrix is
\[ H(F) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \in GL_2(\mathbb{Z}) \quad \text{with inverse} \quad S(F) = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \] and therefore \[ \Delta(F) = 2\, F \otimes F - DF \otimes F - F \otimes DF + DF \otimes DF \] There's a lot of number-theoretic and Galois information encoded into the co-multiplication on $\Re(\mathbb{Q})$.
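One concrete way to see the co-multiplication at work (reading $x \otimes y$ as the double sequence $(x_m y_n)_{m,n}$, as the duality with addition of indices suggests) is that the formula for $\Delta(F)$ encodes the addition law $F_{m+n} = 2F_mF_n - F_{m+1}F_n - F_mF_{n+1} + F_{m+1}F_{n+1}$, which is easy to verify numerically:

```python
# With F_0 = F_1 = 1 as in the post, the coefficients of S(F) = H(F)^-1
# should give F_{m+n} = 2 F_m F_n - F_{m+1} F_n - F_m F_{n+1} + F_{m+1} F_{n+1}.
F = [1, 1]
for _ in range(40):
    F.append(F[-1] + F[-2])

all(F[m + n] == 2*F[m]*F[n] - F[m+1]*F[n] - F[m]*F[n+1] + F[m+1]*F[n+1]
    for m in range(15) for n in range(15))   # -> True
```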
To see this we will describe the co-multiplication on $\Re(\overline{\mathbb{Q}})$ where $\overline{\mathbb{Q}}$ is the field of all algebraic numbers. One can show that
\[
\Re(\overline{\mathbb{Q}}) \simeq (\overline{\mathbb{Q}}[ \overline{\mathbb{Q}}_{\times}^{\ast}] \otimes \overline{\mathbb{Q}}[d]) \oplus \sum_{i=0}^{\infty} \overline{\mathbb{Q}} S_i \]
Here, $\overline{\mathbb{Q}}[ \overline{\mathbb{Q}}_{\times}^{\ast}]$ is the group-algebra of the multiplicative group of non-zero elements $x \in \overline{\mathbb{Q}}^{\ast}_{\times}$ and each $x$, which corresponds to the geometric sequence $x=(1,x,x^2,x^3,\dots)$, is a group-like element
\[ \Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1 \]
$\overline{\mathbb{Q}}[d]$ is the universal enveloping algebra of the $1$-dimensional Lie algebra on the primitive element $d = (0,1,2,3,\dots)$, that is
\[ \Delta(d) = d \otimes 1 + 1 \otimes d \quad \text{and} \quad \epsilon(d) = 0 \]
Finally, the co-algebra maps on the elements $S_i$ are given by
\[ \Delta(S_i) = \sum_{j=0}^i S_j \otimes S_{i-j} \quad \text{and} \quad \epsilon(S_i) = \delta_{0i} \]
That is, the co-multiplication on $\Re(\overline{\mathbb{Q}})$ is completely known. To deduce from it the co-multiplication on $\Re(\mathbb{Q})$ we have to consider the invariants under the action of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$ as
\[ \Re(\overline{\mathbb{Q}})^{Gal(\overline{\mathbb{Q}}/\mathbb{Q})} \simeq \Re(\mathbb{Q}) \]
Unlike the Fibonacci sequence, not every integral linear recursive sequence has a Hankel matrix with determinant $\pm 1$, so determining the co-multiplication on $\Re(\mathbb{Z})$ is a lot harder still, as we will see another time.
Reference: Richard G. Larson, Earl J. Taft, ‘The algebraic structure of linearly recursive sequences under Hadamard product’
|
I am trying to use \prod in math display mode, but it appears larger than I want it to be. When I use \mathsmaller from the relsize package, the subscript appears on the right side of the symbol instead of below it. Is there any way to make the symbol smaller while the limits remain below it?
I don't know the package you mention and you didn't provide an example, but using \mathop around the construction will revert to the operator limits positioning.
\mathop{\mathsmaller.....}_0^n
For consistency, define a new math operator to be set in \textstyle using amsmath's \DeclareMathOperator*:
\documentclass{article}
\usepackage{amsmath}% http://ctan.org/pkg/amsmath
\DeclareMathOperator*{\sProd}{\textstyle\prod}
\usepackage{relsize}% http://ctan.org/pkg/relsize
\begin{document}
\[
  \prod_{i=1}^{\infty} \frac{1}{i} =
  {\mathsmaller \prod_{i=1}^{\infty}} \frac{1}{i} =
  \sProd_{i=1}^{\infty} \frac{1}{i}
\]
\end{document}
|
With sufficient worldbuilding, your creatures could be affected by
Electric Imbalance Syndrome (EIS)
Similar to amflare's answer, but differing in that Electric Imbalance Syndrome is biological in nature instead of affecting whatever happens to be unlucky enough to be nearby when the sun decides to screw everyone over.
Gravity is indeed a much weaker force than electric force. How much weaker?
Newtonian gravity is modeled by the equation $F_g = G\frac{Mm}{r^2}$. Electric force is modeled similarly by $F_e = \frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r^2}$.
We can compare them, quite simply, by dividing one by the other:
$$\frac{F_e}{F_g} = \frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r^2}\frac{r^2}{GMm} = \frac{q_1q_2}{4\pi\epsilon_0GMm}$$
Substituting in all the constants (the $r^2$ terms having canceled), this comes to
$$\frac{F_e}{F_g} = \left(1.35 \times 10^{20}\ \frac{\mathrm{kg}^2}{\mathrm{C}^2}\right)\frac{q_1q_2}{Mm}$$
If we choose two protons as our basis for comparison, the charge of a proton is $1.60 \times 10^{-19}\ \mathrm{C}$ and the mass of a proton is $1.67 \times 10^{-27}\ \mathrm{kg}$, so plugging all that in we find
$$\frac{F_e}{F_g} = \left(1.35 \times 10^{20}\ \frac{\mathrm{kg}^2}{\mathrm{C}^2}\right)\frac{(1.60 \times 10^{-19}\ \mathrm{C})^2}{(1.67 \times 10^{-27}\ \mathrm{kg})^2} = 1.24 \times 10^{36}$$
So the electric force between them is roughly $10^{36}$ times stronger than the gravitational force, if I did my math right.
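The proton-proton comparison can be redone in code with SI constants (values rounded to four figures):

```python
# Ratio of electric to gravitational force between two protons.
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
G   = 6.674e-11      # gravitational constant, N m^2 / kg^2
q_p = 1.602e-19      # proton charge, C
m_p = 1.673e-27      # proton mass, kg

ratio = (k_e * q_p**2) / (G * m_p**2)   # ~1.2e36, independent of distance
```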
So perhaps your planet is composed largely of minerals with a low conductivity (insulators) and contains positive or negative charges distributed uniformly within. This will result in a near-uniform electric field directed toward or away from the planet. This way, any charges in the creatures' bodies will cause them to be attracted to or repelled from the planet. A very delicate balance of positive and negative electric charges is then necessary for your species to survive.
Perhaps these charges will be balanced through diet, like vitamins. Eat too much of foods with one charge and find yourself being pulled "heavily" toward the ground. Eat too much of the other and risk flying off into the sky. When perfectly balanced, the creatures experience only gravity, as we do on Earth. Perhaps optimal balance should keep the creature a little "light" on its feet to reduce wear on the legs, feet and knees.
Natural excretion will no doubt regulate these charges, as the majority charge will be repelled and the minority charge attracted. Foods that reinforce the electric imbalance would also be difficult to pick up or eat. But perhaps a sufficiently poor diet could outpace these natural processes or simply provide no charges to counteract the effect of the majority charge, or perhaps some illness causes the creature's antibodies to forcefully expel either positive or negative charges.
EIS could theoretically function in either variety, the kind that pulls the victim to the ground or the kind that flings the victim into the air. These could be known as Positive EIS and Negative EIS, depending on the charge of the planet itself. Proper worldbuilding could make one much more rare than the other. Perhaps negatively-charged food is uncommon or unpleasant on this positively-charged planet. In such event, Positive EIS is the common condition where a creature's body tends away from the planet, eventually sending them off into the clouds.
There is little hope for someone whose fate lies in the sky: both gravity and the electric force fall off as the square of distance, so their ratio does not change with altitude. Once a creature's outward-directed electric force exceeds its inward-directed gravitational force, it always will, regardless of how far away the creature's body moves. Those who are fortunate enough to be indoors at such a time may be saved, or perhaps someone surrounded by quick-acting friends might be as well.
If these charges are to stay within the body, which they must if this entire world is to work, the creature's skin must also be an insulator. As such, one potential remedy is to simply break the skin at the top of the head and allow the outward-tending charges to escape on their own.
Another remedy might be pharmaceutical charge tablets that maintain one's charge when used as directed by your doctor. Do not use these tablets if you are nursing, pregnant or may become pregnant. Side effects include bleeding, shortness of breath, acute pain and difficulty excreting. Contact your doctor if you experience an... well, you know the rest.
|
In AP Calculus, we saw the following derivatives: \[\sin' x = \cos x \\ \cos' x = -\sin x\]
That is, the derivative of the sine is the cosine, and the derivative of the cosine is the opposite of the sine.
I showed one way to see why this is true using Desmos. The purple line is the tangent, while the black dot's height gives the slope of that tangent. As you move the slider \(a\) from side to side, the black dot traces the cosine function.
The book has an algebraic proof of the first rule (p. 112), using the limit process. In this post, I’ll discuss a more visual, geometry-based explanation.
The angle \(\theta\) represents our original angle, while \(\delta = \Delta\theta\). Note that we are NOT looking at the change in the \(x\) value; we’re looking at the change in the angle measurement. This is key!
So what we want to know is: How does \(\sin\theta\) change as \(\theta\) changes?
\(\overline{AC}\) is the radius of the unit circle, so its length is 1; \(\sin\theta = m\overline{CF}/m\overline{AC} = f\). Likewise, \(\cos\theta = g\).
\(\overline{EC}\) is on the tangent to the circle at point \(C\). \(\overline{CD}\) is collinear with \(\overline{CF}\); \(\overline{ED}\perp\overline{CD}\).
So let’s consider the blue and green triangles. Using geometry, we know that since the blue triangle is a right triangle, \(\theta\) and \(\gamma\) are complementary. Since \(\overline{DF}\) is on a line, and since \(\angle ECA\) is right, \(\alpha\) and \(\gamma\) are complementary… meaning that \(\alpha\cong\theta\).
More about the green triangle: \(\cos\alpha = \cos\theta = e/c \Rightarrow e = c \cos\theta\). But \(c\) is also opposite \(\delta\) in \(\triangle ECA\), meaning that \(\sin\delta = c/a \Rightarrow c = a\sin\delta\) and \(e = a\sin\delta\cos\theta\).
We’re ready to write our limit now. We want to know what happens to \(e\) when \(\delta\) gets smaller and smaller. Specifically: \[\lim_{\Delta\theta\rightarrow 0}\frac{\Delta y}{\Delta\theta} = \lim_{\delta\rightarrow 0}\frac{e}{\delta} \\ = \lim_{\delta\rightarrow 0}\frac{a\sin\delta\cos\theta}{\delta} \\ = \lim_{\delta\rightarrow 0} a \cdot \lim_{\delta\rightarrow 0}\frac{\sin\delta}{\delta} \cdot \lim_{\delta\rightarrow 0} \cos\theta\]
Consider each in turn. As \(\delta\) gets smaller, the difference between \(a\) and the radius of the unit circle gets smaller. That is, \[\lim_{\delta\rightarrow 0} a = 1\]
From our examination of the Squeeze Theorem, we know the important identity \[\lim_{x\rightarrow 0}\frac{\sin x}{x} = 1\] which we can apply here (where \(x = \delta\)).
Finally, since \(\cos\theta\) is not affected by \(\delta\) at all, \[\lim_{\delta\rightarrow 0} \cos\theta = \cos\theta\]
Putting these together, we get \[\lim_{\delta\rightarrow 0} a \cdot \lim_{\delta\rightarrow 0}\frac{\sin\delta}{\delta} \cdot \lim_{\delta\rightarrow 0} \cos\theta = 1 \cdot 1 \cdot \cos\theta = \cos\theta\] QED.
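The limit above is easy to check numerically. A small Python sketch (the angle 0.7 is an arbitrary choice): note that the difference quotient is taken in the *angle*, matching the geometric argument.

```python
import math

theta = 0.7  # any fixed angle
for delta in (0.1, 0.01, 0.001):
    # difference quotient of sin with respect to the angle
    rate = (math.sin(theta + delta) - math.sin(theta)) / delta
    print(delta, rate)

print(math.cos(theta))  # the quotient approaches cos(theta)
```

As `delta` shrinks, the printed quotient approaches \(\cos 0.7 \approx 0.7648\), just as the geometric proof predicts.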
The process for demonstrating the cosine is very similar (using \(d = a\sin\delta\sin\theta\)), but the question that comes up is: Why is the cosine’s derivative the opposite of the sine? Notice what happens in the green triangle: While the height goes upward, the width (corresponding to \(\sin\alpha = \sin\theta\)) goes to the left. This is why the cosine’s derivative is the opposite of the sine.
|
Here we want to give an easy mathematical bootstrap argument why solutions to the time independent 1D Schrödinger equation (TISE) tend to be rather nice. First formally rewrite the differential form$$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \tag{1}$$into the int...
[Some time travel comments] Since in the previous paragraph we explained how travelling to the future will not necessarily land you in the future that would have occurred had you never time travelled (via the twin paradox), why must the past you travel back to be the past you learnt from historical records :?
@0ßelö7 Well, I'd omit the explanation of the notation on the slide itself, and since there seems to be two pairs of formulae, I'd just put one of the two and then say that there's another one with suitable substitutions.
I mean, "Hey, I bet you've always wondered how to prove X - here it is" is interesting. "Hey, you know that statement everyone knows how to prove but doesn't bother to write down? Here is the proof written down" significantly less so
Sorry I have a quick question: For questions like this physics.stackexchange.com/questions/356260/… where the accepted answer clearly does not answer the original question what is the best thing to do; downvote, flag or just leave it?
So this question says express $u^0$ in terms of $u^j$ where $u$ is the four-velocity and I get what $u^0$ and $u^j$ are but I'm a bit confused how to go about this one? I thought maybe using the space-time interval and evaluating for $\frac{dt}{d\tau}$ but it's not workin out for me... :/ Anyone give me a quickie starter please? :p
Although a physics question, this is still important to chemistry. The delocalized electric field is related to the force (and therefore the repulsive potential) between two electrons. This in turn is what we need to solve the Schrödinger Equation to describe molecules. Short answer: You can calculate the expectation value of the corresponding operator, which comes close to the mentioned superposition. — Feodoran, 13 hours ago
If we take an electron that's delocalised w.r.t position, how can one evaluate the electric field over some space? Is it some superposition or a sort of field with all the charge at the expectation value of the position?
@0ßelö7 I just looked back at chat and noticed Phase's question, I wasn't purposefully ignoring you - do you want me to look over it? Because I don't think I'll gain much personally from reading the slides.
Maybe it's just me having not really done much with Eigenbases but I don't recognise where I "put it in terms of M's eigenbasis". I just wrote it down for some vector v, rather than a space that contains all of the vectors v
|
I am using AucTeX to write LaTeX and I have fontification enabled. The highlighting of macros works, but while macros like \begin are highlighted using font-lock-keyword-face, all math macros (\sum, \leq, to mention a few) are highlighted using font-lock-sedate-face, which is reserved for ``unknown'' macros.
On the other hand, AucTeX clearly has some knowledge of all these macros, since TeX-insert-macro lists many of them among possible completions. Math macros are also defined in LaTeX-math-default in latex.el.
It seems that font-latex.el does not use this information. Is there a ``best practice'' to make font-latex learn all these keywords and fontify them appropriately?
|
I'll assume we want a cryptographic hash giving security in the Random Oracle Model; collision-resistance and preimage-resistance follow. Collision-resistance alone rules out CRC, regardless of size.
The standard technique would be to split the file into blocks, distribute them with their indexes to the participants, who each hash their blocks; then hash the hashes concatenated in order of increasing indexes, or use a Merkle tree, to form the hash of the whole file. However, with blocks distributed in a haphazard manner (as in the question), most block hashes need to be exchanged, which can get sizable; and the distributed computation of the final hash is slightly hard to organize.
Rather, we can group the block hashes (made dependent on their index) using an order-independent hash, applied by each participant over all the hashes of the blocks s/he is responsible for, then again to obtain the final hash. This simplifies the organization, and saves bandwidth when there are more than a few blocks per participant: like 8 in the following simple example using $\Bbb Z_p^*$, but I conjecture that the overhead can be made negligible down to 1 block per participant using an Elliptic Curve Group instead.
For a 256-bit hash, marginally more costly than a regular one for large files, we'll use:
- Some 512-bit hash $H$, e.g. $H=\operatorname{SHA-512}$.
- Some 2048-bit prime $p$ making the Discrete Logarithm Problem in $\mathbb Z_p^*$ (conjecturally) hard; see final section.
- Some public block size $b$ multiple of $2^{12}$ bit (512 bytes), e.g. $b=2^{23}$ for blocks of 1 MiB.
- Implicit conversion from integer to bitstring and back, per big-endian convention.
To hash a file of $s$ bits (with $s\le2^{62}b$, which is more than ample):
1. Split the file into $\lceil s/b\rceil$ blocks $B_i$ of size $b$ bits, except for the last which may be smaller (but non-empty), with $0\le i<\lceil s/b\rceil$.
2. Distribute the blocks $B_i$ and indexes $i$, such that each block is assigned to a participant $j$ only once.
3. Have each participant $j$ perform:
   - $f_j\gets1$
   - For each block $B_i$ assigned to participant $j$:
     - $h_i\gets H(B_i)$. That's a 512-bit bitstring characteristic of $B_i$.
     - $g_i\gets H(h_i\mathbin\|\widetilde{4i})\mathbin\|H(h_i\mathbin\|\widetilde{4i+1})\mathbin\|H(h_i\mathbin\|\widetilde{4i+2})\mathbin\|H(h_i\mathbin\|\widetilde{4i+3})$, where $\widetilde{\;n\;}$ is the representation of integer $n$ as a 64-bit bitstring. Since function $H$ returns 512 bits, the concatenation of the 4 hashes makes $g_i$ a 2048-bit bitstring, characteristic of $B_i$ and $i$.
     - $f_j\gets f_j\cdot g_i\bmod p$. $f_j$ is a 2048-bit bitstring characteristic of the $B_i$ and $i$ assigned to participant $j$.
   - If $j\ne 0$, transmit that $f_j$ to participant $0$.
4. Participant $0$ performs:
   - $f\gets f_0$
   - When receiving $f_j$ with $j\ne 0$: $f\gets f\cdot f_j\bmod p$.
   - $h\gets H(f)$ truncated to its first 256 bits, where $f$ is represented as a 2048-bit bitstring when applying $H$.
   - Send $h$ to all participants.
Absent message alteration or loss, $h$ is independent of how the blocks have been distributed. That is a 256-bit bitstring characteristic of the whole file, computed in a largely distributed manner. The computation of $f$ and $h$ could be distributed too, at a small extra cost in message exchange.
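A minimal Python sketch of the order-independence property (demonstration only: a small non-cryptographic modulus stands in for the 2048-bit safe prime, and the 2048-bit $g_i$ is simplified to a single SHA-512 call binding the block hash to its index):

```python
import hashlib
import random

# Demo modulus only -- in practice use a 2048-bit safe prime as described below.
P = (1 << 127) - 1

def block_factor(i, block):
    """Simplified g_i: one SHA-512 over (index, block hash)."""
    h = hashlib.sha512(block).digest()                     # h_i, characteristic of B_i
    bound = hashlib.sha512(i.to_bytes(8, 'big') + h).digest()  # bind h_i to index i
    return int.from_bytes(bound, 'big') % P

def file_hash(indexed_blocks):
    """Order-independent: multiply the per-block factors mod P, then hash."""
    f = 1
    for i, block in indexed_blocks:
        f = (f * block_factor(i, block)) % P
    return hashlib.sha512(f.to_bytes(16, 'big')).hexdigest()[:64]

blocks = list(enumerate([b'alpha', b'beta', b'gamma', b'delta']))
shuffled = blocks[:]
random.shuffle(shuffled)
assert file_hash(blocks) == file_hash(shuffled)  # processing order does not matter
```

Because multiplication mod $P$ is commutative and associative, the participants can fold in their blocks in any order and any grouping and still agree on the final value.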
The order-independent hash is borrowed from the multiplicative one in Dwaine Clarke, Srinivas Devadas, Marten van Dijk, Blaise Gassend, G. Edward Suh, Incremental Multiset Hash Functions and Their Application to Memory Integrity Checking, in proceedings of Asiacrypt 2003, which is given a security reduction in appendix C. The security of the whole construction should follow.
On choice of $p$: our requirement is hardness of the DLP in $\Bbb Z_p^*$, as in classic Diffie-Hellman key exchange. We need a 2048-bit safe prime, with no special form $p=2^k\pm s$ that could make SNFS easier. Customarily, a nothing-up-my-sleeve number is used, based on the bits of some transcendental mathematical constant, as a good-enough assurance that $p$ is of no special form.
That can be $p=\lfloor2^{2046}\pi\rfloor+3617739$. The construction uses the first 2048 bits of the binary representation of $\pi$, then increments until hitting a safe prime. Hexadecimal value:
c90fdaa22168c234c4c6628b80dc1cd129024e088a67cc74020bbea63b139b22514a08798e3404ddef9519b3cd3a431b302b0a6df25f14374fe1356d6d51c245e485b576625e7ec6f44c42e9a637ed6b0bff5cb6f406b7edee386bfb5a899fa5ae9f24117c4b1fe649286651ece45b3dc2007cb8a163bf0598da48361c55d39a69163fa8fd24cf5f83655d23dca3ad961c62f356208552bb9ed529077096966d670c354e4abc9804f1746c08ca18217c32905e462e36ce3be39e772c180e86039b2783a2ec07a28fb5c55df06f4c52c9de2bcbf6955817183995497cea956ae515d2261898fa051015728e5a8aaac42dad33170d04507a33a85521abdf53ee2f
As pointed out by Sqeamish Ossifrage in a comment, we could use the 2048-bit MODP group proposed by RFC 3526: $p=2^{2048}-2^{1984}-1+2^{64}\cdot\lfloor2^{1918}\pi+124476\rfloor$. That similarly uses as many of the first bits of the binary representation of $\pi$ as possible, but by construction has the 66 high-order bits (including two from $\pi\approx 3$) and 64 low-order bits set. The high-order bits simplify choice of dividend limbs in Euclidean division by the classical method, while the low-order bits simplify Montgomery reduction. This is believed to be few enough forced bits not to allow a huge speedup of the DLP.
ffffffffffffffffc90fdaa22168c234c4c6628b80dc1cd129024e088a67cc74020bbea63b139b22514a08798e3404ddef9519b3cd3a431b302b0a6df25f14374fe1356d6d51c245e485b576625e7ec6f44c42e9a637ed6b0bff5cb6f406b7edee386bfb5a899fa5ae9f24117c4b1fe649286651ece45b3dc2007cb8a163bf0598da48361c55d39a69163fa8fd24cf5f83655d23dca3ad961c62f356208552bb9ed529077096966d670c354e4abc9804f1746c08ca18217c32905e462e36ce3be39e772c180e86039b2783a2ec07a28fb5c55df06f4c52c9de2bcbf6955817183995497cea956ae515d2261898fa051015728e5a8aacaa68ffffffffffffffff
|
Answer
$-192$
Work Step by Step
$\sum_{i=1}^{24} (17-2i)=-2 \times \sum_{i=1}^{24} i+\sum_{i=1}^{24} (17)$ $=(-2) \times \dfrac{(24) \times (24+1)}{2}+17 \times (24)$ $=(-2) \times \dfrac{(24) \times (25)}{2}+17 \times (24)$ $=-600+408$ $=-192$
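The arithmetic checks out directly, as a one-line Python verification shows:

```python
# sum of (17 - 2i) for i = 1..24
total = sum(17 - 2 * i for i in range(1, 25))
print(total)  # -192
```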
|
This set of Network Theory Questions & Answers for Exams focuses on “Advanced Problems on Reciprocity Theorem”.
1. In Reciprocity Theorem, which of the following ratios is considered?
a) Voltage to current b) Current to current c) Voltage to voltage d) No ratio is considered View Answer
Explanation: The Reciprocity Theorem states that if an Emf E in one branch produces a current I in a second branch, then if the same emf E is moved from the first to the second branch, it will produce the same current in the first branch, when the Emf E in the first branch is replaced with a short circuit. Therefore the ratio of Voltage to Current is considered in case of Reciprocity Theorem.
2. The Reciprocity Theorem is valid for ___________
a) Non-Linear Time Invariant circuits b) Linear Time Invariant circuits c) Non-Linear Time Variant circuits d) Linear Time Variant circuits View Answer
Explanation: A reciprocal network comprises linear time-invariant bilateral elements. It is applicable to resistors, capacitors, inductors (including coupled inductors) and transformers. However, both dependent and independent sources are not permissible.
Explanation: \(R_{th}\) = [(2 + 4) || 6] + 12 = 15 Ω
\(I_S = \frac{45}{15}\) = 3 A
Now, by current division rule, we get, I = \(\frac{3 × 6}{12} = \frac{18}{12}\) = 1.5 A.
4. The Reciprocity Theorem is applicable for __________
a) Single-source networks b) Multi-source networks c) Both Single and Multi-source networks d) Neither Single nor Multi-source networks View Answer
Explanation: According to Reciprocity Theorem, the voltage source and the resulting current source may be interchanged without a change in current. Therefore the theorem is applicable only to single-source networks. It therefore cannot be employed in multi-source networks.
Explanation: Equivalent resistance, \(R_{EQ}\) = [(12 || 6) + 2 + 4] = 10 Ω
\(I_S = \frac{45}{10}\) = 4.5 A
Now, by using Current division rule, we get, I = \(\frac{4.5 × 6}{12+6} = \frac{27}{18}\) = 1.5 A.
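Both computations above can be verified in a few lines of Python (resistor and source values are taken from the worked explanations; the circuit figure itself is not reproduced here):

```python
def par(a, b):
    """Resistance of two resistors in parallel."""
    return a * b / (a + b)

# Source in the 12-ohm branch: R_th = [(2+4) || 6] + 12
R1 = par(2 + 4, 6) + 12
I1 = (45 / R1) * 6 / (6 + (2 + 4))   # current divider between 6 and (2+4)

# Source moved to the (2+4) branch: R_eq = (12 || 6) + 2 + 4
R2 = par(12, 6) + 2 + 4
I2 = (45 / R2) * 6 / (6 + 12)        # current divider between 6 and 12

print(R1, R2, I1, I2)  # 15.0 10.0 1.5 1.5
```

The same 1.5 A flows in both configurations, which is exactly what the Reciprocity Theorem demands.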
6. A circuit is given in the figure below. We can infer that ________
a) The circuit follows Reciprocity Theorem b) The circuit follows Millman’s Theorem c) The circuit follows Superposition Theorem d) The circuit follows Tellegen Theorem View Answer
Explanation: Let us consider this circuit,
\(R_{th}\) = [(2 + 4) || 6] + 12 = 15 Ω
\(I_S = \frac{45}{15}\) = 3 A
Now, by current division rule, we get, \(I_1 = \frac{3 × 6}{12} = \frac{18}{12}\) = 1.5 A.
Again, let us consider this circuit,
Equivalent resistance, \(R_{EQ}\) = [(12 || 6) + 2 + 4] = 10 Ω
\(I_S = \frac{45}{10}\) = 4.5 A
Now, by using Current division rule, we get, \(I_2 = \frac{4.5 × 6}{12+6} = \frac{27}{18}\) = 1.5 A.
Since \(I_1 = I_2\), the circuit follows Reciprocity Theorem.
Explanation: Equivalent Resistance \(R_{EQ}\) = 20 + [30 || (20 + (20 || 20))]
= 20 + [30 || (20 + \(\frac{20×20}{20+20}\))]
= 20 + [30 || (20 + 10)]
= 20 + [30 || 30]
= 20 + \(\frac{30 × 30}{30+30}\)
= 20 + 15 = 35 Ω
The current drawn by the circuit = \(\frac{200}{35}\) = 5.71 A
Now, by using current division rule, we get, \(I_{2Ω}\) = 1.43 A.
Explanation: Equivalent Resistance, \(R_{EQ}\) = [[((30 || 20) + 20) || 20] + 20]
= \(\Big[\Big[\left(\left(\frac{30 × 20}{30+20}\right) + 20\right) || 20\Big] + 20\Big]\)
= [[(12 + 20) || 20] + 20]
= [[32 || 20] + 20]
= \(\Big[\left(\frac{32 × 20}{32+20}\right) + 20\Big]\)
= [12.31 + 20] = 32.31 Ω
The current drawn by the circuit = \(\frac{200}{32.31}\) = 6.19 A
Now, by using current division rule, we get, \(I_{2Ω}\) = 1.43 A.
9. A circuit is given in the figure below. We can infer that ________
a) The circuit follows Reciprocity Theorem b) The circuit follows Millman’s Theorem c) The circuit follows Superposition Theorem d) The circuit follows Tellegen Theorem View Answer
Explanation: Let us consider this circuit,
Equivalent Resistance \(R_{EQ}\) = 20 + [30 || (20 + (20 || 20))] = 20 + [30 || (20 + \(\frac{20×20}{20+20}\))]
= 20 + [30 || (20 + 10)]
= 20 + [30 || 30]
= 20 + \(\frac{30 × 30}{30+30}\)
= 20 + 15 = 35 Ω
The current drawn by the circuit = \(\frac{200}{35}\) = 5.71 A
Now, by using current division rule, we get, \(I_1\) = 1.43 A.
Again, let us consider this circuit,
Equivalent Resistance, \(R_{EQ}\) = [[((30 || 20) + 20) || 20] + 20]
= \(\Big[\Big[\left(\left(\frac{30 × 20}{30+20}\right) + 20\right) || 20\Big] + 20\Big]\)
= [[(12 + 20) || 20] + 20]
= [[32 || 20] + 20]
= \(\Big[\left(\frac{32 × 20}{32+20}\right) + 20\Big]\)
= [12.31 + 20] = 32.31 Ω
The current drawn by the circuit = \(\frac{200}{32.31}\) = 6.19 A
Now, by using current division rule, we get, \(I_2\) = 1.43 A.
Since \(I_1 = I_2\), the circuit follows Reciprocity Theorem.
Explanation: Equivalent Resistance, \(R_{EQ}\) = 20 + [60 || 30]
= 20 + \(\frac{60 × 30}{60+30}\)
= 20 + 20 = 40 Ω
Total current from the source, I = \(\frac{120}{40}\) = 3 A
Now, by using current division rule, \(I_{3Ω} = \frac{3 × 60}{30+60}\) = 2 A.
Explanation: Equivalent resistance, \(R_{EQ}\) = [[20 || 60] + 30]
= \(\Big[\frac{20 × 60}{20+60} + 30\Big]\)
= [15 + 30] = 45 Ω
Total current = \(\frac{120}{45}\) = 2.67 A
Current through the 20 Ω resistor is \(I_{20Ω} = \frac{2.67 × 60}{60+20}\) = 2 A.
12. A circuit is given in the figure below. We can infer that ________
a) The circuit follows Reciprocity Theorem b) The circuit follows Millman’s Theorem c) The circuit follows Superposition Theorem d) The circuit follows Tellegen Theorem View Answer
Explanation: Let us consider this circuit,
Equivalent Resistance, \(R_{EQ}\) = 20 + [60 || 30]
= 20 + \(\frac{60 × 30}{60+30}\)
= 20 + 20 = 40 Ω
Total current from the source, I = \(\frac{120}{40}\) = 3 A
Now, by using current division rule, \(I_1 = \frac{3 × 60}{30+60}\) = 2 A.
Again, let us consider this circuit,
Equivalent resistance, \(R_{EQ}\) = [[20 || 60] + 30]
= \(\Big[\frac{20 × 60}{20+60} + 30\Big]\)
= [15 + 30] = 45 Ω
Total current = \(\frac{120}{45}\) = 2.67 A
Current through the 20 Ω resistor is \(I_2 = \frac{2.67 × 60}{60+20}\) = 2 A
Since \(I_1 = I_2\), the circuit follows Reciprocity Theorem.
Explanation: I + 0.9 = 10 I
Or, I = 0.1 A
\(V_{OC}\) = 3 × 10 I = 30 I
Or, \(V_{OC}\) = 3 V
Now, \(I_{SC}\) = 10 I = 1 A
\(R_{th}\) = 3/1 = 3 Ω.
Explanation: \(R_L = \sqrt{R_{TH}^2+X_{TH}^2} = \sqrt{3^2+4^2}\) = 5
Now, 110 = (6 + j8 + 5) \(I_1\) + 5\(I_2\)
And 90 = (6 + j8 + 5) \(I_2\) + 5\(I_1\)
∴ \(I_1\) = 5.5 – 2.75j and \(I_2\) = 4.5 – 2.2j
Total current in \(R_L\) = \(I_1 + I_2\) = (10 – 4.95j) A = 11.15 A
∴ Power absorbed by \(R_L\) = \(I^2R\) = 11.15\(^2\) × 5 = 621 W.
Explanation: Equivalent resistance of the circuit is = [{(3 + 2) || 5} + 10] = (2.5 + 10) = 12.5 Ω
Total current drawn by the circuit is \(I_T = \frac{50}{12.5}\) = 4 A
Current in 3 Ω resistor is \(I_3 = I_T × \frac{5}{5+5} = \frac{4 × 5}{10}\) = 2 A
\(V_{TH} = V_3\) = 3 × 2 = 6 V
\(R_{TH} = R_{AB}\) = [(2 + 5) || 3] = 2.1 Ω
For maximum power transfer \(R_L = R_{TH}\) = 2.1 Ω
∴ Current drawn by \(R_L\) is \(I_L = \frac{6}{2.1+2.1} = \frac{6}{4.2}\) = 1.42 A
∴ Power delivered to the load = \(I_L^2 R_L\) = (1.42)\(^2\)(2.1) = 4.2 W.
Sanfoundry Global Education & Learning Series – Network Theory.
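Carrying full precision through the same steps gives about 4.29 W (the rounded intermediate figure \(I_L = 1.42\) A used above yields 4.2 W). A quick Python check, using only the values quoted in the explanation:

```python
def par(a, b):
    """Resistance of two resistors in parallel."""
    return a * b / (a + b)

R_eq = par(3 + 2, 5) + 10        # 12.5 ohm
I_T = 50 / R_eq                  # 4 A total
I_3 = I_T * 5 / (5 + 5)          # 2 A through the 3-ohm resistor
V_TH = 3 * I_3                   # 6 V Thevenin voltage
R_TH = par(2 + 5, 3)             # 2.1 ohm Thevenin resistance
I_L = V_TH / (R_TH + R_TH)       # load matched to R_TH for max power transfer
P = I_L ** 2 * R_TH
print(round(P, 2))               # about 4.29 W at full precision
```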
|
We start with half-hourly \(u_*\)-filtered and gap-filled NEE_f values. For simplicity, this example uses data provided with the package and omits \(u_*\) threshold detection, instead applying a user-specified threshold.
With option FillAll = TRUE, an uncertainty, specifically the standard deviation, of the flux is estimated for each record during gap-filling and stored in variable NEE_uStar_fsd.
library(REddyProc)
library(dplyr)
EddyDataWithPosix <- Example_DETha98 %>%
filterLongRuns("NEE") %>%
fConvertTimeToPosix('YDH',Year = 'Year',Day = 'DoY', Hour = 'Hour')
EProc <- sEddyProc$new(
'DE-Tha', EddyDataWithPosix, c('NEE','Rg','Tair','VPD', 'Ustar'))
EProc$sMDSGapFillAfterUstar('NEE', uStarTh = 0.3, FillAll = TRUE)
results <- EProc$sExportResults()
summary(results$NEE_uStar_fsd)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.03535 1.69960 2.32796 2.74246 3.44417 24.55782
We can inspect, how the uncertainty scales with the flux magnitude.
plot( NEE_uStar_fsd ~ NEE_uStar_fall, slice(results, sample.int(nrow(results),400)))
Neglecting correlations among records, the uncertainty of the mean annual flux is computed by adding the variances. The mean is computed by \(m = \sum{x_i}/n\), and hence its standard deviation by \(sd(m) = \sqrt{Var(m)}= \sqrt{\sum{Var(x_i)}/n^2} = \sqrt{n \bar{\sigma^2}/n^2} = \sqrt{\bar{\sigma^2}}/\sqrt{n}\). This amounts to an approximate reduction of the average standard deviation \(\sqrt{\bar{\sigma^2}}\) by a factor of \(\sqrt{n}\).
results %>% filter(NEE_uStar_fqc == 0) %>% summarise(
nRec = sum(is.finite(NEE_uStar_f))
, varSum = sum(NEE_uStar_fsd^2, na.rm = TRUE)
, seMean = sqrt(varSum) / nRec
  , seMeanApprox = mean(NEE_uStar_fsd, na.rm = TRUE) / sqrt(nRec)
) %>% select(nRec, seMean, seMeanApprox)
## nRec seMean seMeanApprox
## 1 10901 0.02988839 0.02650074
Due to the large number of records, the estimated uncertainty is very low.
When observations are not independent of each other, the formula becomes \(Var(m) = s^2/n_{eff}\), where \(s^2 = \frac{n_{eff}}{n(n_{eff}-1)} \sum_{i=1}^n \sigma_i^2\), with the number of effective observations \(n_{eff}\) decreasing with the autocorrelation among records (Bayley 1946, Zieba 2011).
The average standard deviation \(\sqrt{\bar{\sigma^2_i}}\) now approximately decreases only by about \(\sqrt{n_{eff}}\):
\[ Var(m) = \frac{s^2}{n_{eff}} = \frac{\frac{n_{eff}}{n(n_{eff}-1)} \sum_{i=1}^n \sigma_i^2}{n_{eff}} = \frac{1}{n(n_{eff}-1)} \sum_{i=1}^n \sigma_i^2 \\ = \frac{1}{n(n_{eff}-1)} n \bar{\sigma^2_i} = \frac{\bar{\sigma^2_i}}{(n_{eff}-1)} \]
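The effective-sample-size idea itself is language-agnostic. A small Python sketch with the weighted-autocorrelation form of \(n_{eff}\) (following Bayley 1946 / Zieba 2011; the autocorrelation values below are hypothetical, not taken from the data above):

```python
def n_eff(n, acf):
    """Effective number of observations given autocorrelations
    rho_1..rho_K (Bayley & Hammersley 1946; Zieba 2011)."""
    s = sum((1 - k / n) * rho for k, rho in enumerate(acf, start=1))
    return n / (1 + 2 * s)

n = 10901                               # number of records, as in the example above
acf = [0.8 ** k for k in range(1, 11)]  # hypothetical decaying autocorrelation tail
print(round(n_eff(n, acf)))             # far fewer effective observations than n
```

With strong positive autocorrelation, \(n_{eff}\) is only a small fraction of \(n\), so the standard error of the mean shrinks much more slowly than \(1/\sqrt{n}\).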
First we need to quantify the error terms, i.e. the model-data residuals. For all records of good quality, we have an original measured value NEE_uStar_orig and a modelled value from MDS gap-filling, NEE_uStar_fall. The residual of bad-quality data is set to missing.
results <- EProc$sExportResults() %>%
mutate(
resid = ifelse(NEE_uStar_fqc == 0, NEE_uStar_orig - NEE_uStar_fall, NA )
)
Now we can inspect the autocorrelation of the errors.
acf(results$resid, na.action = na.pass, main = "")
The empirical autocorrelation function shows strong positive autocorrelation in the residuals up to a lag of 10 records.
Computation of the effective number of observations is provided by function computeEffectiveNumObs from package lognorm, based on the empirical autocorrelation function of the given model-data residuals.
library(lognorm)
autoCorr <- computeEffectiveAutoCorr(results$resid)
nEff <- computeEffectiveNumObs(results$resid, na.rm = TRUE)
c( nEff = nEff, nObs = sum(is.finite(results$resid)))
## nEff nObs
## 3870.283 10901.000
We see that the effective number of observations is only about a third of the number of observations.
Now we can use the formulas for the sum and the mean of correlated normally distributed variables to compute the uncertainty of the mean.
results %>% filter(NEE_uStar_fqc == 0) %>% summarise(
nRec = sum(is.finite(NEE_uStar_fsd))
, varMean = sum(NEE_uStar_fsd^2, na.rm = TRUE) / nRec / (!!nEff - 1)
, seMean = sqrt(varMean)
#, seMean2 = sqrt(mean(NEE_uStar_fsd^2, na.rm = TRUE)) / sqrt(!!nEff - 1)
, seMeanApprox = mean(NEE_uStar_fsd, na.rm = TRUE) / sqrt(!!nEff - 1)
) %>% select(seMean, seMeanApprox)
## seMean seMeanApprox
## 1 0.05016727 0.04448115
When aggregating daily respiration, the same principles hold.
However, when computing the number of effective observations, we recommend using the empirical autocorrelation function estimated on the longer time series of residuals (autoCorr computed above) in computeEffectiveNumObs, instead of estimating it from the residuals of each day.
results <- results %>% mutate(
DateTime = EddyDataWithPosix$DateTime
, DoY = as.POSIXlt(DateTime - 15*60)$yday # midnight belongs to the previous
)
aggDay <- results %>% group_by(DoY) %>%
summarise(
DateTime = first(DateTime)
, nRec = sum( NEE_uStar_fqc == 0, na.rm = TRUE)
, nEff = computeEffectiveNumObs(
resid, effAcf = !!autoCorr, na.rm = TRUE)
, NEE = mean(NEE_uStar_f, na.rm = TRUE)
, sdNEE = if (nEff == 0) NA_real_ else sqrt(
mean(NEE_uStar_fsd^2, na.rm = TRUE) / (nEff - 1))
, sdNEEuncorr = if (nRec == 0) NA_real_ else sqrt(
mean(NEE_uStar_fsd^2, na.rm = TRUE) / (nRec - 1))
)
aggDay
## # A tibble: 365 x 7
## DoY DateTime nRec nEff NEE sdNEE sdNEEuncorr
## <int> <dttm> <int> <dbl> <dbl> <dbl> <dbl>
## 1 0 1998-01-01 00:30:00 21 7.87 0.124 0.988 0.579
## 2 1 1998-01-02 00:30:00 7 3.66 0.00610 1.57 1.05
## 3 2 1998-01-03 00:30:00 0 0 0.0484 NA NA
## 4 3 1998-01-04 00:30:00 0 0 0.303 NA NA
## 5 4 1998-01-05 00:30:00 28 10.9 0.195 0.851 0.515
## 6 5 1998-01-06 00:30:00 48 18.0 0.926 0.615 0.370
## 7 6 1998-01-07 00:30:00 48 18.0 -0.337 0.566 0.340
## 8 7 1998-01-08 00:30:00 46 17.2 -0.139 0.541 0.325
## 9 8 1998-01-09 00:30:00 45 16.8 0.614 0.482 0.289
## 10 9 1998-01-10 00:30:00 36 13.5 0.242 0.646 0.386
## # ... with 355 more rows
The confidence bounds (±1.96 standard deviations) computed accounting for correlations are, in this case, about twice those computed neglecting correlations.
|
Lots of the answers so far have focused on the economic reasons why a floating city is impossible. But what about physical reasons?
Aerodynamic Levitation
As a first approximation, we can treat the city as an air bearing. There are a couple formulas that we can take from an intro fluid flow class to calculate the amount of airflow required to hold us up, assuming incompressible laminar flow.
$$\dot M \approx \frac{\pi b^3 \rho\sigma g}{3\mu}$$
Where $b$ is the distance in between our city and the ground, $\rho$ is air density and $\mu$ is viscosity, $\sigma$ is the load of our city and $g$ is gravity. Here are the values I assume:
- $g = 9.8~\text{m}/\text{s}^2$ (Earth standard gravity)
- $\rho = 1.225~\text{kg}/\text{m}^3$ and $\mu = 1.789\cdot 10^{-5}~\text{Pa}\cdot\text{s}$ (standard air at sea level)
- $\sigma = 1000~\text{kg}/\text{m}^2$, assuming the average density of stuff (buildings, dirt, etc.) in the city is around the same as water, and that if you flattened the city out it would be a meter high (this is a huge underestimate)
- $b = 1000~\text{ft}$, which is what my intuition tells me is a "reasonable" height, something like the spaceship in District 9
We can plug in these values and we get:
$$\dot M\approx 1.5\cdot 10^{15}~\text{t}/\text{s}$$
Yes, we need to move over a million billion tons of air per second to keep the city afloat, or enough to turn over the whole atmosphere in around three seconds. Now, this is obviously not going to be laminar flow anymore; we can calculate the air velocity at the edge of the city, assuming a diameter of $2R=45~\text{km}$:
$$v = \frac{b^2\sigma g}{\mu R} = 20\cdot 10^{6}~\text{km}/\text{s}$$
This is obviously wrong, since it's 60 times greater than the speed of light. However, it does tell us that we can't levitate the city this way.
We can try another approximation, this time using actuator disk theory. It tells us that the amount of power required for the city to hover is given by:
$$P \approx A\sqrt{\frac{\left(\sigma g\right)^3}{2\rho}}$$
Using the same values as before, we come up with a power of:
$$P \approx 1000~\text{TW}$$
... which is 50 to 100 times current global energy consumption.
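Plugging the numbers into the actuator-disk formula reproduces the estimate (the 22.5 km radius comes from the $2R = 45~\text{km}$ diameter above):

```python
import math

g = 9.8           # m/s^2, standard gravity
rho = 1.225       # kg/m^3, sea-level air density
sigma = 1000.0    # kg/m^2, areal density of the city
R = 22.5e3        # m, city radius

A = math.pi * R ** 2                               # disk area
P = A * math.sqrt((sigma * g) ** 3 / (2 * rho))    # hover power, actuator disk theory
print(P / 1e12)   # roughly 1000 TW
```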
Even if we could circumvent these issues, we'd still have to apply a couple of psi to the ground below to support the weight of the city. This would certainly flatten any fields the city flies over, and if the force is applied in even a slightly unstructured way, it could easily flatten buildings. In addition, the ground itself could give way when you fly over another city: imagine doubling the weight of every building, and the settling that would occur.
Hydrostatic Levitation
We can get around the ground pressure problems by replacing the weight of the air that's already there; that is, floating the city with balloons. As a best-case scenario, I'll assume that the balloons are filled with vacuum.
To see how this works, imagine cutting out a disk-shaped slab of air and replacing it with a rigid shell. If the total weight of the shell is equal to the weight of the air removed, the forces on the surrounding air will be exactly the same, and the people on the ground won't feel any pressure.
I'll use the same figures as before for our calculation: a base height of $1000~\text{ft}$ and an average mass of $1000~\text{kg}/\text{m}^2$ (around $1.4~\text{psi}$, or $0.1~\text{atm}$). We can set up the following equation relating the mass of a section of the city and the mass of the air it displaces:
$$m = \int \rho\ dV \\\sigma A = \int \rho\ dz\ dA \\\sigma = \int_{b}^{b+\Delta z}\rho\ dz$$
Using the US standard atmosphere, I get a value $\Delta z = 2900~\text{ft}$. If we use a lifting gas with density relative to air $\tilde\rho$, the equation becomes:
$$\sigma = (1-\tilde\rho)\int_{b}^{b+\Delta z}\rho\ dz$$
For helium with $\tilde\rho=14~\%$, we get $\Delta z = 3400~\text{ft}$. Of course, this is only enough to keep you $1000~\text{ft}$ above sea level. If you want to float above the tallest building in the mile-high city (or get from one side of the US to the other) you need to have $\Delta z = 4000~\text{ft}$ high balloons.
Pseudoscience (Aside)
It looks like we need to ignore hard-science if we want to make this work. Personally I would have your city levitated with a variant of a reactionless drive. The common spaceborne sci-fi variant pushes on a gravitational well, so that momentum is conserved but no reaction mass is expended. If this is possible, it may also be possible to "latch on" directly to the gravity well of a planet. It would essentially be a floating solid foundation (although it would experience tidal motion due to the influence of the Sun and Moon). The energy requirements would be zero until the city moves, and when it does the required power can be made as small as desired by reducing speed (although the total amount of energy required to lift the city a given height is fixed by its weight).
Generic Problems
Whatever method you use, there are a few more problems that I can foresee. First, your city will be a giant moving eclipse. Nobody wants an airship the size of Guam floating over their heads, even if it's just for a day: not cities, where there are lots of people to get angry; and certainly not in rural areas, where crops could be harmed by the lack of sunlight. Environmentalists will protest the disruption to the local ecosystem wherever you go.
Second, wind speeds increase rapidly with altitude, and temperature and pressure decrease; not to mention that low clouds would pass through the city like dense fog. Inhabitants of a floating city would experience worse weather than ground-dwellers at pretty much all times.
Third, the city would likely be subject to electrostatic charging by the same mechanism that causes clouds to become charged. At best, the city would act as a lightning conduit during storms. At worst, it might generate a few small lightning strikes when first passing over a tall building. (Yet another reason to refuse passage to this power-hungry/regular-hungry darkness-bringer of a city.)
Pretty much all these problems can be countered by floating close to sea level above somewhere with no people or plants. But in that case, they'd probably just drop it down the last 100 feet and float it on the ocean—much easier.
|
ISSN:
1930-5346
eISSN:
1930-5338
All Issues
Advances in Mathematics of Communications
May 2014 , Volume 8 , Issue 2
Abstract:
A problem of improving the accuracy of nonparametric entropy estimation for a stationary ergodic process is considered. New weak metrics are introduced, and relations between metrics, measures, and entropy are discussed. A new nonparametric entropy estimator is constructed based on the weak metrics; it has a parameter with which the estimator is optimized to reduce its bias. It is shown that the estimator's variance is upper-bounded by a nearly optimal Cramér-Rao lower bound.
Abstract:
This paper gives lower and upper bounds on the covering radius of codes over $\mathbb{Z}_{2^s}$ with respect to the homogeneous distance. We also determine the covering radius of various Repetition codes and Simplex codes (Type $\alpha$ and Type $\beta$) and their duals, and give bounds on the covering radii of MacDonald codes of both types over $\mathbb{Z}_4$.
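For tiny parameters, the covering radius with respect to the homogeneous distance can be brute-forced; the sketch below uses the standard homogeneous weight on $\mathbb{Z}_4$ and, as a hypothetical example, the length-3 repetition code:

```python
from itertools import product

# Standard homogeneous weight on Z4: w(0)=0, w(1)=w(3)=1, w(2)=2.
W = {0: 0, 1: 1, 2: 2, 3: 1}

def hom_dist(x, y):
    return sum(W[(a - b) % 4] for a, b in zip(x, y))

def covering_radius(code, n):
    # Largest homogeneous distance from any word of Z4^n
    # to its nearest codeword.
    return max(min(hom_dist(x, c) for c in code)
               for x in product(range(4), repeat=n))

n = 3
repetition = [(a,) * n for a in range(4)]  # {(a,...,a) : a in Z4}
r = covering_radius(repetition, n)
print(r)
```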
Abstract:
A set of quasi-uniform random variables $X_1,\ldots,X_n$ may be generated from a finite group $G$ and $n$ of its subgroups, with the corresponding entropic vector depending on the subgroup structure of $G$. It is known that the set of entropic vectors obtained by considering arbitrary finite groups is much richer than the one provided just by abelian groups. In this paper, we start to investigate in more detail different families of non-abelian groups with respect to the entropic vectors they yield. In particular, we address the question of whether a given non-abelian group $G$ and some fixed subgroups $G_1,\ldots,G_n$ end up giving the same entropic vector as some abelian group $A$ with subgroups $A_1,\ldots,A_n$, in which case we say that $(A, A_1, \ldots, A_n)$ represents $(G, G_1, \ldots, G_n)$. If for any choice of subgroups $G_1,\ldots,G_n$, there exists some abelian group $A$ which represents $G$, we refer to $G$ as being abelian (group) representable for $n$. We completely characterize dihedral, quasi-dihedral and dicyclic groups with respect to their abelian representability, as well as the case when $n=2$, for which we show a group is abelian representable if and only if it is nilpotent. This problem is motivated by understanding non-linear coding strategies for network coding, and network information theory capacity regions.
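The construction in the first sentence can be sketched directly: for subgroups $G_1,\ldots,G_n$ of a finite group $G$, the quasi-uniform entropic vector has entries $H(X_S) = \log\left(|G| / |\cap_{i \in S} G_i|\right)$. A minimal illustration with the Klein four-group follows; the choice of group and subgroups is mine, for illustration only:

```python
from itertools import chain, combinations
from math import log2

# Entropic vector of the quasi-uniform variables induced by subgroups:
#   H(X_S) = log|G| - log|intersection of G_i for i in S|.
# Groups are modeled as plain sets of elements; the example below is
# abelian (pairs mod 2 under addition), so no composition table is needed.
def entropic_vector(G, subgroups):
    n = len(subgroups)
    vec = {}
    for S in chain.from_iterable(combinations(range(n), r)
                                 for r in range(1, n + 1)):
        inter = set(G)
        for i in S:
            inter &= subgroups[i]
        vec[S] = log2(len(G)) - log2(len(inter))
    return vec

# Klein four-group Z2 x Z2 with its two coordinate subgroups:
G = {(a, b) for a in (0, 1) for b in (0, 1)}
G1 = {(a, 0) for a in (0, 1)}
G2 = {(0, b) for b in (0, 1)}
v = entropic_vector(G, [G1, G2])
print(v)  # H(X1) = H(X2) = 1, H(X1,X2) = 2: two independent fair bits
```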
Abstract:
To resist Binary Decision Diagrams (BDD) based attacks, a Boolean function should have a high BDD size. The hidden weighted bit function (HWBF), introduced by Bryant in 1991, seems to be the simplest function with exponential BDD size. In [28], Wang et al. investigated the cryptographic properties of the HWBF and found that it is a very good candidate for being used in real ciphers. In this paper, we modify the HWBF and construct two classes of functions with very good cryptographic properties (better than the HWBF). The new functions are balanced, with almost optimum algebraic degree and satisfy the strict avalanche criterion. Their nonlinearity is higher than that of the HWBF. We investigate their algebraic immunity, BDD size and their resistance against fast algebraic attacks, which seem to be better than those of the HWBF too. The new functions are simple, can be implemented efficiently, have high BDD sizes and rather good cryptographic properties. Therefore, they might be excellent candidates for constructions of real-life ciphers.
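For reference, Bryant's hidden weighted bit function is $\mathrm{HWB}(x) = x_{wt(x)}$ (1-indexed, with output 0 on the all-zero input, where $wt$ is the Hamming weight); the balancedness mentioned above is easy to verify exhaustively for small $n$:

```python
from itertools import product

def hwb(x):
    # Hidden weighted bit function: output the bit whose (1-based)
    # index equals the Hamming weight of the input; 0 on weight 0.
    w = sum(x)
    return 0 if w == 0 else x[w - 1]

# Balancedness check: exactly half of all 2^n inputs map to 1,
# since sum over w>=1 of C(n-1, w-1) = 2^(n-1).
for n in (2, 3, 4, 5):
    ones = sum(hwb(x) for x in product((0, 1), repeat=n))
    print(n, ones, 2 ** (n - 1))
```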
Abstract:
In this paper, we focus on the design of unitary space-time codes achieving full diversity using division algebras, and on the systematic computation of their minimum determinant. We also give examples of such codes with high minimum determinant. Division algebras allow us to obtain higher rates than known constructions based on finite groups.
Abstract:
The values of the homogeneous weight are determined for finite Frobenius rings that are a direct product of local Frobenius rings. This is used to investigate the partition induced by this weight and its dual partition under character-theoretic dualization. A characterization is given of those rings for which the induced partition is reflexive or even self-dual.
Abstract:
A binary sequence family ${\mathcal S}$ of length $n$ and size $M$ can be characterized by the maximum magnitude of its nontrivial aperiodic correlation, denoted as $\theta_{\max} ({\mathcal S})$. The lower bound on $\theta_{\max} ({\mathcal S})$ was originally presented by Welch and improved later by Levenshtein. In this paper, a Fourier transform approach is introduced in an attempt to improve Levenshtein's lower bound. Through this approach, a new expression of the Levenshtein bound is developed. Supported by numerical results, it is found that $\theta_{\max} ^2 ({\mathcal S}) > 0.3584 n-0.0810$ for $M=3$ and $n \ge 4$, and $\theta_{\max} ^2 ({\mathcal S}) > 0.4401 n-0.1053$ for $M=4$ and $n \ge 4$, respectively, both of which are tighter than the original Welch and Levenshtein bounds.
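As a quick illustration, the sketch below computes $\theta_{\max}$ by brute force for a hypothetical family of three length-4 sequences (rows of a $4\times 4$ Hadamard matrix) and checks it against the numerical bound quoted above for $M=3$:

```python
# Maximum nontrivial aperiodic correlation magnitude of a small
# binary (+1/-1) sequence family, checked against the quoted
# lower bound theta_max^2 > 0.3584*n - 0.0810 for M = 3, n >= 4.
def aperiodic_corr(a, b, tau):
    n = len(a)
    return sum(a[i] * b[i + tau] for i in range(n - tau))

def theta_max(family):
    n = len(family[0])
    best = 0
    for i, a in enumerate(family):
        for j, b in enumerate(family):
            for tau in range(n):
                if i == j and tau == 0:
                    continue  # trivial in-phase autocorrelation
                best = max(best, abs(aperiodic_corr(a, b, tau)),
                           abs(aperiodic_corr(b, a, tau)))
    return best

# Three mutually orthogonal rows of a 4x4 Hadamard matrix:
family = [(1, -1, 1, -1), (1, 1, -1, -1), (1, -1, -1, 1)]
theta = theta_max(family)
bound = 0.3584 * 4 - 0.0810
print(theta, theta ** 2, bound)  # theta_max^2 exceeds the bound
```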
Abstract:
In the present article we propose a reduction point algorithm for any Fuchsian group in the absence of parabolic transformations. We extend to this setting classical algorithms for Fuchsian groups with parabolic transformations, such as the flip flop algorithm, known for the modular group $\mathbf{SL}(2, \mathbb{Z})$, whose roots go back to [9]. The research has been partially motivated by the need to design more efficient codes for wireless data transmission and by the study of Maass waveforms from a computational point of view.
|
Category:Supremum Metric

Let $S$ be a set.
Let $M = \left({A', d'}\right)$ be a metric space.
Let $A$ be the set of all bounded mappings $f: S \to A'$.
Let $d: A \times A \to \R$ be the function defined as:
$\displaystyle \forall f, g \in A: d \left({f, g}\right) := \sup_{x \mathop \in S} d' \left({f \left({x}\right), g \left({x}\right)}\right)$
where $\sup$ denotes the supremum.
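When $S$ is finite the supremum is a maximum, so $d$ can be evaluated directly; a minimal sketch for real-valued maps (the sample functions below are arbitrary choices, not from the definition):

```python
# Supremum metric for maps from a finite set S into the real line,
# with the usual metric d'(x, y) = |x - y| on the codomain.
def sup_metric(f, g, S, d_prime=lambda x, y: abs(x - y)):
    # For finite S, the supremum of d'(f(x), g(x)) is a maximum.
    return max(d_prime(f(x), g(x)) for x in S)

S = range(5)
f = lambda x: x * x     # arbitrary sample mappings
g = lambda x: 2 * x
print(sup_metric(f, g, S))  # sup of |x^2 - 2x| over {0,...,4} = 8
```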
$d$ is known as the supremum metric on $A$.

Subcategories
This category has the following 4 subcategories, out of 4 total.
Pages in category "Supremum Metric"
The following 8 pages are in this category, out of 8 total.
S

Supremum Metric is Metric
Supremum Metric on Bounded Continuous Mappings is Metric
Supremum Metric on Bounded Real Functions on Closed Interval is Metric
Supremum Metric on Bounded Real Sequences is Metric
Supremum Metric on Bounded Real-Valued Functions is Metric
Supremum Metric on Continuous Real Functions is Metric
Supremum Metric on Continuous Real Functions is Subspace of Bounded
Supremum Metric on Differentiability Class is Metric
|
Acoustic Topology Optimization with Thermoviscous Losses

Today, guest blogger René Christensen of GN Hearing discusses including thermoviscous losses in the topology optimization of microacoustic devices.
Topology optimization helps engineers design applications in an optimized manner with respect to certain a priori objectives. Mainly used in structural mechanics, topology optimization is also used for thermal, electromagnetics, and acoustics applications. One physics area that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses in microacoustics topology optimization.
Standard Acoustic Topology Optimization
A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries.
The governing equation is the standard wave equation with material parameters given in terms of the density $\rho$ and the bulk modulus $K$. For topology optimization, the density and the bulk modulus are interpolated via a variable, $\epsilon$. This interpolation variable ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as the solid isotropic material with penalization (SIMP) model, as shown in Figure 1.
Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.
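As a rough illustration of this kind of interpolation, the sketch below applies a SIMP-type power law between air and solid parameter values; the solid values and the penalization power p are hypothetical placeholders, not numbers from the post:

```python
# SIMP-style interpolation between air and solid material parameters.
# eps = 0 -> air, eps = 1 -> solid; intermediate values are penalized
# by the power p so the optimizer is pushed toward binary designs.
def simp(eps, val_air, val_solid, p=3):
    return val_air + (eps ** p) * (val_solid - val_air)

rho_air, K_air = 1.2, 1.4e5          # kg/m^3, Pa (approximate air values)
rho_solid, K_solid = 2700.0, 7.0e10  # hypothetical aluminum-like solid

for eps in (0.0, 0.5, 1.0):
    print(eps, simp(eps, rho_air, rho_solid), simp(eps, K_air, K_solid))
```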
Using this approach will work for applications where the so-called thermoviscous losses (close to walls, in the acoustic boundary layers) are of little importance. The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.

Thermoviscous Acoustics (Microacoustics)
For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.
Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.
An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot.
The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.
Governing Equations of Thermoviscous Acoustics
Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent conditions. These equations are implemented in the Thermoviscous Acoustics physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this formulation is not suited for topology optimization, where certain assumptions can be used. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient as

where the viscous field $\Psi_{v}$ is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity in the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between temperature variation and acoustic pressure can be written in a general form (Ref. 1) as
where the thermal field $\Psi_{h}$ is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.
Topology Optimization for Thermoviscous Acoustics Applications
For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustics topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section.
For simplicity, we look only at wave propagation in a waveguide of constant cross section. This is equivalent to the so-called Low Reduced Frequency model, which will be familiar to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation 1 as
(1)
where $\Delta_{cd}$ is the Laplacian in the cross-sectional direction only. For certain simple geometries, the fields can be calculated analytically (as done in the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.
In standard acoustics topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for the thermoviscoacoustic topology optimization, I came up with a heuristic approach, where the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are
and
These boundary conditions give us insight into how to perform the optimization procedure, since an air-solid interface could be represented by the former boundary condition and an air-air interface by the latter. We write the governing equation in a more general manner:
We already know that for air domains, $(a_v, f_v) = (1,1)$, since that gives us the original equation (1). If we instead set $a_v$ to a large value, so that the gradient term becomes insignificant, and set $f_v$ to zero, we get

This corresponds exactly to the boundary condition for no-slip boundaries, just as at a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, $(a_v, f_v)$ should have the values ("large", 0). Thus, we have established our interpolation extremes:
and
I carried out a comparison between the explicit boundary conditions and the interpolation extremes, with the test geometry shown in Figure 3. On the left side, boundary conditions are used, whereas in the adjacent domains on the right, the suggested values of $a_v$ and $f_v$ are input.

Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.
The field in all domains is now calculated for a frequency with a boundary layer thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.
Figure 4: The resulting field with contours for the setup in Figure 3.
The actual interpolation between the extremes is done via SIMP or RAMP schemes (Ref. 2), for example, as with standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure variable via equations. With this, the world's first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.
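A minimal sketch of what a RAMP-type interpolation between the two extremes could look like; the functional form, the cap a_max, and the parameter q below are hypothetical choices for illustration, not the values used in the post:

```python
# Heuristic sketch: drive the coefficient pair (a_v, f_v) from the air
# values (1, 1) to the solid values ("large", 0) as the interpolation
# variable eps goes from 0 to 1.
def ramp(eps, q=5.0):
    # RAMP interpolation: 0 at eps=0, 1 at eps=1, convex in between.
    return eps / (1.0 + q * (1.0 - eps))

def coefficients(eps, a_max=1e7):
    r = ramp(eps)
    a_v = 1.0 + r * (a_max - 1.0)  # 1 for air, "large" for solid
    f_v = 1.0 - r                  # 1 for air, 0 for solid
    return a_v, f_v

print(coefficients(0.0))  # (1.0, 1.0) -> air extreme
print(coefficients(1.0))  # (10000000.0, 0.0) -> solid extreme
```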
Optimizing an Acoustic Loss Response
Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonally shaped cross section has a certain acoustic loss due to viscosity effects. Each side length in the hexagon is approximately 1.1 mm, which gives an area equivalent to a circular cross section with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. We now seek an optimal topology that gives a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:
Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.
A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.
Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.
The normalized acoustic loss for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.
Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.
For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example.
This novel topology optimization strategy can be expanded to a more general 1D method, where pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics to focus on improving topology optimization, in both universities and industry. I hope to see many advances in this area in the future.
References

1. W.R. Kampinga, Y.H. Wijnant, A. de Boer, "An Efficient Finite Element Model for Viscothermal Acoustics," Acta Acustica united with Acustica, vol. 97, pp. 618–631, 2011.
2. M.P. Bendsoe, O. Sigmund, Topology Optimization: Theory, Methods, and Applications, Springer, 2003.

About the Guest Author
René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
|
Search
Now showing items 1-1 of 1
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
|
ISSN:
1078-0947
eISSN:
1553-5231
All Issues
Discrete & Continuous Dynamical Systems - A
October 2005 , Volume 13 , Issue 5
Special Issue
Recent Development on Differential Equations and Dynamical Systems: Part II
Abstract:
We will classify the path-connected components of spaces of Sobolev maps between manifolds and study the strong and weak density of smooth maps in the spaces of Sobolev maps for the case where the domain manifold has nonempty boundary, as well as Dirichlet problems.
Abstract:
We study non-hyperbolic repellers of diffeomorphisms derived from transitive Anosov diffeomorphisms with unstable dimension 2 through a Hopf bifurcation. Using some recent abstract results about non-uniformly expanding maps with holes, by ourselves and by Dysman, we show that the Hausdorff dimension and the limit capacity (box dimension) of the repeller are strictly less than the dimension of the ambient manifold.
Abstract:
We perform a systematic multiscale analysis for the 2-D incompressible Euler equation with rapidly oscillating initial data using a Lagrangian approach. The Lagrangian formulation enables us to capture the propagation of the multiscale solution in a natural way. By making an appropriate multiscale expansion in the vorticity-stream function formulation, we derive a well-posed homogenized equation for the Euler equation. Based on the multiscale analysis in the Lagrangian formulation, we also derive the corresponding multiscale analysis in the Eulerian formulation. Moreover, our multiscale analysis reveals some interesting structure for the Reynolds stress term, which provides a theoretical base for establishing systematic multiscale modeling of 2-D incompressible flow.
Abstract:
In the study of systems which combine slow and fast motions which depend on each other (the fully coupled setup), whenever the averaging principle can be justified, this usually can be done only in the sense of $L^1$-convergence on the space of initial conditions. When fast motions are hyperbolic (Axiom A) flows or diffeomorphisms (as well as expanding endomorphisms) for each frozen slow variable, this form of the averaging principle was derived in [19] and [20], relying on some large deviations arguments which can be applied only in the Axiom A or uniformly expanding case. Here we give another proof which seems to work in a more general framework, in particular when fast motions are some partially hyperbolic or some nonuniformly hyperbolic dynamical systems, or nonuniformly expanding endomorphisms.
Abstract:
I show that the dynamical determinant, associated to an Anosov diffeomorphism, is the Fredholm determinant of the corresponding Ruelle-Perron-Frobenius transfer operator acting on appropriate Banach spaces. As a consequence it follows, for example, that the zeroes of the dynamical determinant describe the eigenvalues of the transfer operator and the Ruelle resonances, and that, for $C^\infty$ Anosov diffeomorphisms, the dynamical determinant is an entire function.
Abstract:
We prove the existence of reaction-diffusion traveling fronts in mean zero space-time periodic shear flows for nonnegative reactions including the classical KPP (Kolmogorov-Petrovsky-Piskunov) nonlinearity. For the KPP nonlinearity, the minimal front speed is characterized by a variational principle involving the principal eigenvalue of a space-time periodic parabolic operator. Analysis of the variational principle shows that adding a mean-zero space-time periodic shear flow to an existing mean-zero space-periodic shear flow leads to speed enhancement. Computation of KPP minimal speeds is performed based on the variational principle and a spectrally accurate discretization of the principal eigenvalue problem. It shows that the enhancement is monotone decreasing in temporal shear frequency, and that the total enhancement from pure reaction-diffusion obeys quadratic and linear laws at small and large shear amplitudes.
Abstract:
We study the propagation of a front arising as the asymptotic (macroscopic) limit of a model in spatial ecology in which the invasive species propagates by "jumps". The evolution of the order parameter marking the location of the colonized/uncolonized sites is governed by a (mesoscopic) integro-differential equation. This equation has structure similar to the classical Fisher or KPP equation, i.e., it admits two equilibria, a stable one at $k$ and an unstable one at $0$, describing respectively the colonized and uncolonized sites. We prove that, after rescaling, the solution exhibits a sharp front separating the colonized and uncolonized regions, and we identify its (normal) velocity. In some special cases the front follows a geometric motion. We also consider the same problem in heterogeneous and oscillating habitats. Our methods, which are based on the analysis of a Hamilton-Jacobi equation obtained after a change of variables, follow arguments which were already used in the study of the analogous phenomena for the Fisher/KPP equation.
Abstract:
We consider the partial analogue of the usual measurable Livsic theorem for Anosov diffeomorphisms in the context of non-uniformly hyperbolic diffeomorphisms (Theorem 2). Our main application of this theorem is to the density of absolutely continuous measures (Theorem 1).
Abstract:
The linearized Primitive Equations with vanishing viscosity are considered. Some new boundary conditions (of transparent type) are introduced in the context of a modal expansion of the solution, which consist of an infinite sequence of integral equations. Applying linear semigroup theory, existence and uniqueness of solutions is established. The case with nonhomogeneous boundary values, encountered in numerical simulations in limited domains, is also discussed.
Abstract:
We consider the long time behavior of moments of solutions, and of the solutions themselves, to the dissipative Quasi-Geostrophic flow (QG) with sub-critical powers. The flow under consideration is described by the nonlinear scalar equation
$\frac{\partial \theta}{\partial t} + u\cdot \nabla \theta + \kappa (-\Delta)^{\alpha}\theta =f$, $\theta|_{t=0}=\theta_0 $
Rates of decay are obtained for moments of the solutions, and lower bounds of decay rates of the solutions are established.
Abstract:
In this paper we develop the theory of polymorphisms of measure spaces, which is a generalization of the theory of measure-preserving transformations. We describe the main notions and discuss relations to the theory of Markov processes, operator theory, ergodic theory, etc. We formulate the important notion of quasi-similarity and consider quasi-similarity between polymorphisms and automorphisms.
The question is as follows: is it possible to have a quasi-similarity between a measure-preserving automorphism $T$ and a polymorphism $\Pi$ (that is not an automorphism)? In less definite terms: what kind of equivalence can exist between deterministic and random (Markov) dynamical systems? We give the answer: every nonmixing prime polymorphism is quasi-similar to an automorphism with positive entropy, and every $K$-automorphism $T$ is quasi-similar to a polymorphism $\Pi$ that is a special random perturbation of the automorphism $T$.
Abstract:
It is shown that periodic solutions of a delay differential equation approach a square wave as a parameter becomes large. The equation models short-term price fluctuations. The proof relies on the fact that the branches of the unstable manifold at equilibrium tend to the periodic orbit.
|
Search
Now showing items 1-10 of 18
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
|
Search
Now showing items 1-10 of 51
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE
(Elsevier, 2017-11)
We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...
System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...
Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions
(Elsevier, 2017-11)
Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as a function of event multiplicity. The interesting relative increase ...
|
A Belyi-extender (or dessinflateur) is a rational function $q(t) = \frac{f(t)}{g(t)} \in \mathbb{Q}(t)$ that defines a map
\[ q : \mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} \] unramified outside $\{ 0,1,\infty \}$, and has the property that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$.
An example of such a Belyi-extender is the power map $q(t)=t^n$, which is totally ramified in $0$ and $\infty$ and we clearly have that $q(0)=0,~q(1)=1$ and $q(\infty)=\infty$.
The composition of two Belyi-extenders is again an extender, and we get a rather mysterious monoid $\mathcal{E}$ of all Belyi-extenders.
Very little seems to be known about this monoid. Its units form the symmetric group $S_3$, which is the automorphism group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$, and mapping an extender $q$ to its degree gives a monoid map $\mathcal{E} \rightarrow \mathbb{N}_+^{\times}$ to the multiplicative monoid of positive natural numbers.
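Both properties are easy to check on small examples. The sketch below (sympy assumed available; the helper names `compose` and `ext_value` are my own) composes the power map $t^3$ with the map $((t-2)/t)^2$ that appears later in this post, and verifies that degrees multiply and that $\{0,1,\infty\}$ is preserved:

```python
# Sanity check that composing Belyi-extenders multiplies degrees
# and preserves the set {0, 1, oo}. Uses sympy.
from sympy import symbols, limit, oo, degree, together, fraction

t = symbols('t')

def compose(q, p):
    """Compose two rational maps: returns q(p(t))."""
    return together(q.subs(t, p))

def ext_value(q, a):
    """Evaluate q at a point of P^1, handling the point at infinity via limits."""
    return limit(q, t, a)

# Two Belyi-extenders: the power map t^3 and Sullivan's example ((t-2)/t)^2.
q1 = t**3
q2 = ((t - 2)/t)**2

c = compose(q1, q2)                     # degree should be 3 * 2 = 6
num, den = fraction(together(c))
assert max(degree(num, t), degree(den, t)) == 6

# The composition still maps {0, 1, oo} into {0, 1, oo}.
for a in (0, 1, oo):
    assert ext_value(c, a) in (0, 1, oo)
```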
If one relaxes the condition $q(t) \in \mathbb{Q}(t)$ to $q(t)$ being defined over the algebraic closure $\overline{\mathbb{Q}}$, then such maps/functions have been known for some time under the name of dynamical Belyi-functions, for example in Zvonkin’s Belyi Functions: Examples, Properties, and Applications (section 6).
Here, one is interested in the complex dynamical system of iterations of $q$, that is, the limit-behaviour of the orbits
\[ \{ z,q(z),q^2(z),q^3(z),… \} \] for all complex numbers $z \in \mathbb{C}$.
In general, the 2-sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ has a finite number of open sets (the Fatou domains) where the limit behaviour of the sequence of iterates is similar, and the union of these open sets is dense in $S^2$. The complement of the Fatou domains is the Julia set of the function, of which we might expect a nice fractal picture.
Let’s take again the power map $q(t)=t^n$. For a complex number $z$ lying outside the unit circle, the sequence $\{ z, z^n, z^{n^2},\ldots \}$ has limit point $\infty$, and for those lying inside the unit circle, this limit is $0$. So, here we have two Fatou domains (interior and exterior of the unit circle) and the Julia set of the power map is the (boring?) unit circle.
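A quick numerical illustration of these two Fatou domains (plain Python; the escape threshold and step count are arbitrary choices of mine):

```python
# Orbits of the power map q(t) = t^n: starting points inside the unit
# circle tend to 0, starting points outside escape to infinity.
def iterate(z, n, steps=60):
    for _ in range(steps):
        z = z ** n
        if abs(z) > 1e12:          # treat as escaped to infinity
            return float('inf')
    return abs(z)

n = 2
assert iterate(0.9 + 0.1j, n) < 1e-6           # inside the unit circle -> 0
assert iterate(1.1 - 0.2j, n) == float('inf')  # outside -> infinity
```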
Fortunately, there are indeed dynamical Belyi-maps having a more pleasant looking Julia set, such as this one
But then, many dynamical Belyi-maps (and Belyi-extenders) are systems of an entirely different nature: they are completely chaotic, meaning that their Julia set is the whole $2$-sphere! Nowhere do we find an open region where points share the same limit behaviour… (the butterfly effect).
There’s a nice sufficient condition for chaotic behaviour, due to Dennis Sullivan, which is pretty easy to check for dynamical Belyi-maps.
A periodic point for $q(t)$ is a point $p \in S^2 = \mathbb{P}^1_{\mathbb{C}}$ such that $p = q^m(p)$ for some $m \geq 1$. A critical point is one such that either $q(p) = \infty$ or $q'(p)=0$.
Sullivan’s result is that $q(t)$ is completely chaotic when all its critical points $p$ become eventually periodic, that is some $q^k(p)$ is periodic,
but $p$ itself is not periodic.
For a Belyi-map $q(t)$ the critical points are either complex numbers mapping to $\infty$ or the inverse images of $0$ or $1$ (that is, the black or white dots in the dessin of $q(t)$) which are not leaf-vertices of the dessin.
Let’s do an example, already used by Sullivan himself:
\[ q(t) = (\frac{t-2}{t})^2 \] This is a Belyi-function, and in fact a Belyi-extender as it is defined over $\mathbb{Q}$ and we have that $q(0)=\infty$, $q(1)=1$ and $q(\infty)=1$. The corresponding dessin is (inverse images of $\infty$ are marked with an $\ast$)
The critical points $0$ and $2$ are not periodic, but they become eventually periodic:
\[
2 \rightarrow^q 0 \rightarrow^q \infty \rightarrow^q 1 \rightarrow^q 1 \] and $1$ is periodic.
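The orbit above is easy to replay on a machine. Here is a small sketch using exact rational arithmetic, with a string marker of my choosing standing in for the point at infinity:

```python
# Tracing the critical orbit of Sullivan's example q(t) = ((t-2)/t)^2
# on the Riemann sphere; INF is our marker for the point at infinity.
from fractions import Fraction

INF = 'inf'

def q(p):
    if p == INF:                 # q(oo) = limit of ((t-2)/t)^2 = 1
        return Fraction(1)
    if p == 0:                   # 0 is a pole, so its image is infinity
        return INF
    p = Fraction(p)
    return ((p - 2) / p) ** 2

orbit = [Fraction(2)]
for _ in range(4):
    orbit.append(q(orbit[-1]))

# 2 -> 0 -> oo -> 1 -> 1 : the critical orbit lands on the fixed point 1.
assert orbit == [2, 0, INF, 1, 1]
```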
For a general Belyi-extender $q$, we have that the image under $q$ of any critical point is among $\{ 0,1,\infty \}$ and because we demand that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$, every critical point of $q$ eventually becomes periodic.
If we want to avoid the corresponding dynamical system to be completely chaotic, we have to ensure that one of the periodic points among $\{ 0,1,\infty \}$ (and there is at least one of those) must be critical.
Let’s consider the very special Belyi-extenders $q$ having the additional property that $q(0)=0$, $q(1)=1$ and $q(\infty)=\infty$; then all three of these points are periodic. So, the system is always completely chaotic unless the black dot at $0$ is not a leaf-vertex of the dessin, or the white dot at $1$ is not a leaf-vertex, or the degree of the region determined by the starred $\infty$ is at least two.
Going back to the mystery Manin-Marcolli sub-monoid of $\mathcal{E}$, it might explain why it is a good idea to restrict to very special Belyi-extenders having associated dessin a $2$-coloured tree, for then the periodic point $\infty$ is critical (the degree of the outside region is at least two), and therefore the conditions of Sullivan’s theorem are not satisfied. So, these Belyi-extenders do not necessarily have to be completely chaotic. (tbc)
|
Yesterday, Jan Stienstra gave a talk at the ARTS entitled “Quivers, superpotentials and Dimer Models”. He started off by telling that the talk was based on a paper he put on the arXiv, Hypergeometric Systems in two Variables, Quivers, Dimers and Dessins d’Enfants, but that he was not going to say a thing about dessins but would rather focus on the connection with superpotentials instead… pleasing some members of the public, while driving others to utter despair.
Anyway, it gave me the opportunity to figure out for myself what dessins might have to do with dimers, whatever these beasts are. Soon enough he put on a slide containing the definition of a dimer and from that moment on I was lost in my own thoughts… realizing that a dessin d’enfant had to be a dimer for the Dedekind tessellation of its associated Riemann surface! and a few minutes later I could slap myself on the head for not having thought of this before:
There is a natural way to associate to a Farey symbol (aka a permutation representation of the modular group) a quiver and a superpotential (aka a necklace) defining (conjecturally) a Calabi-Yau algebra! Moreover, different embeddings of the cuboid tree diagrams in the hyperbolic plane may (again conjecturally) give rise to all sorts of arty-farty fanshi-wanshi dualities…
I’ll give here the details of the simplest example I worked out during the talk and will come back to general procedure later, when I’ve done a reference check. I don’t claim any originality here and probably all of this is contained in Stienstra’s paper or in some physics-paper, so if you know of a reference, please leave a comment. Okay, remember the Dedekind tessellation ?
So, all hyperbolic triangles we will encounter below are colored black or white. Now, take a Farey symbol and consider its associated special polygon in the hyperbolic plane. If we start with the Farey symbol
\[ \xymatrix{\infty \ar@{-}_{(1)}[r] & 0 \ar@{-}_{\bullet}[r] & 1 \ar@{-}_{(1)}[r] & \infty} \]
we get the special polygonal region bounded by the thick edges, the vertical edges are identified as are the two bottom edges. Hence, this fundamental domain has 6 vertices (the 5 blue dots and the point at $i \infty $) and 8 hyperbolic triangles (4 colored black, indicated by a black dot, and 4 white ones).
Right, now let us associate a quiver to this triangulation (which embeds the quiver in the corresponding Riemann surface). The vertices of the triangulation are also the vertices of the quiver (so in our case we are going for a quiver with 6 vertices). Every hyperbolic edge in the triangulation gives one arrow in the quiver between the corresponding vertices. The orientation of the arrow is determined by the color of a triangle of which it is an edge: if the triangle is black, we run around its edges counter-clockwise, and if the triangle is white we run over its edges clockwise (that is, the orientation of the arrow is independent of the choice of triangle used to determine it). In our example, there is one arrow directed from the vertex at $i$ to the vertex at $0$, whether you use the black triangle on the left to determine the orientation or the white triangle on the right. If we do this for all edges in the triangulation we arrive at the quiver below
where x, y and z are the three finite vertices on the $\frac{1}{2}$-axis from bottom to top, and where I’ve used the physics-convention for double arrows, that is, there are two F-arrows, two G-arrows and two H-arrows. Observe that the quiver is of Calabi-Yau type, meaning that there are as many arrows coming into each vertex as there are arrows leaving it.
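The Calabi-Yau condition on a quiver is a purely combinatorial check on in- and out-degrees. The sketch below verifies it for a made-up toy quiver (two opposite 3-cycles), not the actual quiver of the figure above:

```python
# Check the Calabi-Yau condition (in-degree == out-degree at every vertex)
# for a quiver given as a list of (source, target) arrows.
from collections import Counter

# Toy example: a 3-cycle a -> b -> c -> a together with its reverse.
arrows = [('a', 'b'), ('b', 'c'), ('c', 'a'),
          ('b', 'a'), ('c', 'b'), ('a', 'c')]

def is_calabi_yau_type(arrows):
    outdeg = Counter(src for src, _ in arrows)
    indeg = Counter(tgt for _, tgt in arrows)
    vertices = set(outdeg) | set(indeg)
    return all(outdeg[v] == indeg[v] for v in vertices)

assert is_calabi_yau_type(arrows)
assert not is_calabi_yau_type([('a', 'b')])   # a lone arrow fails the condition
```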
Now that we have our quiver we determine the superpotential as follows. Fix an orientation on the Riemann surface (for example counter-clockwise) and sum over all black triangles the product of the edge-arrows counterclockwise MINUS the sum over all white triangles of the product of the edge-arrows counterclockwise. So, in our example we have the cubic superpotential
$IH’B+HAG+G’DF+FEC-BHI-H’G’A-GFD-CEF’ $
From this we get the associated noncommutative algebra, which is the quotient of the path algebra of the above quiver modulo the following ‘commutativity relations’
$\begin{cases} GH &=G’H’ \\ IH’ &= IH \\ FE &= F’E \\ F’G’ &= FG \\ CF &= CF’ \\ EC &= GD \\ G’D &= EC \\ HA &= DF \\ DF’ &= H’A \\ AG &= BI \\ BI &= AG’ \end{cases} $
and morally this should be a Calabi-Yau algebra (( can someone who knows more about CYs verify this? )). This concludes the walk-through of the procedure. Summarizing: to every Farey symbol one associates a Calabi-Yau quiver and superpotential, possibly giving a Calabi-Yau algebra!
|
Differentiating a function is usually regarded as a discrete operation: we use the first derivative of a function to determine the slope of its tangent line, and we differentiate twice if we want to know the curvature. We can even differentiate a function a negative number of times—ie integrate it—and thanks to that we measure the area under a curve. But why stop there? Is calculus limited to discrete operations, or is there a way to define the half derivative of a function? Is there even an interpretation or an application of the half derivative?
Fractional calculus is a concept as old as the traditional version of calculus, but if we have always thought about things using only whole numbers then suddenly using fractions might seem like taking the Hogwarts Express from King’s Cross station. However, fractional calculus opens up a whole new area of beautiful and magical maths.
How do we interpret the half derivative of a function? Since we are only halfway between the first derivative of a function and not differentiating it at all, then maybe the result should also be somewhere between the two? For example, if we have the function $\mathrm{e}^{cx}$, with $c>0$, then we can write any derivative of the function as
\begin{equation*} \mathrm{D}^{n} \mathrm{e}^{cx} = c^{n} \mathrm{e}^{cx}, \end{equation*} which works for $n=1, 2,\ldots$, but also works for integrating the function ($n = -1$) and doing nothing to it at all ($n=0$). As an aside, the symbol $\mathrm{D}^n$ might seem like a weird notation to represent the $n$-th derivative, or $\mathrm{D}^{-n}$ to represent the $n$-th integral of a function, but it’s just an easy way to represent derivatives and integrals at the same time. So if we have a real number $\nu$, why not express the fractional derivative of $\mathrm{e}^{\,cx}$ as \begin{equation*} \mathrm{D}^{\nu} \mathrm{e}^{cx} = c^{\nu} \mathrm{e}^{cx}? \end{equation*}
With this new derivative we know that if $\nu$ is an integer then the fractional expression has the same result as the traditional version of the derivative or integral. That seems like an important thing, right? If we want to generalise something, then we cannot change what was already there.
If the fractional derivative is a linear operator (ie if $a$ is a constant then $\mathrm{D}^{1/2} a \hspace{1pt} f (x) = a\mathrm{D}^{1/2} f(x)$) , then we would also obtain that
\begin{equation*} \mathrm{D}^{1/2} \left[\mathrm{D}^{1/2} \mathrm{e}^{cx} \right] = \mathrm{D}^{1/2} \left[c^{1/2} \mathrm{e}^{cx} \right] = c^{1/2} \mathrm{D}^{1/2} \left[ \mathrm{e}^{cx} \right] = c \mathrm{e}^{cx}, \end{equation*} so half differentiating the half derivative gives us the same result as just applying the first derivative. In fact for this very first definition of a fractional derivative, we get that \begin{equation*} \mathrm{D}^{\nu} \left[\mathrm{D}^{\mu} \mathrm{e}^{cx} \right] = \mathrm{D}^{\nu + \mu} \mathrm{e}^{cx} = c^{\nu + \mu} \mathrm{e}^{cx} \end{equation*} for all real values of $\nu$ and $\mu$. Great: our fractional derivative has at least some properties that sound like necessary things. Differentiating a derivative or integrating an integral should just give us the expected derivative or appropriate integral.
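These exponent rules are easy to check numerically. The sketch below (plain Python; the function name is mine) treats $\mathrm{D}^{\nu} \mathrm{e}^{cx} = c^{\nu} \mathrm{e}^{cx}$ as a definition and confirms the semigroup behaviour:

```python
# Numerical check of the rule D^nu e^{cx} = c^nu e^{cx}: applying the
# half derivative twice agrees with the ordinary first derivative.
import math

def frac_D(nu, c, x):
    return c ** nu * math.exp(c * x)

c, x = 2.0, 0.3
once_half = frac_D(0.5, c, x)            # "half derivative" at x
twice_half = c ** 0.5 * once_half        # apply D^{1/2} again
assert math.isclose(twice_half, c * math.exp(c * x))   # equals (e^{cx})'

# More generally the orders add: D^{1/2} D^{1/4} = D^{3/4}.
assert math.isclose(frac_D(0.25, c, x) * c ** 0.5, frac_D(0.75, c, x))
```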
This way of defining a fractional derivative for the exponential function is perhaps a good introductory example, but some important questions need to be asked. Firstly, is this the only way to define the half derivative for $\mathrm{e}^{cx}$ such that it has the above properties, or could we come up with a different definition? Secondly, what happens if $c<0$? For example, with $c = -1$, we would get that $\mathrm{D}^{1/2} \mathrm{e}^{-x} = \mathrm{i} \mathrm{e}^{-x}$, which is imaginary. So the fractional derivative of a real-valued function could be complex or imaginary? That sounds like dark arts to me. And finally, how does that $\mathrm{D}^{\nu} f(x)$ work if we are not talking about the exponential function, but if we have a polynomial or, even simpler, a constant function, like $f(x) = 4$?
If we start with $f(x)=4$ (a boring, horizontal line), we know that its first derivative is $\mathrm{D}^1 f(x)=0$, so should the half derivative be something like $\mathrm{D}^{1/2} f(x) = 2$? Then, if we half differentiate that expression again, we obtain a zero on the left-hand side (since $\mathrm{D}^{1} f(x)=0$) and the half derivative of a constant function (in this case $g(x)=2$), on the right-hand side. But this is certainly not right! We said that the half derivative of a constant function is half the value of that constant, but now we obtain that it is zero! There is nothing worse for a mathematician than a system that is not consistent.
The best way is to begin with a more formal definition. Perhaps after having to integrate a function thousands and thousands of times, Augustin-Louis Cauchy discovered in the 19th century a way in which he could write the repeated integral of a function in a very elegant way:
\begin{equation*} \mathrm{D}^{-n}f(x)= \frac{1}{(n-1)!}\int^{x}_{a} (x-t)^{n-1} f(t) \, \mathrm{d}t. \end{equation*}
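Cauchy's formula can be verified symbolically for a concrete case. The sympy sketch below compares the pedestrian double integral of $f(x)=x^2$ (with $a=0$) against the single-integral formula with $n=2$:

```python
# Verify Cauchy's repeated-integral formula for f(x) = x^2, a = 0, n = 2.
from sympy import symbols, integrate, factorial, simplify

x, t, s = symbols('x t s', positive=True)
f = t ** 2

# Integrate twice, the pedestrian way.
first = integrate(f, (t, 0, s))          # s^3 / 3
twice = integrate(first, (s, 0, x))      # x^4 / 12

# Cauchy's single-integral formula with n = 2.
n = 2
cauchy = integrate((x - t) ** (n - 1) * f, (t, 0, x)) / factorial(n - 1)

assert simplify(twice - cauchy) == 0     # both give x^4 / 12
```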
Not only is this a beautiful and simple formula, it also gives us a way to write any iterated integration (although to actually solve it, we would usually need to do some not-so-beautiful integration by parts). So why don’t we just change that number $n$ to a fraction, like 9¾? Everything in that expression would work smoothly… except for that dodgy factorial! The value of $n$ factorial (written as $n!$, possibly the worst symbol ever used in maths since now we cannot express a number with surprise) is the product of the numbers from 1 through to $n$, ie $1 \times 2 \times \cdots \times n$. What, then, would the factorial of 9¾ be? Maybe close to 10! but not quite there yet?
Luckily for us, an expression for the factorial of a real number has intrigued mathematicians for centuries, and brilliant minds like Euler and Gauss, amongst others, have worked on this issue. They defined the gamma function, $\Gamma$, in such a way that it has the two properties we need: first, $\Gamma(n) = (n-1)!$, so we can use $\Gamma(n)$ instead of the factorial. Second—and even more importantly, given that we are dealing with fractions here—the function is well-defined and continuous for every positive real number, so we can now compute the factorial of 9¾, which is only 57% of the value of 10!. Now, we can write the repeated integral as \[ \mathrm{D}^{-\nu}f(x)= \frac{1}{\Gamma(\nu)}\int^{x}_{a} (x-t)^{\nu-1} f(t) \, \mathrm{d}t, \] which gives the same result as before when $\nu$ is a positive integer and is well-defined when $\nu$ is not an integer. The integral above is known as the fractional integral of the function $f$. Awesome!
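As a sanity check (pure Python; the quadrature scheme, step count and test values are my choices), half-integrating the constant function $f(x)=1$ gives $2\sqrt{x}/\sqrt{\pi}$, and half-integrating that again recovers the ordinary integral $x$:

```python
# Numerical fractional half-integral with base point 0. The substitution
# t = x - u^2 removes the (x - t)^{-1/2} singularity:
#     D^{-1/2} f(x) = (2 / sqrt(pi)) * int_0^{sqrt(x)} f(x - u^2) du.
from math import gamma, sqrt, pi, isclose, factorial

def half_integral(f, x, steps=20000):
    h = sqrt(x) / steps
    # Midpoint rule on the smooth, substituted integrand.
    total = sum(f(x - (h * (k + 0.5)) ** 2) for k in range(steps)) * h
    return 2 * total / sqrt(pi)

x = 2.0
half = half_integral(lambda t: 1.0, x)
assert isclose(half, 2 * sqrt(x) / sqrt(pi), rel_tol=1e-6)  # known closed form

# Half-integrating twice recovers the ordinary integral int_0^x 1 dt = x.
again = half_integral(lambda t: 2 * sqrt(t) / sqrt(pi), x)
assert isclose(again, x, rel_tol=1e-4)

# And the factorial of 9 3/4, via the gamma function, is about 57% of 10!.
assert isclose(gamma(10.75) / factorial(10), 0.57, abs_tol=0.02)
```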
What happens if we take the derivative of the repeated integral? Easy! We get the fractional derivative, right? Not quite, since we have two options: we could either differentiate the original function first and then take the fractional integral, or we could fractionally integrate first and then take the derivative. Damn! Both definitions are equally valid and we mathematicians hate having two definitions for the same thing. But are they even the same thing? If we differentiate first and then take the fractional integral—known as the Caputo derivative—we don’t necessarily get the same result as if we fractionally integrate a function first and then take its derivative. The latter is called the Riemann–Liouville derivative or simply the fractional derivative since it is the one more frequently used.
As an example, let’s look at the 9¾ derivative of a polynomial, say $f(x) = x^{9}$. The 9¾ Caputo derivative is zero, since we first differentiate $x^{9}$ ten times, which gives zero, and then take the ¼ integral of the result, which is still zero; but the Riemann–Liouville derivative is
\begin{equation*} \mathrm{D}^{9\text{¾}}f(x)= \frac{9!}{\Gamma(\frac{1}{4})} x^{-3/4}, \end{equation*} which is clearly different to the Caputo derivative. Two things are to be noted here. The fractional part is only contained in the integral, so in order to obtain both of the 9¾ derivatives of a function we need to quarter integrate the tenth derivative or differentiate the ¼ integral ten times. Also, and very importantly in fractional calculus, the fractional integral depends on its integration limits (just as in the traditional version of calculus) but since the fractional derivative is defined in terms of the fractional integral, the fractional derivatives also depend on the limits.
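The Riemann–Liouville value can be reproduced with the standard power rule for base point $0$, $\mathrm{D}^{\nu} x^{k} = \frac{\Gamma(k+1)}{\Gamma(k+1-\nu)} x^{k-\nu}$ (a sympy sketch):

```python
# The 9 3/4 Riemann-Liouville derivative of x^9 via the power rule,
# compared with the Caputo route, where the tenth ordinary derivative
# of x^9 already vanishes.
from sympy import symbols, gamma, Rational, diff, factorial, simplify

x = symbols('x', positive=True)
k, nu = 9, Rational(39, 4)            # nu = 9 3/4

rl = gamma(k + 1) / gamma(k + 1 - nu) * x ** (k - nu)
expected = factorial(9) / gamma(Rational(1, 4)) * x ** Rational(-3, 4)
assert simplify(rl - expected) == 0

# Caputo: differentiating x^9 ten times first gives 0, so the Caputo
# 9 3/4 derivative is 0.
assert diff(x ** k, x, 10) == 0
```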
There are many applications of fractional calculus in, for example, engineering and physics. Interestingly, most of the applications have emerged in the last twenty years or so, and it has allowed a different approach to topics such as viscoelastic damping, chaotic systems and even acoustic wave propagation in biological tissue.
Perhaps fractional calculus is a bit tricky to interpret, seeming at first to be a weird generalisation of calculus but for me, just thinking about the 9¾ derivative of a function was like discovering the entry into a whole new world between platforms 9 and 10. Certainly, there is some magic hidden behind fractional calculus!
|
Measurement of $W$ boson angular distributions in events with high transverse momentum jets at $\sqrt{s}=$ 8 TeV using the ATLAS detector
(Elsevier, 2017-02)
The $W$ boson angular distribution in events with high transverse momentum jets is measured using data collected by the ATLAS experiment from proton--proton collisions at a centre-of-mass energy $\sqrt{s}=$ 8 TeV at the ...
Search for new resonances decaying to a $W$ or $Z$ boson and a Higgs boson in the $\ell^+ \ell^- b\bar b$, $\ell \nu b\bar b$, and $\nu\bar{\nu} b\bar b$ channels with $pp$ collisions at $\sqrt s = 13$ TeV with the ATLAS detector
(Elsevier, 2017-01)
A search is presented for new resonances decaying to a $W$ or $Z$ boson and a Higgs boson in the $\ell^+ \ell^- b\bar b$, $\ell\nu b\bar b$, and $\nu\bar{\nu} b\bar b$ channels in $pp$ collisions at $\sqrt s = 13$ TeV with ...
Search for dark matter in association with a Higgs boson decaying to $b$-quarks in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector
(Elsevier, 2017-02)
A search for dark matter pair production in association with a Higgs boson decaying to a pair of bottom quarks is presented, using 3.2 $fb^{-1}$ of $pp$ collisions at a centre-of-mass energy of 13 TeV collected by the ATLAS ...
Measurement of the prompt $J/\psi$ pair production cross-section in pp collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector
(Springer, 2017-02)
The production of two prompt $J/\psi$ mesons, each with transverse momenta $p_{\mathrm{T}}>8.5$ GeV and rapidity $|y| < 2.1$, is studied using a sample of proton-proton collisions at $\sqrt{s} = 8$ TeV, corresponding to ...
Search for heavy resonances decaying to a $Z$ boson and a photon in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
(Elsevier, 2017-01)
This Letter presents a search for new resonances with mass larger than 250 GeV, decaying to a $Z$ boson and a photon. The dataset consists of an integrated luminosity of 3.2 fb$^{-1}$ of $pp$ collisions collected at $\sqrt{s}=13$ TeV ...
Measurement of forward-backward multiplicity correlations in lead-lead, proton-lead, and proton-proton collisions with the ATLAS detector
(American Physical Society, 2017-06)
Two-particle pseudorapidity correlations are measured in $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV Pb+Pb, $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV $p$+Pb, and $\sqrt{s}$ = 13 TeV $pp$ collisions at the LHC, with total integrated luminosities ...
Performance of the ATLAS trigger system in 2015
(Springer, 2017-05)
During 2015 the ATLAS experiment recorded $3.8 \mathrm{fb}^{-1}$ of proton--proton collision data at a centre-of-mass energy of $13 \mathrm{TeV}$. The ATLAS trigger system is a crucial component of the experiment, responsible ...
Search for lepton-flavour-violating decays of the Higgs and $Z$ bosons with the ATLAS detector
(Springer, 2017-02)
Direct searches for lepton flavour violation in decays of the Higgs and $Z$ bosons with the ATLAS detector at the LHC are presented. The following three decays are considered: $H\to e\tau$, $H\to\mu\tau$, and $Z\to\mu\tau$. ...
Measurement of the $t\bar{t}Z$ and $t\bar{t}W$ production cross sections in multilepton final states using 3.2 fb$^{-1}$ of $pp$ collisions at $\sqrt{s}$ =13 TeV with the ATLAS detector
(Springer, 2017-01)
A measurement of the $t\bar{t}Z$ and $t\bar{t}W$ production cross sections in final states with either two same-charge muons, or three or four leptons (electrons or muons) is presented. The analysis uses a data sample of ...
A measurement of the calorimeter response to single hadrons and determination of the jet energy scale uncertainty using LHC Run-1 $pp$-collision data with the ATLAS detector
(Springer, 2017-01)
A measurement of the calorimeter response to isolated charged hadrons in the ATLAS detector at the LHC is presented. This measurement is performed with 3.2 nb$^{-1}$ of proton--proton collision data at $\sqrt{s}=7$ TeV ...
|