May 16th, 2016, 08:34 AM
# 1
Senior Member
Joined: Dec 2014
From: The Asymptote
Posts: 142
Thanks: 6
Math Focus: Certainty
Temperature Gradient
You are part of a team of engineers designing an electric power plant that makes use of the temperature gradient in the ocean. The system is to operate between 20.0°C (surface-water temperature) and 5.00°C (water temperature at a depth of about 1 km).
I'm asked to find energy/hour in TJ/hour
$\displaystyle P=\frac{W}{t} \Rightarrow W=Pt$
$\displaystyle e=\frac{W}{Q_H} \Rightarrow Q_H=\frac{W}{e}$
$\displaystyle e = 1 - \frac{273+5}{273+20} = 0.051$
$\displaystyle Q_H=\frac{Pt}{e}=\frac{70\times 10^6\cdot 3600}{0.051}=4.9\times 10^{12}\ \mathrm{J}$
$\displaystyle Energy(E)=Q_H=4.9 TJ$ correct?
Then again, it asks for TJ/h, therefore $\displaystyle Energy(E)=Q_H=4.9/3600=1.36\times10^{-3}\ \mathrm{TJ/h}$
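The arithmetic above can be checked numerically. A sketch (the 70 MW power figure is the value substituted in the calculation above, and is exactly what the second post asks about):

```python
# Carnot efficiency between the two reservoir temperatures (in kelvin)
T_hot = 273 + 20.0   # surface water
T_cold = 273 + 5.0   # water at ~1 km depth
e = 1 - T_cold / T_hot            # ~0.0512

# Heat drawn from the warm water over one hour, assuming P = 70 MW
P = 70e6      # W (assumed, as in the substitution above)
t = 3600.0    # s, one hour
Q_H = P * t / e
print(e, Q_H / 1e12)  # efficiency, and Q_H in TJ
```

Note that since $t$ here is one hour, this $Q_H$ is already an energy per hour.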
Thank you in advance
Last edited by hyperbola; May 16th, 2016 at 08:39 AM.
May 17th, 2016, 04:59 AM
# 2
Senior Member
Joined: Apr 2014
From: Glasgow
Posts: 2,164
Thanks: 736
Math Focus: Physics, mathematical modelling, numerical and computational solutions
Where did you get the value for P?
|
@DavidReed the notion of a "general polynomial" is a bit strange. The general polynomial over a field always has Galois group $S_n$, even if there is no polynomial over the field with Galois group $S_n$
Hey guys. Quick question. What would you call it when the period/amplitude of a cosine/sine function is given by another function? E.g. $y=x^2\sin(e^x)$. I refer to them as variable amplitude and period, but a Google search for "variable period cosine" doesn't turn up the right sort of equation
@LucasHenrique I hate them; I tend to find algebraic proofs more elegant than ones from analysis. They are tedious. Analysis is the art of showing you can make things as small as you please. The last two characters of every proof are $< \epsilon$
I enjoyed developing the lebesgue integral though. I thought that was cool
But since every singleton except 0 is open, and the union of open sets is open, it follows that all intervals of the form $(a,b)$, $(0,c)$, $(d,0)$ are also open. Thus we can use these three classes of intervals as a base, which then intersect to give the nonzero singletons?
uh wait a sec...
... I need arbitrary intersection to produce singletons from open intervals...
hmm... 0 does not even have a nbhd, since any set containing 0 is closed
I have no idea how to deal with points having empty nbhd
o wait a sec...
the open sets of any topology must include the whole set itself
so I guess the nbhd of 0 is $\Bbb{R}$
Btw, looking at this picture, I think the alternate name for this class of topologies, the British Rail topology, is quite fitting (with the help of this WfSE to interpret, of course: mathematica.stackexchange.com/questions/3410/…)
Since, as Leaky has noticed, every point's nearest point other than itself is 0, to get from A to B you go via 0. The null line is then like a railway line which connects all the points together in the shortest time
So going from a to b directly is no more efficient than going from a to 0 and then from 0 to b
hmm...
$d(A \to B \to C) = d(A,B)+d(B,C) = |a|+|b|+|b|+|c|$
$d(A \to 0 \to C) = d(A,0)+d(0,C)=|a|+|c|$
so the distance of travel depends on where the starting point is. If the starting point is 0, then the distance only increases linearly with the value of the destination
But if the starting point is nonzero, then the distance picks up an extra $2|b|$ for every intermediate stop $b$
Combining with the animation in the WfSE, it means that in such a space, if one attempts to travel directly to the destination at, say, 3 m/s, then for every metre forward the actual distance covered at 3 m/s decreases (as illustrated by the shrinking open ball of fixed radius)
Only when travelling via the origin does this penalty in travelling distance not apply
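The comparison above can be made concrete with a small sketch of the metric (assuming the usual "British Rail" definition $d(a,b)=|a|+|b|$ for $a \neq b$ and $d(a,a)=0$):

```python
def d(a, b):
    """British Rail metric on the real line: all travel goes via 0."""
    return 0.0 if a == b else abs(a) + abs(b)

a, b, c = 2.0, 3.0, 5.0
via_b = d(a, b) + d(b, c)   # |a| + 2|b| + |c| = 13
via_0 = d(a, 0) + d(0, c)   # |a| + |c| = 7
print(via_b, via_0)
```

The intermediate stop at $b$ costs an extra $2|b|$, matching the two displayed sums.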
More interesting things can be said about slight generalisations of this metric:
Hi, looking a graph isomorphism problem from perspective of eigenspaces of adjacency matrix, it gets geometrical interpretation: question if two sets of points differ only by rotation - e.g. 16 points in 6D, forming a very regular polyhedron ...
To test if two sets of points differ by rotation, I thought to describe them as intersection of ellipsoids, e.g. {x: x^T P x = 1} for P = P_0 + a P_1 ... then generalization of characteristic polynomial would allow to test if our sets differ by rotation ...
1D interpolation: finding a polynomial satisfying $\forall_i\ p(x_i)=y_i$ can be written as a system of linear equations whose matrix has the well-known Vandermonde determinant $\det=\prod_{i<j} (x_i-x_j)$. Hence, the interpolation problem is well defined as long as the system of equations is determined ($\d...
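The Vandermonde fact quoted above is easy to check exactly on a tiny example (a sketch; it uses the convention $\prod_{i<j}(x_j-x_i)$, which for four nodes agrees in sign with the product written above, and the determinant is nonzero exactly when the $x_i$ are distinct):

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Leibniz-formula determinant (fine for tiny matrices)."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

xs = [Fraction(v) for v in (1, 2, 4, 7)]
V = [[x ** j for j in range(len(xs))] for x in xs]   # Vandermonde matrix
lhs = det(V)
rhs = Fraction(1)
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        rhs *= xs[j] - xs[i]
print(lhs == rhs)  # True
```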
Any alg geom guys on? I know zilch about alg geom to even start analysing this question
Meanwhile I am going to analyse the SR metric later using open balls, after the chat proceeds a bit
To add to gj255's comment: The Minkowski metric is not a metric in the sense of metric spaces but in the sense of a metric of Semi-Riemannian manifolds. In particular, it can't induce a topology. Instead, the topology on Minkowski space as a manifold must be defined before one introduces the Minkowski metric on said space. — baluApr 13 at 18:24
grr, I thought I could get some more intuition in SR by using open balls
tbf there’s actually a third equivalent statement which the author does make an argument about, but they say nothing substantive about the first two.
The first two statements go like this : Let $a,b,c\in [0,\pi].$ Then the matrix $\begin{pmatrix} 1&\cos a&\cos b \\ \cos a & 1 & \cos c \\ \cos b & \cos c & 1\end{pmatrix}$ is positive semidefinite iff there are three unit vectors with pairwise angles $a,b,c$.
And all it has in the proof is the assertion that the above is clearly true.
I've a mesh specified as an half edge data structure, more specifically I've augmented the data structure in such a way that each vertex also stores a vector tangent to the surface. Essentially this set of vectors for each vertex approximates a vector field, I was wondering if there's some well k...
Consider $a,b$ both irrational and the interval $[a,b]$
Assuming the axiom of choice and CH, I can define an $\aleph_1$-enumeration of the irrationals by labelling them with ordinals from 0 all the way to $\omega_1$
It would seem we could have a cover $\bigcup_{\alpha < \omega_1} (r_{\alpha},r_{\alpha+1})$. However, the rationals are countable, thus we cannot have uncountably many disjoint open intervals, which means this union is not disjoint
This means we can only have countably many disjoint open intervals, so some irrationals are not in the union, but uncountably many of them will be
If I consider an open cover of the rationals in [0,1], the sum of whose length is less than $\epsilon$, and then I now consider [0,1] with every set in that cover excluded, I now have a set with no rationals, and no intervals.One way for an irrational number $\alpha$ to be in this new set is b...
Suppose you take an open interval I of length 1, divide it into countable sub-intervals (I/2, I/4, etc.), and cover each rational with one of the sub-intervals.Since all the rationals are covered, then it seems that sub-intervals (if they don't overlap) are separated by at most a single irrat...
(For ease of construction of enumerations, WLOG, the interval [-1,1] will be used in the proofs) Let $\lambda^*$ be the Lebesgue outer measure. We previously proved that $\lambda^*(\{x\})=0$ where $x \in [-1,1]$ by covering it with the open cover $(-a,a)$ for some $a \in (0,1]$ and then noting there are nested open intervals whose infima tend to zero.
We also knew that by using the union $[a,b] = \{a\} \cup (a,b) \cup \{b\}$ for some $a,b \in [-1,1]$ and countable subadditivity, we can prove $\lambda^*([a,b]) = b-a$. Alternately, by using the theorem that $[a,b]$ is compact, we can construct a finite cover consisting of overlapping open intervals, then subtract away the overlaps to avoid double counting, or we can take the interval $(a,b)$ where $a<-1<1<b$ as an open cover and then consider the infimum of this interval such that $[-1,1]$ is still covered. Regardless of which route you take, the result is a finite sum whi…
We also knew that one way to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ is to take the union of all singletons that are rationals. Since there are only countably many of them, by countable subadditivity this gives us $\lambda^*(\Bbb{Q}\cap [-1,1]) = 0$. We also knew that one way to compute $\lambda^*(\Bbb{I}\cap [-1,1])$ is to use $\lambda^*(\Bbb{Q}\cap [-1,1])+\lambda^*(\Bbb{I}\cap [-1,1]) = \lambda^*([-1,1])$ and thus deduce $\lambda^*(\Bbb{I}\cap [-1,1]) = 2$
However, what I am interested here is to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ and $\lambda^*(\Bbb{I}\cap [-1,1])$ directly using open covers of these two sets. This then becomes the focus of the investigation to be written out below:
We first attempt to construct an open cover $C$ for $\Bbb{I}\cap [-1,1]$ in stages:
First denote an enumeration of the rationals as follows:
$\frac{1}{2},-\frac{1}{2},\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3}, \frac{1}{4},-\frac{1}{4},\frac{3}{4},-\frac{3}{4},\frac{1}{5},-\frac{1}{5}, \frac{2}{5},-\frac{2}{5},\frac{3}{5},-\frac{3}{5},\frac{4}{5},-\frac{4}{5},...$ or in short:
Actually wait: since, as the sequence grows, any rational of the form $\frac{p}{q}$ with $|p-q| > 1$ will be somewhere in between two consecutive terms, and the sequence $\{\frac{n+1}{n+2}-\frac{n}{n+1}\}$ tends to zero as $n \to \infty$, it follows that all the intervals will have infimum zero
However, any interval must contain uncountably many irrationals, so (somehow) the infimum of the union of them all is nonzero. Need to figure out how this works...
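For the companion computation $\lambda^*(\Bbb{Q}\cap[-1,1])=0$ via explicit open covers, a quick sketch: cover the $n$-th rational in the enumeration above by an open interval of length $\epsilon/2^n$, so the cover's total length never exceeds $\epsilon$:

```python
from fractions import Fraction
from math import gcd

def rationals_pm():
    """Enumerate the non-zero rationals in (-1, 1) in the order used
    above: 1/2, -1/2, 1/3, -1/3, 2/3, -2/3, 1/4, -1/4, ..."""
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield Fraction(p, q)
                yield Fraction(-p, q)
        q += 1

gen = rationals_pm()
first = [next(gen) for _ in range(6)]      # matches the list above

# Total length of the first 100 covering intervals is already < eps.
eps = Fraction(1, 10)
total = sum(eps / 2 ** n for n in range(1, 101))
print(first, total < eps)
```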
Let's say that for $N$ clients, Lotta will take $d_N$ days to retire.
For $N+1$ clients, clearly Lotta will have to make sure all the first $N$ clients don't feel mistreated. Therefore, she'll take $d_N$ days to make sure they are not mistreated. Then she visits client $N+1$. Obviously that client won't feel mistreated anymore. But now all the first $N$ clients are mistreated and, therefore, she'll start her algorithm once again and take (by supposition) $d_N$ days to make sure all of them are not mistreated. Therefore we have the recurrence $d_{N+1} = 2d_N + 1$,
where $d_1 = 1$.
Yet we have $1 \to 2 \to 1$, which has $3 = d_2 \neq 2^2$ steps.
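The recurrence is easy to check numerically; with $d_1 = 1$ and $d_{N+1} = 2d_N + 1$, the closed form is $d_N = 2^N - 1$ (a quick sketch):

```python
def d(n):
    """Days for n clients under the recurrence d_1 = 1, d_{N+1} = 2*d_N + 1."""
    days = 1
    for _ in range(n - 1):
        days = 2 * days + 1
    return days

# The closed form is 2^N - 1, not 2^N.
assert all(d(n) == 2 ** n - 1 for n in range(1, 15))
print(d(2))  # 3, as in the 1 -> 2 -> 1 example
```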
|
Here is the URL: https://github.com/avaneev/biteopt
I've tested it on numerous global optimization benchmark functions (included), and on real-world hyperparameter optimization problems I have. It seems to work quite well, except that, in comparison to deterministic methods, it is necessary to make several attempts with different random seeds, so the iteration budget may be high. But the stochasticity of this method gives it a chance to solve problems which can't be sufficiently solved by deterministic methods. Anyway, most benchmark functions are solved in 1 attempt given enough iteration budget.
It works best for non-convex problems, and can also solve convex problems, though of course more slowly than deterministic methods. It can also solve non-linearly constrained problems, but such constraints increase convergence time considerably (though this application was not tested thoroughly).
Still working on the method.
I would like to hear comments from users who have some practical models (e.g. black-box hyperparameter optimization) that still need to be solved acceptably - whether this method works for their models or not, possibly with a description of the model.
Here is the description of the method. The algorithm consists of the following elements:
A cost-ordered population of previous solutions is maintained. A solution is an independent parameter vector which is evolved towards a better solution. On every iteration, the best solution is evolved. $$x_\text{new}=x_\text{best}$$ Below, i is either equal to rand(1, N) or in the range [1; N], depending on the AllpProb probability. Probabilities are defined in the range [0; 1] and in many instances in the code were replaced with simple resetting counters for more efficiency. Parameter values are internally normalized to the [0; 1] range and, to stay in this range, are wrapped in a special manner before each function evaluation. The algorithm's hyper-parameters (probabilities) were pre-selected and should not be changed.
Depending on the RandProb probability, a single (or all) parameter value randomization is performed using the "bitmask inversion" operation. $$mask= 2^{1+\lfloor(0.999999997-rand(0\ldots1)^4 )\cdot MantSize\rfloor}-1$$ $$MantMult=2^{MantSize}$$ $$x_\text{new}[i] = \frac{\lfloor x_\text{new}[i]\cdot MantMult \rfloor \bigotimes mask }{MantMult}$$ Plus, with
the CentProb probability, the random "step in the right direction" operation is performed using the centroid vector, twice. $$m_1=\text{rand}(0\ldots1)\cdot CentSpan$$ $$x_\text{new}[i]=x_\text{new}[i]-m_1(x_\text{new}[i]-x_\text{cent}[i])$$ $$m_2=\text{rand}(0\ldots1)\cdot CentSpan$$ $$x_\text{new}[i]=x_\text{new}[i]-m_2(x_\text{new}[i]-x_\text{cent}[i])$$ With
the RandProb2 probability, an alternative randomization method is used. $$x_\text{new}[i]=x_\text{new}[i]+(-1)^{s}(x_\text{cent}[i]-x_\text{new}[i]), \quad i=1,\ldots,N,\\ \quad s\in\{1,2\}=(\text{rand}(0\ldots1)<0.5 ? 1:2)$$
(Not together with N.2) the "step in the right direction" operation is performed using the random previous solution, current best and worst solutions. This is conceptually similar to Differential Evolution's "mutation" operation. $$x_\text{new}=x_\text{best}-\frac{(x_\text{worst}-x_\text{rand})}{2}$$
With
the ScutProb probability, a "short-cut" parameter vector change operation is performed. $$z=x_\text{new}[\text{rand}(1,N)]$$ $$x_\text{new}[i]=z, \quad i=1,\ldots,N$$
After each objective function evaluation, the highest-cost previous solution is replaced using the cost constraint.
You can find this algorithm implemented in the optimize() function in biteopt.h on lines 284-395; it does not involve any higher-order math.
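The "bitmask inversion" randomization described above can be sketched in Python. This is an illustration of the formula only, not the library's actual C++ code, and MANT_SIZE = 54 is an assumed scaling constant chosen for this sketch:

```python
import random

MANT_SIZE = 54           # assumed mantissa bit count (illustration only)
MANT_MULT = 1 << MANT_SIZE

def bitmask_inversion(x):
    """XOR the scaled integer form of a [0, 1)-normalized parameter with a
    low-bit mask; small masks are far more likely than large ones because
    of the rand()^4 term, so most moves are small perturbations."""
    r = random.random()
    width = 1 + int((0.999999997 - r ** 4) * MANT_SIZE)
    mask = (1 << width) - 1
    return (int(x * MANT_MULT) ^ mask) / MANT_MULT

random.seed(1)
y = bitmask_inversion(0.5)
print(0.0 <= y < 1.0)  # the result stays in the normalized range
```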
|
Dear Uncle Colin, If $e = \left( 1+ \frac{1}{n} \right)^n$ when $n = \infty$, how come it isn’t 1? Surely $1 + \frac{1}{\infty}$ is just 1? - I’m Not Finding It Natural, It’s Terribly Yucky Hi, INFINITY, and thanks for your message. You have fallen into one of maths’s classic…
What are they? I thought, until I looked closely, that we had a Hoberman sphere in the children’s toybox. We don’t: we have something closely related to it, though. The Hoberman mechanism comprises a series of pairs of pivoted struts arranged end to end. Each pair looks a little like…
Dear Uncle Colin, I’ve been struggling with this: “If the surface area of a sphere to cylinder is in the ratio 4:3 and the sphere has a radius of 3a, calculate the radius of the cylinder if the radius of the cylinder is equal to its height.” Can you help?
I love Futility Closet -- it's an incredible collection of interesting bits and pieces, but it has a special place in my heart because they love and appreciate maths. Not only that, they appreciate maths that I find interesting. The internet has many interesting miscellanies, and many excellent sites specialising…
Dear Uncle Colin, I have to solve $615 + x^2 = 2^y$ for integers $x$ and $y$. I’ve solved it by inspection using Desmos ($x=59$ and $y=12$ is the only solution), but I’d prefer a more analytical solution! Getting Exponent Right Makes An Interesting Noise Hi, GERMAIN, and thanks for…
Via @markritchings, an excellent logs problem: If $a = \log_{14}(7)$ and $b = \log_{14}(5)$, find $\log_{35}(28)$ in terms of $a$ and $b$. One of the reasons I like this puzzle is that I did it a somewhat brutal way, and once I had the answer, a much neater way jumped…
In this episode, we're joined by @christianp, who is Christian Lawson-Perfect in real life, our first returning special guest co-host. We discuss: The Big Internet Math Off and associated stickerbook 99 variations on a proof by Philip Ording The Art of Statistics - Learning from Data by David Spiegelhalter Maths…
Dear Uncle Colin, How would you write $\frac{1}{10}$ in binary? Binary Is Totally Stupid Hi, BITS, and thanks for your message! I have two ways to deal with this: the standard, long-division sort of method, and a much nicer geometric series approach. Long division-esque While I can do the long…
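The long-division-esque route mentioned above amounts to repeated doubling: double the fraction, read off the integer part as the next digit, and keep the remainder. A sketch using exact fractions (the expansion of 1/10 turns out to repeat the block 0011 after an initial 0):

```python
from fractions import Fraction

def binary_digits(x, n):
    """First n binary digits after the point of x in [0, 1),
    found by repeated doubling (the long-division-style method)."""
    digits = []
    for _ in range(n):
        x *= 2
        d = int(x)           # next digit: 0 or 1
        digits.append(str(d))
        x -= d
    return "".join(digits)

print(binary_digits(Fraction(1, 10), 12))  # 000110011001
```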
Here’s a tweet from @colinthemathmo: Here's another one. Take a square, crease in the halfway mark, fold up a corner - where does the corner go to? What are its coordinates? pic.twitter.com/Bfr0X8ACur — Colin Wright (@ColinTheMathmo) February 12, 2018 I’m not big on origami, but if Colin thinks it’s an…
Dear Uncle Colin, I lost the first game of my Big Internet Math-Off tournament - can I still win the group and qualify for the semi-finals? - Surely Combinations Of Talent, Luck And Nous Deliver? Hi, SCOTLAND, and thanks for your message! Because the tie-break rules aren’t currently clear, I…
|
The Strauss conjecture on negatively curved backgrounds
1.
Department of Mathematics, Johns Hopkins University, Baltimore, MD 21218, USA
2.
School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
This paper is devoted to several small data existence results for semi-linear wave equations on negatively curved Riemannian manifolds. We provide a simple and geometric proof of small data global existence for any power $ p\in (1, 1+\frac{4}{n-1}] $ for the shifted wave equation on hyperbolic space $ {\mathbb{H}}^n $ involving nonlinearities of the form $ \pm |u|^p $ or $ \pm|u|^{p-1}u $. It is based on the weighted Strichartz estimates of Georgiev-Lindblad-Sogge [
Keywords: Wave equations, curvature, Strauss conjecture, Strichartz estimates, weighted Strichartz estimates, Bessel potentials. Mathematics Subject Classification: 35L71, 35L05, 58J45, 35B33, 35B45, 35R01, 58C40. Citation: Yannick Sire, Christopher D. Sogge, Chengbo Wang. The Strauss conjecture on negatively curved backgrounds. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12): 7081-7099. doi: 10.3934/dcds.2019296
References:
I. Chavel,
V. Georgiev, H. Lindblad and C. D. Sogge, Weighted Strichartz estimates and global existence for semilinear wave equations,
E. Hebey,
R. R. Mazzeo and R. B. Melrose, Meromorphic extension of the resolvent on complete spaces with asymptotically constant negative curvature,
A. G. Setti, A lower bound for the spectrum of the Laplacian in terms of sectional and Ricci curvature,
C. D. Sogge,
R. S. Strichartz, Restrictions of Fourier transforms to quadratic surfaces and decay of solutions of wave equations,
D. Tataru, Strichartz estimates in the hyperbolic space and global existence for the semilinear wave equation,
M. Taylor,
C. Wang and X. Yu, Recent works on the Strauss conjecture, In
|
Event detail
Seminar | March 4 | 12:10-1 p.m. | 939 Evans Hall
Nicolai Reshetikhin, UC Berkeley
For finite dimensional representations $V_1, \dots , V_m$ of a simple finite dimensional Lie algebra $\mathfrak g$ consider the tensor product $W=\otimes_{i=1}^m V_i^{\otimes N_i}$. The first result, which will be presented in the talk, is the asymptotics of the multiplicity of an irreducible representation $V_\lambda$ with the highest weight $\lambda$ in this tensor product when $N_i=\tau_i/\epsilon$, $\lambda =\xi /\epsilon$ and $\epsilon \to 0$. Then we will discuss the asymptotic distribution of irreducible components with respect to the character probability measure $Prob(\lambda )=\frac {m_\lambda \chi _{V_\lambda }(e^t)}{\chi _W(e^t)}$. Here $\chi _V(e^t)$ is the character of the representation $V$ evaluated on $e^t$, where $t$ is an element of the Cartan subalgebra of the split real form of the Lie algebra $\mathfrak g$. This is joint work with O. Postnova.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity-odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probe the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
|
I am trying to prove the following inequality:
EDIT: Almost immediately after I posted this question, I discovered that the inequality I am being asked to prove is called Cantelli's inequality. When I wrote this up, I didn't realize this particular inequality had a name. I have found multiple proofs through Google, so I don't strictly speaking need a solution anymore. However, I am keeping this question up because none of the proofs I have found involve invoking the fact that $t=E(t-X)\leq E[(t-X)\mathbb{I}_{X<t}]$, as was originally intended.
For $t \geq 0$,
$\mathbb{P}(X-E(X)\geq t)\leq \frac{V(X)}{V(X)+t^2}$
Our professor gave us the following "hints" for working this out: "First work out the problem assuming $E(X)=0$ then use the fact that $t=E(t-X)\leq E[(t-X)\mathbb{I}_{X<t}]$."
EDIT: To be clear, in my notation, $\mathbb{I}$ refers to the indicator function.
The first part is pretty simple. It is basically a variation of the proof for Markov's or Chebychev's inequality. I did it out as follows:
$V(X)=\int_{-\infty}^{\infty}(x-E(X))^2f(x)dx$
(I know that, properly speaking, we should replace $x$ with, say, $u$ and $f(x)$ with $f_x(u)$ when evaluating an integral. To be honest, though, I find that notation/convention to be unnecessarily confusing and not terribly transparent, so I am sticking with my more informal notation.)
If we assume $E(X)=0$, then the above simplifies to
$V(X)=\int_{-\infty}^{\infty}x^2f(x)dx$
For brevity's sake, I will skip some steps, but it is easy to show then that
$V(X)\geq t^2 P(X>t)$, or rather $P(X>t)\leq \frac{V(X)}{t^2}$. Since $E(X)=0$, we can replace the $X$ on the left-hand side of the latter with $X-E(X)$.
Here is where I am having trouble moving forward. I don't understand how to go about using the fact that $t=E(t-X)\leq E[(t-X)\mathbb{I}_{X<t}]$. Again, since $E(X)=0$, we can substitute in $t-E(X)$ for $t$. This is equivalent to $E(t-X)$. Then, we can rewrite the $t^2$ in the denominator on the right-hand side of the inequality as $[E(t-X)]^2$, which, since the middle term drops out, simplifies to $t^2-[E(X)]^2$. But I don't see where I can go from here, either. Though you can further rewrite this as $t^2+V(X)-E(X^2)$, which at least gets me the $V(X)+t^2$ term in the right place.
Clearly I am missing something, here, related to $E(t-X) \leq E[(t-X)\mathbb{I}_{X<t}]$, but I quite frankly just have no idea what to do with this term. I understand conceptually what this term is telling me. Intuitively, the expected value of $t-X$ is going to be smaller than the same quantity if $X$ is restricted to being strictly less than $t$; that is, the former term is likely to be negative, while the latter must be positive. But I don't see how I can use this fact in the proof.
I tried "distributing" on the inside to simplify ...
$E[(t-X)\mathbb{I}_{X<t}]=E[t\mathbb{I}_{X<t} - X\mathbb{I}_{X<t}]=tP(X<t)-?$
But I am not sure how to evaluate $E[X\mathbb{I}_{X<t}]$.
Anyone have an idea or a hint?
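As a sanity check, separate from the intended proof, Cantelli's bound $\mathbb{P}(X-E(X)\geq t)\leq \frac{V(X)}{V(X)+t^2}$ can be verified exactly on a small discrete distribution. A sketch:

```python
# Two-point distribution: X = -1 or +1 with probability 1/2 each,
# so E(X) = 0 and V(X) = 1.
vals, probs = [-1.0, 1.0], [0.5, 0.5]
mean = sum(v * p for v, p in zip(vals, probs))
var = sum((v - mean) ** 2 * p for v, p in zip(vals, probs))

for t in [0.5, 1.0, 2.0]:
    lhs = sum(p for v, p in zip(vals, probs) if v - mean >= t)
    rhs = var / (var + t * t)
    assert lhs <= rhs          # Cantelli's bound holds
    print(t, lhs, rhs)
```

At $t = 1$ the bound is tight here ($\tfrac12 \leq \tfrac{1}{1+1}$), which is why Cantelli's inequality cannot be improved in general.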
|
Reference
For a bounded nonexample of integrability see: Riemann Integral: Bounded Nonexample
For a convergence theorem on integral see: Riemann Integral: Uniform Convergence
For a comparison of integrals see: Uniform Integral vs. Riemann Integral
Definition
Given a measure space $\Omega$ and a Banach space $E$.
Consider functions $F:\Omega\to E$.
Denote the measurable subsets of finite mass by: $$\mathcal{A}_\infty:=\{A:\mu(A)<\infty\}$$ and order them by inclusion: $$A\leq A':\iff A\subseteq A'$$
Remember the generalized Riemann integral on finite measure spaces:$$A\in\mathcal{A}_\infty:\quad\int_AF\mathrm{d}\mu:=\lim_\mathcal{P}\left\{\sum_{a\in A\in\mathcal{P}}F(a)\mu(A)\right\}_\mathcal{P}$$
(For more details see references above.)
Define the improper Riemann integral as:$$\int_\Omega F\mathrm{d}\mu:=\lim_A\left\{\int_AF\mathrm{d}\mu\right\}_{A\in\mathcal{A}_\infty}$$
(Crucially, this reflects independence of approximation by finite spaces.)
Discussion
For finite measure spaces the improper agrees with the proper as $\Omega\in\mathcal{A}_\infty$.
This way, poles still can't be handled:$$\int_0^1\frac{1}{\sqrt{x}}\mathrm{d}x\notin E$$
(Note that the concept of compact intervals isn't available in general.)
For Borel spaces a suitable criterion could be continuity plus absolute integrability: $$F\in\mathcal{C}(\Omega,E):\quad\int_\Omega\|F\|\mathrm{d}\mu<\infty\implies\int_\Omega F\mathrm{d}\mu\in E$$
How to prove this in the abstract setting?
(I slightly doubt it...)
|
I'm having a bit of a struggle with the following proof.
Statement: Prove that $N((0,0);1)$ is an open set in $\mathbb{R}\times\mathbb{R}$ with metric $d((x_1,x_2),(y_1,y_2))=|x_1-y_1|+|x_2-y_2|$.
Attempt: Let $x\in N((0,0);1)$. We need $N(x;\epsilon)=\{(a_1,a_2):|x_1-a_1|+|x_2-a_2|<\epsilon \} \subseteq N((0,0);1)=\{(y_1,y_2):|y_1|+|y_2|<1 \}$. For this to occur, we must have $|x_1-a_1|+|x_2-a_2|<\epsilon\implies |a_1|+|a_2|<1$. Notice $|x_1-a_1|+|x_2-a_2|< |x_1|+|a_1|+|x_2|+|a_2|<\epsilon$. Pick $\epsilon=|x_1|+|x_2|+1$. This is greater than 0, for $0<|x_1|+|x_2|<1$. Then, $N(x;\epsilon)=\{(a_1,a_2):|a_1|+|a_2|<1 \}$, so $N((0,0);1)$ is open.
I feel as though it's not valid for me to have chosen $\epsilon$ as I did; could someone point me in the right direction? Thanks, exam in a few hours, so any and all help is appreciated.
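For intuition (not a substitute for the proof), one can numerically stress-test the natural candidate $\epsilon = 1 - (|x_1|+|x_2|)$, i.e. the slack between $x$ and the boundary; the triangle inequality $d(a,(0,0))\le d(a,x)+d(x,(0,0))<\epsilon+d(x,(0,0))=1$ is what makes it work. All names here are illustrative:

```python
import random

def d(p, q):
    # taxicab metric on R^2
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

random.seed(0)
for _ in range(10_000):
    x = (random.uniform(-0.9, 0.9), random.uniform(-0.9, 0.9))
    if d(x, (0.0, 0.0)) >= 1.0:
        continue                      # keep only x inside N((0,0); 1)
    eps = 1.0 - d(x, (0.0, 0.0))      # slack to the boundary
    # sample a point a with d(a, x) < eps
    a = (x[0] + random.uniform(-eps / 2, eps / 2),
         x[1] + random.uniform(-eps / 2, eps / 2))
    assert d(a, x) < eps
    assert d(a, (0.0, 0.0)) < 1.0     # a stays inside the unit ball
print("no counterexamples found")
```

The key feature of this $\epsilon$ is that it shrinks as $x$ approaches the boundary, whereas a fixed or growing $\epsilon$ cannot keep the neighborhood inside the ball.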
|
Is there an algorithm for finding the shortest path in an undirected weighted graph?
All-Pairs-Shortest Path
Given a graph $G = (V, E)$, find the shortest path between any two nodes $u,v \in V$. It can be solved by the Floyd–Warshall algorithm in time $O(|V|^3)$. Many believe the APSP problem requires $\Omega(n^3)$ time, but it remains open whether there exist algorithms taking $O(n^{3 - \delta} \cdot \text{poly}(\log M))$ time, where $\delta > 0$ and edge weights are in the range $[-M, M]$.
The reasoning for this is that, upon close examination, the APSP problem can be solved by matrix multiplication: if we replace the operators $\{+, \cdot\}$ with $\{\min, +\}$, we may use the framework for matrix multiplication to compute the solution. What is interesting is that if there exist sub-cubic algorithms for the APSP problem, then there exist sub-cubic algorithms for many related graph and matrix problems [1].
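To make the Floyd–Warshall reference concrete, here is a minimal sketch (graph as an adjacency matrix with `inf` for missing edges; the names are illustrative):

```python
from math import inf

def floyd_warshall(dist):
    """dist[i][j]: weight of edge (i, j), inf if absent, 0 on the diagonal.
    Returns the matrix of all-pairs shortest-path distances in O(|V|^3) time."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # relax: allow node k as an intermediate vertex
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# undirected example: the matrix is symmetric
D = [[0, 3, inf],
     [3, 0, 1],
     [inf, 1, 0]]
print(floyd_warshall(D)[0][2])  # 4, via the middle node
```

The triple loop is exactly the $(\min, +)$ matrix-product structure mentioned above, which is why speedups to one problem transfer to the other.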
|
In the classic crypto textbook "Introduction to Modern Cryptography" by Jonathan Katz and Yehuda Lindell, there is a definition for indistinguishable encryption in the presence of an eavesdropper as such that for every probabilistic polynomial time adversary A there is a negligible function negl(n) such that
$\Pr[PrivK_{A,\Pi}=1] \leq \frac{1}{2} + negl(n)$
where PrivK is the indistinguishability experiment and for the purpose of this question we only need to know that the experiment outcome is 1 iff the adversary makes the correct guess.
My doubts are as follows. Consider a sequence of probabilistic polynomial time adversaries $\{A_i\}_{i\geq 1}$ whose advantage in the indistinguishability experiment is bounded by the following sequence of negligible functions
$\Pr[PrivK_{A_i,\Pi}=1] \leq \frac{1}{2} + negl_i(n)$, where $negl_i(n) = \frac{1}{(1+1/i)^n}$
Clearly it is necessary for the above conditions to hold for an indistinguishable encryption. But is it a correct model/condition for real-world applications? For example, in practice we typically choose a sufficiently large n and set up some encryption scheme. However, there is always some adversary $A_i$ that wins the experiment with probability close to one. So what's wrong?
|
The problem with your code is that N[Product[...]] calls NProduct[...], which turns integers into reals before Prime can evaluate, causing the error; see the Details and Options section of the documentation of NProduct.
For your calculation, you actually do not need these. It looks like the result converges around $0.660162$:
Table[N[Product[(Prime[i] (Prime[i] - 2))/(Prime[i] - 1)^2, {i, 2, 10^k}]], {k, 5}]
{0.665138, 0.66033, 0.66017, 0.660162, 0.660162}
Since the product is monotonic and you are using N at the end, it is not necessary to carry out the whole multiplication all the way up to infinity. Depending on the accuracy of interest, you can go to higher orders as well. For example,
Table[N[Product[(Prime[i] (Prime[i] - 2))/(Prime[i] - 1)^2, {i, 2, 10^k}], 8], {k, 5}]
yields
{0.66513840, 0.66033029, 0.66017020, 0.66016232, 0.66016185}
which reveals that using only first $10^5$ primes, which is all primes upto $1299709$, is sufficient to get a result with error of the order $10^{-6}$.
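As a cross-check outside Mathematica, here is a plain-Python sketch of the same partial product (my own addition; the sieve and bounds are assumptions chosen to match the $10^5$-prime cutoff above — the limit is in fact the twin prime constant $C_2\approx 0.6601618$):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

prod = 1.0
for p in primes_up_to(1_299_709)[1:]:   # odd primes 3..1299709, i.e. i = 2..10^5
    prod *= p * (p - 2) / (p - 1) ** 2
print(round(prod, 6))  # 0.660162, agreeing with the table above
```

This reproduces the Mathematica value with the same truncation, so the discrepancy-free agreement is a useful sanity check on the numerics.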
For the record: Mathematica fails to do the calculation analytically:
Product[(Prime[i] (Prime[i] - 2))/(Prime[i] - 1)^2, {i, 2, Infinity}]
$\prod _{i=2}^{\infty } \frac{\left(p_i-2\right) p_i}{\left(p_i-1\right){}^2}$
|
Is the following theorem known, or can it be easily derived from known results?
Consider the differential equation $$w''-kz^{-1}w'=(\lambda+\phi(z))w,$$ where $k>0$ is fixed, $\lambda$ is a large (complex) parameter and $\phi$ is a complex valued function analytic on $[0,1]$. It is known that there exists a unique solution $w_0(z,\lambda)$ which is analytic at $z=0$ and $w_0(0,\lambda)=1$. It is also known that $f(\lambda)=w_0(1,\lambda)$ is an entire function.
Theorem (?). $f(\lambda)=(1+o(1))\exp\sqrt{\lambda},$ as $\lambda\to\infty$, $|\arg\lambda|\leq\pi-\epsilon$, where $\sqrt{\lambda}$ is the principal branch.
Remark. There is another solution, normalized by $w_1(z,\lambda)\sim z^{k+1},z\to 0$, and for this solution I know how to prove the result, and know the references, for example Olver, Asymptotics and special functions, Ch. 12.
More remarks: 1. When $\phi=0$ this reduces to Bessel's equation; the Theorem is true in this case but not trivial.
2. In the definition of $w_0$ the crucial word is ANALYTIC. There are infinitely many other solutions $w$ satisfying $w(0)=1$: adding to $w_0$ any multiple of $w_1$ does not change the value at $0$. And the conclusion of the Theorem does NOT hold for some of the solutions satisfying $w(0)=1$. For example, when $\phi=0$ there is a solution with $w(0)=1$ for which $w(1,\lambda)$ decays exponentially for $\lambda>0$.
The most important case for me is when $\phi$ is even, which may help. But in my opinion, the problem is interesting in the general case as well.
Remark. Now I proved this, but the question remains whether this follows from some known, published results.
|
Dana Scott once proved that Zermelo's set theory $``\text{Z}"$ is interpretable in the first-order set theory whose axioms are just the axioms of:
Separation: if $\phi$ is a formula in which $x$ doesn't occur, then $\forall A \exists x \forall y (y \in x \iff y \subset A \wedge \phi)$ is an axiom.
Infinity: the usual form.
Can a similar situation occur with $\text{ZF}$, i.e. can we have a theory whose axiomatic system consists of a modified form of Replacement and an Infinity axiom, that can interpret $\text{ZF}$? Here is a try:
Replacement: if $\phi(x,y)$ is a formula in which the symbols $``x"$, $``y"$ occur free and never occur bound, and in which the symbol $``B"$ never occurs, then: $\forall A ([\forall x \in A \exists z \forall y (\phi(x,y) \implies y \subset z) ]\implies \exists B \forall y (y \in B \iff \exists x \in A \phi(x,y)))$
is an axiom.
Infinity: the usual form.
/
The idea is that this form of Replacement does prove Pairing, Power, Separation and the form of the axiom schema of replacement I've lately posted to Mathoverflow at:
and it would prove Union over sets every element of which is an element of some transitive set.
This seems to be enough to interpret the cumulative hierarchy and thus full $\text{ZF}$.
|
For the heat equation
\begin{equation} u_t(t,x) = \nu u_{xx}(t,x) \end{equation}
for $x \in [0,1]$ with boundary conditions $u(t,0) = u(t,1) = 0$ and initial value $u(0,x) = u_0(x)$ it is easy to show that the "energy" defined as
\begin{equation} E(t) = \frac{1}{2} \int_{0}^{1} u^2(t,x)~dx \end{equation}
decays over time, that is $E(t) \leq E(0)$. I wonder if there is any physical interpretation of the quantity $E$?
In terms of units, $u$ is temperature in kelvin while the thermal diffusivity $\nu$ has units $\mathrm{m^2/s}$ and is composed from \begin{equation} \nu = \frac{k}{\rho c_p} \end{equation} where $k$ is the thermal conductivity in $\mathrm{W/(m\cdot K)}$, $\rho$ the density in $\mathrm{kg/m^3}$ and $c_p$ the specific heat capacity in $\mathrm{J/(kg\cdot K)}$.
I figured out that $u$ is related to the internal energy in Joule per volume via \begin{equation} I = c_p \rho u. \end{equation}
But what is the interpretation of $E$, which is related to $u^2$?
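As a quick numerical illustration of the decay $E(t)\le E(0)$, here is a minimal sketch (my assumptions: explicit finite differences, $\nu=1$, and a sinusoidal initial profile — none of this is tied to the physical units discussed above):

```python
import numpy as np

nu, nx, dt = 1.0, 101, 1e-5          # dt below the stability limit dx^2 / (2 nu)
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)                # initial condition, satisfies u(t,0)=u(t,1)=0

def energy(u):
    # E = (1/2) * integral of u^2 over [0,1] (simple Riemann sum; endpoints vanish)
    return 0.5 * dx * float(np.sum(u ** 2))

E0 = energy(u)
for _ in range(1000):                # forward-Euler time stepping
    u[1:-1] += dt * nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
print(energy(u) < E0)  # True: the L2 "energy" is monotonically dissipated
```

For this initial profile the exact solution decays like $e^{-\pi^2\nu t}$, so $E$ decays like $e^{-2\pi^2\nu t}$, which the discrete scheme tracks closely.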
|
If the density is bounded by a constant $C$ then I think the rate of convergence can be bounded as $\exp(-\delta t /C^2)$ for some fixed constant $\delta >0$. If the density is unbounded then I don't think there needs to be a uniform version of exponential convergence.
To elaborate on this, I will use my answer to the related question Error term for renewal function by Johan Wastlund. From the work there, we find that for any $c>0$$$ m(t) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{e^{st}}{s(1-{\Bbb E}(e^{-sX}))}ds.$$ There is a double pole at $s=0$ and its residue is $t/{\Bbb E}(X) + {\Bbb E}(X^2)/(2{\Bbb E}(X)^2)$. Clearly all the other zeros lie in the half-plane Re$(s)<0$ . To show that there is a uniform exponential rate of convergence, we need to show that there is a half-plane $\text{Re}(s) < -\delta/C^2$ which is free of zeros of $1-{\Bbb E}(e^{-sX})$ (apart from the trivial zero at $s=0$).
First note that ${\Bbb E}(e^{-sX})= 1- s {\Bbb E}(X) + O(|s|^2 {\Bbb E}(X^2))$, from which it follows that there is some neighborhood $|s| <\delta$ which has no zeros except for $s=0$.
Now suppose that $s=-x+iy$ is a zero, and we want a lower bound for $x$. We may assume that $|y| \ge \delta/2$. Let $f$ be the density function for the random variable $X$. Then we know that $$ \int_0^1 f(u) e^{ux} \cos(uy) du = \int_0^1 f(u) du.$$ Rearranging this means that $$ \int_0^1 f(u) (1-\cos(uy))du \le \int_0^1 f(u) (e^{ux}-1) du \le (e^x-1).$$Now we get a lower bound on the left hand side, which will finish the proof. Since $y$ is bounded away from zero, if $\epsilon$ is small enough then the set of $u$ with $(1-\cos(uy))<\epsilon$ is bounded by some constant times $\sqrt{\epsilon}$. Therefore, since $f(u)\le C$, $$\int_0^1 f(u) (1-\cos(uy)) du \ge \epsilon - \epsilon \int_{u:(1-\cos (uy))\le \epsilon} f(u) du \ge \epsilon (1- KC\sqrt{\epsilon}), $$ for some constant $K$. Thus taking $\epsilon = \delta/C^2$ for a suitably small $\delta$, we obtain that $$(e^x-1) \ge \delta/C^2,$$ for some $\delta >0$. (In this write up, $\delta$ just denotes some positive constant which may change from line to line.) This completes the proof.
If the density is not bounded, then one can arrange for zeros to get arbitrarily close to the line $\text{Re }(s)=0$, and therefore there would be no uniform rate of exponential decay.
|
Is there a physical limit to data transfer rate (e.g. for USB $3.0$, this rate can be a few Gbit per second)? I am wondering if there is a physical law giving a fundamental limit to data transfer rate, similar to how the second law of thermodynamics tells us perpetual motion cannot happen and relativity tells us going faster than light is impossible.
tl;dr – The maximum data rate you're looking for would be called the maximum entropy flux. Realistically speaking, we don't know nearly enough about physics yet to meaningfully predict such a thing.
But since it's fun to talk about a data transfer cord that's basically a $1\mathrm{mm}$-tube containing a stream of black holes being fired near the speed of light, the below answer shows an estimate of $1.3{\cdot}{10}^{75}\frac{\mathrm{bit}}{\mathrm{s}}$, which is about $6.5{\cdot}{10}^{64}$ faster than the current upper specification for USB, $20\frac{\mathrm{Gbit}}{\mathrm{s}}=2{\cdot}{10}^{10}\frac{\mathrm{bit}}{\mathrm{s}}$.
Intro
You're basically looking for an upper bound on entropy flux:
entropy: the number of potential states which could, in theory, codify information;
flux: rate at which something moves through a given area.
So,$$\left[\text{entropy flux}\right]~=~\frac{\left[\text{information}\right]}{\left[\text{area}\right]{\times}\left[\text{time}\right]}\,.$$
Note: If you search for this some more, watch out for "maximum entropy thermodynamics"; "maximum entropy" means something else in that context.
In principle, we can't put an upper bound on stuff like entropy flux because we can't claim to know how physics really works. But, we can speculate at the limits allowed by our current models.
Speculative physical limitations
Wikipedia has a partial list of computational limits that might be estimated given our current models.
In this case, we can consider the limit on maximum data density, e.g. as discussed in this answer. Then, naively, let's assume that we basically have a pipeline shipping data at maximum density arbitrarily close to the speed of light.
The maximum data density was limited by the Bekenstein bound:
In physics, the Bekenstein bound is an upper limit on the entropy $S$, or information $I$, that can be contained within a given finite region of space which has a finite amount of energy—or conversely, the maximum amount of information required to perfectly describe a given physical system down to the quantum level.
–"Bekenstein bound", Wikipedia [references omitted]
Wikipedia lists it as allowing up to$$ I ~ \leq ~ {\frac {2\pi cRm}{\hbar \ln 2}} ~ \approx ~ 2.5769082\times {10}^{43}mR \,,$$where $R$ is the radius of the system containing the information and $m$ is its mass.
Then for a black hole, apparently this reduces to$$ I ~ \leq ~ \frac{A_{\text{horizon}}}{4\ln{\left(2\right)}\,{{\ell}_{\text{Planck}}^2}} \,,$$where
${\ell}_{\text{Planck}}$ is the Planck length;
$A_{\text{horizon}}$ is the area of the black hole's event horizon.
This is inconvenient, because we wanted to calculate $\left[\text{entropy flux}\right]$ in terms of how fast information could be passed through something like a wire or pipe, i.e. in terms of $\frac{\left[\text{information}\right]}{\left[\text{area}\right]{\times}\left[\text{time}\right]}.$ But, the units here are messed up because this line of reasoning leads to the holographic principle which basically asserts that we can't look at maximum information of space in terms of per-unit-of-volume, but rather per-unit-of-area.
So, instead of having a continuous stream of information, let's go with a stream of discrete black holes inside of a data pipe of radius $r_{\text{pipe}}$. The black holes' event horizons have the same radius as the pipe, and they travel at $v_{\text{pipe}} \, {\approx} \, c$ back-to-back.
So, information flux might be bound by$$ \frac{\mathrm{d}I}{\mathrm{d}t} ~ \leq ~ \frac{A_{\text{horizon}}}{4\ln{\left(2\right)}\,{{\ell}_{\text{Planck}}^2}} {\times} \frac{v_{\text{pipe}}}{2r_{\text{horizon}}} ~{\approx}~ \frac{\pi \, c }{2\ln{\left(2\right)}\,{\ell}_{\text{Planck}}^2} r_{\text{pipe}} \,,$$where the observation that $ \frac{\mathrm{d}I}{\mathrm{d}t}~{\propto}~r_{\text{pipe}} $ is basically what the holographic principle refers to.
Relatively thick wires are about $1\,\mathrm{mm}$ in diameter, so let's go with $r_{\text{pipe}}=5{\cdot}{10}^{-4}\mathrm{m}$ to mirror that to estimate (WolframAlpha):$$ \frac{\mathrm{d}I}{\mathrm{d}t} ~ \lesssim ~ 1.3{\cdot}{10}^{75}\frac{\mathrm{bit}}{\mathrm{s}} \,.$$
Wikipedia claims that the maximum USB bitrate is currently $20\frac{\mathrm{Gbit}}{\mathrm{s}}=2{\cdot}{10}^{10}\frac{\mathrm{bit}}{\mathrm{s}}$, so this'd be about $6.5{\cdot}{10}^{64}$ times faster than USB's maximum rate.
However, to be very clear, the above was a quick back-of-the-envelope calculation based on the Bekenstein bound and a hypothetical tube that fires black holes near the speed of light back-to-back; it's not a fundamental limitation to regard too seriously yet.
The Shannon-Hartley theorem tells you what the maximum data rate of a communications channel is, given the bandwidth.
$$ C = B \log_2\left(1+\frac{S}{N}\right) $$
where $C$ is the data rate in bits per second, $B$ is the bandwidth in hertz, $S$ is the signal power and $N$ is the noise power.
Pure thermal noise power in a given bandwidth at temperature $T$ is given by:
$$ N = k_BTB $$
So for example, if we take the bandwidth of WiFi (40MHz) at room temperature (298K) using 1W the theoretical maximum data rate for a single channel is:
$$ 40 \times 10^6 \times \log_2\left(1 + \frac{1}{1.38\times 10^{-23} \times 298 \times 40 \times 10^6}\right) = 1.7 \times 10^9 = 1.7 \mathrm{\;Gbs^{-1}} $$
In a practical system, the bandwidth is limited by the cable or antenna and the speed of the electronics at each end. Cables tend to filter out high frequencies, which limits the bandwidth. Antennas will normally only work efficiently across a narrow bandwidth. There will be significantly larger sources of noise from the electronics, and interference from other electronic devices which increases $N$. Signal power is limited by the desire to save power and to prevent causing interference to other devices, and is also affected by the loss from the transmitter to the receiver.
A system like USB uses a simple on-off electronic signal operating at one frequency, because that's easy to detect and process. This does not fill the bandwidth of the cable, so USB is operating a long way from the Shannon-Hartley limit (the limiting factors are more to do with the transceivers, i.e. semiconductors). On the other hand, 4G (and soon 5G) mobile phone technology does fill its bandwidth efficiently, because everyone has to share the airwaves and they want to pack as many people in as possible, and those systems are rapidly approaching the limit.
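The worked WiFi example above can be reproduced in a few lines (a sketch using the constants as given in the text):

```python
import math

k_B = 1.38e-23        # Boltzmann constant, J/K
B = 40e6              # bandwidth, Hz (the WiFi channel in the example)
T = 298.0             # temperature, K
S = 1.0               # signal power, W

N = k_B * T * B                   # thermal noise power in the band
C = B * math.log2(1 + S / N)      # Shannon–Hartley channel capacity
print(f"C = {C:.2e} bit/s")       # ~1.7e9, matching the 1.7 Gb/s figure above
```

Note how weakly $C$ depends on $S$: because of the logarithm, doubling the signal power adds only about $B$ bits per second, while doubling the bandwidth roughly doubles the capacity.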
No, there is no fundamental limit on overall transfer rate. Any process that can transfer data at a given rate can be done twice in parallel to transfer data at twice that given rate.
|
Data is a general term for information (observations and/or measurements) collected during any type of systematic investigation.
A data display is a visual format for organising and summarising data. Examples include box plots, column graphs, frequency tables, scatter plots, and stem plots.
A decimal is a numeral in the decimal number system, which is the place-value system most commonly used for representing real numbers. In this system numbers are expressed as sequences of Arabic numerals 0 to 9, in which each successive digit to the left or right of the decimal point indicates a multiple of successive powers of 10; for example, the number represented by the decimal 123.45 is the sum
\(1\times10^2+2\times10^1+3\times10^0+4\times10^{-1}+5\times10^{-2}\)
\(=1\times100+2\times10+3\times1+4\times\frac1{10}+5\times\frac1{100}\)
The digits after the decimal point can be terminating or non-terminating. A terminating decimal is a decimal that contains a finite number of digits, as shown in the example above. A decimal is non-terminating if it has an infinite number of digits after the decimal point. Non-terminating decimals may be recurring, that is, contain a pattern of digits that repeats indefinitely after a certain number of places. For example, the fraction \(\frac13\), written in the decimal number system, results in an infinite sequence of 3s after the decimal point. This can be represented by a dot above the recurring digit.
\(\frac13=0.333333\dots=0.\dot3\)
Similarly, the fraction \(\frac17\) results in a recurring group of digits, which is represented by a bar above the whole group of repeating digits
\(\frac17=0.142857142857142857\dots=0.\overline{142857}\)
Non-terminating decimals may also be non-recurring, that is, the digits after the decimal point never repeat in a pattern. This is the case for irrational numbers, such as \(\pi\), \(e\), or \(\sqrt2\). For example,
\(\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\)
Irrational numbers can only be approximated in the decimal number system.
In any fraction in the form \(\frac ab\), b is the denominator. It represents the number of equal parts into which the whole has been divided. For example, in the diagram below, a rectangle has been divided into 5 equal parts. Each of those parts is one fifth of the whole and corresponds to the unit fraction \(\frac15\).
A diameter is a chord that passes through the centre of a circle. The word diameter is also used to refer to the length of the diameter. The diameter d of the circle below is represented by line segment AB.
A difference is the result of subtracting one number or algebraic quantity from another. For example, the difference between and is , written as .
Multiplication of numbers is said to be ‘distributive over addition’, because the product of one number with the sum of two others equals the sum of the products of the first number with each of the others. For example, the product of 3 with (4+5) gives the same result as the sum of 3×4 and 3×5:
3×(4+5)=3×9=27 and 3×4+3×5=12+15=27
This distributive law is expressed algebraically as follows:
a(b+c)=ab+ac, for all numbers a,b and c.
In general, a number or algebraic expression \(x\) is divisible by another \(y\), if there exists a number or algebraic expression \(q\) of a specified type for which \(x=yq\).
A natural number \(m\) is divisible by a natural number \(n\) if there is a natural number \(q\) such that \(m=nq\); for example, 12 is divisible by 4 because 12=3×4.
A dot plot is a graph used in statistics for organising and displaying categorical data or discrete numerical data.
The dot plot below displays the number of passengers observed in 32 cars stopped at a traffic light.
|
$\newcommand{\Re}{\mathbb{R}}$I'm looking for sufficient conditions that may guarantee positive semidefiniteness (PSD) of a block matrix$$A = \begin{bmatrix} A_{1,1} & \cdots & A_{1,n} \\ \vdots & \ddots & \vdots \\ A_{n,1} & \cdots & A_{n,n}\end{bmatrix}$$with consistent block dimensions, $A_{i,j} \in \Re^{d_i \times d_j}$. I'm aware of the sufficient condition that if $A$ is block diagonally dominant (BDD) this implies PSD [1], but the BDD condition turns out to be too restrictive for my case of interest (my block matrices are not BDD, yet they are PSD). I looked for some alternative but couldn't find anything in the literature. Also, I'm not interested in Schur-based exact conditions, but in simpler conditions that work well often (but not necessarily always).
In looking at the block diagonally dominant (BDD) condition [1], I came up with a proposal which I briefly describe here. First, consider the traditional BDD condition [1] from the following perspective:
Build the $n\times n$ matrix $M \in \Re_+^{n\times n}$ given as $$M=[m_{ij}]_{i,j=1}^n, \quad m_{ij} = \begin{cases} \inf_{x} \frac{||A_{ij}x||_2}{||x||_2} \text{ if } i=j\\ \sup_{x} \frac{||A_{ij}x||_2}{||x||_2} \text{ if } i\neq j \end{cases}$$
Considering this condensed matrix $M$, the traditional BDD condition [1] on $A$ is equivalent to the condition that $M$ is diagonally dominant (DD). Thus, we can state: $M$ is DD $\implies$ $A$ is PSD.
Conjecture
Now, my first hunch was that maybe the relationship $M$ is PSD $\implies$ $A$ is PSD would stand. However, after some extensive numerical testing, I've found cases where my conjecture is contradicted. That is, $M$ is PSD $\nRightarrow$ $A$ is PSD.
A counterexample
For completeness, I'm including here a simple numerical example where the conjecture fails. Consider the 3x3 block-matrix with block-dimension 2x2 (also available for download here): $$ A = \left[\begin{array}{cc|cc|cc} 1.0 & 0.6 & -0.3 & 0.4 & 0.2 & -0.2\\ 0.6 & 1.6 & 0.5 & -0.6 & -0.4 & 0.4\\ \hline -0.3 & 0.5 & 78.6 & 79.8 & 0.1 & -0.1\\ 0.4 & -0.6 & 79.8 & 84.2 & -0.3 & 0.2\\ \hline 0.2 & -0.4 & 0.1 & -0.3 & 3.5 & 0.5\\ -0.2 & 0.4 & -0.1 & 0.2 & 0.5 & 3.3 \end{array}\right]$$ This has the following eigenvalues $eig(A)=\{-0.0372 , 1.6458 , 2.3249 ,3.1036 , 3.9136 , 161.2494\}$, so it's indefinite. The condensed matrix M as defined above is $$ M = \left[\begin{array}{ccc} 0.6292 & 0.9271 & 0.6325\\ 0.9271 & 1.5509 & 0.3864\\ 0.6325 & 0.3864 & 2.8901 \end{array}\right]$$ with eigenvalues $eig(M)=\{0.0129, 1.7594, 3.2978\}$, so it's positive definite, contradicting the conjecture above.
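For anyone who wants to reproduce the check, here is a sketch (assumed setup: NumPy, block size $d=2$; the operator-norm entries of $M$ are computed via singular values, since $\inf_x \|A_{ij}x\|_2/\|x\|_2$ is the smallest singular value and the $\sup$ the largest):

```python
import numpy as np

A = np.array([
    [ 1.0,  0.6, -0.3,  0.4,  0.2, -0.2],
    [ 0.6,  1.6,  0.5, -0.6, -0.4,  0.4],
    [-0.3,  0.5, 78.6, 79.8,  0.1, -0.1],
    [ 0.4, -0.6, 79.8, 84.2, -0.3,  0.2],
    [ 0.2, -0.4,  0.1, -0.3,  3.5,  0.5],
    [-0.2,  0.4, -0.1,  0.2,  0.5,  3.3],
])
d, n = 2, 3
M = np.empty((n, n))
for i in range(n):
    for j in range(n):
        block = A[i*d:(i+1)*d, j*d:(j+1)*d]
        s = np.linalg.svd(block, compute_uv=False)  # singular values, descending
        M[i, j] = s[-1] if i == j else s[0]         # inf -> smallest, sup -> largest

print(np.linalg.eigvalsh(A).min())  # negative (about -0.0372): A is indefinite
print(np.linalg.eigvalsh(M).min())  # positive (about 0.0129): M is positive definite
```

Running this reproduces both the condensed matrix $M$ quoted above and the sign pattern of the eigenvalues, confirming the counterexample.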
It's obvious that the simple proposed relation does not fulfill the PSD test I was looking for. Still, for this and other cases I found so far, even though $A$ was not PSD, it wasn't far off (the negative eigenvalue was small). So I still wonder if the proposed relationship $M$ is PSD $\implies$ $A$ is PSD might stand under additional constraints.
I would appreciate if anyone has anything to contribute on this. I am assuming some properties on $A$ to simplify the problem, so $A$ can be considered symmetric, and the blocks all square with $d_i=d_j=d$.
Analysis using block Cholesky decomposition
So far, I tried to look at this using the block variant of Cholesky decomposition as a PSD test, so that if each diagonal sub-block in the block Cholesky decomposition is PSD, then the block matrix is PSD.
Special case: 2x2 block-matrix
Using the sufficient conditions from Cholesky decomposition, the 2x2 block case fulfills the proposed relationship. Consider the 2x2 block Cholesky decomposition
$$A = LDL^\top = \begin{bmatrix} I & 0 \\ L_{21} & I \end{bmatrix} \begin{bmatrix} D_1 & 0 \\ 0 & D_2 \end{bmatrix} \begin{bmatrix} I & L_{21}^\top \\ 0 & I \end{bmatrix}$$
By the properties of the Cholesky decomposition, {$D_1$ is PSD, $D_2$ is PSD} $\implies$ $A$ is PSD. Since $D_1 = A_{11}$ and $D_2 = A_{22} - A_{21} A_{11}^{-1} A_{12}$, it follows that:$$ m_{11} \geq 0 \implies D_1 \text{ is PSD} $$$$ m_{22} - m_{21} m_{11}^{-1} m_{12} \geq 0 \implies D_2 \text{ is PSD} $$The left-hand inequality in the second expression can be written as $m_{11}m_{22} - m_{12} m_{21} = \det(M) \geq 0$.
The sufficient conditions above for $A$ is PSD are clearly fulfilled if $M$ is PSD (just consider Sylvester's criterion).
General case: n×n block matrix
As soon as we go beyond 2×2, the expressions from the Cholesky decomposition become more complex, and just forcing $M$ to be PSD does not seem enough to provide sufficient conditions for PSD of $A$.
PS: I posted this same question some time ago on math.stackexchange, but didn't get any answer so far. So I was wondering if this might be a more appropriate forum to pose the question.
[1] Feingold, D. G., & Varga, R. S. (1962). Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem. Pacific J. Math, 12(4), 1241-1250.
|
I'm dealing with a coupled system of three transient, non-linear convection-diffusion equations. Let's just say to simplify the problem that they take the following form: $$ -\nabla\cdot(D_{1}(u_{2},u_{3})\nabla u_{1}) = -\nabla\cdot\mathbf{\chi}(u_{2},u_{3}) $$ $$ \frac{\partial u_{2}}{\partial t} + \nabla\cdot f_{1}(u_{1},u_{2},u_{3}) - \nabla\cdot(D_{2}(u_{2},u_{3})u_{2}) = 0 $$ $$ \frac{\partial u_{3}}{\partial t} + \nabla\cdot f_{2}(u_{1},u_{2},u_{3}) - \nabla\cdot(D_{3}(u_{2},u_{3})u_{3}) = 0 $$
Assume a general set of boundary conditions for each equation: $$ u_{i} = g_{i,D}, \;\;\;\text{on}\;\Gamma_{D_{i}} $$ $$ D_{i}\nabla u_{i} = g_{i,N}, \;\;\;\text{on}\;\Gamma_{N_{i}} $$
My spatial discretization is via DGFEM and the time discretization is just Backward Euler. Anyway, I'll use the superscript $(k)$ to denote the solution at the $k$th time step. This is how I am decoupling the equations:
1) Solve for $u_{1}^{(k+1)}$ using information from the previous time step for $u_{2}$ and $u_{3}$. This effectively linearizes the equation.
$$ -\nabla\cdot(D_{1}(u_{2}^{(k)},u_{3}^{(k)})\nabla u_{1}^{(k+1)}) = -\nabla\cdot\chi(u_{2}^{(k)},u_{3}^{(k)}) $$
2) Use the new $u_{1}^{(k+1)}$ and information from the previous time step for $u_{3}$ to solve for $u_{2}^{(k+1)}$:
$$ \frac{\partial u_{2}}{\partial t} + \nabla\cdot f_{1}(u_{1}^{(k+1)},u_{2}^{(k+1)},u_{3}^{(k)}) - \nabla\cdot(D_{2}(u_{2}^{(k+1)},u_{3}^{(k)})\nabla u_{2}^{(k+1)}) = 0 $$
3) Solve for $u_{3}^{(k+1)}$:
$$ \frac{\partial u_{3}}{\partial t} + \nabla\cdot f_{2}(u_{1}^{(k+1)},u_{2}^{(k+1)},u_{3}^{(k+1)}) - \nabla\cdot(D_{3}(u_{2}^{(k+1)},u_{3}^{(k+1)})\nabla u_{3}^{(k+1)}) = 0 $$
So the first equation is linear and the last two are non-linear. I am using Newton's method to handle the non-linearity. However, there are some bugs in my code and I'm a bit stuck trying to interpret the results.
I know this is long but I'm trying to be as detailed as possible here. Just a couple of questions:
1) I test the simple case where $u_{1} = u_{2} = u_{3} = 5+t$ and I get about machine precision error using linear basis functions. Newton's method is also converging quadratically. I'm using a timestep approximately equal to $h^{p+1}$. I can also increase it to $100h^{p+1}$ which surprisingly doesn't seem to affect the convergence much. Using this time step, I have:
/* sample Newton iterations */
******** Current Time: 7.812500 ********
u2 Iteration: 1, Error: 4.155186e-01
u2 Iteration: 2, Error: 2.480219e-02
u2 Iteration: 3, Error: 9.169256e-05
u2 Iteration: 4, Error: 1.250692e-09
u2 Iteration: 5, Error: 8.042476e-16
Iterations for u2 convergence: 5
u3 Iteration: 1, Error: 4.155186e-01
u3 Iteration: 2, Error: 2.480219e-02
u3 Iteration: 3, Error: 9.169256e-05
u3 Iteration: 4, Error: 1.250692e-09
u3 Iteration: 5, Error: 7.664044e-16
Iterations for u3 convergence: 5
/* error results */
*********** L2 Error ***********
u1 error in L2-norm: 4.53162e-12 4.64396e-12 6.15316e-12 1.07398e-11
u2 error in L2-norm: 4.12483e-13 7.30247e-13 1.39962e-12 2.76142e-12
u3 error in L2-norm: 4.09102e-13 7.29944e-13 1.39935e-12 2.76121e-12
*********** H1 Error ***********
u1 error in H1-norm: 1.05844e-11 2.03320e-11 3.84458e-11 7.47941e-11
u2 error in H1-norm: 6.40033e-13 1.33164e-12 2.71254e-12 5.47814e-12
u3 error in H1-norm: 6.34670e-13 1.33089e-12 2.71211e-12 5.47862e-12
Now the bizarre part is that if I take $u_{1}=5+t$, $u_{2}=6+t$, and $u_{3}=7+t$, my results are awful. I'm thinking maybe the first case was the exception rather than the norm, since it's the only example I've tested so far that seems to work. Anyway, the weird part about this one is that I'm getting about machine precision error for $u_{2}$ while the others are flatlining around 1.e0. I'm not sure how this could possibly be, since the solution to $u_{2}$ depends on the solutions to $u_{1}$ and $u_{3}$, which are incorrect if my code is to be believed. I observe quadratic convergence for $u_{2}$, but obviously not the others...
The time step is around $h^{p+1}$ here, but I observe similar results for smaller time steps.
******** Current Time: 0.074219 ********
u2 Iteration: 1, Error: 3.908065e-03
u2 Iteration: 2, Error: 2.219552e-06
u2 Iteration: 3, Error: 2.889154e-12
Iterations for u2 convergence: 3
u3 Iteration: 1, Error: 1.373826e-01
u3 Iteration: 2, Error: 1.564705e-02
u3 Iteration: 3, Error: 2.492157e-03
u3 Iteration: 4, Error: 7.872758e-04
u3 Iteration: 5, Error: 1.311116e-04
u3 Iteration: 6, Error: 2.997904e-05
u3 Iteration: 7, Error: 9.158730e-06
u3 Iteration: 8, Error: 1.519301e-06
u3 Iteration: 9, Error: 3.555316e-07
u3 Iteration: 10, Error: 1.067623e-07
u3 Iteration: 11, Error: 1.755777e-08
u3 Iteration: 12, Error: 4.195674e-09
u3 Iteration: 13, Error: 1.242190e-09
u3 Iteration: 14, Error: 2.024999e-10
u3 Iteration: 15, Error: 4.936768e-11
u3 Iteration: 16, Error: 1.443518e-11
u3 Iteration: 17, Error: 2.333785e-12
Iterations for u3 convergence: 17
*********** L2 Error ***********
u1 error in L2-norm: 3.02951e+00 5.43305e+00 5.96871e+00 4.25056e+00
u2 error in L2-norm: 1.45749e-13 2.71039e-13 3.69646e-13 4.95591e-13
u3 error in L2-norm: 6.29385e-01 8.30180e-01 1.11972e+00 1.13907e+00
*********** H1 Error ***********
u1 error in H1-norm: 7.86736e+00 1.81562e+01 3.67674e+01 3.97038e+01
u2 error in H1-norm: 2.33954e-13 2.00202e-12 4.95659e-12 1.03835e-11
u3 error in H1-norm: 2.00438e+00 3.91391e+00 6.24097e+00 7.06979e+00
I think maybe the most telling part is the convergence of the Newton iterations. Namely, what does it mean if I'm not observing quadratic convergence? My initial guess is just the solution at the previous time step, and since the time step is quite small, I'm pretty sure the initial guess is not the issue. I know that non-quadratic convergence can imply that the Jacobian is not being computed properly, but the first example is so similar to the second that I can't imagine that's the case...anyone have any insights here?
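A standard diagnostic when Newton's method stalls at linear convergence is comparing the assembled Jacobian to a finite-difference approximation of the residual. Here is a generic sketch (my own suggestion; the residual and Jacobian below are toy stand-ins — for the real DG code one would wrap the assembled residual the same way):

```python
import numpy as np

def fd_jacobian(F, u, h=1e-7):
    """Forward-difference approximation of the Jacobian of F at u."""
    n = len(u)
    J = np.empty((n, n))
    F0 = F(u)
    for j in range(n):
        up = u.copy()
        up[j] += h
        J[:, j] = (F(up) - F0) / h
    return J

# toy nonlinear residual standing in for the discretized equations
F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] - np.exp(u[1])])
J_analytic = lambda u: np.array([[2 * u[0], 1.0], [1.0, -np.exp(u[1])]])

u = np.array([1.0, 0.5])
err = np.abs(fd_jacobian(F, u) - J_analytic(u)).max()
print(err < 1e-5)  # True here; a large discrepancy would point at a Jacobian bug
```

If the entrywise discrepancy for the $u_3$ equation is much larger than $O(h)$, the missing terms (often the ones coming from state-dependent coefficients like $D_3(u_2,u_3)$ or the boundary terms) are usually easy to localize column by column.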
2) I'm a bit iffy on the spatial discretization of the convection term: $$ \int_{\Omega}\nabla\cdot f_{1}(u_{1},u_{2},u_{3})z = -\int_{\Omega}\nabla z\cdot f_{1}(u_{1},u_{2},u_{3}) + \int_{\Gamma}z(f_{1}\cdot\mathbf{n}) $$
(via Green's identity). The DG discretization is $$ -\sum_{E\in\mathcal{M}_{h}}\int_{E}\nabla z\cdot f_{1} + \sum_{e\in\Gamma_{\text{int}}}\int_{e}\left\{\left\{f_{1}\cdot\mathbf{n}\right\}\right\}[[z]] + \sum_{e\in\Gamma_{\text{out}}}\int_{e}(f_{1}\cdot\mathbf{n})z + \sum_{e\in\Gamma_{\text{in}}}\int_{e}(f_{1}\cdot\mathbf{n})z $$
where {{ }} is the average and [[ ]] is the jump. In all of the literature I've come across, the inflow integral is moved to the right-hand side. So if we're considering the second equation (solving for $u_{2}$), is this correct?
$$ \sum_{e\in\Gamma_{\text{in}}}\int_{e}(f_{1}(u_{1},g_{2,D},u_{3})\cdot\mathbf{n})z $$
So when computing the Jacobian for the second equation, does this inflow term not contribute to the Jacobian at all, since the boundary datum $g_{2,D}$ is known?
I'm able to solve a single transient, non-linear convection diffusion equation using Newton's method, so I'm not entirely sure what it is that I'm missing.
|
I have recently been researching the Happynet problem from the standpoint of approximation, and I have found that there is little interest in this topic.
What's the reason for this? Are there any related problems, that get better attention?
The Happynet problem is related to minimizing the energy in a Hopfield net.
The problem is defined as follows: Given a graph $G = (V,E)$ and edge weights $w: E \to \mathbb{Z}$, find a function $s: V \to \{-1,1\}$ such that for all $v \in V$, the sum $$\sum_{u: \{v,u\} \in E} s(v)s(u)w(\{u,v\})$$ is non-negative.
Put otherwise, the task is to find a cut in an edge-weighted graph $G$ such that for each node $v$ the total weight of cut-edges incident to $v$ is at most as large as the total weight of non-cut-edges incident to $v$.
Curiously, a feasible solution always exists. However, it seems that no polynomial-time algorithm is known for solving the problem. For more information, see e.g. Giannakos and Pottie (2005).
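The existence claim comes from the standard potential-function argument: flipping an "unhappy" vertex strictly increases $\sum_{\{u,v\}\in E} s(u)s(v)w(\{u,v\})$, which is bounded, so naive local search always terminates (though possibly after exponentially many flips, which is why the problem sits in PLS rather than being known to be polynomial-time solvable). A minimal sketch, with a hypothetical 4-vertex instance:

```python
import random

def happynet_local_search(n, edges):
    """Naive local search for Happynet.

    edges: dict mapping frozenset({u, v}) -> integer weight.
    Flipping a vertex with negative incident sum strictly increases the
    potential sum_{u,v} s(u)s(v)w(u,v), so the loop always terminates.
    """
    s = {v: random.choice([-1, 1]) for v in range(n)}

    def happiness(v):
        return sum(s[v] * s[u] * w
                   for e, w in edges.items() if v in e
                   for u in e if u != v)

    while True:
        unhappy = [v for v in range(n) if happiness(v) < 0]
        if not unhappy:
            return s
        s[random.choice(unhappy)] *= -1

# Hypothetical 4-cycle with one negative edge:
edges = {frozenset({0, 1}): 2, frozenset({1, 2}): -3,
         frozenset({2, 3}): 1, frozenset({3, 0}): 2}
s = happynet_local_search(4, edges)
assert all(sum(s[v]*s[u]*w for e, w in edges.items() if v in e
               for u in e if u != v) >= 0 for v in range(4))
```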
|
According to
the uniformization theorem, every compact Riemann surface $\Sigma$ of genus $g\ge2$ is isomorphic to a quotient of the upper half-plane $\mathbb{H}$ by the action of a Fuchsian group $\Gamma$:
$$\Sigma\simeq\frac{\mathbb{H}}{\Gamma}$$
On the Teichmüller or moduli space of such Riemann surfaces, one can consider
Fenchel-Nielsen coordinates $\{\ell_a,\tau_a\}_{a=1}^{3g-3}$.
On the other hand, Selberg zeta function for a compact Riemann surface can be written as:
$$Z(s)\equiv\prod_{\{\gamma_p\}}\prod_{n}\left(1-e^{-(n+s)\ell_{\gamma_p}}\right)$$
In which $\{\gamma_p\}$ is the set of primitive elements of Fuchsian group $\Gamma$ and $\ell_{\gamma_p}$ is the length of the corresponding
closed geodesic with respect to the hyperbolic metric on $\Sigma$ induced from the Poincaré metric on $\mathbb{H}$. There are several questions: What is the number of primitive elements of $\Gamma$? (There are infinitely many closed geodesics on $\Sigma$, so it should be infinite.) What is the number of generators of $\Gamma$? Is there any relation between the number of generators of $\Gamma$ and the genus of the surface? There are some quantities on the Riemann surface that can be expressed in terms of the Selberg zeta function; for example, the determinant of the Laplacian acting on various tensor fields on the Riemann surface can be written in terms of it. These quantities thus depend on the Teichmüller/moduli parameters. So the natural question is: what is the relation between the Fenchel-Nielsen length coordinates $\{\ell_{a}\}$ and the lengths $\{\ell_{\gamma_p}\}$ appearing in the Selberg zeta function?
Regarding the last question, it seems that we can consider $3g-3$ primitive elements $\{\hat{\gamma}_{p_i}\}_{i=1}^{3g-3}$ of $\Gamma$ and identify their lengths with Fenchel-Nielsen length coordinates $\{\ell_a\}_{a=1}^{3g-3}$:
$$\ell_{\hat{\gamma}_{p_i}}\equiv\ell_i \qquad i=1,\cdots,3g-3$$
If that is the case:
What is the role of other lengths? Are they related to the set $\{\hat{\gamma}_p\}$ by the action of Mapping class group?
Following Quillen, the determinant of the Dirac operator is defined through the corresponding Laplacian, which in turn can be expressed in terms of the Selberg zeta function. Given that FN coordinates cannot give a complex structure to Teichmüller space, is there any notion of "holomorphic factorization" which makes it possible to define the determinant of the Dirac operator directly in terms of the Selberg zeta function?
If that is not the case,
How is it possible to obtain dependence of determinant of the Dirac operator on Fenchel-Nielsen coordinates?
|
Answer
b. 8 hours
Work Step by Step
We use the equation for the period to find: $T=\sqrt{\frac{4\pi^2r^3}{GM_E}}$ $T=\sqrt{\frac{4\pi^2(20{,}200\times10^3)^3}{(6.67\times10^{-11})(5.97\times10^{24})}}\approx 2.86\times10^{4}\ \mathrm{s}\approx \fbox{8 hours}$
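A quick numeric check of the arithmetic, with the constants copied from the step above:

```python
import math

G   = 6.67e-11          # m^3 kg^-1 s^-2
M_E = 5.97e24           # kg
r   = 20_200e3          # m (orbital radius used in the solution)

# Kepler's third law for a circular orbit: T = sqrt(4 pi^2 r^3 / (G M))
T = math.sqrt(4 * math.pi**2 * r**3 / (G * M_E))
print(T, T / 3600)      # ~2.86e4 s, ~7.9 h, which rounds to 8 hours
```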
|
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This makes it possible to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}$ ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
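The rewriting loop described above is easy to sketch generically. As an illustration (not a genuine Dehn presentation of a hyperbolic group), the free-reduction rules of a free group are length-reducing replacements of exactly this shape:

```python
def dehn(word, rules):
    """Dehn's algorithm skeleton: repeatedly replace a subword u_i by the
    strictly shorter v_i until no rule applies; the word represents the
    identity iff the loop terminates at the empty word."""
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = word.find(u)
            if i != -1:
                word = word[:i] + v + word[i + len(u):]
                changed = True
                break
    return word

# Free reduction in the free group on {a, b}, inverses written A, B:
rules = [("aA", ""), ("Aa", ""), ("bB", ""), ("Bb", "")]
assert dehn("abBA", rules) == ""      # a b b^-1 a^-1 is the identity
assert dehn("aabA", rules) == "aabA"  # already reduced, so left alone
```

For an actual hyperbolic group the rules come from the relators as in the definition, and the same loop decides the word problem in linear time.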
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.
== Branches of algebraic graph theory ==
=== Using linear algebra ===
The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C'(1+x)e^{-x}$, which is not a polynomial unless $C' = 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
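The kernel computation from this exchange can be checked mechanically. A small sketch, representing a polynomial by its list of coefficients (all names here are illustrative):

```python
def derivative(p):
    # p = [c0, c1, c2, ...] represents c0 + c1*x + c2*x^2 + ...
    return [i * c for i, c in enumerate(p)][1:] or [0]

def F(p):
    """F(P) = x*P'' + (x+1)*P''' acting on coefficient lists."""
    p2 = derivative(derivative(p))
    p3 = derivative(p2)
    out = [0] * (len(p) + 1)
    for i, c in enumerate(p2):   # x * P'' shifts degrees up by one
        out[i + 1] += c
    for i, c in enumerate(p3):   # (x+1) * P''' = x*P''' + P'''
        out[i + 1] += c
        out[i] += c
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

assert F([5, 3]) == [0]          # P = 5 + 3x is in the kernel
assert F([0, 0, 1]) == [0, 2]    # P = x^2    -> F(P) = 2x
assert F([0, 0, 0, 1]) == [6, 6, 6]  # P = x^3 -> F(P) = 6x^2 + 6x + 6
```

Since no degree-2 or degree-3 term survives, $\ker F = \{ax + b\}$ as claimed in the chat.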
|
Title:
twolinesBfield
Post by:
ahmedelshfie on June 30, 2010, 07:21:22 pm
The following applet is
twolinesBfield
Created by Prof. Hwang, modified by Ahmed
Original project twolinesBfield (http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1451)
There is a current $I$ flowing in the z direction (along the z-axis), which produces a circular magnetic field.
There is another current $i$, initially along the x direction ($i \ll I$); the direction of this current can be adjusted with the slider.
The magnetic force on a current segment $i\, d\vec{\ell}$ in the magnetic field $\vec{B}=\frac{\mu_0 I}{2\pi r}\hat{\theta}$ is $d\vec{F}=i\, d\vec{\ell}\times \vec{B}$
The direction of the current is shown with a red arrow; gray arrows are the magnetic field, and blue arrows are the interaction force.
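For a segment parallel to the long wire, the cross product in $d\vec{F}=i\,d\vec{\ell}\times\vec{B}$ reduces to the familiar force-per-unit-length expression, which is easy to sanity-check numerically (the values below are arbitrary, not taken from the applet):

```python
import math

mu0 = 4 * math.pi * 1e-7   # T*m/A

def force_per_length(I, i, r):
    """|dF/dl| on a current segment i parallel to a long wire I at distance r.

    B = mu0*I/(2*pi*r) is perpendicular to the segment, so |dF| = i*dl*B;
    parallel currents attract, antiparallel currents repel."""
    return mu0 * I * i / (2 * math.pi * r)

# Hypothetical values: I = 1 A, i = 0.01 A, r = 0.1 m
print(force_per_length(1.0, 0.01, 0.1))   # 2e-8 N/m
```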
|
A new proof of the boundedness results for stable solutions to semilinear elliptic equations
1.
ICREA, Pg. Lluis Companys 23, 08010 Barcelona, Spain
2.
Universitat Politècnica de Catalunya, Departament de Matemàtiques, Diagonal 647, 08028 Barcelona, Spain
3.
BGSMath, Campus de Bellaterra, Edifici C, 08193 Bellaterra, Spain
We consider the class of stable solutions to semilinear equations $ -\Delta u = f(u) $ in a bounded smooth domain of $ \mathbb{R}^n $. Since 2010 an interior a priori $ L^\infty $ bound for stable solutions is known to hold in dimensions $ n\le 4 $ for all $ C^1 $ nonlinearities $ f $. In the radial case, the same is true for $ n\leq 9 $. Here we provide a new, simpler, and unified proof of these results. It establishes, in addition, some new estimates in higher dimensions, for instance $ L^p $ bounds for every finite $ p $ in dimension 5.
Since the mid nineties, the existence of an $ L^\infty $ bound holding for all $ C^1 $ nonlinearities when $ 5\leq n\leq 9 $ was a challenging open problem. This has been recently solved by A. Figalli, X. Ros-Oton, J. Serra, and the author, for nonnegative nonlinearities, in a forthcoming paper.
Mathematics Subject Classification:35K57, 35B65. Citation:Xavier Cabré. A new proof of the boundedness results for stable solutions to semilinear elliptic equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12) : 7249-7264. doi: 10.3934/dcds.2019302
|
I was reading the exponent Combination law in
proofwiki.org and got confused in one part of the proof. The proof is as follows:
Let $a \in \mathbb{R}_{>0}$.
Let $x, y \in \mathbb{R}$.
Let $a^x$ be defined as $a$ to the power of $x$.
Then:
$a^x a^y = a^{x + y}$
Proof:
$a^{x+y} = \displaystyle \exp \left({\left({x + y}\right) \ln a}\right)$ (Definition of Power to Real Number)
$= \displaystyle \exp \left({x \ln a + y \ln a}\right)$
$= \displaystyle \exp \left({x \ln a}\right) \exp \left({y \ln a}\right)$ (Exponent of Sum)
$= \displaystyle a^x a^y$
My question is with this step: $\displaystyle \exp \left({x \ln a}\right) \exp \left({y \ln a}\right)$. I read that $\exp \left({x + y}\right) = \left({\exp x}\right) \left({\exp y}\right)$.
How can I prove that? I can't find any explanation or proof of it. Also, which book would be a good starting point for this type of math? I don't know if this is part of abstract algebra or what background I need to understand it. Just to let you know, I am a person who developed a great interest in math in my 30s, and I am learning by myself, doing what I can within my limitations.
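For what it's worth, the "Exponent of Sum" step follows from the power-series definition of $\exp$ via the Cauchy product and the binomial theorem; the rearrangement is justified because both series converge absolutely. This is introductory real analysis rather than abstract algebra. A sketch:

```latex
\exp x \,\exp y
  = \Bigl(\sum_{n=0}^{\infty}\frac{x^n}{n!}\Bigr)
    \Bigl(\sum_{m=0}^{\infty}\frac{y^m}{m!}\Bigr)
  = \sum_{k=0}^{\infty}\sum_{j=0}^{k}\frac{x^j}{j!}\cdot\frac{y^{k-j}}{(k-j)!}
  = \sum_{k=0}^{\infty}\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}x^j y^{k-j}
  = \sum_{k=0}^{\infty}\frac{(x+y)^k}{k!}
  = \exp(x+y).
```

Any introductory real analysis text that covers power series proves this identity.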
|
Let's suppose we have
$$\gcd(a,b)\cdot\gcd(b,c)\cdot\gcd(c,a)\cdot \gcd(a,b,c) = abc$$
and look at a prime $p$ dividing at least one of $a,b,c$.
Suppose $p$ divides at most two of the three, say $p\nmid c$, and $a = p^\alpha\cdot a'$, $b = p^\beta\cdot b'$ with $p\nmid a'b'$. Then on the left hand side, $p$ does only occur in $\gcd(a,b)$, with exponent $\min\{\alpha,\beta\}$. But on the right hand side, it occurs with exponent $\alpha+\beta = \min\{\alpha,\beta\} + \max\{\alpha,\beta\}$, and since the exponents must be equal, it follows that $\max\{\alpha,\beta\} = 0$, contradicting the assumption that $p$ divides at least one of $a,b,c$.
So every prime dividing at least one of $a,b,c$ must divide all three. Let the exponents of $p$ be $\alpha \leqslant \beta \leqslant \gamma$. Then on the left hand side, $p$ occurs with the exponent
$$\alpha + \beta + \alpha + \alpha = 3\alpha + \beta,$$
and on the right it occurs with the exponent $\alpha + \beta + \gamma$. It follows that $\gamma = 2\alpha$.
Furthermore, the condition that none of $a,b,c$ shall be an integer multiple of any other implies that for each pair $(x,y)$ of two of the numbers, there is at least one prime $p_{xy}$ that occurs with a larger exponent in the prime factorisation of $x$ than it occurs in the prime factorisation of $y$.
Any prime that is not one of the $p_{xy}$ can be removed from all of $a,b,c$, and leads to a solution with smaller numbers, and smaller $a+b+c$, so the primes involved in the minimal solution are precisely the
$$\{p_{ab},p_{ac},p_{ba},p_{bc},p_{ca},p_{cb}\}.$$
That set must contain at least two primes, and it contains at most six. It is clear that for the minimal solution, the set must contain the $k$ smallest primes, $2 \leqslant k \leqslant 6$.
Finding the minimal solution is then a small amount of work even brute-forcing.
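The exponent analysis above can be cross-checked by a direct search. A sketch (the bound 160 is an arbitrary cutoff chosen only to keep the search fast; it is large enough to contain the minimal triple predicted by the prime-exponent argument):

```python
from math import gcd
from itertools import combinations

def condition(a, b, c):
    # gcd(a,b) * gcd(b,c) * gcd(c,a) * gcd(a,b,c) == abc
    return gcd(a, b) * gcd(b, c) * gcd(c, a) * gcd(a, gcd(b, c)) == a * b * c

def no_divisibility(a, b, c):
    # none of the three numbers is an integer multiple of another
    return all(x % y != 0 for x, y in
               [(a, b), (b, a), (a, c), (c, a), (b, c), (c, b)])

best = min(((a, b, c)
            for a, b, c in combinations(range(2, 160), 3)
            if condition(a, b, c) and no_divisibility(a, b, c)),
           key=sum)
print(best, sum(best))
```

With three primes and per-prime exponent patterns $(1,1,2)$ rotated among the three numbers, the construction $\{p^2qr,\; pq^2r,\; pqr^2\}$ for $\{p,q,r\}=\{2,3,5\}$ gives $\{60, 90, 150\}$, which satisfies both constraints; the search above confirms whether it is minimal.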
|
I'm studying the quantum mechanics of an infinite square well from a computational standpoint.
My eigenfunctions are defined as $u_n(x)=\sqrt{2/L}\sin\left(n\pi x/L\right),\quad 0 \le x \le L, \quad n=1,2,3,\dots$ and zero outside of the $[0,L]$ interval.
I'd like to transform the wave function from position domain to the momentum domain. My book says that this is achieved by writing the Fourier transform of $u_n(x)$:
$\phi_n(k)=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} u_n(x)e^{-i k x}dx$
ClearAll["Global`*"];
L = 1;
u[n_, x_] := Sqrt[2/L] Sin[n π x]
φ[n_, k_] := Assuming[{n ∈ Integers, {k, x} ∈ Reals},
  1/(Sqrt[2π]) Integrate[u[n, x] Exp[-I k x], {x, -∞, ∞}]]
Integrate::idiv: Integral of E^(-I k x) Sin[n π x] does not converge on {-∞,∞}. >>
If I, though, use
FourierTransform[], I get the result:
FourierTransform[u[n, x], x, k, FourierParameters -> {0, -1}]
-I Sqrt[π] DiracDelta[k - n π] + I Sqrt[π] DiracDelta[k + n π]
But my book says that the result should be:
$\phi_n(k) = \sqrt{\pi L} n \frac{1-(-1)^n e^{-i L k}}{(n\pi -L k)(n\pi + Lk)}$
I tried to get
Mathematica to fully simplify the output of
FourierTransform[], but I/it couldn't.
So my questions are:
Why does the integral not converge when written manually but it does converge when it is being calculated via
FourierTransform[]?
How could I prove that the output of
FourierTransform[]is equivalent to the formula that is written in my book for $\phi_n(k)$ ?
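One observation that may help reconcile the results: $u_n$ vanishes outside $[0,L]$, so the book's transform is really the finite integral $\int_0^L$, while both the manual `Integrate` and `FourierTransform` act on $\sqrt{2/L}\sin(n\pi x)$ extended over all of $\mathbb{R}$ (hence the divergence, and the Dirac deltas). The book's closed form can be checked against the finite integral numerically; a sketch in Python with arbitrary test values of $n$ and $k$ (Simpson's rule stands in for the exact integral):

```python
import cmath, math

L, n, k = 1.0, 2, 1.3   # hypothetical test values

def u(x):
    return math.sqrt(2 / L) * math.sin(n * math.pi * x / L)

def simpson(f, a, b, m=2000):   # composite Simpson rule, m even
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, m, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, m, 2))
    return s * h / 3

# phi_n(k) as a finite integral, since u_n vanishes outside [0, L]
phi_numeric = simpson(lambda x: u(x) * cmath.exp(-1j * k * x), 0, L) / math.sqrt(2 * math.pi)

# the closed form quoted from the book
phi_book = (math.sqrt(math.pi * L) * n
            * (1 - (-1) ** n * cmath.exp(-1j * L * k))
            / ((n * math.pi - L * k) * (n * math.pi + L * k)))

print(abs(phi_numeric - phi_book))   # tiny: the two expressions agree
```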
|
It's easier to think about the Intermediate Value Theorem, which is equivalent to the Brouwer Fixed-Point Theorem for the unit interval.
The main issue is that dichotomy for (Cauchy) real numbers is not constructively valid: given two real numbers $\alpha,\beta$, there is no algorithm to decide whether $\alpha \leq \beta$ or $\alpha \geq \beta$. This principle is equivalent to the Lesser Limited Principle of Omniscience (LLPO), and its non-constructive nature is illustrated by a classic Brouwerian counterexample:
Define the sequence of rationals $(a_n)_{n=0}^\infty$ by $a_n = (-2)^{-k}$ if $k \leq n$ and the first occurrence of the sequence $736667843909774044615061702878$ in $\pi$ begins $k$-digits after the decimal point; if there is no such $k \leq n$, then $a_n = 0$. This is a well-defined Cauchy sequence (with a known rate of convergence) so the limit $\alpha = \lim_{n\to\infty} a_n$ is a well-defined real number. Is $\alpha \geq 0$ or $\alpha \leq 0$?
If the given sequence does occur in $\pi$, then we will eventually know that $\alpha > 0$ or $\alpha < 0$ and respond accordingly. However, if the given sequence does not occur in $\pi$, though both answers are valid in this case, neither answer can be proven correct without an infinite amount of information about the digits of $\pi$ (which the example assumes is not known at this time).
Returning to the Intermediate Value Theorem, consider the piecewise linear function $f:[-1,1]\to[-1,1]$ that interpolates the points $(-1,-1),(-1/2,\alpha),(1/2,\alpha),(1,1)$, so that $f(-1) < 0 < f(1)$. The Intermediate Value Theorem says that there is a number $r \in [-1,1]$ such that $f(r) = 0$. Note that $\alpha \geq 0$ iff $r \leq 1/2$ and $\alpha \leq 0$ iff $r \geq -1/2$. Now, determining whether $r \leq 1/2$ or $r \geq -1/2$ is easy: compute $r$ to enough accuracy to know that it lies within an open interval with length $1$ and rational endpoints; that interval cannot contain both $1/2$ and $-1/2$ and that is enough to know whether $r \leq 1/2$ or $r \geq -1/2$.
So, from the above, we see that if we had a constructive proof of the Intermediate Value Theorem, we would also have a constructive proof of dichotomy. Since there is no constructive proof of dichotomy, there cannot be a constructive proof of the Intermediate Value Theorem and, for the same reason, there cannot be a constructive proof of the Brouwer Fixed-Point Theorem.
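For a concrete rational $\alpha$, the "easy" computation is just bisection. A sketch, taking the interpolation nodes as $(-1,-1),(-1/2,\alpha),(1/2,\alpha),(1,1)$ so that $f(-1)<0<f(1)$; the constructive subtlety is precisely that the sign test inside the loop is not decidable for an arbitrary real $\alpha$:

```python
def make_f(alpha):
    # piecewise-linear interpolation of (-1,-1), (-1/2,alpha), (1/2,alpha), (1,1)
    pts = [(-1.0, -1.0), (-0.5, alpha), (0.5, alpha), (1.0, 1.0)]
    def f(x):
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return f

def locate_root(f, lo=-1.0, hi=1.0, width=0.9):
    """Bisect (assuming f(lo) <= 0 <= f(hi)) until the bracketing interval
    is shorter than 1; it then cannot contain both -1/2 and 1/2."""
    while hi - lo > width:
        mid = (lo + hi) / 2
        if f(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = locate_root(make_f(0.3))
assert hi <= 0.5        # certifies r <= 1/2, i.e. alpha >= 0
lo, hi = locate_root(make_f(-0.3))
assert lo >= -0.5       # certifies r >= -1/2, i.e. alpha <= 0
```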
The Brouwerian counterexample above might not be convincing since we (at least believe) that we know nontrivial information about $\pi$. Of course, the specific number $\pi$ is irrelevant; it's just the traditional choice for Brouwerian counterexamples. Here is a similar example that relies on the existence of inseparable pairs of computably enumerable sets.
Say a sequence $(q_n)_{n=0}^\infty$ of rational numbers is
rapidly Cauchy if $|q_n - q_m| \leq 1/2^N$ for all $m,n > N$. (This is one of the typical definitions of Cauchy real numbers.) Suppose we did have an algorithm $M$ to decide whether the limit of a rapidly Cauchy sequence is nonnegative or nonpositive.
Now given an index $e$, define $(a_{e,n})_{n=0}^\infty$ to be $a_{e,n} = (-1)^m/2^s$ if the $e$-th Turing machine halts in exactly $s \leq n$ steps and outputs $m$, and set $a_{e,n} = 0$ if the $e$-th Turing machine does not halt in $n$ or fewer steps. Each of these sequences is an effectively computable rapidly Cauchy sequence. If I apply my proposed $M$ to the $e$-th sequence, I obtain a total computable function $s:\mathbb N \to \{0,1\}$ such that if $s(e) = 0$ then $\lim_{n\to\infty} a_{e,n} \leq 0$ and if $s(e) = 1$ then $\lim_{n \to \infty} a_{e,n} \geq 0$.
Note that $\lim_{n\to\infty} a_{e,n} > 0$ iff $e$ belongs to the set $A$ of all indices for Turing machines that halt with even output, and $\lim_{n \to\infty} a_{e,n} \lt 0$ iff $e$ belongs to the set $B$ of all indices for Turing machines that halt with odd output. The pair $A,B$ is one of the standard prototypical examples of an inseparable pair, so there is no computable set $C$ such that $A \subseteq C$ and $B \cap C = \varnothing$. However, the set $C = \{e : s(e) = 1\}$ does exactly that!
|
(Note: This question is related to my previous mathoverflow question, "Critical Points in $ZF$ without Choice".)
In the
Stanford Encyclopedia of Philosophy entry "Non-Wellfounded Set Theory" (Section 2.2, "The Foundation Axiom"), one has the following statement (my comments regarding it are in brackets):
The Foundation Axiom ($FA$) may be stated in different ways. Here are some formulations; their equivalence in the presence of the other [$ZF$?] axioms is a standard result of elementary [$ZF$?] set theory [the last two, (4) and (5), are particularly relevant to my previous question]:
(4). For every set $x$, there is an ordinal $\alpha$ such that $x \in V_{\alpha}$. [seemingly necessary for Asaf's proof in his answer to my previous question]
(5). $V_{[ZF?]}$=$WF$ [the class of well-founded sets].
Question 1: Can the equivalence of (4) and (5) (and their equivalence to $FA$) be proved in $ZF$ alone, without recourse to Choice?
Question 2: Regarding (5) (i.e. $V = WF$--my comment excluded), does $V = V_{ZF}$? I ask this question because of the following: in the Daghighi, Golshani, Hamkins, and Jerabek paper, "The Role of the Foundation Axiom in the Kunen Inconsistency", they claim (and prove, in $GBC^{-f}$ and in $ZFC^{-f}$) that the Kunen inconsistency (in the following form: "There is no nontrivial $\Sigma_1$-elementary embedding $j: WF \rightarrow WF$.") holds for $WF$. If $V_{ZF} = WF$, then it seems that there must be a way to adjust their proof so that it doesn't need Choice (in which case, a major open problem will have seemingly been solved). What, if anything, is wrong with this picture?
|
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
muzik wrote: Looks like my apgsearch glitched out and submitted a mutant Siamese twin haul: I mean, not that that's a major problem or anything, just kinda interesting

It does that whenever the Internet connection is temporarily down.
What do you do with ill crystallographers? Take them to the mono-clinic!
Makes sense. My internet connection is unpredictable overnight
Looks like apgcodes for Seeds patterns actually work:
https://catagolue.appspot.com/object/xq1_69/b2
https://catagolue.appspot.com/object/xp2_12/b2
Of course -- why wouldn't they? The site only verifies that the code actually describes the object named.
Fun fact 1: the site also accepts what you might call "secondary" codes: ones that result from different readings of the same object, but which are not the canonical code on account of being too long, or (if not that) lexicographically sorting after it. For instance, xs6_2552 appears as a perfectly ordinary still life. This is a good reason to always use a script to figure out an object's apgcode.
Fun fact 2: xp2_12, contrary to what the comment on its page says, has been seen -- it's appeared in a number of semitotalistic rules, e.g. b2-a4s23. The moon, on the other hand, has not, and in fact to my knowledge there are no photons known to Catagolue at all. Are there any rules which have photons but do not explode?
Apple Bottom wrote: Are there any rules which have photons but do not explode?

Indeed: viewtopic.php?f=11&t=803&start=85
It has some pretty crazy rakes and breeders, though. (maybe it could answer my question on the MMMM breeder in the questions thread?)
muzik wrote: Indeed: viewtopic.php?f=11&t=803&start=85

I should've been more specific, I reckon. Are there any semi-totalistic or outer-totalistic rules which have photons but do not explode, i.e. rules that are supported by Catagolue and that are (in principle at the very least) soup-searchable with Calcyman's apg* tools, and/or A for Awesome's hacked version of apgsearch?
Put more succinctly, is there a chance we'll see a photon on Catagolue anytime soon?
Apple Bottom wrote: Put more succinctly, is there a chance we'll see a photon on Catagolue anytime soon?

Probably not too soon. Currently, my hacked version (and indeed any of the python versions) will break upon encountering any p1 spaceship, due to a small detail in the implementation of the bijoscar() function. Unless someone finds a B2 outer-totalistic rule that doesn't explode (which doesn't seem likely; the best I've been able to find are rules that sometimes only grow linearly (in the form of wickstretchers), rather than quadratically) and is searchable by apgmera, it'll probably be a while.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome wrote: Currently, my hacked version (and indeed any of the python versions) will break upon encountering any p1 spaceship, due to a small detail in the implementation of the bijoscar() function.

Can you not fix that?

muzik wrote: Can you not fix that?

I probably can, but I have a lot of other bugs and things I have to fix before I release a new version, so it's going to take a while. I also don't entirely know whether the apgsearch algorithm would even work with rules like cb2, given all of the natural puffers, rakes, guns, and breeders present.
b3s12/C1 hasn't updated in hours and there's a bunch of hauls lined up
EDIT: seems many other rules are doing this
EDIT2: including normal life....the fuuuuu...?
It's having a hard time searching B2e3/S23-j, an extremely stable rule. What's going on (and I said heyyeyaaeyaaaeyaeyaa)
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
drc wrote: It's having a hard time searching B2e3/S23-j, an extremely stable rule. What's going on (and I said heyyeyaaeyaaaeyaeyaa)

I think you meant to post this on the Hacking apgsearch thread, but in any case something is going wrong with the rule parsing. When I tried a search with apgsearch-2016-2-06-v0.54+0.21i-update.py it was actually trying to search B23/S23 - doomed to failure. Searching works fine when I used my version of non-totalistic apgsearch (apgsearch-isotropic-v0.2.py). The search runs as expected. (No upload capability though.) I have no idea what's going wrong there.
Btw, that's a nice rule, lots of variety in the natural spaceships.
wildmyron wrote: I think you meant to post this on the Hacking apgsearch thread, but in any case something is going wrong with the rule parsing. When I tried a search with apgsearch-2016-2-06-v0.54+0.21i-update.py it was actually trying to search B23/S23 - doomed to failure. Searching works fine when I used my version of non-totalistic apgsearch (apgsearch-isotropic-v0.2.py). The search runs as expected. (No upload capability though.) I have no idea what's going wrong there. Btw, that's a nice rule, lots of variety in the natural spaceships.

Did either of you generate the rule table before searching the rule? This is a drawback of 0.21i: it does not autogenerate the rule table for you; you have to use isotropic-rulegen.py to make the rule and enter it using an underscore, not a slash. I'll try to improve this when I get back from vacation in three weeks, or sooner if I can.
A for awesome wrote: Did either of you generate the rule table before searching the rule? This is a drawback of 0.21i: it does not autogenerate the rule table for you; you have to use isotropic-rulegen.py to make the rule and enter it using an underscore, not a slash. I'll try to improve this when I get back from vacation in three weeks, or sooner if I can.

I did, but it still searches like 3 soups a second
Kazyan wrote: Add apgsearch.py to Golly's Scripts folder, then double-click the file from within Golly, instead of your file explorer.

muzik wrote: When I do this, and click on it, it just gives a dialog box "Could not load the Python library" and asks me to type in the file address for python27.dll, and even when I do, the same box just comes up again. Tried both with python27.dll at the end and not. Where exactly should the file be located? When I searched for it, I found it in system32, but I'm not sure if that's the correct file. Is there a way to generate one?

Moth-Wingthane wrote: This issue is exactly what I have with trying to get v1 running on Raspbian on a RPi. Everything is 32bit, so no issue there, but I just can't see what needs to be entered here to make it happy.

On my (Ubuntu) laptop I had the same problem - turns out that sometimes when Python updated, the file name changed. The old version was (as expected) libpython2.7.so (which didn't work), while the new file was named libpython2.7.so.1. Maybe you have a similar problem.
Creator of multiple rules that may or may not be what you'd expect
Rhombic wrote: When I search for B37/S2-i34q, it just goes on with B37/S234

Non-isotropic rules aren't supported directly; you have to create a rules file first, e.g. using the isotropic-rule.py script (which I think is also floating around on the forums here somewhere). Running that script and entering "B37/S2-i34q" will create a file called B37_S2-i34q.rule; when then asked by apgsearch (Aidan's version) what rule to search, enter "B37_S2-i34q".

Apple Bottom wrote: Non-isotropic rules aren't supported directly; you have to create a rules file first, e.g. using the isotropic-rule.py script (which I think is also floating around on the forums here somewhere).

The isotropic-rule-gen.py script is hiding on the Rule Request thread.
Thank you very much everyone, your help is greatly appreciated!
A bit disappointed at the slower searching speeds compared to totalistic rules (due to it being RuleTable) but better than having nothing at all.
- Rhombic
Definitions: Let $T$ (for "time") be a random variable $T \sim \text{Exp}(\lambda)$ and $\Delta t$ is a realization (or called an observed value) of $T$. Let $D$ (for "delay") be a random variable $D \sim \text{Exp}(\mu)$. All the random variables (including those involved below) are mutually independent. Timed Balls-into-Bins Model: There are $n$ bins. Consider two robots $R_1$ and $R_2$ which can produce multiple balls instantaneously. At time 0, robot $R_1$ (1) produces $n$ balls instantaneously; (2) Immediately, these $n$ balls are independently sent to the $n$ bins, one ball per bin; (3) The delays for the balls going from the robot to its destination bin are IID with the same distribution as $D$ defined above. At time $\Delta t$ (defined above), robot $R_2$ independently does exactly the same thing as robot $R_1$ does (i.e., (1), (2), and (3) for robot $R_1$ above).
Consider the time point $t$ when half (more precisely, $\lfloor n/2 \rfloor + 1$) of the $n$ bins have received the balls from $R_2$ (no constraints on the balls from $R_1$), and denote the set of these $\lfloor n/2 \rfloor + 1$ bins by $B$.
Question: What is the probability $\mathbb{P}$ of the event $E$ that there exists a bin $b_{\Box} \in B$ which receives a ball from $R_1$ before receiving a ball from $R_2$?
Have similar balls-into-bins models been studied in the literature?
Any suggestions (not a complete solution) towards a closed form/approximation/numerical results (by using mathematical systems)/(even) simulations of the probability $\mathbb{P}$? It is OK to simplify this model for possible tractability, for example, by using $D \sim U(a,b)$.
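Since the question explicitly allows simulations, here is a minimal Monte Carlo sketch in Python. The function name `estimate_P` and the parameter values are my own, purely illustrative choices; the model itself follows the definitions above (exponential inter-robot time $T$ and exponential per-ball delays $D$).

```python
import random

def estimate_P(n, lam, mu, trials=5000, rng=random):
    """Monte Carlo estimate of P(E) for the timed balls-into-bins model.
    lam is the rate of T (the time between the two robots' broadcasts);
    mu is the rate of each transmission delay D."""
    hits = 0
    for _ in range(trials):
        dt = rng.expovariate(lam)                          # realization of T
        a1 = [rng.expovariate(mu) for _ in range(n)]       # R1 arrival times
        a2 = [dt + rng.expovariate(mu) for _ in range(n)]  # R2 arrival times
        # B = indices of the first floor(n/2)+1 bins to receive R2's ball
        order = sorted(range(n), key=lambda i: a2[i])
        B = order[: n // 2 + 1]
        # E: some bin in B received its R1 ball before its R2 ball
        if any(a1[i] < a2[i] for i in B):
            hits += 1
    return hits / trials

random.seed(1)
p = estimate_P(n=100, lam=1.0, mu=1.0)
```

One sanity check: when the delays are much faster than the inter-robot time (large `mu` relative to `lam`), nearly every bin receives its $R_1$ ball well before time $\Delta t$, so the estimate should be close to 1.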
Astrophysics > High Energy Astrophysical Phenomena

Title: The Optical Afterglow of GW170817: An Off-axis Structured Jet and Deep Constraints on a Globular Cluster Origin
(Submitted on 21 Aug 2019)
Abstract: We present a revised and complete optical afterglow light curve of the binary neutron star merger GW170817, enabled by deep Hubble Space Telescope (HST) F606W observations at $\approx\!584$ days post-merger, which provide a robust optical template. The light curve spans $\approx 110-362$ days, and is fully consistent with emission from a relativistic structured jet viewed off-axis, as previously indicated by radio and X-ray data. Combined with contemporaneous radio and X-ray observations, we find no spectral evolution, with a weighted average spectral index of $\langle \beta \rangle = -0.583 \pm 0.013$, demonstrating that no synchrotron break frequencies evolve between the radio and X-ray bands over these timescales. We find that an extrapolation of the post-peak temporal slope of GW170817 to the luminosities of cosmological short GRBs matches their observed jet break times, suggesting that their explosion properties are similar, and that the primary difference in GW170817 is viewing angle. Additionally, we place a deep limit on the luminosity and mass of an underlying globular cluster of $L \lesssim 6.7 \times 10^{3}\,L_{\odot}$, or $M \lesssim 1.3 \times 10^{4}\,M_{\odot}$, at least 4 standard deviations below the peak of the globular cluster mass function of the host galaxy, NGC4993. This limit provides a direct and strong constraint that GW170817 did not form and merge in a globular cluster. As highlighted here, HST (and soon JWST) enables critical observations of the optical emission from neutron star merger jets and outflows.

Submission history
From: Wen-Fai Fong
[v1] Wed, 21 Aug 2019 18:00:00 GMT (414kb,D)
closed as no longer relevant by Robin Chapman, Akhil Mathew, Yemon Choi, Qiaochu Yuan, Pete L. Clark Aug 22 '10 at 9:00
$e^{\pi i} + 1 = 0$
Stokes' Theorem
Trivial as this is, it has amazed me for decades:
$(1+2+3+...+n)^2=(1^3+2^3+3^3+...+n^3)$
$$ \frac{24}{7\sqrt{7}} \int_{\pi/3}^{\pi/2} \log \left| \frac{\tan t+\sqrt{7}}{\tan t-\sqrt{7}}\right| dt = \sum_{n\geq 1} \left(\frac n7\right)\frac{1}{n^2}, $$ where $\left(\frac n7\right)$ denotes the Legendre symbol. Not really my favorite identity, but it has the interesting feature that it is a conjecture! It is a rare example of a conjectured explicit identity between real numbers that can be checked to arbitrary accuracy. This identity has been verified to over 20,000 decimal places. See J. M. Borwein and D. H. Bailey, Mathematics by Experiment: Plausible Reasoning in the 21st Century, A K Peters, Natick, MA, 2004 (pages 90-91).
There are many, but here is one.
$d^2=0$
Mine is definitely $$1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{n^2}+\cdots=\frac{\pi^2}{6},$$ an amazing relation between integers and pi.
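The convergence is easy to see numerically; a short Python sketch (the $1/N$ tail correction is a standard acceleration trick, not part of the identity itself):

```python
import math

# Partial sums of 1/n^2 approach pi^2/6. The truncation error after N
# terms lies between 1/(N+1) and 1/N, so adding 1/N as a correction
# sharpens the estimate considerably.
N = 100_000
partial = sum(1.0 / (n * n) for n in range(1, N + 1))
corrected = partial + 1.0 / N
target = math.pi ** 2 / 6
```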
There's lots to choose from. Riemann-Roch and various other formulas from cohomology are pretty neat. But I think I'll go with
$$\sum\limits_{n=1}^{\infty} n^{-s} = \prod\limits_{p \text{ prime}} \left( 1 - p^{-s}\right)^{-1}$$
$1+2+3+4+5+\cdots = -\frac{1}{12}$
Once suitably regularised of course :-)
$$\frac{1}{1-z} = (1+z)(1+z^2)(1+z^4)(1+z^8)...$$
Both sides as formal power series work out to $1 + z + z^2 + z^3 + ...$, where all the coefficients are 1. This is an analytic version of the fact that every positive integer can be written in exactly one way as a sum of distinct powers of two, i. e. that binary expansions are unique.
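The coefficient statement can be checked directly by multiplying out a truncated product; a short Python sketch (the helper name is mine):

```python
def truncated_product(num_factors, degree):
    """Coefficients of prod_{k < num_factors} (1 + z^(2^k)), truncated
    at z^degree, via in-place polynomial multiplication."""
    coeffs = [0] * (degree + 1)
    coeffs[0] = 1
    for k in range(num_factors):
        p = 2 ** k
        # multiply in place by (1 + z^p); descending d keeps old values intact
        for d in range(degree, p - 1, -1):
            coeffs[d] += coeffs[d - p]
    return coeffs

# Factors up to (1 + z^64) represent every integer 0..127 uniquely in
# binary, so every coefficient up to degree 100 should be exactly 1.
c = truncated_product(num_factors=7, degree=100)
```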
$V - E + F = 2$
Euler's characteristic for connected planar graphs.
I'm currently obsessed with the identity $\det (\mathbf{I} - \mathbf{A}t)^{-1} = \exp \text{tr } \log (\mathbf{I} - \mathbf{A}t)^{-1}$. It's straightforward to prove algebraically, but its combinatorial meaning is very interesting.
$196884 = 196883 + 1$
For a triangle with angles a, b, c $$\tan a + \tan b + \tan c = (\tan a) (\tan b) (\tan c)$$
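A quick numerical check in Python (the angle values are arbitrary; the identity holds because $a + b + c = \pi$):

```python
import math

def tan_identity_residual(a, b):
    """For a triangle with angles a, b and c = pi - a - b, return
    (tan a + tan b + tan c) - (tan a)(tan b)(tan c), which should be ~0."""
    c = math.pi - a - b
    ta, tb, tc = math.tan(a), math.tan(b), math.tan(c)
    return (ta + tb + tc) - ta * tb * tc

r = tan_identity_residual(0.7, 1.1)
```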
Given a square matrix $M \in SO_n$ decomposed as illustrated with square blocks $A,D$ and rectangular blocks $B,C,$
$$M = \left( \begin{array}{cc} A & B \\\ C & D \end{array} \right) ,$$
then $\det A = \det D.$
What this says is that, in Riemannian geometry with an orientable manifold, the Hodge star operator is an isometry, a fact that has relevance for Poincare duality.
But the proof is a single line:
$$ \left( \begin{array}{cc} A & B \\\ 0 & I \end{array} \right) \left( \begin{array}{cc} A^t & C^t \\\ B^t & D^t \end{array} \right) = \left( \begin{array}{cc} I & 0 \\\ B^t & D^t \end{array} \right). $$
It's too hard to pick just one formula, so here's another: the Cauchy-Schwarz inequality:
$\|x\| \, \|y\| \geq |x \cdot y|$, with equality iff $x$ and $y$ are parallel.
Simple, yet incredibly useful. It has many nice generalizations (like Holder's inequality), but here's a cute generalization to three vectors in a real inner product space:
$\|x\|^2 \|y\|^2 \|z\|^2 + 2(x \cdot y)(y \cdot z)(z \cdot x) \geq \|x\|^2 (y \cdot z)^2 + \|y\|^2 (z \cdot x)^2 + \|z\|^2 (x \cdot y)^2$, with equality iff one of $x, y, z$ is in the span of the others.
There are corresponding inequalities for 4 vectors, 5 vectors, etc., but they get unwieldy after this one.
All of the inequalities, including Cauchy-Schwarz, are actually just generalizations of the 1-dimensional inequality:
$\|x\| \geq 0$, with equality iff $x = 0$,

or rather, instantiations of it in the 2nd, 3rd, etc. exterior powers of the vector space.
I always thought this one was really funny: $1 = 0!$
I think that Weyl's character formula is pretty awesome! It's a generating function for the dimensions of the weight spaces in a finite dimensional irreducible highest weight module of a semisimple Lie algebra.
$2^n>n $
It has to be the ergodic theorem, $$\frac{1}{n}\sum_{k=0}^{n-1}f(T^kx) \to \int f\:d\mu,\;\;\mu\text{-a.e.}\;x,$$ the central principle which holds together pretty much my entire research existence.
Gauss-Bonnet, even though I am not a geometer.
Ἐν τοῖς ὀρθογωνίοις τριγώνοις τὸ ἀπὸ τῆς τὴν ὀρθὴν γωνίαν ὑποτεινούσης πλευρᾶς τετράγωνον ἴσον ἐστὶ τοῖς ἀπὸ τῶν τὴν ὀρθὴν γωνίαν περιεχουσῶν πλευρῶν τετραγώνοις.
That is,
In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle.
The formula $\displaystyle \int_{-\infty}^{\infty} \frac{\cos(x)}{x^2+1} dx = \frac{\pi}{e}$. It is astounding in that we can retrieve $e$ from a formula involving the cosine. It is not surprising if we know the formula $\cos(x)=\frac{e^{ix}+e^{-ix}}{2}$, yet this integral is of a purely real-valued function. It shows how complex analysis actually underlies even the real numbers.
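The integral can be checked numerically with nothing but the standard library; a Simpson's-rule sketch (the truncation length `L` and step count are arbitrary choices, and the tail beyond `L` is $O(1/L^2)$ by integration by parts):

```python
import math

def integrand(x):
    return math.cos(x) / (x * x + 1.0)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# The integrand is even, so integrate over [0, L] and double.
L = 400.0
approx = 2.0 * simpson(integrand, 0.0, L, 400_000)
target = math.pi / math.e
```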
It may be trivial, but I've always found
$\sqrt{\pi}=\int_{-\infty}^{\infty}e^{-x^{2}}dx$
to be particularly beautiful.
For X a based smooth manifold, the category of finite covers over X is equivalent to the category of actions of the fundamental group of X on based finite sets:
$\pi_1(X)\text{-sets} \simeq \text{Ét}/X$
The same statement for number fields essentially describes the Galois theory. Now the idea that those should be somehow unified was one of the reasons in the development of abstract schemes, a very fruitful topic that is studied in the amazing area of mathematics called abstract algebraic geometry. Also, note that "actions on sets" is very close to "representations on vector spaces" and this moves us in the direction of representation theory.
Now you see, this simple line actually somehow relates number theory and representation theory. How exactly? Well, if I knew, I would write about that, but I'm just starting to learn about those things.
(Of course, one of the specific relations hinted here should be the Langlands conjectures, since we're so close to having L-functions and representations here!)
$E[X+Y]=E[X]+E[Y]$ for any two random variables $X$ and $Y$
$\prod_{n=1}^{\infty} (1-x^n) = \sum_{k=-\infty}^{\infty} (-1)^k x^{k(3k-1)/2}$
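This is Euler's pentagonal number theorem; both sides can be compared coefficient by coefficient with a short Python sketch (the helper names are mine):

```python
def euler_product(degree):
    """Coefficients of prod_{n=1}^{degree} (1 - x^n), truncated at x^degree."""
    coeffs = [0] * (degree + 1)
    coeffs[0] = 1
    for n in range(1, degree + 1):
        for d in range(degree, n - 1, -1):  # multiply in place by (1 - x^n)
            coeffs[d] -= coeffs[d - n]
    return coeffs

def pentagonal_series(degree):
    """Coefficients of sum_{k in Z} (-1)^k x^{k(3k-1)/2}, truncated."""
    coeffs = [0] * (degree + 1)
    k = 0
    while True:
        contributed = False
        for kk in ((k, -k) if k else (0,)):
            e = kk * (3 * kk - 1) // 2   # generalized pentagonal number
            if e <= degree:
                coeffs[e] += -1 if kk % 2 else 1
                contributed = True
        if not contributed:
            break
        k += 1
    return coeffs

p_coeffs = euler_product(60)
s_coeffs = pentagonal_series(60)
```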
$ D_A\star F = 0 $
Yang-Mills
$\left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2} \frac{q-1}{2}}$.
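Quadratic reciprocity is easy to spot-check with Euler's criterion; a Python sketch (the function names are mine):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) mod p is 1 for residues and p-1 for non-residues."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def reciprocity_holds(p, q):
    """Check (p/q)(q/p) = (-1)^{((p-1)/2)((q-1)/2)} for distinct odd primes."""
    sign = -1 if (((p - 1) // 2) * ((q - 1) // 2)) % 2 else 1
    return legendre(p, q) * legendre(q, p) == sign

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
ok = all(reciprocity_holds(p, q)
         for i, p in enumerate(odd_primes)
         for q in odd_primes[i + 1:])
```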
My favorite is the Koike-Norton-Zagier product identity for the j-function (which classifies complex elliptic curves):
$$j(p) - j(q) = p^{-1} \prod_{m>0,\ n\geq -1} (1-p^m q^n)^{c(mn)},$$

where $j(q)-744 = \sum_{n \geq -1} c(n) q^n = q^{-1} + 196884q + 21493760q^2 + \cdots$. The left side is a difference of power series pure in p and q, so all of the mixed terms on the right cancel out. This yields infinitely many identities relating the coefficients of j.
It is also the Weyl denominator formula for the monster Lie algebra.
How to Perform a Nonlinear Distortion Analysis of a Loudspeaker Driver
A thorough analysis of a loudspeaker driver is not limited to a frequency-domain study. Some desirable and undesirable (but nonetheless exciting) effects can only be caught by a nonlinear time-domain study. Here, we will discuss how system nonlinearities affect the generated sound and how to use the COMSOL Multiphysics® software to perform a nonlinear distortion analysis of a loudspeaker driver.
Understanding Linear and Nonlinear Distortions
A transducer converts a signal of one energy form (input signal) to a signal of another energy form (output signal). In regard to a loudspeaker, which is an electroacoustic transducer, the input signal is the electric voltage that, in the case of a moving coil loudspeaker, drives its voice coil. The output signal is the acoustic pressure that the human ear perceives as a sound. A distortion occurs when the output signal quantitatively and/or qualitatively differs from the input signal.
Schematic representation of a moving coil loudspeaker.
The distortion can be divided into two principal parts:
- Linear distortion
- Nonlinear distortion
The term linear distortion, which might sound rather confusing, implies that the output signal has the same frequency content as the input signal. In this distortion, it is the amplitude and/or phase of the output signal that is distorted. In contrast, the term nonlinear distortion suggests that the output signal contains frequency components that are absent in the input signal. This means that the energy is transferred from one frequency at the input to several frequencies at the output.

Input and output signals in linear and nonlinear transducers.
Let the input sinusoidal signal, A_\text{in} \sin \left( 2\pi f t \right), be applied to a transducer with a nonlinear transfer function. The frequency content of the output signal will then have more than one frequency. Apart from the fundamental portion, which corresponds to the frequency f, there will be a distorted portion. Its spectrum usually (but not always) consists of the frequencies f^{(2)}, f^{(3)}, f^{(4)}, \ldots, which are multiples of the fundamental frequency, f^{(n)} = n f, in which n \geq 2. These frequencies, called overtones, are present in the sound, and it is the overtones that make musical instruments sound different: A note played on a violin sounds different from the same note played on a guitar. The same happens with the sound emitted from a loudspeaker.
The distortion is a relative quantity that can be described by the value of the total harmonic distortion (THD). This value is calculated as the ratio of the amplitude of the distorted portion of the signal to that of the fundamental part:

$$\text{THD} = \frac{\sqrt{A_2^2 + A_3^2 + A_4^2 + \cdots}}{A_1},$$

where $A_n$ denotes the amplitude of the $n$th harmonic.
The profile of a signal with a higher THD visibly differs from the pure sinusoidal.
Unfortunately, the value of the THD of the output signal itself might not be enough to judge the quality of the loudspeaker. A signal with a lower THD may sound worse than a signal with a higher THD. The reason is that the human ear perceives various overtones differently.
The distortion can be represented as a set of individual even-order, 2nf, and odd-order, (2n-1)f, components. The former are due to asymmetric nonlinearities of the transducer, while the latter are due to symmetric nonlinearities. Sound containing even-order harmonics is perceived as "sweet" and "warm", which can be explained by the fact that there are octave multiples of the fundamental frequency among them. The odd-order harmonics sound "harsh" and "gritty". That is quite alright for a guitar distortion pedal, but not for a loudspeaker. What matters is, of course, not just the presence of those harmonics, but rather their level in the output signal.
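The link between symmetric/asymmetric nonlinearities and odd/even harmonics can be illustrated outside COMSOL with a tiny standard-library Python sketch. The cubic and quadratic transfer functions and all coefficients below are invented for illustration only:

```python
import math

def harmonic_amplitudes(samples, max_harmonic):
    """Amplitudes of harmonics 1..max_harmonic of a signal sampled over
    exactly one period, by projection onto cos/sin at each multiple of
    the fundamental frequency."""
    N = len(samples)
    amps = []
    for k in range(1, max_harmonic + 1):
        c = sum(v * math.cos(2 * math.pi * k * i / N) for i, v in enumerate(samples))
        s = sum(v * math.sin(2 * math.pi * k * i / N) for i, v in enumerate(samples))
        amps.append(2.0 * math.hypot(c, s) / N)
    return amps

def thd(amps):
    """Ratio of the RMS sum of harmonics 2, 3, ... to the fundamental."""
    return math.sqrt(sum(a * a for a in amps[1:])) / amps[0]

N = 4096
x = [math.sin(2 * math.pi * i / N) for i in range(N)]
odd_nl = [v + 0.1 * v ** 3 for v in x]   # symmetric nonlinearity -> odd harmonics
even_nl = [v + 0.1 * v ** 2 for v in x]  # asymmetric term -> even harmonics

amps_odd = harmonic_amplitudes(odd_nl, 5)
amps_even = harmonic_amplitudes(even_nl, 5)
```

Since $\sin^3\theta = (3\sin\theta - \sin 3\theta)/4$, the cubic term feeds only the third harmonic, while $\sin^2\theta = (1 - \cos 2\theta)/2$ feeds only the second.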
Another interesting effect, called intermodulation, occurs when the input signal contains more than one frequency component. The corresponding output signals start to interact with each other, producing frequency components absent in the input signal. In practice, if a two-tone sine wave such as A_\text{in} \sin \left( 2\pi f_1 t \right) + B_\text{in} \sin \left( 2\pi f_2 t \right) (in which f_2 > f_1) is applied to the input, the system nonlinearities will result in the modulation of the higher-frequency component by the lower one. That is, the frequencies f_2 \pm f_1, f_2 \pm 2f_1, and so on will appear in the frequency spectrum of the output signal. The quantitative measure of the intermodulation that corresponds to the frequencies f_2 \pm (n-1) f_1, in which n \geq 2, is the n th-order intermodulation distortion (IMD) coefficient, defined as the ratio of the amplitudes of these sideband components to the amplitude of the component at f_2.
In practice, using an input signal containing three or more frequencies for the IMD analysis is not advisable, as the results become harder to interpret.
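A similarly minimal Python sketch illustrates the intermodulation sidebands. The tone frequencies, amplitudes, quadratic coefficient, and the particular second-order IMD ratio computed at the end are illustrative assumptions, not the article's exact setup:

```python
import math

def amplitude_at(samples, k):
    """Amplitude of the k-cycles-per-window component of a sampled signal."""
    N = len(samples)
    c = sum(v * math.cos(2 * math.pi * k * i / N) for i, v in enumerate(samples))
    s = sum(v * math.sin(2 * math.pi * k * i / N) for i, v in enumerate(samples))
    return 2.0 * math.hypot(c, s) / N

N = 4096
f1, f2 = 3, 30          # cycles per window, standing in for e.g. 70 Hz and 700 Hz
x = [math.sin(2 * math.pi * f1 * i / N) + 0.25 * math.sin(2 * math.pi * f2 * i / N)
     for i in range(N)]
y = [v + 0.1 * v * v for v in x]     # weak quadratic (asymmetric) nonlinearity

side_lo = amplitude_at(y, f2 - f1)   # intermodulation product at f2 - f1
side_hi = amplitude_at(y, f2 + f1)   # intermodulation product at f2 + f1
carrier = amplitude_at(y, f2)
imd2 = (side_lo + side_hi) / carrier
```

The quadratic term turns the cross product $\sin\theta_1 \sin\theta_2$ into equal-amplitude components at $f_2 \pm f_1$, which is exactly the modulation of the higher tone by the lower one described above.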
Transient Nonlinear Analysis of a Loudspeaker Driver
To summarize, the linear analysis of the loudspeaker, though a powerful tool for a designer, might not be sufficient. The loudspeaker can only be completely described if an additional nonlinear analysis is carried out. The nonlinear analysis is supposed to answer the following questions:
- How does the nonlinear behavior of the loudspeaker affect the output signal?
- What are the limits of the input signal that ensure the loudspeaker functions acceptably?
- How should I compensate for the undesired distortion of the loudspeaker?
From the simulation point of view, there is both bad and good news. The bad news is that the full nonlinear analysis cannot be performed in the frequency domain. It requires the transient simulation of the loudspeaker, which is more demanding and time consuming than the frequency-domain analysis. The good news is that the effect of certain nonlinearities is only significant at low frequencies.
For example, the voice coil displacement is greater at lower frequencies and therefore the finite strain theory must be used to model the mechanical parts of the motor. Using the finite strain theory is redundant at higher frequencies, where the infinitesimal strain theory is applicable. The figures below show the results for the transient loudspeaker tutorial, driven by the same amplitude (V_0 = 10 V) of input voltage:
Voice coil motion in the air gap of the loudspeaker driver for a single-tone input voltage signal: 70 Hz on the left and 140 Hz on the right. Acoustic pressure at the listening point for a single-tone input voltage. The blue curves correspond to the nonlinear time-domain analysis, while the red curves correspond to the frequency-domain analysis: 70 Hz on the left and 140 Hz on the right.
The animations above depict the magnetic field in the voice coil gap and the motion of the former and the spider (both in pink) as well as the voice coil (in orange). As expected, the displacements, as well as the spider deformation, are higher at the lower frequency. The spider deformation obeys the geometrically nonlinear analysis and therefore the linear approximation is inaccurate in this case. This is confirmed by the output signal plots. These plots depict the acoustic pressure at the listening point located about 14.5 cm in front of the speaker dust cap tip.
The acoustic pressure profile obtained from the nonlinear time-domain modeling for the 70-Hz input signal deviates from the sinusoidal shape to a certain extent, which means that higher-order harmonics start playing a definite role. This is not visible for the input signal at 140 Hz: There’s only a slight difference in the amplitude between the linear frequency-domain and nonlinear time-domain simulation results. The THD value of the output signal drops from 4.3% in the first case to 0.9% in the second case. The plots below show how the harmonics contribute to the sound pressure level (SPL) at the listening point.
Frequency spectra of the SPL at the listening point: single-tone input voltage (70 Hz on the left and 140 Hz on the right).
The IMD analysis of the loudspeaker is carried out in a similar way. What's different is the input signal applied to the voice coil, which contains two harmonic parts:

$$V(t) = V_1 \sin \left( 2\pi f_1 t \right) + V_2 \sin \left( 2\pi f_2 t \right),$$

whose amplitudes, V_1 and V_2, usually correlate as 4 : 1, which corresponds to 12 dB.
The example below studies the IMD of the same test loudspeaker driver. The dual-frequency input voltage, in which f_1 = 70 Hz and f_2 = 700 Hz, serves as the input signal. The SPL plot on the left shows how the second- and third-order harmonics arising in the low-frequency part of the output signal generate a considerable level of the corresponding order IMDs in the high-frequency part. The IMD level becomes sufficiently lower if the signal frequency f_1 is increased to 140 Hz. This is seen in the right plot below.
Frequency spectra of the SPL at the listening point for a two-tone input voltage.

Modeling Tips for Analyzing a Loudspeaker Driver
Since transient nonlinear simulations tend to be demanding, the loudspeaker driver model should not be overcomplicated. The 2D axisymmetric formulation is a good starting approach and was used for the tutorial examples in the previous section. After that, it’s important to estimate which effects are more important than others. This will help you set up an adequate multiphysics model of a loudspeaker.
The system nonlinearities include, but are not limited to, the following:
- Nonlinear behavior of the magnetic field in the loudspeaker pole piece made of high-permeability metal
- Geometric nonlinearities in the moving parts of the motor
- Topology change as the voice coil moves up and down in the air gap
In the language of lumped parameters, this means that they are no longer constants like the Thiele-Small parameters, but functions of the voice coil position, x, and the input voltage, V. The above-mentioned nonlinearities will be reflected in the nonlinear inductance, L \left( x, V \right); compliance, C \left( x, V \right); and dynamic force factor, Bl \left( x, V \right). For instance, the tutorial example shows that the nonlinear behavior of the force factor is more distinct at 70 Hz, whereas it is almost flat (that is, closer to linear) at 140 Hz.
Nonlinear (left) and almost linear (right) behavior of the dynamic force factor: 70 Hz on the left and 140 Hz on the right.
With the following steps, the discussed nonlinearities can be incorporated into the model. First, the nonlinear magnetic effects are taken into account through the constitutive relation for the corresponding material. In the test example, the BH curve option is chosen for the iron pole piece. Next, the
Include geometric nonlinearity option available under the Study Settings section forces the structural parts of the model to obey the finite strain theory. Lastly, the topology change is captured by the Moving Mesh feature. Whenever applied, the feature ensures that the mesh element nodes move together with the moving parts of the system. Since the displacements can be quite high, it is likely that the mesh element distortion reaches extreme levels and the numerical model becomes unstable. The Automatic Remeshing option is used as a remedy against highly distorted mesh elements.
All in all, the nonlinear time-domain analysis of the loudspeaker requires much more effort and patience than the linear frequency-domain study. This is especially relevant when the model includes the
Moving Mesh feature with the Automatic Remeshing option activated. Investing some time in the geometry and mesh preprocessing will pay off, as the moving mesh is very sensitive to the mesh quality. That is, highly distorted mesh elements and near-zero angles between the geometric entities have to be avoided. A proper choice of the Condition for Remeshing option may also require some trial and error.
The loudspeaker design discussed here might not be considered “good” by most standards. The odd-order harmonics prevail in the frequency content of the output signal.
Next Steps
To perform your own nonlinear distortion analysis of a loudspeaker, click on the button below. This will take you to the Application Gallery, where you can find the MPH-files for this model together with detailed modeling instructions. (Note: You must have a COMSOL Access account and valid software license.)
Additional Resources

Check out other examples of modeling loudspeakers in these tutorials. Further reading:

- L.L. Beranek and T.J. Mellow, Acoustics: Sound Fields and Transducers, Academic Press, 2012.
- Brüel & Kjær, “Audio Distortion Measurements,” Application Note BO0385, 1993.
- W. Marshall Leach, Jr., Introduction to Electroacoustics and Audio Amplifier Design, Kendall Hunt, 2010.
|
A ranked alphabet is very often used in the definition of ranked trees, like here for instance. In that example, for the given set $\Sigma=\{a,b,c\}$, ranks are assigned by an arity function $ar : \Sigma\rightarrow\mathcal{N}$ as:
$ar(a)=2, ar(b)=2, ar(c)=1$.
And Ranked Tree over $\Sigma$ is defined as:
$T_{\Sigma_r}$, the set of ranked trees, is the smallest set of terms $f(t_1,\dots,t_k)$ such that: $f\in\Sigma_r$, $k = ar(f)$, and $t_i\in T_{\Sigma_r}$ for all $1\leq i\leq k$.
The tree in this example looks like:
      b
     / \
    a   b
   / \ / \
  b  c c  c
 / \
c   c
But what about a tree like this one?
      b
     / \
    a   b
   / \ / \
  b  c c  c
  |       |
  c       a
This is also a valid tree, but it is obviously unranked.
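The arity condition in the definition above is easy to check mechanically. Here is a minimal sketch; note that, purely for illustration, I give the leaf symbol arity 0 (i.e., $ar(c)=0$), so that leaves are well-formed terms:

```python
def is_ranked(tree, ar):
    """Check that a term tree (symbol, [children...]) respects the arity function ar."""
    symbol, children = tree
    if symbol not in ar or ar[symbol] != len(children):
        return False
    return all(is_ranked(child, ar) for child in children)

ar = {"a": 2, "b": 2, "c": 0}   # illustrative arities; leaves get arity 0

leaf = ("c", [])
t1 = ("b", [("a", [("b", [leaf, leaf]), leaf]), ("b", [leaf, leaf])])
print(is_ranked(t1, ar))        # True: every symbol appears with its declared arity

t2 = ("b", [("a", [leaf]), leaf])   # "a" used with one child instead of two
print(is_ranked(t2, ar))        # False: not a ranked tree over ar
```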
My question: does any research on trees over unranked alphabets exist?
What I've found so far is related only to logic for unranked trees.
|
Taiwanese Journal of Mathematics
Taiwanese J. Math. Volume 18, Number 6 (2014), 1957-1979.

NON-NEHARI MANIFOLD METHOD FOR SUPERLINEAR SCHRÖDINGER EQUATION

Abstract
We consider the boundary value problem \begin{equation} \tag{0.1} \left\{ \begin{array}{ll} -\triangle u+V(x)u=f(x, u), \ \ \ \ & x\in \Omega,\\ u=0, \ \ \ \ & x\in \partial\Omega, \end{array} \right. \end{equation} where $\Omega \subset \mathbb R^N$ is a bounded domain, $\inf_{\Omega}V(x)\gt -\infty$, and $f$ is a superlinear, subcritical nonlinearity. Inspired by previous work of Szulkin and Weth (2009) [21] and (2010) [22], we develop a more direct and simpler approach, based on the one used in [21], to deduce weaker conditions under which problem (0.1) has a ground state solution of generalized Nehari type or infinitely many nontrivial solutions. Unlike the Nehari manifold method, the main idea of our approach lies in finding a minimizing Cerami sequence for the energy functional outside the generalized Nehari manifold by using the diagonal method.
Article information

Source: Taiwanese J. Math., Volume 18, Number 6 (2014), 1957-1979.
Dates: First available in Project Euclid: 21 July 2017
Permanent link to this document: https://projecteuclid.org/euclid.twjm/1500667506
Digital Object Identifier: doi:10.11650/tjm.18.2014.3541
Mathematical Reviews number (MathSciNet): MR3284041
Zentralblatt MATH identifier: 1357.35163

Citation
Tang, X. H. NON-NEHARI MANIFOLD METHOD FOR SUPERLINEAR SCHRÖDINGER EQUATION. Taiwanese J. Math. 18 (2014), no. 6, 1957--1979. doi:10.11650/tjm.18.2014.3541. https://projecteuclid.org/euclid.twjm/1500667506
|
|
Consider the following so-called $U$-statistic of order 2: $$U = \frac1{\binom{m}{2}} \sum_{i < j} h(w_i,w_j)$$ where $w_1,\dots,w_m$ are IID from some distribution and $h$ is symmetric. If $|h(w_1,w_2)| \le B$ a.s., then a classical result gives the following concentration inequality: \begin{align} \mathbb P( | U - \mathbb E U| \ge t) \le 2 \exp(- m t^2 /(8B^2)). \end{align} (Maybe there are better constants known here.)
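For concreteness, the statistic is just the average of the kernel over all unordered pairs; a minimal sketch (the kernel and data here are illustrative):

```python
from itertools import combinations

def u_statistic(w, h):
    """Order-2 U-statistic: average of h(w_i, w_j) over all pairs i < j."""
    pairs = list(combinations(w, 2))
    return sum(h(a, b) for a, b in pairs) / len(pairs)

# Example with h(x, y) = x * y; for IID data this estimates (E[w])^2
w = [1.0, 2.0, 3.0]
print(u_statistic(w, lambda x, y: x * y))   # (2 + 3 + 6) / 3 = 11/3
```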
I have the following two questions:
It seems to me that this immediately generalizes to the case where $(w_1,\dots,w_m)$ has an exchangeable distribution, due to de Finetti's theorem, which says that any such distribution is a mixture of IID ones. More precisely, there is a random measure $G$ such that, conditional on $G$, the variables $w_1,\dots,w_m$ are IID draws from $G$. By conditioning on $G$, using the inequality above for the IID case, and then taking expectations, we get the result for the exchangeable case. Is this argument correct?
What is the state of the art in terms of concentration bounds for such $U$-statistics, especially in the case where $h$ is not bounded? Are there clean generalizations known, especially for the exchangeable case? The above argument seems to rely on boundedness of $h$ in a critical way.
EDIT: I should say that for the first point, it might be that we have to assume $h$ to be surely bounded (?)
EDIT: As was pointed out, the argument in point one is flawed. In hindsight, one needs extra conditions. The case where $w_1=w_2=\dots=w_m$ is an example of an exchangeable distribution for which the concentration (with $m$ in the exponent) need not hold.
|
This has been moved from math.stackexchange.
I am attempting to prove/disprove convergence of the following sum$$ \lim_{n \to \infty} \frac{1}{n} \sum_{p \leq n} \sum_{k=0}^\infty \ln p \left\{\frac{n}{(p-1)p^k} \right\}$$where $\{ x\}$ denotes the
fractional part of $x$.
Let $\epsilon_n$ denote the above double sum. It's a trivial fact $\epsilon_n > 0$, since the summand contains positive terms. On the other hand, it can be shown $\epsilon_n \geq 1$. Using the facts $\{x\} + \{y\} \geq \{x+y\}$, $\{\frac{m}{n}\} \leq 1-\frac{1}{|n|}$, and $\sum_{p \leq n} \ln p \sim n$, we have
$$\epsilon_n = \frac{1}{n} \sum_{p \leq n} \ln p \sum_{k=0}^\infty \left\{\frac{n}{(p-1)p^k} \right\} \geq \frac{1}{n} \sum_{p \leq n} \ln p \left\{\frac{n}{p-1}\sum_{k=0}^\infty \frac{1}{p^k} \right\} $$
$$= \frac{1}{n} \sum_{p \leq n} \ln p \left\{\frac{np}{(p-1)^2} \right\} \geq \frac{1}{n} \sum_{p \leq n} \left( \ln p - \frac{\ln p}{(p-1)^2} \right) \sim 1-\frac{C_n}{n} $$
where $C_n = \sum_{p \leq n} \frac{\ln p}{(p-1)^2}$. Now, $C_n$ is known to converge (by the limit comparison test with the derivative of the prime zeta function $P(s)$, which converges for $\mathrm{Re}(s) > 1$), so $\lim_{n \to \infty} \frac{C_n}{n} = 0$. Thus $\liminf_{n \to \infty} \epsilon_n \geq 1$. Perhaps this may not be of help, as I experience difficulty establishing an upper bound.
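For numerical experimentation, the inner sum over $k$ can be truncated exactly: once $(p-1)p^k > n$, each fractional part equals its argument and the remaining tail is a geometric series. A quick sketch:

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def frac(x):
    """Fractional part of x >= 0."""
    return x - int(x)

def eps(n):
    """(1/n) * sum_{p <= n} ln(p) * sum_{k >= 0} { n / ((p-1) p^k) }."""
    total = 0.0
    for p in primes_up_to(n):
        inner, d = 0.0, p - 1
        while d <= n:                    # arguments >= 1: take true fractional parts
            inner += frac(n / d)
            d *= p
        inner += (n / d) * p / (p - 1)   # tail: {n/d} = n/d, geometric with ratio 1/p
        total += log(p) * inner
    return total / n

print(eps(1000))   # numerically of order 1
```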
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous? ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know?
|
This is a really beautiful problem!
As there is some discussion in the question/comments/one of the answers about the history of this problem, let me narrate here what I found. I’m making this community-wiki as it is not exactly an answer to the question.
In
Mathematics Magazine, which is published five times a year by the Mathematical Association of America, in the November 1982 issue (Vol. 55, No. 5), in the problems section (p. 300), the following (much easier) problem was posed anonymously (or rather, by “Anon, Erewhon-upon-Spanish River”, who seems a prolific contributor) as problem 1158 (I’ve changed the notation slightly):
Set $a_0 = 1$ and for $n \ge 1$, $a_n = a_{\lfloor n/2 \rfloor} + a_{\lfloor n/3 \rfloor}$. Find $\lim_{n\to\infty} a_n/n$.
This is a much easier problem, as $\frac12 + \frac13 \neq 1$. (Hint: try polynomial growth.)
Solutions to this problem 1158 were given in the January 1984 issue (Vol. 57, No. 1)’s Problems section (pp. 49–50), under the title
A Pseudo-Fibonacci Limit, where it was solved by a host of people.
One of them was
Daniel A. Rawsthorne, Wheaton, Maryland, who in the same section of the same issue (p. 42), proposed the harder problem *1185. The asterisk means that he proposed the problem without supplying a solution himself.
Set $a_0 = 1$ and for $n \ge 1$, $a_n = a_{\lfloor n/2 \rfloor} + a_{\lfloor n/3 \rfloor} + a_{\lfloor n/6 \rfloor}$. Find $\lim_{n\to\infty} a_n/n$.
This is a much harder problem, as we need to determine not just the rate of growth ("linear"), but also the constant of proportionality.
Solutions were given in the January 1985 issue (Vol. 58, No. 1), in the Problems section (pp. 51–52), under the title
A Very Slowly Converging Sequence, by (together) P. Erdős, A. Hildebrand, A. Odlyzko, P. Pudaite, and B. Reznick.
Note that the same page also says:
Also solved by
Noam Elkies (student), who used Dirichlet series and the residue theorem; and partially (under the assumption that the limit exists) by Don Coppersmith, who gave the explicit formula
$$ a_n = 1 + 2 \sum \frac{(r+s+t)!}{r!s!t!} $$
where the sum is extended over all triples $(r, s, t)$ of nonnegative integers such that $2^r3^s6^t \le n$.
Anyway, in their solution, the authors EHOPR also say
We are writing a paper inspired by this problem and its generalizations.
and give general results, such as the following (I've simplified the numerator a bit):
Suppose $a_0 = 1$, and $a_n = \sum_{i=1}^{s} \lambda_i a_{\lfloor n/m_i \rfloor}$ for $n \ge 1$. Suppose also that not all $m_i$s are (integer) powers of some common integer. Then
$$ \lim_{n\to\infty} \frac{a_n}{n^{\tau}} =
\frac{ \sum_{i=1}^{s} \lambda_i - 1}{\tau \sum_{i=1}^{s} p_i \log m_i} $$
where $\tau$ is the unique real number satisfying $\sum_{i=1}^{s} \lambda_i / m_i^\tau = 1$, and $p_i = \lambda_i / m_i^\tau$ (assuming $\tau \neq 0$).
So in this case, we have $$\lim_{n\to\infty} \frac{a_n}{n} = \frac{3 - 1}{\frac12\log2 + \frac13\log 3 + \frac16\log 6} = \frac{12}{\log 432} \approx 1.977$$ The sequence is OEIS A007731.
For the earlier problem (1158), we have, with $\tau \approx 0.78788$ the solution to $(1/2)^x + (1/3)^x = 1$, $p_1 = \frac1{2^\tau}$, and $p_2 = \frac1{3^\tau} = 1 - p_1$, the ratio $\frac{1}{\tau \left( 2^{-\tau}\log 2 + 3^{-\tau}\log 3 \right)} \approx 1.469$, so $$a_n \sim 1.469 n^{0.78788}.$$
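Both limits are easy to probe numerically; here is a quick sketch for the sequence of problem *1185 (OEIS A007731), using memoized recursion:

```python
from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def a(n):
    """a_0 = 1;  a_n = a_{floor(n/2)} + a_{floor(n/3)} + a_{floor(n/6)}."""
    if n == 0:
        return 1
    return a(n // 2) + a(n // 3) + a(n // 6)

limit = 12 / log(432)            # about 1.9774
print(a(6))                      # 15
print(a(10**6) / 10**6, limit)   # the ratio approaches the limit, very slowly
```

Only arguments of the form floor(n / 2^i 3^j) are ever reached, so the cache stays tiny even for large n.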
The paper they wrote was published as:
P. Erdős, A. Hildebrand, A. Odlyzko, P. Pudaite, and B. Reznick,
The asymptotic behavior of a family of sequences,
Pacific Journal of Mathematics, Vol. 126, No. 2 (1987), pp. 227–241: Link 1 (PDF), Link 2
|
The unit of illumination is the lux, lumens per square meter.
What is the minimum lux required for reading? How many lux does the Sun provide at distance D?
What is the minimum lux required for reading?
You can plug all sorts of numbers into this depending on how good your eyes are, how big the print is, and how close you hold it to your face. I'm going to use civil twilight, which is 3.4 lux. Others may prefer 1 foot-candle (1 lumen per square foot), which is about 10.764 lux. Whatever value you prefer, you can plug it into the formula below.
How many lux does the Sun provide at distance D?
To calculate the lux the Sun puts out, we first have to calculate how much solar radiation (raw energy) it puts out at distance D. You take the luminosity of the Sun and distribute it over the surface of a sphere of radius D.
$$ \text{solar radiation in} \, \frac{Watts}{m^2} = \frac {3.846 \times 10^{26} W}{4 \pi D^2} $$
Not all of that is visible light. We need to convert that into lux, lumens per square meter. To do that you need to integrate the power output over the curve of visible light output using the Luminosity Function which, fortunately, someone has already done coming out with a luminous efficacy of 93 lumens per W.
$$ \text{illumination in lux at} \, D = \frac{93 \frac{lumens}{W} \times 3.846 \times 10^{26} W}{4 \pi D^2} $$
At the Earth where $ D = 1.5 \times 10^{11} \, m $ we get $ 1.27 \times 10^{5} \, lux $. This is higher than direct sunlight at the Earth's surface because it does not account for atmosphere. This is good because New Horizons is in a vacuum.
How many lux are available to New Horizons?
Pluto is currently 32.6 AU from the Sun, or $ D = 4.89 \times 10^{12} \, m $. Plugging that in we get $ 1.19 \times 10^2 $ or $ 119 \, lux $. Plenty!
Or is it? That's how much light New Horizons is receiving from the Sun, but how much is bouncing off Pluto? Pluto has an albedo of about .6 so a little more than half the Sun's light is reflected back to New Horizons, or about 70 lux which is about the same as your average office hallway. Not great, but plenty for a long exposure.
This paper on LORRI agrees, "
At Pluto encounter, 33 AU from the Sun, the illumination level is ~1/1000 that at Earth".
How far out from the Sun is visible light still sufficient to read a book?
We need to solve for D.
$$ I = \frac{93 \frac{lumens}{W} \times 3.846 \times 10^{26} W}{4 \pi D^2} $$
$$ I D^2 = \frac{93 \frac{lumens}{W} \times 3.846 \times 10^{26} W}{4 \pi} $$
$$ D^2 = \frac{93 \frac{lumens}{W} \times 3.846 \times 10^{26} W}{4 \pi I} $$
$$ D = \sqrt{ \frac{93 \frac{lumens}{W} \times 3.846 \times 10^{26} W}{4 \pi I}} $$
Plug in $I = 3.4 \, lux$ and get $D = 2.8 \times 10^{13}$ or $186 AU$ which puts us past the Kuiper Belt and well into the Scattered Disc.
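The two formulas above are easy to wrap up in a short script (a sketch using the same luminosity and efficacy values as above):

```python
from math import pi, sqrt

L_SUN = 3.846e26   # solar luminosity, W
EFFICACY = 93.0    # luminous efficacy of sunlight, lm/W (value used above)
AU = 1.496e11      # astronomical unit, m

def lux_at(d_m):
    """Illuminance (lux) of sunlight at distance d_m from the Sun, no atmosphere."""
    return EFFICACY * L_SUN / (4 * pi * d_m**2)

def distance_for(lux):
    """Distance (m) at which sunlight provides the given illuminance."""
    return sqrt(EFFICACY * L_SUN / (4 * pi * lux))

print(lux_at(1 * AU))           # roughly 1.3e5 lux at Earth
print(lux_at(32.6 * AU))        # roughly 119 lux at Pluto
print(distance_for(3.4) / AU)   # roughly 190 AU for civil-twilight reading light
```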
When using a different number for reading light, note that the distance scales as the inverse square root of the required illuminance. If you double the lux required, you decrease the distance by a factor of 1.4. If we triple I to 10 lux (aka the foot-candle), the distance drops by a factor of 1.7 to about 108 AU. Still very far out.
Could we expect colors at this distance?
At Pluto you will see colors fine. At 186 AU you will see colors about as well as you can see colors at civil twilight.
New Horizons has two instruments measuring visible light. LORRI is a long range panchromatic camera, meaning it acts like a regular digital camera and captures an approximation of what the human eye sees.
The other is the Ralph telescope. It is a multispectral visible and infrared imager, meaning it takes multiple images at several different wavelengths. These will appear grey; the grey is a measurement of the light intensity at a specific wavelength. This is usually how spacecraft "see" color, because scientists aren't interested in pretty pictures with multiple wavelengths smeared together; they want data about specific wavelengths. NASA public relations people mix the images together to approximate what matches what the human eyeball would see for press releases. They can't always get it quite right. Phil Plait discusses it in detail with the Mars landers.
To make these single-filter images into a color composite is not easy. If the red filter lets in less total light than the blue, you need to compensate for that when you add the images together. If the red filter is wider (lets in a wider range of reds) than the blue filter, you have to compensate for that, and so on.
Short version: pictures from LORRI are closer to what you would see than from Ralph.
|
Corollaries to the Uniform Boundedness Principle
Recall from The Uniform Boundedness Principle page that if $X$ is a Banach space and $Y$ is a normed linear space where $\mathcal F$ is a collection of bounded linear operators from $X$ to $Y$ such that for every $x \in X$ we have that $\sup_{T \in \mathcal F} \| T(x) \| < \infty$, then:
\begin{align} \quad \sup_{T \in \mathcal F} \| T \| < \infty \end{align}
We will now look at some corollaries to the uniform boundedness principle.
Corollary 1: Let $X$ be a Banach space and let $A \subseteq X^*$. Then $A$ is bounded in $X^*$ if and only if for every $x \in X$ we have that $\sup_{f \in A} |f(x)| < \infty$. Proof: Recall that $X^* = \mathcal B(X, \mathbb{R})$. $\Rightarrow$ Suppose that $A$ is bounded. Then there exists an $M > 0$ such that for all $f \in A$ we have that $\| f \| < M$. Let $x \in X$. Then:
\begin{align} \quad \sup_{f \in A} |f(x)| \leq \sup_{f \in A} \| f \| \| x \| = \| x \| \sup_{f \in A} \| f \| \leq \| x \| M < \infty \end{align}
$\Leftarrow$ Since $X$ is a Banach space and $A$ is a collection of bounded linear operators from $X$ to $\mathbb{R}$, and since for every $x \in X$ we are given that $\sup_{f \in A} |f(x)| < \infty$, by the principle of uniform boundedness we have that:
\begin{align} \quad \sup_{f \in A} \| f \| < \infty \end{align}
So let $\sup_{f \in A} \| f \| = M$. Then for all $f \in A$ we have that $\| f \| \leq M$ which shows that $A$ is bounded. $\blacksquare$
Corollary 2: Let $X$ and $Y$ be Banach spaces. If $(T_n)$ is a sequence of bounded linear operators from $X$ to $Y$ such that for every $x \in X$, $\displaystyle{\lim_{n \to \infty} T_n(x)}$ exists, then $\displaystyle{T(x) = \lim_{n \to \infty} T_n(x)}$ is a bounded linear operator from $X$ to $Y$. Proof: Let $\mathcal F = \{ T_n \}$. For each $x \in X$ we have that $\displaystyle{\lim_{n \to \infty} T_n(x) = y_x}$ for some $y_x \in Y$. Therefore $\displaystyle{\lim_{n \to \infty} \| T_n(x) - y_x \| = 0}$ and by the triangle inequality, $\lim_{n \to \infty} \| T_n(x) \| = \| y_x \|$. So for each $x \in X$ we see that the sequence $(T_n(x))$ is bounded, and so for each $x \in X$:
\begin{align} \quad \sup_{n \geq 1} \| T_n(x) \| < \infty \end{align}
Since each $T_n$ is a bounded linear operator and $X$ is a Banach space, by the uniform boundedness principle:
\begin{align} \quad \sup_{n \geq 1} \| T_n \| < \infty \end{align}
In other words, there exists an $M \in \mathbb{R}$, $M \geq 0$ such that:
\begin{align} \quad \sup_{n \geq 1} \| T_n \| = M \end{align}
And so for every $x \in X$ we have that $\| T_n(x) \| \leq M \| x \|$. Taking the limit as $n \to \infty$ gives us:
\begin{align} \quad \| T(x) \| = \left \| \lim_{n \to \infty} T_n(x) \right \| = \lim_{n \to \infty} \| T_n(x) \| \leq M \| x \| \end{align}
So $T$ is bounded. $\blacksquare$
Theorem 3 (The Banach-Steinhaus Theorem): Let $X$ and $Y$ be Banach spaces and let $\{ T_n \}$ be a sequence of bounded linear operators from $X$ to $Y$ such that for every $x \in X$ there exists a $y \in Y$ for which $\lim_{n \to \infty} \| T_n(x) - y \| = 0$. Then there exists a bounded linear operator $T : X \to Y$ such that for every $x \in X$ we have that $\lim_{n \to \infty} \| T_n(x) - T(x) \| = 0$ and $\sup \| T_n \| < \infty$. Proof: Let $T : X \to Y$ be defined for each $x \in X$ by:
\begin{align} \quad T(x) = \lim_{n \to \infty} T_n(x) = y \end{align}
Note that $T$ is well-defined and furthermore, $T$ is linear since limits are linear. We will now show that $T$ is bounded. Let $\mathcal F = \{ T_n : n \in \mathbb{N} \}$. Then $\mathcal F$ is a collection of bounded linear operators from $X$ to $Y$. For each $x \in X$, since $T_n(x)$ converges to $y$ we must have that $\{ \| T_n(x) \| : n \in \mathbb{N} \}$ is bounded, that is, for each $x \in X$ we have that:
\begin{align} \quad \sup_{n \geq 1} \| T_n(x) \| < \infty \end{align}
So by the uniform boundedness principle we have that:
\begin{align} \quad \sup_{n \geq 1} \| T_n \| < \infty \end{align}
Let $M = \sup_{n \geq 1} \| T_n \|$. Then for all $n \in \mathbb{N}$ we have that $\| T_n \| \leq M$. Suppose that $x \in X$ is such that $\| x \| \leq 1$. Then for all $n \in \mathbb{N}$ we have that:
\begin{align} \quad \| T(x) \| = \| T(x) - T_n(x) + T_n(x) \| \leq \| T(x) - T_n(x) \| + \| T_n(x) \| \leq \| T(x) - T_n(x) \| + M \end{align}
Taking the limit as $n \to \infty$ and using $\lim_{n \to \infty} \| T(x) - T_n(x) \| = 0$ gives us that:

\begin{align} \quad \| T(x) \| \leq M \end{align}

Therefore:

\begin{align} \quad \| T \| = \sup_{x \in X, \| x \| \leq 1} \| T(x) \| \leq M \end{align}
Which implies that $T$ is a bounded linear operator. Lastly, for all $x \in X$ we have that $\lim_{n \to \infty} \| T_n(x) - T(x) \| = \lim_{n \to \infty} \| T_n(x) - y \| = 0$ and by the principle of uniform boundedness we also have that $\sup \| T_n \| < \infty$. $\blacksquare$
|
The Chebyshev Metric
Recall from the Metric Spaces page that if $M$ is a nonempty set then a function $d : M \times M \to [0, \infty)$ is called a metric if for all $x, y, z \in M$ we have that the following three properties hold:
1. $d(x, y) = d(y, x)$.
2. $d(x, y) = 0$ if and only if $x = y$.
3. $d(x, y) \leq d(x, z) + d(z, y)$.
Furthermore, the set $M$ with the metric $d$, denoted $(M, d)$ is called a metric space.
We will now look at a very important metric on $\mathbb{R}^n$ known as the Chebyshev metric.
Definition: The Chebyshev Metric on $\mathbb{R}^n$ is the function $d : \mathbb{R}^n \times \mathbb{R}^n \to [0, \infty)$ defined for all $\mathbf{x} = (x_1, x_2, ..., x_n), \mathbf{y} = (y_1, y_2, ..., y_n) \in \mathbb{R}^n$ by $d(\mathbf{x}, \mathbf{y}) = \max_{1 \leq k \leq n} \mid x_k - y_k \mid$.
We must verify that the Chebyshev metric satisfies the three conditions to be a metric.
For the first condition, we have that once again $\mid x_k - y_k \mid = \mid y_k - x_k \mid$ for all $k \in \{1, 2, ..., n \}$, so:
\begin{align} \quad d(\mathbf{x}, \mathbf{y}) = \max_{1 \leq k \leq n} \mid x_k - y_k \mid = \max_{1 \leq k \leq n} \mid y_k - x_k \mid = d(\mathbf{y}, \mathbf{x}) \end{align}
For the second condition, let $d(\mathbf{x}, \mathbf{y}) = 0$. Then $\max_{1 \leq k \leq n} \mid x_k - y_k \mid = 0$. But $\mid x_k - y_k \mid \geq 0$ for all $k \in \{1, 2, ..., n \}$ so $\mid x_k - y_k \mid = 0$ for each $k$ and $x_k - y_k = 0$ so $x_k = y_k$ for each $k$. Therefore $\mathbf{x} = \mathbf{y}$. Now let $\mathbf{x} = \mathbf{y}$. Then $\mid x_k - y_k \mid = 0$ for all $k \in \{1, 2, ..., n \}$ so:
\begin{align} \quad d(\mathbf{x}, \mathbf{y}) = \max_{1 \leq k \leq n} \mid x_k - y_k \mid = 0 \end{align}
For the third condition, we have by the triangle inequality that $\mid x_k - y_k \mid \leq \mid x_k - z_k \mid + \mid z_k - y_k \mid$ for each $k \in \{1, 2, ..., n\}$, and hence:
\begin{align} \quad d(\mathbf{x}, \mathbf{y}) = \max_{1 \leq k \leq n} \mid x_k - y_k \mid \leq \max_{1 \leq k \leq n} \mid x_k - z_k \mid + \max_{1 \leq k \leq n} \mid z_k - y_k \mid = d(\mathbf{x}, \mathbf{z}) + d(\mathbf{z}, \mathbf{y}) \end{align}
Therefore $(\mathbb{R}^n, d)$ is a metric space.
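The definition translates directly into code. The following sketch (not from the source; the sample vectors are arbitrary) implements the Chebyshev metric and spot-checks the three metric axioms on random points:

```python
# Illustrative sketch: the Chebyshev metric d(x, y) = max_k |x_k - y_k|
# on R^n, with a randomized spot-check of the three metric axioms.
import random

def chebyshev(x, y):
    """Chebyshev distance between two equal-length sequences."""
    return max(abs(a - b) for a, b in zip(x, y))

random.seed(0)
for _ in range(1000):
    x, y, z = ([random.uniform(-10, 10) for _ in range(4)] for _ in range(3))
    assert chebyshev(x, y) == chebyshev(y, x)                  # symmetry
    assert chebyshev(x, x) == 0                                # identity
    # triangle inequality (small tolerance for float round-off)
    assert chebyshev(x, y) <= chebyshev(x, z) + chebyshev(z, y) + 1e-12

print(chebyshev((1, 2, 3), (4, 0, 3)))  # -> 3
```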
|
Lovász theta-function of graphs
AUTHORS:
Dima Pasechnik (2015-06-30): Initial version
REFERENCE:
Functions
sage.graphs.lovasz_theta.lovasz_theta(graph)
Return the value of the Lovász theta-function of graph.
For a graph \(G\) this function is denoted by \(\theta(G)\), and it can be computed in polynomial time. Mathematically, its most important property is the following:\[\alpha(G)\leq\theta(G)\leq\chi(\overline{G})\]
Note
Implemented for undirected graphs only. Use to_undirected to convert a digraph to an undirected graph.
This function requires the optional package csdp, which you can install with sage -i csdp.
EXAMPLES:
sage: C = graphs.PetersenGraph()
sage: C.lovasz_theta()                      # optional csdp
4.0
sage: graphs.CycleGraph(5).lovasz_theta()   # optional csdp
2.236068
|
Hyperplane in $n$-Dimensional Space Through Origin is a Subspace
Problem 352
A hyperplane in the $n$-dimensional vector space $\R^n$ is defined to be the set of vectors\[\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}\in \R^n\]satisfying a linear equation of the form\[a_1x_1+a_2x_2+\cdots+a_nx_n=b,\]where $a_1, a_2, \dots, a_n$ and $b$ are real numbers and at least one of $a_1, a_2, \dots, a_n$ is nonzero.
Consider the hyperplane $P$ in $\R^n$ described by the linear equation\[a_1x_1+a_2x_2+\cdots+a_nx_n=0,\]where $a_1, a_2, \dots, a_n$ are some fixed real numbers and not all of these are zero. (The constant term $b$ is zero.)
Then prove that the hyperplane $P$ is a subspace of $\R^n$ of dimension $n-1$.
In a set theoretical notation, the hyperplane is given by\[P=\left\{\, \mathbf{x}=\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}\in \R^n \quad \middle | \quad a_1x_1+a_2x_2+\cdots+a_nx_n=0 \,\right\}\]
The defining relation\[a_1x_1+a_2x_2+\cdots+a_nx_n=0\]can be written as\[\begin{bmatrix}a_1 & a_2 & \dots & a_n\end{bmatrix}\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}=0.\]
Let $A$ be the $1\times n$ matrix $A=\begin{bmatrix}a_1 & a_2 & \dots & a_n\end{bmatrix}$.Then the above equation is simply $A\mathbf{x}=0$, and the hyperplane $P$ is described by\[P=\{\mathbf{x}\in \R^n \mid A\mathbf{x}=0\},\]and this is exactly the definition of the null space of $A$. Namely, we have\[P=\calN(A).\]Since every null space of a matrix is a subspace, it follows that the hyperplane $P$ is a subspace of $\R^n$.
The dimension of the hyperplane is $n-1$.
Because $P=\calN(A)$, the dimension of $P$ is the nullity of the matrix $A$.Since not all of $a_i$’s are zero, the rank of the matrix $A$ is $1$. Then by the rank-nullity theorem, we have\[\text{rank of $A$ } + \text{ nullity of $A$ }=n.\]Hence the nullity of $A$ is $n-1$.
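The rank-nullity argument above is easy to check numerically. The sketch below (not from the source; the coefficient vector is a hypothetical example) computes the rank of the $1 \times n$ matrix $A$ and a basis of its null space via the SVD:

```python
# Illustrative sketch: for A = [a_1 ... a_n] with some a_i nonzero,
# rank(A) = 1, so by rank-nullity the hyperplane P = N(A) has dim n - 1.
import numpy as np

a = np.array([[2.0, -1.0, 0.0, 3.0]])       # 1 x n matrix, here n = 4
rank = np.linalg.matrix_rank(a)

# Null-space basis from the SVD: the rows of Vt beyond the rank span N(A).
_, s, Vt = np.linalg.svd(a)
null_basis = Vt[rank:].T                     # n x (n - rank) matrix
print(rank, null_basis.shape[1])             # rank 1, nullity n - 1 = 3
assert np.allclose(a @ null_basis, 0)        # every basis vector lies in P
```

The check `rank + null_basis.shape[1] == n` is exactly the rank-nullity theorem used in the proof.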
|
Multiple Products of Riemann-Stieltjes Integrable Functions with Increasing Integrators
Recall from The Product of Riemann-Stieltjes Integrable Functions with Increasing Integrators page that if $f$ and $g$ are functions defined on $[a, b]$ and $\alpha$ is an increasing function on $[a, b]$ then if $f$ and $g$ are Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ then $fg$ is also Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$.
The following theorem generalizes the one stated above for products of more than two functions.
Theorem 1: Let $f_1, f_2, ..., f_n$ be functions for some $n \in \mathbb{N}$ defined on $[a, b]$ and let $\alpha$ be an increasing function on $[a, b]$. If $f_1, f_2, ..., f_n$ are all Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ then the product $\prod_{i=1}^n f_i$ is also Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$. Proof: We will carry this proof out by induction. Let $S(n)$ be the statement that if the functions $f_1, f_2, ..., f_n$ are Riemann-Stieltjes (which we abbreviate as "R-S" from now on) integrable with respect to $\alpha$ on $[a, b]$ then $\prod_{i=1}^{n} f_i$ is also R-S integrable with respect to $\alpha$ on $[a, b]$. For the base step, $S(1)$ is trivially true and $S(2)$ is true by the theorem stated above. Assume that for some $k \geq 2$, $S(k)$ is true, that is, if $f_1, f_2, ..., f_k$ are R-S integrable with respect to $\alpha$ on $[a, b]$ then $\prod_{i=1}^{k} f_i$ is also R-S integrable with respect to $\alpha$ on $[a, b]$. We want to then show that $S(k+1)$ is true, that is, if $f_1, f_2, ..., f_k, f_{k+1}$ are R-S integrable with respect to $\alpha$ on $[a, b]$ then so is $\prod_{i=1}^{k+1} f_i$. Note that:
\begin{align} \quad \prod_{i=1}^{k+1} f_i = \left ( \prod_{i=1}^{k} f_i \right ) f_{k+1} \end{align}
Let $g = \prod_{i=1}^{k} f_i$. Then by the induction hypothesis $g$ is R-S integrable, and $f_{k+1}$ is R-S integrable, so $gf_{k+1}$ is R-S integrable with respect to $\alpha$ on $[a, b]$, and hence $S(k+1)$ is true. So for all $n \in \mathbb{N}$, if $f_1, f_2, ..., f_n$ is a set of Riemann-Stieltjes integrable functions with respect to $\alpha$ on $[a, b]$ then $\prod_{i=1}^{n} f_i$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$. $\blacksquare$
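To make the objects concrete, here is a numerical sketch (not from the source; the functions and integrator are hypothetical examples) approximating $\int_a^b fg \, d\alpha$ with Riemann-Stieltjes sums. For a differentiable increasing integrator, $\int h \, d\alpha = \int h(x)\alpha'(x)\,dx$, which gives an exact value to check against:

```python
# Illustrative sketch: left-endpoint Riemann-Stieltjes sums for ∫ f·g dα
# on [0, 1] with the increasing integrator α(x) = x².
def rs_sum(h, alpha, a, b, n=100_000):
    """Left-endpoint Riemann-Stieltjes sum of h w.r.t. alpha over [a, b]."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + i * dx
        total += h(x) * (alpha(x + dx) - alpha(x))
    return total

f = lambda x: x
g = lambda x: x + 1.0
alpha = lambda x: x * x                    # increasing on [0, 1]

approx = rs_sum(lambda x: f(x) * g(x), alpha, 0.0, 1.0)
# Exact value: ∫₀¹ x(x+1)·2x dx = ∫₀¹ (2x³ + 2x²) dx = 1/2 + 2/3 = 7/6
print(approx)
```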
|
There is a question in Mark Srednicki's Book (Problem 24.4, p.160) about $Sp(2N)$, but I am not sure I understand the significance (application?) of this group. In that chapter, he talks about $SO(N)$ and $SU(N)$, which are part of the Standard Model and from accepted answer of this I gather that the structure of the SM does not contain $Sp(2N)$. Is there a reason for Srednicki to give that group as an example, or is it given only as an exercise for a different group from those mentioned in the chapter?
Well, Srednicki also briefly mentions it on page 410 but doesn't explicitly use it anywhere else in the book.
This group though is of great importance in physics. At the classical level, $Sp(2N,\mathbb{R})$ is the group of linear transformations preserving the antisymmetric bilinear form $\Omega$ which determines the symplectic geometry of the phase space ${\mathcal{M}=(p^N,q^N)}$. So, you encounter this group each time you have a system with $N$ degrees of freedom which can be described by Hamiltonian equations. (As you quantize the system, you end up with the double cover of $Sp(2N,\mathbb{R})$, the group $Mp(2N,\mathbb{R})$.)
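The defining condition is easy to verify directly. As a sketch (not from the source; the shear matrix is a hypothetical example), $Sp(2N, \mathbb{R})$ consists of the real $2N \times 2N$ matrices $M$ with $M^{\mathsf T} \Omega M = \Omega$, where $\Omega$ is the standard antisymmetric form:

```python
# Illustrative sketch: check that a symplectic "shear" (q, p) -> (q + S p, p)
# with S symmetric preserves the standard antisymmetric form Omega.
import numpy as np

N = 2
I = np.eye(N)
Omega = np.block([[np.zeros((N, N)), I],
                  [-I, np.zeros((N, N))]])

S = np.array([[1.0, 2.0],
              [2.0, 3.0]])                 # symmetric block
M = np.block([[I, S],
              [np.zeros((N, N)), I]])

print(np.allclose(M.T @ Omega @ M, Omega))  # True: M is in Sp(4, R)
```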
Besides this, there are some low-dimensional correspondences: ${Spin(3)=Sp(1)=SU(2)}$, ${Spin(4)=Sp(1)\times Sp(1)}$, ${Spin(5)=Sp(2)}$, which are also important in physics.
As a reference, I would recommend Peter Woit's book "Quantum Theory, Groups and Representations".
It is true that symplectic geometry plays a pivotal role in the Hamiltonian formulation, but this aspect is not really explored in Srednicki's book in any depth. Non-abelian Lie groups are mostly discussed in the context of Yang-Mills gauge theory. Unitarity imposes conditions on the gauge group, cf. e.g. this & this Phys.SE posts.
The non-compact symplectic group $Sp(2N,\mathbb{F})$ is mentioned in Problem 24.4. It is not appropriate as a gauge group.
Srednicki mentions in the text below eq. (69.23) a compact symplectic group. Presumably he means $Sp(N)\equiv USp(2N)\equiv U(2N)\cap Sp(2N,\mathbb{C})$, which is the topic of this Phys.SE post.
Confusingly, he uses the same notation $Sp(2N)$ in both cases.
|
Surface plasmon resonance (SPR).
Surface plasmon resonance (SPR) is the resonant oscillation of conduction electrons at the interface between a negative and positive permittivity material stimulated by incident light. The resonance condition is established when the frequency of incident photons matches the natural frequency of surface electrons oscillating against the restoring force of positive nuclei. SPR in subwavelength scale nanostructures can be polaritonic or plasmonic in nature.
SPR is the basis of many standard tools for measuring adsorption of material onto planar metal (typically gold or silver) surfaces or onto the surface of metal nanoparticles. It is the fundamental principle behind many color-based biosensor applications and different lab-on-a-chip sensors.
Contents
1 Explanation
2 Realization
3 Applications
3.1 SPR Immunoassay
3.2 Data interpretation
4 Examples
4.1 Layer-by-layer self-assembly
4.2 Binding constant determination
5 Magnetic plasmon resonance
6 See also
7 References
8 Further reading
Explanation
The surface plasmon polariton is a non-radiative electromagnetic surface wave that propagates in a direction parallel to the negative permittivity/dielectric material interface. Since the wave is on the boundary of the conductor and the external medium (air, water or vacuum, for example), these oscillations are very sensitive to any change of this boundary, such as the adsorption of molecules to the conducting surface. [1]
To describe the existence and properties of surface plasmon polaritons, one can choose from various models (quantum theory, Drude model, etc.). The simplest way to approach the problem is to treat each material as a homogeneous continuum, described by a frequency-dependent relative permittivity between the external medium and the surface. This quantity, hereafter referred to as the material's "dielectric function", is the complex permittivity. In order for the terms that describe the electronic surface plasmon to exist, the real part of the dielectric constant of the conductor must be negative and its magnitude must be greater than that of the dielectric. This condition is met in the infrared-visible wavelength region for air/metal and water/metal interfaces (where the real dielectric constant of a metal is negative and that of air or water is positive).
LSPRs (Localized SPRs) are collective electron charge oscillations in metallic nanoparticles that are excited by light. They exhibit enhanced near-field amplitude at the resonance wavelength. This field is highly localized at the nanoparticle and decays rapidly away from the nanoparticle/dielectric interface into the dielectric background, though far-field scattering by the particle is also enhanced by the resonance. Light intensity enhancement is a very important aspect of LSPRs and localization means the LSPR has very high spatial resolution (subwavelength), limited only by the size of nanoparticles. Because of the enhanced field amplitude, effects that depend on the amplitude such as the magneto-optical effect are also enhanced by LSPRs. [2] [3]
Realization
Otto configuration
Kretschmann configuration
In order to excite surface plasmons in a resonant manner, one can use electron bombardment or an incident light beam (visible and infrared are typical). The incoming beam has to match its momentum to that of the plasmon. [4]
In the case of p-polarized light (polarization occurs parallel to the plane of incidence), this is possible by passing the light through a block of glass to increase the wavenumber (and the momentum), and achieve the resonance at a given wavelength and angle. S-polarized light (polarization occurs perpendicular to the plane of incidence) cannot excite electronic surface plasmons. Electronic and magnetic surface plasmons obey the following dispersion relation:
\[K(\omega) = \frac{\omega}{c} \sqrt{\frac{\varepsilon_1 \varepsilon_2 \mu_1 \mu_2}{\varepsilon_1 \mu_1 + \varepsilon_2 \mu_2}}\]
where $\varepsilon$ is the dielectric constant and $\mu$ is the magnetic permeability of the material (1: the glass block, 2: the metal film).
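The dispersion relation can be evaluated numerically. The following sketch (not from the source; the permittivity values are illustrative, not measured material data) computes $K(\omega)$ for a glass/metal interface at a visible wavelength:

```python
# Illustrative sketch: evaluate the surface-plasmon dispersion relation
# K(ω) = (ω/c)·sqrt(ε1·ε2·μ1·μ2 / (ε1·μ1 + ε2·μ2)) for sample parameters.
import math

def plasmon_wavenumber(omega, eps1, eps2, mu1=1.0, mu2=1.0, c=2.998e8):
    num = eps1 * eps2 * mu1 * mu2
    den = eps1 * mu1 + eps2 * mu2
    return (omega / c) * math.sqrt(num / den)

# Glass (ε1 ≈ 2.25) against a metal with Re(ε2) = -16 (a gold-like
# magnitude in the visible; illustrative value only), at λ = 633 nm.
omega = 2 * math.pi * 2.998e8 / 633e-9
k = plasmon_wavenumber(omega, 2.25, -16.0)
print(f"K = {k:.3e} rad/m")
```

Note that the square root stays real precisely because the metal's (negative) permittivity dominates in magnitude, matching the existence condition stated above.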
Typical metals that support surface plasmons are silver and gold, but metals such as copper, titanium or chromium have also been used.
When using light to excite SP waves, there are two configurations which are well known. In the Otto setup, the light illuminates the wall of a glass block, typically a prism, and is totally internally reflected. A thin metal film (for example gold) is positioned close enough to the prism wall so that an evanescent wave can interact with the plasma waves on the surface and hence excite the plasmons.
In the Kretschmann configuration, the metal film is evaporated onto the glass block. The light again illuminates the glass block, and an evanescent wave penetrates through the metal film. The plasmons are excited at the outer side of the film. This configuration is used in most practical applications.
SPR emission
When the surface plasmon wave interacts with a local particle or irregularity, such as a rough surface, part of the energy can be re-emitted as light. This emitted light can be detected behind the metal film from various directions.
Applications
Surface plasmons have been used to enhance the surface sensitivity of several spectroscopic measurements including fluorescence, Raman scattering, and second harmonic generation. However, in their simplest form, SPR reflectivity measurements can be used to detect molecular adsorption, such as of polymers, DNA or proteins. Technically, it is common that the angle of the reflection minimum (absorption maximum) is measured. This angle changes on the order of 0.1° during thin (about nm thickness) film adsorption. (See also the Examples.) In other cases, the changes in the absorption wavelength are followed. [5]
The mechanism of detection is based on the fact that the adsorbing molecules cause changes in the local index of refraction, changing the resonance conditions of the surface plasmon waves.
If the surface is patterned with different biopolymers, using adequate optics and imaging sensors (i.e. a camera), the technique can be extended to surface plasmon resonance imaging (SPRI). This method provides a high contrast of the images based on the adsorbed amount of molecules, somewhat similar to Brewster angle microscopy (this latter is most commonly used together with a Langmuir–Blodgett trough).
For nanoparticles, localized surface plasmon oscillations can give rise to the intense colors of suspensions or sols containing the nanoparticles. Nanoparticles or nanowires of noble metals exhibit strong absorption bands in the ultraviolet-visible light regime that are not present in the bulk metal. This extraordinary absorption increase has been exploited to increase light absorption in photovoltaic cells by depositing metal nanoparticles on the cell surface. [6] The energy (color) of this absorption differs when the light is polarized along or perpendicular to the nanowire. [7] Shifts in this resonance due to changes in the local index of refraction upon adsorption to the nanoparticles can also be used to detect biopolymers such as DNA or proteins. Related complementary techniques include plasmon waveguide resonance, QCM, extraordinary optical transmission, and dual polarization interferometry.
SPR Immunoassay
The first SPR immunoassay was proposed in 1983 by Liedberg, Nylander, and Lundström, then of the Linköping Institute of Technology (Sweden). [8] They adsorbed human IgG onto a 600-angstrom silver film, and used the assay to detect anti-human IgG in water solution. Unlike many other immunoassays, such as ELISA, an SPR immunoassay is label-free in that a label molecule is not required for detection of the analyte. [9]
Data interpretation
The most common data interpretation is based on the Fresnel formulas, which treat the formed thin films as infinite, continuous dielectric layers. This interpretation may result in multiple possible refractive index and thickness values. However, usually only one solution is within the reasonable data range.
Metal particle plasmons are usually modeled using the Mie scattering theory.
In many cases no detailed models are applied, but the sensors are calibrated for the specific application, and used with interpolation within the calibration curve.
Examples Layer-by-layer self-assembly
One of the first common applications of surface plasmon resonance spectroscopy was the measurement of the thickness (and refractive index) of adsorbed self-assembled nanofilms on gold substrates. The resonance curves shift to higher angles as the thickness of the adsorbed film increases. This example is a 'static SPR' measurement.
When higher speed observation is desired, one can select an angle right below the resonance point (the angle of minimum reflectance), and measure the reflectivity changes at that point. This is the so-called 'dynamic SPR' measurement. The interpretation of the data assumes that the structure of the film does not change significantly during the measurement.
Binding constant determination
Association and dissociation signal
When the affinity of two ligands has to be determined, the binding constant must be determined. It is the equilibrium value of the product quotient. This value can also be found using the dynamic SPR parameters and, as in any chemical reaction, it is the dissociation rate divided by the association rate.
For this, a bait ligand is immobilized on the dextran surface of the SPR crystal. Through a microflow system, a solution with the prey analyte is injected over the bait layer. As the prey analyte binds the bait ligand, an increase in SPR signal (expressed in response units, RU) is observed. After the desired association time, a solution without the prey analyte (usually the buffer) is injected on the microfluidics to dissociate the bound complex between bait ligand and prey analyte. Now as the prey analyte dissociates from the bait ligand, a decrease in SPR signal (expressed in response units, RU) is observed. From these association ('on rate', $k_a$) and dissociation rates ('off rate', $k_d$), the equilibrium dissociation constant ('binding constant', $K_D$) can be calculated.
The actual SPR signal can be explained by the electromagnetic 'coupling' of the incident light with the surface plasmon of the gold layer. This plasmon can be influenced by the layer just a few nanometers across the gold–solution interface, i.e. the bait protein and possibly the prey protein. Binding makes the reflection angle change. The equilibrium dissociation constant is given by:
\[K_D = \frac{k_{\text{d}}}{k_{\text{a}}}\]
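In practice the two rates come from fitting the association and dissociation phases of the sensorgram; combining them is a one-line computation. A minimal sketch (not from the source; the rate values are hypothetical, not fitted data):

```python
# Illustrative sketch: equilibrium dissociation constant K_D = k_d / k_a
# from hypothetical fitted SPR rate constants.
k_a = 1.0e5    # association rate ('on rate'), 1/(M·s) — example value
k_d = 1.0e-3   # dissociation rate ('off rate'), 1/s  — example value

K_D = k_d / k_a
print(f"K_D = {K_D:.1e} M")   # 1.0e-08 M, i.e. a 10 nM interaction
```

A smaller $K_D$ means the complex dissociates more slowly relative to how fast it forms, i.e. tighter binding.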
Magnetic plasmon resonance
Recently, there has been an interest in magnetic surface plasmons. These require materials with large negative magnetic permeability, a property that has only recently been made available with the construction of metamaterials.
See also
References
^ S. Zeng; Baillargeat, Dominique; Ho, Ho-Pui; Yong, Ken-Tye (2014). "Nanomaterials enhanced surface plasmon resonance for biological and chemical sensing applications". Chemical Society Reviews 43 (10): 3426–3452.
^ González-Díaz, Juan B.; García-Martín, Antonio; García-Martín, José M.; Cebollada, Alfonso; Armelles, Gaspar; Sepúlveda, Borja; Alaverdyan, Yury; Käll, Mikael (2008). "Plasmonic Au/Co/Au nanosandwiches with Enhanced Magneto-Optical Activity". Small 4 (2): 202–5.
^ Du, Guan Xiang; Mori, Tetsuji; Suzuki, Michiaki; Saito, Shin; Fukuda, Hiroaki; Takahashi, Migaku (2010). "Evidence of localized surface plasmon enhanced magneto-optical effect in nanodisk array". Appl. Phys. Lett. 96 (8): 081915.
^ Zeng, Shuwen; Yu, Xia; Law, Wing-Cheung; Zhang, Yating; Hu, Rui; Dinh, Xuan-Quyen; Ho, Ho-Pui; Yong, Ken-Tye (2013). "Size dependence of Au NP-enhanced surface plasmon resonance based on differential phase measurement". Sensors and Actuators B: Chemical 176: 1128.
^ Minh Hiep, Ha; Endo, Tatsuro; Kerman, Kagan; Chikae, Miyuki; Kim, Do-Kyun; Yamamura, Shohei; Takamura, Yuzuru; Tamiya, Eiichi (2007). "A localized surface plasmon resonance based immunosensor for the detection of casein in milk". Sci. Technol. Adv. Mater. 8 (4): 331.
^ Pillai, S.; Catchpole, K. R.; Trupke, T.; Green, M. A. (2007). "Surface plasmon enhanced silicon solar cells". J. Appl. Phys. 101 (9): 093105.
^ Locharoenrat, Kitsakorn; Sano, Haruyuki; Mizutani, Goro (2007). "Phenomenological studies of optical properties of Cu nanowires". Sci. Technol. Adv. Mater. 8 (4): 277.
^ Liedberg, Bo; Nylander, Claes; Lunström, Ingemar (1983). "Surface plasmon resonance for gas detection and biosensing". Sensors and Actuators 4: 299.
^ Rich, RL; Myszka, DG (2007). "Higher-throughput, label-free, real-time molecular interaction analysis". Analytical Biochemistry 361 (1): 1–6.
Further reading
A selection of free-download papers on Plasmonics in New Journal of Physics
Heinz Raether (1988). Surface Plasmons on Smooth and Rough Surfaces and on Gratings. Springer Verlag, Berlin.
Stefan Maier (2007). Plasmonics: Fundamentals and Applications. Springer.
Richard B M Schasfoort; Anna J Tudos, eds. (2008). Handbook of Surface Plasmon Resonance. RSC Publishing.
A short detailed synopsis of how surface plasmon resonance works in practice
|
The Field of Real Numbers
We will now look at some axioms regarding the set of real numbers $\mathbb{R}$. We will note that an "axiom" is a statement that isn't meant to necessarily be proven and instead, they're statements that are given. These axioms are rather straightforward and may seem trivial, however, we will subsequently use them in order to prove many simple theorems and build a foundation for the set of real numbers.
The Axioms of the Field of Real Numbers
Let $\mathbb{R}$ denote the set of real numbers and let $+ : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ denote the binary operation of addition and let $\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ denote the binary operation of multiplication. Then for all $a, b, c \in \mathbb{R}$, the following axioms hold:
Axiom A1: $a + b = b + a$ (Commutativity of Addition).
Axiom A2: $a + (b + c) = (a + b) + c$ (Associativity of Addition).
Axiom A3: There exists an element $0 \in \mathbb{R}$ such that $a + 0 = 0 + a = a$ (Existence of an Additive Identity).
Axiom A4: There exists an element $-a \in \mathbb{R}$ such that $a + (-a) = (-a) + a = 0$ (Existence of Additive Inverses).
Axiom M1: $a \cdot b = b \cdot a$ (Commutativity of Multiplication).
Axiom M2: $a \cdot (b \cdot c) = (a \cdot b) \cdot c$ (Associativity of Multiplication).
Axiom M3: There exists an element $1 \in \mathbb{R}$ such that $a \cdot 1 = 1 \cdot a = a$ (Existence of a Multiplicative Identity).
Axiom M4: For each $a \neq 0$ there exists an element $a^{-1} \in \mathbb{R}$ such that $a \cdot a^{-1} = a^{-1} \cdot a = 1$ (Existence of Multiplicative Inverses).
Axiom D: $a \cdot (b + c) = a \cdot b + a \cdot c$ (Distributivity of Multiplication over Addition).
We note that these axioms define a special algebraic structure known as a field, so we say that $\mathbb{R}$ is a field under the operations of addition $+$ and multiplication $\cdot$.
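The axioms can be spot-checked mechanically. The sketch below (not from the source) tests them on exact rationals — a subfield of $\mathbb{R}$ — so that equality comparisons are exact rather than floating-point approximations:

```python
# Illustrative sketch: randomized spot-check of the field axioms on exact
# rationals (Fraction), where == is exact arithmetic equality.
from fractions import Fraction
import random

random.seed(1)

def rand_frac():
    return Fraction(random.randint(-9, 9), random.randint(1, 9))

for _ in range(500):
    a, b, c = rand_frac(), rand_frac(), rand_frac()
    assert a + b == b + a                      # A1
    assert a + (b + c) == (a + b) + c          # A2
    assert a + 0 == a and a + (-a) == 0        # A3, A4
    assert a * b == b * a                      # M1
    assert a * (b * c) == (a * b) * c          # M2
    assert a * 1 == a                          # M3
    if a != 0:
        assert a * (1 / a) == 1                # M4 (only for a != 0)
    assert a * (b + c) == a * b + a * c        # D

print("all axioms hold on the sample")
```

Note how the `a != 0` guard mirrors the restriction in Axiom M4: zero has no multiplicative inverse.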
|
Physics of Rotational Motion
The laws and equations that govern nature and natural phenomena are described by physics. One prime focus of physics is motion. We have dealt in detail with translational motion (objects that move along a straight or curved line) in the previous chapters, and now we will expand our view towards other types of motion as well.
We see rotational motion in almost everything around us: every machine, celestial bodies, most of the fun rides in amusement parks and, if you are a FIFA fan, the ball in one of David Beckham's familiar shots is actually executing rotational motion.
Objects turn about an axis. In rotation, the particles of the body and its centre of mass do not all undergo identical motions, whereas in pure translation every particle undergoes identical motion. It therefore becomes essential for us to explore how the different particles of a rigid body move when the body rotates.
Rotational Kinematics
In rotational kinematics, we will investigate the relations between the kinematical parameters of rotation. We shall now revisit the angular equivalents of the linear quantities position, displacement, velocity and acceleration, which we have already dealt with in circular motion.
| Linear Kinematic Parameters | Angular Kinematic Parameters |
| --- | --- |
| Position \(s\) | Angular position \(\theta\) |
| Displacement \(\Delta s = s_2 - s_1\) | Angular displacement \(\Delta \theta = \theta_2 - \theta_1\) |
| Average velocity \(v_{avg} = \frac{\Delta s}{\Delta t}\) | Average angular velocity \(\omega_{avg} = \frac{\Delta \theta}{\Delta t}\) |
| Instantaneous velocity \(v_{ins} = \lim_{\Delta t \to 0} \frac{\Delta s}{\Delta t} = \frac{ds}{dt}\) | Instantaneous angular velocity \(\omega_{ins} = \lim_{\Delta t \to 0} \frac{\Delta \theta}{\Delta t} = \frac{d\theta}{dt}\) |
| Average acceleration \(a_{avg} = \frac{\Delta v}{\Delta t}\) | Average angular acceleration \(\alpha_{avg} = \frac{\Delta \omega}{\Delta t}\) |
| Instantaneous acceleration \(a_{ins} = \lim_{\Delta t \to 0} \frac{\Delta v}{\Delta t} = \frac{dv}{dt}\) | Instantaneous angular acceleration \(\alpha_{ins} = \lim_{\Delta t \to 0} \frac{\Delta \omega}{\Delta t} = \frac{d\omega}{dt}\) |
A case of constant angular acceleration is of great importance and a parallel set of equations holds for this case just as in constant linear acceleration.
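These constant-angular-acceleration relations, \(\omega = \omega_0 + \alpha t\) and \(\theta - \theta_0 = \omega_0 t + \frac{1}{2}\alpha t^2\), can be sketched as a small helper (not from the source; SI units assumed, values hypothetical):

```python
# Illustrative sketch: constant-angular-acceleration kinematics, with the
# third relation ω² = ω0² + 2α(θ - θ0) used as a consistency check.
def angular_state(theta0, omega0, alpha, t):
    """Return (theta, omega) after time t under constant alpha."""
    omega = omega0 + alpha * t
    theta = theta0 + omega0 * t + 0.5 * alpha * t * t
    return theta, omega

theta, omega = angular_state(theta0=0.0, omega0=2.0, alpha=0.5, t=4.0)
print(theta, omega)  # 12.0 rad, 4.0 rad/s
assert abs(omega**2 - (2.0**2 + 2 * 0.5 * (theta - 0.0))) < 1e-12
```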
| Linear Equations of Motion | Angular Equations of Motion |
| --- | --- |
| \(v = v_0 + at\) | \(\omega = \omega_0 + \alpha t\) |
| \(x - x_0 = v_0 t + \frac{1}{2}at^2\) | \(\theta - \theta_0 = \omega_0 t + \frac{1}{2}\alpha t^2\) |
| \(v^2 = v_0^2 + 2a(x - x_0)\) | \(\omega^2 = \omega_0^2 + 2\alpha(\theta - \theta_0)\) |

Axis of Rotation
A rigid body of arbitrary shape in rotation about a fixed axis (an axis that does not move), called the axis of rotation or rotation axis, is shown in the figure.
Types of Motion involving Rotation
1. Rotation about a fixed axis (pure rotation)
2. Rotation about an axis of rotation (combined translational and rotational motion)
3. Rotation about an axis in the rotation (rotating axis – out of the scope of JEE)
Rotation About a Fixed Axis
Rotation of a ceiling fan, opening and closing of the door, rotation of our planet, rotation of hour and minute hands in analogue clocks are few examples of this type.
Rotation about an axis of rotation
Rolling is an example of this category. Arguably, the most important application of rotational physics is in the rolling of wheels and wheel-like objects, as our world is now filled with automobiles and other rolling vehicles.
Rolling motion of a body is a combination of both translational and rotational motion of a round-shaped body placed on a surface. When a body is set in rolling motion, every particle of the body has two velocities – one due to its rotational motion and the other due to the translational motion of its centre of mass – and the resulting motion is the vector sum of both velocities at every particle.
Kinetic Energy of Rotation
The rapidly rotating blades of a table saw machine and the blades of a fan certainly have kinetic energy due to the rotation. If we apply the familiar equation \(K = \frac{1}{2}Mv_{com}^2\) to the saw machine as a whole, it would give us the kinetic energy of its centre of mass only, which is zero.
The right approach:
We shall treat the saw machine or any rotating rigid body as a collection of particles with different speeds, and sum up the kinetic energies of all the particles to find the rotational kinetic energy of the whole body: \(K = \sum \frac{1}{2} m_i v_i^2 = \frac{1}{2}\left(\sum m_i r_i^2\right)\omega^2 = \frac{1}{2}I\omega^2\), where \(I\) is the moment of inertia about the rotation axis.
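This particle-by-particle summation can be checked directly. The sketch below (not from the source; the masses and radii are hypothetical) confirms that summing \(\frac{1}{2}m_i v_i^2\) with \(v_i = \omega r_i\) agrees with \(\frac{1}{2}I\omega^2\):

```python
# Illustrative sketch: rotational kinetic energy as a sum over particles
# equals (1/2)·I·ω² with I = Σ m_i·r_i².
masses = [0.5, 1.0, 1.5]    # kg, hypothetical point masses
radii = [0.2, 0.4, 0.6]     # m, distances from the rotation axis
omega = 3.0                  # rad/s

ke_particles = sum(0.5 * m * (omega * r) ** 2 for m, r in zip(masses, radii))
I = sum(m * r * r for m, r in zip(masses, radii))
ke_rotational = 0.5 * I * omega ** 2
print(ke_particles, ke_rotational)  # the two agree by construction
```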
What is Torque?
Torque is a rotational analogue of force and expresses the tendency of a force applied to an object to cause the object to rotate about a given point.
If you want to open a door, you will apply a force on the doorknob, which is located as far as possible from the hinges of the door. If you apply the force nearer to the hinge line than the knob, or at any angle other than 90° to the plane of the door, you must apply a greater force than before to rotate the door.
To determine how the applied force results in a rotation of the body about an axis, we resolve the force \(\vec{F}\) into two components. The tangential component (\(F\sin\theta\)) is perpendicular to \(\vec{r}\) and does cause rotation, whereas the radial component (\(F\cos\theta\)) does not cause rotation because it acts along the line that intersects the axis or pivot point.
The ability to rotate the body depends on the magnitude of the tangential component and also on how far from the axis the force is applied ($r$, the moment arm). Mathematically, this is represented as \(\vec{\tau }=\vec{r}\times \vec{F}\)
SI unit of torque is Nm.
To find the direction of \(\vec{\tau },\) we use the right-hand rule: sweeping the fingers from \(\vec{r}\) (the first vector in the product) into \(\vec{F}\) (the second vector in the product), the outstretched thumb gives the direction of \(\vec{\tau }.\)
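The cross-product definition can be made concrete with a small sketch. The moment arm and force below are illustrative numbers of my own choosing, with the force perpendicular to the arm, so the magnitude should be $rF\sin 90° = rF$:

```python
# Torque as a vector product, tau = r x F, using a plain 3-vector
# cross product (illustrative numbers, not from the text).
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r = (0.8, 0.0, 0.0)      # moment arm: 0.8 m along x
F = (0.0, 50.0, 0.0)     # 50 N force along y, perpendicular to r

tau = cross(r, F)        # expected to point along z with magnitude ~40 N m
assert abs(tau[2] - 0.8 * 50.0) < 1e-9
```

Swapping the order of the arguments flips the sign of every component, which is the right-hand-rule statement that $\vec{F}\times\vec{r} = -\vec{r}\times\vec{F}$.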
Newton’s Second law of Rotation
If the net torque acting on a body about an axis fixed in an inertial frame is \(\vec{\tau }\) and the moment of inertia about that axis is $I$, then the angular acceleration of the body is given by the relation \(\overrightarrow{~\tau }=I\overrightarrow{\alpha ~}\)
Rotational Equilibrium
The centre of mass of a body remains in equilibrium if the total external force acting on the body is zero. This follows from the equation F = Ma.
Similarly, a body remains in rotational equilibrium if the total external torque acting on it is zero. This follows from the equation τ = Iα. A body in rotational equilibrium must therefore either be at rest or rotate with constant angular velocity.
Thus, if a body remains at rest in an inertial frame, the total external force acting on the body should be zero in any direction and the total external torque should be zero about any line.
Under the action of several coplanar forces, the net torque is zero for rotational equilibrium.
Note: If the net force on the body is zero, then the net torque may or may not be zero.
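The coplanar case can be illustrated with the classic seesaw balance. The masses and distance below are hypothetical numbers: a uniform beam is pivoted at its centre, and we solve for where the second child must sit so the net torque about the pivot vanishes.

```python
# Rotational equilibrium of a seesaw: net torque about the pivot is zero.
# (Illustrative numbers, not from the text.)
g = 9.8                     # m/s^2
m1, d1 = 40.0, 1.5          # 40 kg child sitting 1.5 m left of the pivot
m2 = 30.0                   # 30 kg child on the right; where must she sit?

# Setting m1*g*d1 = m2*g*d2 and solving for d2:
d2 = m1 * d1 / m2           # 2.0 m

net_torque = m1 * g * d1 - m2 * g * d2
assert abs(net_torque) < 1e-9
```

Note that the weights also sum with the pivot's normal force to zero, so both equilibrium conditions (zero net force and zero net torque) are satisfied.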
Angular Momentum
The concepts of linear momentum and conservation of linear momentum are extremely powerful tools for predicting the outcome of a collision of two objects without any other details of the collision. Likewise, the angular counterpart, angular momentum, plays a crucial role in orbital mechanics.
Angular momentum of a particle about a given point is given by,\(\vec{l}=\vec{r}\times \vec{p}=m\left( \vec{r}\times \vec{v} \right)\)
The direction of the angular momentum is also given by the right-hand rule (see torque above).
Newton’s law in angular form:
The vector sum of all the torques acting on a particle is equal to the time rate of change of the angular momentum of that particle.\({{\overrightarrow{~\tau }}_{net}}=~\frac{d\vec{l}}{dt}\)
Conservation of Angular Momentum
By the definition of torque, \({{\overrightarrow{~\tau }}_{net}}=\frac{d\vec{l}}{dt}\). If \({{\overrightarrow{~\tau }}_{net}}=0\), then \(\frac{d\vec{l}}{dt}=0\), so \(\vec{l}\) is constant.
When the resultant torque acting on a system is zero, the total vector angular momentum of the system remains constant. This is called the principle of conservation of angular momentum.
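The standard spinning-skater example can be sketched numerically. The moments of inertia and initial angular speed below are illustrative values: pulling the arms in reduces $I$, so $\omega$ must increase to keep $L = I\omega$ constant.

```python
# Conservation of angular momentum for a spinning skater
# (illustrative numbers, not from the text).
I1, omega1 = 4.0, 2.0        # arms out: kg m^2, rad/s
I2 = 1.0                     # arms pulled in

L = I1 * omega1              # angular momentum, conserved (no external torque)
omega2 = L / I2              # 8.0 rad/s: four times faster

# Rotational kinetic energy is NOT conserved here; the skater does work
# pulling the arms inward:
ke1 = 0.5 * I1 * omega1 ** 2   # 8.0 J
ke2 = 0.5 * I2 * omega2 ** 2   # 32.0 J
assert abs(I1 * omega1 - I2 * omega2) < 1e-12
```

The kinetic-energy comparison is worth noting: conservation of $L$ says nothing about conservation of rotational kinetic energy.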
Examples of conservation of angular momentum
Combined Translational and Rotational Motion: rolling motion is one such example.
|
I know that the geodesics for Euclidean Space are straight lines and likewise in the absence of forces like gravity, the geodesics are straight lines. But what if you took some curved lines and tried to work backwards to determine the geometry of the space consisting of the curved geodesics. How would one go about determining the shape of this space? Would this even be a possible or useful approach?
From the set of geodesics, as the previous answers explain, the shape of the spacetime can only be partially determined. However, knowing a bit more information, the metric can be fully determined in a neighborhood of every point. I think this is a useful calculation, because it shows how we, as observers inside the spacetime, can determine its nature through experiments.
The argument is a sketch of the one given in Sec. 3.2 of "The Large Scale Structure of Space-Time" by Hawking and Ellis: given the null vectors of the spacetime, the functional form of the metric is determined by local causality and the material content.

Part 1: Geodesics and null-vector relation
Consider an observer in the spacetime at a point $p$. The observer can throw test particles, which move along non-spacelike geodesics whose tangent vectors are elements of $T_p$. Throwing enough test particles along different geodesics (which is equivalent to knowing all the timelike geodesics that pass through $p$), we can determine the null cone.
In simple words, by throwing particles from $p$ and seeing which points of the manifold can be reached, the null cone can be determined as the boundary of such a hypersurface.
Part 2: The null cone determines the functional form of the metric up to a conformal factor.
Suppose all the vectors of the null cone are known, as well as the timelike vectors (i.e. we can distinguish which geodesics are causal in our spacetime). Then every vector of the spacetime that is neither null nor timelike must be spacelike.
Let $\mathbf{T}$ be a timelike vector, and $\mathbf{S}$ a spacelike vector. Then there exist two values of $\lambda\in\mathbb{R}-\{0\}$ for which $\mathbf{T}+\lambda\mathbf{S}$ is null, so
$$ 0=\mathbf{g}(\mathbf{T}+\lambda\mathbf{S},\mathbf{T}+\lambda\mathbf{S})=\mathbf{g}(\mathbf{T},\mathbf{T})+2\lambda\mathbf{g}(\mathbf{T},\mathbf{S})+\lambda^2\mathbf{g}(\mathbf{S},\mathbf{S}) $$
This is a polynomial in $\lambda$ whose roots $\lambda_1,\lambda_2$ are known (since we know all the vectors and their character, we can determine, for a given pair $\mathbf{T},\mathbf{S}$, which $\lambda$ makes $\mathbf{T}+\lambda\mathbf{S}$ null), so it is true that:
$$ \mathbf{g}(\mathbf{T},\mathbf{T})+2\lambda\mathbf{g}(\mathbf{T},\mathbf{S})+\lambda^2\mathbf{g}(\mathbf{S},\mathbf{S})=\mathbf{g}(\mathbf{S},\mathbf{S})(\lambda-\lambda_1)(\lambda-\lambda_2)\Rightarrow \lambda_1\lambda_2=\frac{\mathbf{g}(\mathbf{T},\mathbf{T})}{\mathbf{g}(\mathbf{S},\mathbf{S})} $$
So the ratio between the norms of a timelike and a spacelike vector can be found knowing the null cone.
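As a sanity check of the identity $\lambda_1\lambda_2 = \mathbf{g}(\mathbf{T},\mathbf{T})/\mathbf{g}(\mathbf{S},\mathbf{S})$, the sketch below works in flat Minkowski space with metric $\mathrm{diag}(-1,1,1,1)$ and sample vectors of my own choosing (not from the answer): it solves the quadratic for the null values of $\lambda$ and verifies the product of the roots.

```python
# Verify lambda1 * lambda2 = g(T,T) / g(S,S) for the Minkowski metric
# diag(-1, 1, 1, 1) and one sample timelike T, spacelike S (illustrative).
eta = [-1.0, 1.0, 1.0, 1.0]

def g(u, v):
    return sum(e * a * b for e, a, b in zip(eta, u, v))

T = (2.0, 0.5, 0.0, 0.0)     # g(T,T) = -4 + 0.25 = -3.75 < 0: timelike
S = (0.5, 2.0, 0.0, 0.0)     # g(S,S) = -0.25 + 4 = 3.75 > 0: spacelike

# g(T + lam*S, T + lam*S) = g(S,S)*lam^2 + 2*g(T,S)*lam + g(T,T) = 0
a, b, c = g(S, S), 2.0 * g(T, S), g(T, T)
disc = (b * b - 4.0 * a * c) ** 0.5
lam1, lam2 = (-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)

assert abs(lam1 * lam2 - g(T, T) / g(S, S)) < 1e-12
```

For this choice $g(\mathbf{T},\mathbf{S}) = 0$, so the null directions are $\mathbf{T}\pm\mathbf{S}$ and the product of the roots is $-1$, matching $g(\mathbf{T},\mathbf{T})/g(\mathbf{S},\mathbf{S}) = -3.75/3.75$.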
Now let $\mathbf{W},\mathbf{Z}$ be two non-null vectors, then
$$ \mathbf{g}(\mathbf{W},\mathbf{Z})=\frac{1}{2}\left(\mathbf{g}(\mathbf{W},\mathbf{W})+\mathbf{g}(\mathbf{Z},\mathbf{Z})-\mathbf{g}(\mathbf{W+Z},\mathbf{W+Z})\right) $$
Each of the terms on the RHS can be connected to $\mathbf{g}(\mathbf{S},\mathbf{S})$ using different values of $\lambda_1,\lambda_2$. So for every pair $\mathbf{W},\mathbf{Z}$, the value of $\mathbf{g}(\mathbf{W},\mathbf{Z})$ is known up to a factor $\mathbf{g}(\mathbf{S},\mathbf{S})$.
Part 3: The material content determines the conformal factor up to a measuring gauge.
For now we have that $\mathbf{\hat{g}}=\Omega^2\mathbf{g}$ where $\mathbf{g}$ is known.
Let the energy momentum tensor for the material fields be $T^{ab}$, satisfying $\nabla_aT^{ab}=0$. Since the spacetime must be locally Minkowski (equivalent to taking normal coordinates), there is a neighbourhood of $p$ in which we can define "almost Killing vectors", taking the Killing vectors of the Minkowski spacetime $K_a$. Since $K_aT^{ab}$ is a conserved current in Minkowski space, it will be almost conserved in our neighbourhood, in the sense that the first approximation vanishes. In particular that means that energy and momentum conservation hold approximately in the neighbourhood of $p$.
Given the timelike geodesic with respect to the metric $\mathbf{g}$ trajectory of a particle $\gamma(t)$ with tangent vector $\mathbf{K}=\partial_t$, the geodesic equation reads:
$$ K^{\left[b\right.}\frac{\mathbf{\hat{D}}}{\partial_t}K^{\left.a\right]}=K^{\left[b\right.}\frac{\mathbf{D}}{\partial_t}K^{\left.a\right]}-(K^cK^d\hat{g}_{cd})K^{\left[b\right.}g^{\left.a\right]e}\partial_e(\log\Omega) $$
Since $\gamma$ is a geodesic with respect to $\mathbf{g}$, the first term vanishes. By considering another curve $\gamma^\prime$ whose tangent vector is not parallel to $\mathbf{K}$, $\Omega$ can be found up to a constant factor. This constant factor corresponds to an arbitrary normalization (i.e. choosing a measure of time).
The quantity you would like to obtain is the metric, which completely characterizes the shape of a space (a Riemannian manifold, to be more precise). Unfortunately, knowing even every geodesic on a Riemannian manifold does not uniquely determine the metric. See, for example, this discussion: https://mathoverflow.net/questions/132244/can-one-recover-a-metric-from-geodesics.
As explained there, $\mathbb{R}^4$ with either Euclidean or Minkowski metric produces the same set of geodesics, namely straight lines, so in general it cannot be possible to obtain the metric from just the geodesics. Other notions of shape, such as the curvature tensor and sectional curvature, are defined in terms of this metric.
This problem is a case of obtaining local properties from global ones. The metric is a local quantity; it depends only on the chosen point of the manifold and a local neighborhood around it. Geodesics, on the other hand, connect points on the manifold which may not even lie on the same coordinate chart, appealing to the larger global structure of the manifold.
There are a variety of theorems dealing with the relationship between local and global properties in Riemannian geometry, a prominent example of which is the Hopf-Rinow theorem. This tells you that a Riemannian manifold is a complete metric space if and only if it is geodesically complete - that is, if at any point you can extend a geodesic infinitely far in any direction, then the metric on your space is such that the manifold is complete. We can deduce a few interesting properties like this, but we cannot fully determine the metric from information about the geodesics.
Not sure if this is exactly what you're looking for, but Synge's Theorem is a classic result in Riemannian geometry relating curvature to topology. In the proof of the theorem, one essentially performs a stability analysis of closed geodesics, which relates the properties of geodesics to the curvature of the manifold. This involves taking the second variation of the arc length functional, which unsurprisingly comes out with a term proportional to the curvature tensor. By examining how the closed geodesics behave under small variations, one can draw conclusions about the global properties of the space (i.e. its topology). For an example of how this works related to your immediate question on surfaces, consider the closed geodesics of $S^2$. These are great circles. Any of these geodesics can be shrunk by a small variation, for example by shifting them slightly towards one of the poles. By repeating this process, one can then shrink the geodesic to a point. This is clearly related to the fact that $S^2$ is simply connected. Hope this helped.
|
Is the Set of All Orthogonal Matrices a Vector Space? Problem 611
An $n\times n$ matrix $A$ is called orthogonal if $A^{\trans}A=I$. Let $V$ be the vector space of all real $2\times 2$ matrices.
Consider the subset
\[W:=\{A\in V \mid \text{$A$ is an orthogonal matrix}\}.\]
Prove or disprove that $W$ is a subspace of $V$.

Solution.
We claim that $W$ is not a subspace of $V$.
One way to see that $W$ is not a subspace of $V$ is to note that the zero vector $O$ in $V$, which is the $n\times n$ zero matrix, is not in $W$ as we have $O^{\trans}O=O\neq I$.
Thus, $W$ is not a subspace of $V$.

Another approach
You may also show that scalar multiplication (or addition) is not closed in $W$.
For example, the identity matrix $I$ is orthogonal as $I^{\trans}I=I$, and thus $I$ is an element in $W$.
However, the scalar product $2I$ is not orthogonal since \[(2I)^{\trans}(2I)=4I\neq I.\]
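Both failures above can be checked mechanically. The sketch below hand-codes the $2\times 2$ orthogonality test $A^{\trans}A = I$ and confirms that the zero matrix is not in $W$ and that $2I$ leaves $W$:

```python
# W = {orthogonal 2x2 matrices} fails both subspace tests: it misses the
# zero matrix, and it is not closed under scalar multiplication.
def is_orthogonal(A):
    # Check A^T A == I for a 2x2 matrix given as [[a, b], [c, d]].
    (a, b), (c, d) = A
    AtA = [[a * a + c * c, a * b + c * d],
           [a * b + c * d, b * b + d * d]]
    return AtA == [[1.0, 0.0], [0.0, 1.0]]

I = [[1.0, 0.0], [0.0, 1.0]]
O = [[0.0, 0.0], [0.0, 0.0]]
twoI = [[2.0, 0.0], [0.0, 2.0]]

assert is_orthogonal(I)          # I is in W
assert not is_orthogonal(O)      # the zero vector of V is not in W
assert not is_orthogonal(twoI)   # a scalar multiple of I leaves W
```

Either observation alone is enough to disprove that $W$ is a subspace.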
|
Partial Linearity of Integrals of Upper Functions on General Intervals
Recall from the Upper Functions and Integrals of Upper Functions page that a function $f$ defined on an interval $I$ is said to be an upper function on $I$ if there exists an increasing sequence of functions $(f_n(x))_{n=1}^{\infty}$ that converges to $f$ almost everywhere on $I$ and such that $\displaystyle{\lim_{n \to \infty} \int_I f_n(x) \: dx}$ is finite.
Furthermore, we defined the integral of $f$ on $I$ to be:
(1)
\begin{align} \quad \int_I f(x) \: dx = \lim_{n \to \infty} \int_I f_n(x) \: dx \end{align}
We also noted that the choice of generating sequence $(f_n(x))_{n=1}^{\infty}$ of $f$ did not matter as long as it satisfied the conditions above.
In the following theorems we will see that part of the linearity properties of integrals of upper functions holds.
Theorem 1: Let $f$ and $g$ both be upper functions defined on $I$. Then $f + g$ is an upper function on $I$ and $\displaystyle{\int_I [f(x) + g(x)] \: dx = \int_I f(x) \: dx + \int_I g(x) \: dx}$. Proof: Let $f$ and $g$ both be upper functions defined on $I$. Then there exist increasing sequences $(f_n(x))_{n=1}^{\infty}$ and $(g_n(x))_{n=1}^{\infty}$ that converge to $f$ and $g$ (respectively) almost everywhere on $I$ and such that $\displaystyle{\lim_{n \to \infty} \int_I f_n(x) \: dx}$ and $\displaystyle{\lim_{n \to \infty} \int_I g_n(x) \: dx}$ are finite. Consider the sequence $(f_n(x) + g_n(x))_{n=1}^{\infty}$. Then this is an increasing sequence of functions that converges to $f + g$ almost everywhere on $I$ and furthermore, $\displaystyle{\lim_{n \to \infty} \int_I [f_n(x) + g_n(x)] \: dx = \lim_{n \to \infty} \int_I f_n(x) \: dx + \lim_{n \to \infty} \int_I g_n(x) \: dx}$ which is finite. So $f + g$ is an upper function on $I$ and furthermore:
\begin{align} \quad \int_I [f(x) + g(x)] \: dx = \int_I f(x) \: dx + \int_I g(x) \: dx \quad \blacksquare \end{align}
Theorem 2: Let $f$ be an upper function defined on $I$ and let $c \in \mathbb{R}$, $c \geq 0$. Then $cf$ is an upper function on $I$ and $\displaystyle{\int_I cf(x) \: dx = c \int_I f(x) \: dx}$. It is very important to note that $c \geq 0$. Proof: Let $f$ be an upper function defined on $I$. Then there exists an increasing sequence $(f_n(x))_{n=1}^{\infty}$ that converges to $f$ almost everywhere on $I$ and such that $\displaystyle{\lim_{n \to \infty} \int_I f_n(x) \: dx}$ is finite. For $c \geq 0$, consider the sequence $(cf_n(x))_{n=1}^{\infty}$. Then this is an increasing sequence of functions that converges to $cf$ almost everywhere on $I$ and furthermore, $\displaystyle{\lim_{n \to \infty} \int_I cf_n(x) \: dx = c \lim_{n \to \infty} \int_I f_n(x) \: dx}$ is finite. So $cf$ is an upper function on $I$ and furthermore:
\begin{align} \quad \int_I cf(x) \: dx = c \int_I f(x) \: dx \quad \blacksquare \end{align}
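These linearity properties can be illustrated numerically. The sketch below is an illustration only: it uses continuous functions (which are upper functions on a bounded interval, with integrals agreeing with the Riemann integral) and a midpoint Riemann sum; the particular functions and interval are arbitrary choices.

```python
import math

# Midpoint Riemann sum approximating the integral over I = [0, 1].
def integral(h, n=100000):
    dx = 1.0 / n
    return sum(h((k + 0.5) * dx) for k in range(n)) * dx

f = math.sin
g = math.exp

# Theorems 1 and 2 together: int (f + 2g) = int f + 2 * int g.
lhs = integral(lambda x: f(x) + 2.0 * g(x))
rhs = integral(f) + 2.0 * integral(g)
assert abs(lhs - rhs) < 1e-9
```

The agreement is unsurprising (both sides are the same finite sum rearranged), but it shows concretely what the theorems assert at the level of integrals.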
|
Let $\mathscr{M}$ be a model category and let $\mathscr{I}$ be a small category. Consider any homotopy colimit functor $\text{hcolim}_{\mathscr{M}}^{\mathscr{I}}\colon\mathscr{M}^{\mathscr{I}}\longrightarrow \mathscr{M}$ of shape $\mathscr{I}$ on $\mathscr{M}$. Here I adopt the notion of homotopy colimit considered in Homotopy Limit Functors on Model Categories and Homotopical Categories where a homotopy colimit is defined as a left approximation of the colimit functor.
Given a functor $X\colon\mathscr{I}\longrightarrow \mathscr{M}$, let me call a cocone $X\rightarrow P$ from $X$ (that is, a natural transformation $X\rightarrow cP$, where $P\in\mathscr{M}$ and $c$ is the constant functor $\mathscr{M}\longrightarrow \mathscr{M}^{\mathscr{I}}$) a homotopy colimit cocone if the canonical composite map$$\text{hcolim}_{\mathscr{M}}^{\mathscr{I}}X\rightarrow \text{colim}_{\mathscr{M}}^{\mathscr{I}}X\rightarrow P$$ is a weak equivalence. Note that, given another homotopy colimit functor $\textbf{hcolim}_{\mathscr{M}}^{\mathscr{I}}$ of shape $\mathscr{I}$ over $\mathscr{M}$, the map $\text{hcolim}_{\mathscr{M}}^{\mathscr{I}}X\rightarrow P$ is a weak equivalence if and only if $\textbf{hcolim}_{\mathscr{M}}^{\mathscr{I}}X\rightarrow P$ is a weak equivalence because $\text{hcolim}_{\mathscr{M}}^{\mathscr{I}}$ and $\textbf{hcolim}_{\mathscr{M}}^{\mathscr{I}}$ are naturally weakly equivalent as functors over the colimit functor $\text{colim}_{\mathscr{M}}^{\mathscr{I}}$.
Let me now say that a functor $F\colon \mathscr{M}\longrightarrow \mathscr{N}$ between model categories preserves homotopy colimit cocones if, for every $X\in\mathscr{M}^{\mathscr{I}}$ (for a small category $\mathscr{I}$) and every homotopy colimit cocone $X\rightarrow P$ from $X$, $F\circ X\rightarrow FP$ is a homotopy colimit cocone from $F\circ X$.
Of course, we can dually define the concepts of homotopy limit cones and of preserving homotopy limit cones.
I would like to know whether these notions are somehow well understood and/or useful. I am in particular interested in the following questions, which I cannot answer:
1) Let $F\colon \mathscr{M}\longrightarrow \mathscr{N}$ be a left Quillen functor and let $Q\colon\mathscr{M}\longrightarrow \mathscr{M}$ be a cofibrant replacement functor on $\mathscr{M}$. Does $F\circ Q$ preserve homotopy colimit cocones (and dually for a right Quillen functor)?
2) If $F\colon \mathscr{M}\longrightarrow \mathscr{N}$ is a left Quillen functor which is part of a Quillen equivalence and $Q$ is a cofibrant replacement functor on $\mathscr{M}$, does $F\circ Q$ preserve homotopy limit cones (and dually for a right Quillen functor)?
|
A Comparison Theorem for Integrals of Upper Functions on General Intervals
Recall from the Upper Functions and Integrals of Upper Functions page that a function $f$ on $I$ is said to be an upper function on $I$ if there exists an increasing sequence of functions $(f_n(x))_{n=1}^{\infty}$ that converges to $f$ almost everywhere on $I$ and such that $\displaystyle{\lim_{n \to \infty} \int_I f_n(x) \: dx}$ is finite.
On the Partial Linearity of Integrals of Upper Functions on General Intervals page we saw that if $f$ and $g$ were both upper functions on $I$ then $f + g$ is an upper function on $I$ and:
(1)
\begin{align} \quad \int_I [f(x) + g(x)] \: dx = \int_I f(x) \: dx + \int_I g(x) \: dx \end{align}
Furthermore, we saw that if $c \in \mathbb{R}$, $c \geq 0$, then $cf$ is an upper function on $I$ and:
(2)
\begin{align} \quad \int_I cf(x) \: dx = c \int_I f(x) \: dx \end{align}
We will now look at some more nice properties of integrals of upper functions on general intervals.
Theorem 1: Let $f$ and $g$ be upper functions on the interval $I$. If $f(x) \leq g(x)$ almost everywhere on $I$ then $\displaystyle{\int_I f(x) \: dx \leq \int_I g(x) \: dx}$ Proof: Let $f$ and $g$ be upper functions on the interval $I$. Then there exist increasing sequences $(f_n(x))_{n=1}^{\infty}$ and $(g_n(x))_{n=1}^{\infty}$ that converge to $f$ and $g$ (respectively) almost everywhere on $I$ and such that $\displaystyle{\lim_{n \to \infty} \int_I f_n(x) \: dx}$ and $\displaystyle{\lim_{n \to \infty} \int_I g_n(x) \: dx}$ are finite. Now since $f(x) \leq g(x)$ almost everywhere on $I$ and since $(f_n(x))_{n=1}^{\infty}$ is an increasing sequence of step functions that generates $f$, we have for all $n \in \mathbb{N}$ that $f_n(x) \leq g(x)$ almost everywhere on $I$. By applying the theorem presented on the Another Comparison Theorem for Integrals of Step Functions on General Intervals page, we see that then $\displaystyle{\int_I f_n(x) \: dx \leq \int_I g(x) \: dx}$ for all $n \in \mathbb{N}$. Taking $n \to \infty$ gives us that $\displaystyle{\int_I f(x) \: dx \leq \int_I g(x) \: dx}$. $\blacksquare$
Corollary 1: Let $f$ and $g$ be upper functions on the interval $I$. If $f(x) = g(x)$ almost everywhere on $I$ then $\displaystyle{\int_I f(x) \: dx = \int_I g(x) \: dx}$ Proof: If $f(x) = g(x)$ almost everywhere on $I$ then $f(x) \leq g(x)$ almost everywhere on $I$ and by Theorem 1 we see that $\displaystyle{\int_I f(x) \: dx \leq \int_I g(x) \: dx} \quad (*)$. But also $f(x) \geq g(x)$ almost everywhere on $I$ and using Theorem 1 again we have $\displaystyle{\int_I f(x) \: dx \geq \int_I g(x) \: dx} \quad (**)$. Combining $(*)$ and $(**)$ shows us that $\displaystyle{\int_I f(x) \: dx = \int_I g(x) \: dx}$ as desired. $\blacksquare$
|
The Set of all Subsets of Natural Numbers is Uncountable
Theorem 1: The set $\mathcal P (\mathbb{N})$ of all subsets of $\mathbb{N}$ is uncountable. In the proof below, we use the famous diagonalization argument to show that the set of all subsets of $\mathbb{N}$ is uncountable. Proof: Suppose that $\mathcal P (\mathbb{N})$ is countable. We can denote each set $A \subseteq \mathbb{N}$ by a decimal number of the form:
\begin{align} \quad 0.x_1x_2...x_n... \end{align}
where each $x_n \in \{ 0, 1 \}$ and such that $x_n = 1$ if $n \in A$ and $x_n = 0$ if $n \not \in A$. For example, the set $\{ 1, 3, 4, 5, 6 \}$ has decimal representation $0.1011110...$. Now since we assume that $\mathcal P (\mathbb{N})$ is countable, there exists a bijective function $f : \mathbb{N} \to \mathcal P(\mathbb{N})$ defined for all $n \in \mathbb{N}$ by $f(n) = A_n$. We list the sets $\{ A_1, A_2, ..., A_n, ... \}$ and their decimal representations as follows:
\begin{align} \quad A_1, \: & 0.x_{1,1}x_{1,2} \cdots x_{1,n} \cdots \\ \quad A_2, \: & 0.x_{2,1}x_{2,2} \cdots x_{2,n} \cdots \\ \quad & \vdots \\ \quad A_n, \: & 0.x_{n,1}x_{n,2} \cdots x_{n,n} \cdots \\ \quad & \vdots \end{align}
Now construct a decimal number $x = 0.x_1x_2...x_n...$ as follows. We let $x_1 = 0$ if $x_{1,1} = 1$ and we let $x_1 = 1$ if $x_{1,1} = 0$. We let $x_2 = 0$ if $x_{2,2} = 1$ and we let $x_2 = 1$ if $x_{2,2} = 0$. In general, we let $x_n = 0$ if $x_{n,n} = 1$ and we let $x_n = 1$ if $x_{n,n} = 0$. Then $x$ differs from each of the decimal numbers in the list above in at least one decimal place. So the decimal $x$ corresponds to a subset $X$ of $\mathbb{N}$ that is not on the list above. This is a contradiction since $f$ is assumed to be a bijection. Therefore the set $\mathcal P(\mathbb{N})$ is uncountable. $\blacksquare$
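The diagonal construction can be run on any finite list of membership sequences. The rows below are an arbitrary example: flipping the $n$-th digit of row $n$ produces a sequence guaranteed to differ from every row.

```python
# The diagonal argument, finitely: rows are candidate membership
# sequences (row n encodes a subset A_n of {1, ..., 5}); the diagonal
# sequence differs from row n in position n, so it is on no row.
rows = [
    [1, 0, 1, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 0, 0, 0, 0],
]

diagonal = [1 - rows[n][n] for n in range(len(rows))]

for n, row in enumerate(rows):
    assert diagonal[n] != row[n]   # differs in the diagonal entry...
    assert diagonal != row         # ...so it cannot equal row n
```

The infinite proof is the same idea: no matter what list $A_1, A_2, \ldots$ is proposed, the diagonal set escapes it.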
|
What is meant by the Laplace transform? Give the definition of the Laplace transform, state the basic Laplace transforms, and describe their applications.
The Laplace transform is yet another operational tool for solving constant-coefficient linear differential equations. The process of solution consists of three main steps:
- The given "hard" problem is transformed into a "simple" equation.
- This simple equation is solved by purely algebraic manipulations.
- The solution of the simple equation is transformed back to obtain the solution of the given problem.
In this way the Laplace transformation reduces the problem of solving a differential equation to an algebraic problem.
The Laplace transform is named after the mathematician and astronomer Pierre-Simon Laplace, who used a similar transform (now called the z-transform) in his work on probability theory.
Basic Laplace Transforms: [$$:] \begin{align} & f(t) & L[f(t)]\\ 1.\; & 1 & \frac{1}{s} \\ 2.\; & e^{at} & \frac{1}{s-a} \\ 3.\; & t^n, n=1,2,3... & \frac{n!}{s^{n+1}} \\ 4. \; & t^p, p>-1 & \frac{\Gamma{(p+1)}}{s^{p+1}} \\ 5.\; & \sqrt{t} & \frac{\sqrt{\pi}}{2s^{\frac{3}{2}}} \\ 6.\; & t^{n-1/2} , n=1,2,3... & \frac{1\cdot 3\cdot 5\cdots(2n-1)\sqrt{\pi}}{2^ns^{n+1/2}} \\ 7. \; & \sin(at) & \frac{a}{s^2+a^2} \\ 8. \; & \cos(at) & \frac{s}{s^2+a^2} \\ \end{align} [/:$$]
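Entries of the table can be spot-checked numerically by truncating the transform integral $\int_0^\infty f(t)e^{-st}\,dt$ at a finite upper limit $T$. The sketch below uses a midpoint rule; the values of $a$, $s$, $T$ and the step count are arbitrary choices.

```python
import math

# Truncated Laplace transform, int_0^T f(t) e^{-s t} dt, midpoint rule.
def laplace(f, s, T=60.0, n=200000):
    dt = T / n
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
               for k in range(n)) * dt

a, s = 1.0, 2.0
# Entry 2: L[e^{at}] = 1/(s-a), valid for s > a.
assert abs(laplace(lambda t: math.exp(a * t), s) - 1.0 / (s - a)) < 1e-3
# Entry 7: L[sin(at)] = a/(s^2 + a^2).
assert abs(laplace(lambda t: math.sin(a * t), s) - a / (s ** 2 + a ** 2)) < 1e-3
```

The truncation error is roughly $e^{-(s-a)T}$ here, which is negligible for the chosen $T$; for $s \leq a$ the integral would not converge at all, which previews the convergence condition discussed below.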
|
Differentiation and Integration of Power Series
We will now look at differentiating and integrating power series term by term, a technique that will be very useful. We first note that power series have terms which are polynomials, and polynomials are relatively easy to differentiate and integrate. The following theorems will allow us to differentiate and integrate power series as we would have expected.
Differentiation of Power Series
Theorem 1 (Differentiation of Power Series): Let the power series $\sum_{n=0}^{\infty} a_n(x-c)^n = a_0 + a_1(x-c) + a_2(x-c)^2 + a_3(x-c)^3 + ...$ converge to $f(x)$ on the interval $(c-R , c+R)$. Then $f$ is differentiable on the same interval $(c-R, c+R)$ and $f'(x) = \sum_{n=1}^{\infty} na_n(x - c)^{n-1} = a_1 + 2a_2(x-c) + 3a_3(x-c)^2 + ...$. We will note that if a power series is differentiated, then the differentiated series has the same interval of convergence as the original series EXCEPT possibly for the loss of one or both endpoints of the interval of convergence if the original series was convergent at those points.
Of course, we could go further and derive the following formulas for second, third, etc… derivatives of power series as follows:
$f''(x) = \sum_{n=2}^{\infty} n(n-1)a_n(x - c)^{n-2}$ over the interval $(c-R, c+R)$. $f'''(x) = \sum_{n=3}^{\infty} n(n-1)(n-2)a_n(x-c)^{n-3}$ over the interval $(c-R, c+R)$.
Let's look at an example of differentiation with regards to power series.
Example 1 Find a power series representation of the function $f(x) = \frac{2}{(1 - x)^2}$.
We first note that $\frac{d}{dx} \frac{1}{1 - x} = \frac{1}{(1 - x)^2}$. We already know a power series for $\frac{1}{1 - x}$, namely $\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n$ for $-1 < x < 1$, and so if we multiply this equality by 2 and then differentiate both sides as follows, we will obtain our power series:
(1)
\begin{align} \quad f(x) = \frac{2}{(1-x)^2} = \frac{d}{dx} \frac{2}{1-x} = \frac{d}{dx} \sum_{n=0}^{\infty} 2x^n = \sum_{n=1}^{\infty} 2nx^{n-1} \end{align}
We note that the power series we differentiated, for $\frac{1}{1 - x}$, is not convergent at its endpoints, and so the derivative series is also not convergent at its endpoints; hence our series represents $f(x)$ for $-1 < x < 1$. The graph below represents the function $f(x)$ in blue, and the series $\sum_{n=1}^{\infty} 2 n x^{n-1}$ in red as a representation of $f(x)$ on $(-1, 1)$:
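Numerically, partial sums of the derived series converge quickly to $f(x)$ inside $(-1, 1)$. The sample point and number of terms below are arbitrary choices:

```python
# Partial sums of sum_{n>=1} 2 n x^(n-1) converge to 2/(1-x)^2 on (-1, 1).
def partial_sum(x, N):
    return sum(2 * n * x ** (n - 1) for n in range(1, N + 1))

x = 0.5
exact = 2.0 / (1.0 - x) ** 2          # 8.0 at x = 0.5
approx = partial_sum(x, 60)
assert abs(approx - exact) < 1e-9
```

At $x = 0.5$ the terms decay like $n\,2^{-n}$, so 60 terms already agree with the closed form to well beyond nine digits; near the endpoints the convergence is far slower, consistent with divergence at $x = \pm 1$.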
Integration of Power Series
Theorem 2 (Integration of Power Series): Let the power series $\sum_{n=0}^{\infty} a_n(x-c)^n = a_0 + a_1(x-c) + a_2(x-c)^2 + a_3(x-c)^3 + ...$ converge to $f(x)$ on the interval $(c-R , c+R)$. Then $f$ is integrable over any subinterval of $(c-R, c+R)$ and $\int_{c}^{x} f(t) \: dt = \sum_{n=0}^{\infty} \frac{a_n}{n+1}(x - c)^{n+1} = a_0(x - c)+ \frac{a_1}{2}(x - c)^2 + \frac{a_2}{3}(x-c)^3 + ...$. If a power series is integrated, then the integrated series has the same interval of convergence as the original series EXCEPT possibly for the gain of one or both endpoints of the interval of convergence if the original series wasn't convergent at the endpoints already.
We will now look at an example of where integrating power series can be useful.
Example 2 Find a power series representation of the function $g(x) = \ln (3 - x)$.
First consider the function $\frac{1}{3 - x}$. Note that if we integrate this function, we get that $\int \frac{1}{3 - x} \: dx = -\ln (3 - x)$. Let's first come up with a power series for the function $\frac{1}{3 - x} = \frac{1}{3} \cdot \frac{1}{1 - \frac{x}{3}}$:
(2)
\begin{align} \quad \frac{1}{3 - x} = \frac{1}{3} \sum_{n=0}^{\infty} \left ( \frac{x}{3} \right )^n = \sum_{n=0}^{\infty} \frac{x^n}{3^{n+1}} \end{align}
The power series above is a representation of $\frac{1}{3 - x}$ on the interval $-3 < x < 3$. Now we will apply Theorem 2 on integration to determine a power series for $g(x)$:
(3)
\begin{align} \quad g(x) = \ln (3 - x) = \ln (3) - \int_0^x \frac{1}{3 - t} \: dt = \ln (3) - \sum_{n=0}^{\infty} \frac{x^{n+1}}{3^{n+1}(n+1)} \end{align}
We now have to check to see if we've gained convergence of the end points.
First check $x = -3$, which produces the series $\ln(3) - \sum_{n=0}^{\infty} \frac{(-1)^{n+1}3^{n+1}}{3^{n+1}(n+1)} = \ln(3) - \sum_{n=0}^{\infty} \frac{(-1)^{n+1}}{n+1}$, which converges by the alternating series test.
Now check $x = 3$, which produces the series $\ln(3) - \sum_{n=0}^{\infty} \frac{3^{n+1}}{3^{n+1}(n+1)} = \ln(3) - \sum_{n=0}^{\infty} \frac{1}{n+1}$; since $\sum_{n=0}^{\infty} \frac{1}{n+1}$ is the harmonic series, this diverges.
Therefore $g(x) = \ln (3) - \sum_{n=0}^{\infty} \frac{x^{n+1}}{3^{n+1}(n+1)}$ for $-3 ≤ x < 3$, i.e., on the interval $[-3, 3)$.
The graph below depicts our function $g(x)$ in blue and our power series representation in red.
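The conclusion about the interval $[-3, 3)$ can be checked numerically by comparing partial sums against $\ln(3 - x)$; the sample points and term counts below are arbitrary choices, with $x = -3$ included to exercise the gained endpoint.

```python
import math

# Partial sums of ln(3) - sum_{n>=0} (x/3)^(n+1) / (n+1), which equals
# ln(3) - sum x^(n+1) / (3^(n+1) (n+1)); writing the term as (x/3)^(n+1)
# avoids floating-point overflow for large n.
def g_series(x, N):
    return math.log(3) - sum((x / 3.0) ** (n + 1) / (n + 1) for n in range(N))

for x in (-3.0, 0.0, 1.5, 2.9):       # -3 is in the interval, 3 is not
    assert abs(g_series(x, 20000) - math.log(3 - x)) < 1e-3
```

At $x = -3$ the convergence is only as fast as the alternating harmonic series (error about $1/N$), while well inside the interval the geometric decay makes the partial sums converge rapidly.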
|
How is the Laplace transform related to the Fourier transform? What is the difference between the Laplace transform and the Fourier transform, and how are they related?
This brief note is about the Fourier transform and how it is related to the Laplace transform.
To begin, we first state the definition of the Fourier transform [$$:]\hat{f}(\omega) = \int_{-\infty}^\infty f(x) e^{-j\omega x} dx[/:$$] and its inverse [$$:]f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty \hat{f}(\omega) e^{j\omega x} d\omega[/:$$] There are two notable features. One, with Laplace transform, the lower integration limit is zero, not negative infinity. Second, Laplace transform uses a transform variable in the complex plane. The transform variable in Fourier transform is a pure imaginary number, restricted to [$:]s = j\omega[/:$].
Generally, Laplace transform is for functions that are semi-infinite or piecewise continuous, as in the step or rectangular pulse functions. We also impose the condition that the function is zero at negative times: [$$:]f(t)=0,\;t<0[/:$$] More formally, we say the function must be of exponential order as [$:]t[/:$] approaches infinity so that the transform integral converges.
A function is of exponential order if there exist (real) constants [$:]K[/:$], [$:]c[/:$] and [$:]T[/:$] such that [$$:]|f(t)| < Ke^{ct}\; \text{ for }\; t > T[/:$$] or, in other words, the quantity [$:]e^{-ct} |f(t)|[/:$] is bounded. If [$:]c[/:$] is chosen sufficiently large, the so-called abscissa of convergence, then [$:]e^{-ct} |f(t)|[/:$] should approach zero as [$:]t[/:$] approaches infinity. In terms of the Laplace transform integral [$:]\int_0^{\infty}f(t) e^{-st} dt[/:$], it means that the real part of [$:]s[/:$] must be larger than the real part of all the poles of [$:]F(s)[/:$] in order for the integral to converge. Otherwise, we can force a function to be transformable with [$:]e^{-\gamma t} f(t)[/:$] if we can choose [$:]\gamma > c[/:$] such that [$:]Ke^{-(\gamma -c)t}[/:$] approaches zero as [$:]t[/:$] goes to infinity.
We now do a quick two-step to see how the definition of Laplace transform may arise from that of Fourier transform. First, we write the inverse transform of the Fourier transform of the function [$:] e^{-\gamma t} f(t)[/:$], which of course, should recover the function itself: [$$:]e^{-\gamma t} f(t) = \frac{1}{2 \pi}\int_{-\infty}^\infty e^{j\omega t} d \omega \int_0^\infty e^{-\gamma \tau} f(\tau) e^{-j\omega \tau} d\tau[/:$$] where we have changed the lower integration limit of the Fourier transform from[$:]-\infty[/:$] to [$:]0[/:$] because [$:] f(t) = 0[/:$] when [$:]t < 0[/:$]. Next, we move the exponential function [$:]e^{-\gamma t}[/:$] to the RHS to go with the inverse integral and then combine the two exponential functions in the transform integral to give [$$:] f(t) = \frac{1}{2 \pi}\int_{-\infty}^\infty e^{(\gamma + j\omega)t} d \omega \int_0^\infty f(\tau) e^{-(\gamma + j\omega) \tau} d\tau[/:$$] Now we define [$$:] s = \gamma + j\omega[/:$$] So, [$$:] f(t) = \frac{1}{2 \pi j}\int_{\gamma-j\infty}^{\gamma+j\infty} e^{st} ds \int_0^\infty f(\tau) e^{-s \tau} d\tau [/:$$] From this form, we can extract the definitions of Laplace transform and its inverse: [$$:]F(s) = \int_0^\infty f(t) e^{-st} dt [/:$$] and [$$:]f(t) = \frac{1}{2 \pi j}\int_{\gamma-j\infty}^{\gamma+j\infty} F(s) e^{st} ds[/:$$] where again, [$:]\gamma[/:$] must be chosen to be larger than the real parts of all the poles of [$:]F(s)[/:$]. Thus the path of integration of the inverse is the imaginary axis shift by the quantity [$:]\gamma[/:$] to the right.
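The abscissa-of-convergence condition can be illustrated numerically. Below, $f(t) = e^{2t}$ (so $c = 2$) and the sample values of $s$, $T$ and the step count are my own choices: the truncated transform integral settles for $s > 2$ but grows without bound for $s < 2$.

```python
import math

# Truncated integral int_0^T e^{2t} e^{-s t} dt (midpoint rule): this
# approximates the Laplace transform of f(t) = e^{2t} when it converges.
def truncated(s, T, n=20000):
    dt = T / n
    return sum(math.exp((2.0 - s) * (k + 0.5) * dt) for k in range(n)) * dt

# s = 3 > 2: values approach the true transform 1/(s - 2) = 1 as T grows.
assert abs(truncated(3.0, 30.0) - 1.0) < 1e-3

# s = 1 < 2: values blow up as T grows; the transform does not exist there.
assert truncated(1.0, 40.0) > truncated(1.0, 20.0) > 1e6
```

This is the numerical face of the requirement that $\mathrm{Re}(s)$ exceed the real parts of all poles of $F(s)$ (here, the pole at $s = 2$).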
|
I recently came across this in a textbook (NCERT class 12 , chapter: wave optics , pg:367 , example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it, but the accepted answer conveniently assumes 's' to be "fringe width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component.Vertex:$$ie(P_A+P_B)^{\mu}$$External Boson: $1$Photon: $\epsilon_{\mu}$Multiplying these will give the inv...
As I am now studying on the history of discovery of electricity so I am searching on each scientists on Google but I am not getting a good answers on some scientists.So I want to ask you to provide a good app for studying on the history of scientists?
I am working on correlation in quantum systems.Consider for an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$ under the assumption which fulfilled continuity.My question is that would it be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, which is very accurate. If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians *, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one-handed): one form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little-finger knuckle as January, and count one finger or depression at a time towards the index-finger knuckle (July), saying the months while doing so. Then return to the little-finger knuckle (now August) and continue for...
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
|
A cannon is fired horizontally 80 meters above the ground. The cannonball is fired at $T=0$ and hits the ground at $T_g$. Calculate the height at time $T_g/2$. So far I have calculated the time, which gives me the equation$$y=y_i+v_{iy}(4.05) - (0.5)(9.82)(4.05^2)$$Now my problem is that when I try to calculate $v_{iy}$ I get $-37\text{ m/s}$, which I know is not the case; it wouldn't make any sense. Initial $y$ is equal to $80$; $v_{iy}$ I don't know; $4.05$ is the time it takes the projectile to hit the ground; $9.82$ is gravity, which is my acceleration.
So my question is two parts. First, is this the best equation to find what I'm trying to find? Second, how do I find velocity in the $y$ direction when $\theta$ is $180$ degrees? This is an edited question; I do apologize for being unclear. Thank you for responding and requesting clarification. I was using the incorrect equation; a simpler way to find the answer would be $y=0.75\times H$, because the change in $y$ is $-0.5\times g\times t^2$ since the initial velocity in the $y$ direction equals $0$. Thank you for the help in structuring my question better.
[Edit: If you don't consider air friction and if you're not asked anything about the range of the projectile, the equations are the same as a vertical freefall]. If you are trying to find the velocity of the object at any given time, it is not $v_{iy}$ that you need to calculate since it is the initial velocity of the object at $t=0$.
Step by step for a vertical free fall (1D), with the origin on the ground at the vertical of the initial position of the object and the y axis pointing toward the object: $$-g=a_y$$ $$\Rightarrow v(t) = \int^t_0 -g\,dt=-gt+v_{0y}$$ $$\Rightarrow y(t) = \int^t_0 (-gt+v_{0y})\,dt=-\frac{1}{2}gt^2+v_{0y}t+y_0$$ Here $v_{0y}$ is $0$ and $y_0=80\text{ m}$. You're interested in $y(T_g/2)$, where $T_g$ is the time $t$ such that $y(t)=0$.
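The steps above can be carried out numerically (a sketch of mine, using the numbers from the question: $y_0 = 80$ m, $v_{0y}=0$, $g = 9.82\ \mathrm{m/s^2}$):

```python
import math

g = 9.82      # m/s^2, as used in the question
y0 = 80.0     # m, initial height
v0y = 0.0     # fired horizontally, so no initial vertical velocity

def y(t):
    """Height above the ground at time t."""
    return y0 + v0y * t - 0.5 * g * t**2

t_ground = math.sqrt(2.0 * y0 / g)   # solve y(t) = 0 for v0y = 0
y_half = y(t_ground / 2.0)

print(t_ground)  # about 4.04 s
print(y_half)    # 60.0 m, i.e. 0.75 * 80 m
```

Note that since $v_{0y}=0$, the height at half-time is $y_0 - \frac{1}{2}g(T_g/2)^2 = y_0 - \frac{1}{4}\cdot\frac{1}{2}gT_g^2 = 0.75\,y_0$, independent of $g$.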
|
By what way and with which variables could you determine a plane's maximum rate of climb per time? If I'm not mistaken, I'm looking for $V_Y$.
To calculate your possible climb speed $v_z$, you will need
Your engine's thrust $T$
Your airplane's drag $D$
Your airplane's mass $m$
Calculate how much power is needed to overcome drag, and any excess can be used for climbing: $$v_z = v\cdot \sin\gamma = v\cdot\frac{T-D}{m\cdot g}$$
Note that this equation makes use of several simplifications, but works well for propeller and slow turbofan aircraft with moderate flight path angles $\gamma$.
To do this with more precision, you need to account for the fact that the aircraft should accelerate during the climb to stay at the same polar point. Now you further need:
The gradient of air temperature over altitude (lapse rate $\Gamma$)
The local speed of sound $a$, and
The gas constant $R$ of air.
You need to add a correction factor $C$ which has several components: $$C = 1 + \frac{1}{2}\cdot\kappa\cdot R_w\cdot\Gamma_w\cdot Ma^2 + \frac{(1+0.2\cdot Ma^2)^{\frac{\kappa}{\kappa-1}}-1}{(1+0.2\cdot Ma^2)^{\frac{1}{\kappa-1}}}$$
where $\kappa$ is the ratio of the specific heats of air and is 1.405, the index w denotes the wet adiabatic gas constant and lapse rate of air, and $Ma$ is your flight Mach number. $\Gamma$ can vary between -0.004°/m and -0.0097°/m, but if you use the average of -0.0065°/m, this equation can be simplified to: $$C = 1 - 0.13335\cdot Ma^2 + \frac{(1+0.2\cdot Ma^2)^{3.5}-1}{(1+0.2\cdot Ma^2)^{2.5}}$$
which is probably the form you will find in most books which care to cover this topic. The second summand takes care of the reduction of atmospheric temperature with altitude and disappears in the stratosphere, and the third summand covers the additional energy needed for acceleration in terms of flight Mach number.
Acceleration factor over the flight Mach number in the troposphere for the Standard Atmosphere
Now your climb speed becomes $$v_z = \frac{v}{C}\cdot \sin\gamma = \frac{v}{C}\cdot\frac{T-D}{m\cdot g}$$
As you can see from the graph above, the correction factor is only important at higher speeds, but will cut the climb speed in half at Mach 2. Some jet aircraft need to climb at a high constant Mach number, and then the aircraft needs to decelerate while climbing. Now the correction factor becomes smaller than unity and the climb speed gets a boost because kinetic energy is converted into potential energy while climbing.
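To make the simplified formulas above concrete, here is a small sketch (my own; the numerical inputs in the example are invented) of the simplified correction factor $C$ and the corrected climb speed. Consistent with the text, $C$ comes out near 1 at low speed and near 2 at Mach 2:

```python
def correction_factor(mach):
    """Simplified acceleration correction factor C for the troposphere of
    the Standard Atmosphere (average lapse rate -0.0065 K/m), per the text:
    C = 1 - 0.13335*Ma^2 + ((1 + 0.2*Ma^2)^3.5 - 1) / (1 + 0.2*Ma^2)^2.5
    """
    a = 1.0 + 0.2 * mach**2
    return 1.0 - 0.13335 * mach**2 + (a**3.5 - 1.0) / a**2.5

def climb_speed(v, thrust, drag, mass, mach, g=9.81):
    """v_z = (v / C) * (T - D) / (m * g), all in SI units."""
    return v / correction_factor(mach) * (thrust - drag) / (mass * g)

print(correction_factor(0.0))  # 1.0: no correction at low speed
print(correction_factor(2.0))  # about 2.0: climb speed roughly halved at Mach 2

# Invented example: v = 100 m/s, T = 30 kN, D = 10 kN, m = 5000 kg, Ma = 0.3
vz = climb_speed(100.0, 30000.0, 10000.0, 5000.0, 0.3)
print(vz)  # m/s
```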
Optimum speeds
To pick the flight speed where the climb speed or the fight path angle reaches a maximum, you now need to describe how thrust will change with flight speed. To simplify things, we can say that thrust changes over speed in proportion to the expression $v^{n_v}$ where $n_v$ is a constant which depends on engine type. Piston aircraft have constant power output, and thrust is inverse with speed over the speed range of acceptable propeller efficiencies, hence $n_v$ becomes -1 for piston aircraft. Turboprops make some use of ram pressure, so they profit a little from flying faster, but not much. Their $n_v$ is -0.8 to -0.6. Turbofans are better in utilizing ram pressure, and their $n_v$ is -0.5 to -0.2. The higher the bypass ratio, the more negative their $n_v$ becomes. Jets (think J-79 or even the old Jumo-004) have approximately constant thrust over speed, at least in subsonic flow. Their $n_v$ is around 0. Positive values of $n_v$ can be found with ramjets - they develop more thrust the faster they move through the air.
The flight speed for maximum rate of climb ($v_y$) is reached at a lift coefficient $c_L$ of $$c_L = -\frac{n_v+1}{2}\cdot\frac{T\cdot\pi\cdot AR\cdot\epsilon}{m\cdot g}\cdot \sqrt{\frac{(n_v+1)^2}{4}\cdot\left(\frac{T\cdot\pi\cdot AR\cdot\epsilon}{m\cdot g}\right)^2 + 3\cdot c_{D0}\cdot\pi\cdot AR\cdot\epsilon}$$
whereas the steepest climb is possible with a $c_L$ of $$c_L = -\frac{n_v}{4}\cdot\frac{T\cdot\pi\cdot AR\cdot\epsilon}{m\cdot g}\cdot \sqrt{\frac{n_v^2}{16}\cdot\left(\frac{T\cdot\pi\cdot AR\cdot\epsilon}{m\cdot g}\right)^2 + c_{D0}\cdot\pi\cdot AR\cdot\epsilon}$$
Nomenclature:
$c_L$: lift coefficient
$T$: thrust
$m$: aircraft mass
$g$: gravity
$\pi$: 3.14159$\dots$
$AR$: aspect ratio of the wing
$\epsilon$: the wing's Oswald factor
$c_{D0}$: zero-lift drag coefficient
You are correct that $V_Y$ will give you the max RoC. However, $V_Y$ is actually only a speed, not a climb rate, and will correspond to slightly different rates of climb depending on a few factors. The ones that spring to mind are A/C weight, temperature, and density altitude, along with power setting, but that one is somewhat self-explanatory. As for determining the speed, normally you would just consult your POH or AOM to find a performance chart that will give you the base speed, and you can correct it for each of the variables listed in order to have a precise climb rate.
Variables will have to include air conditions (temperature, humidity, altitude starting from), engine performance data at said altitude, and the vy speed for the airplane in question.
|
This is a crosspost from MSE. It's been up there for a few weeks now. A 200 rep bounty yielded no results (or even comments). I'm hoping someone here has some helpful ideas. See this post for the original.
Consider $U$ a nice compact region in $\mathbb{C}$ with boundary $\Gamma$. Let $S_1$ be the ideal of trace class operators on a separable complex Hilbert space $H$. We will let $\|\cdot \|$ be the operator norm and $\|\cdot \|_1$ be the trace norm. Suppose $W:U\to S_1$ is complex analytic in the operator norm.
Under what conditions is $W(\lambda)$ analytic in the $\| \cdot \|_1$ norm?
I have proved that the following are equivalent when $W(\lambda)$ is operator analytic:
1. $W(\lambda)$ is continuous in the operator norm;
2. for a fixed $M$, we have $\|W(\lambda)\|_1 <M$ for each $\lambda \in \Gamma$;
3. tr $W(\lambda)B$ is analytic for each bounded operator $B$;
4. $W(\lambda)$ is analytic in the $\|\cdot\|_1$ norm.
I can provide some ideas for these proofs if that would be helpful.
This leads us to
Question 1: What if we know that tr $W(\lambda)$ is analytic?
Is there a nice way to compare tr $W(\lambda)$ with tr $W(\lambda)B$? I would love an inequality like $$ |\text{tr }AB| \leq\|B\||\text{tr }A| $$ for $A \in S_1$ and $B$ bounded. Although it would probably be greedy to expect this in general.
Also, I'm willing to impose even stronger assumptions on $W$ if necessary. One very strong constraint is to assume that $W$ has a rank bound along $\Gamma$, i.e., for a fixed $N$ we have rank $W(\lambda)<N$ for each $\lambda \in \Gamma$. This actually guarantees analyticity, as $$\|W(\lambda)\|_1 \leq N\sup_{\lambda \in \Gamma} \|W(\lambda)\|$$
Attempting to weaken this condition, we arrive at
Question 2: What happens if $W(\lambda)$ is finite rank for each $\lambda$, but has no rank bound?
I suspect that finite rank and analytic actually implies rank bounded, but I do not know.
Edit 1: Here's a fun idea that might help prove that finite rank implies rank bounded. The set $$S_n = \{\lambda : W(\lambda) \text{ has rank at most } n\}$$
is closed (continuity of $W$ tells us the singular values are continuous). So the Baire category theorem tells us that some $S_n$ is dense somewhere; being closed, it then contains an open set, so in some open set $W$ is rank bounded. So can I use an analytic extension in the trace norm to do something? This looks like the proofs of the open mapping theorem and whatnot...
Edit 2: Here's another fact that may be helpful. Consider a sequence of complex analytic functions $f_n:U\to \mathbb{C}$. Suppose they converge pointwise to a function $f$. Then $f$ is analytic on an open dense subset of $U$. This is potentially helpful because for any orthonormal basis $\phi_i$, $$\text{tr}\, W(\lambda) B = \sum_{i=1}^\infty \langle W(\lambda)B\phi_i,\phi_i\rangle$$ and because $W(\lambda)B$ is analytic in the operator norm, each inner product is analytic.
I am fairly familiar with Gohberg's work on trace class operators. Unfortunately, despite all of the great theorems on bounds for singular values, knowing that tr $W(\lambda)$ is analytic gives no information about the singular values.
|
The most frequently used evaluation metric of survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall’s τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with lower observed time $y$ experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the estimated risk $\hat{f}$ by a survival model is higher for subjects with lower survival time, i.e., $\hat{f}_i >\hat{f}_j \land y_j > y_i$, otherwise the pair is discordant. Harrell’s estimator of the c index is implemented in concordance_index_censored.
While Harrell’s concordance index is easy to interpret and compute, it has some shortcomings:
it has been shown that it is too optimistic with increasing amount of censoring [1];
it is not a useful measure of performance if a specific time range is of primary interest (e.g. predicting death within 2 years).
Since version 0.8, scikit-survival supports an alternative estimator of the concordance index from right-censored survival data, implemented in concordance_index_ipcw, that addresses the first issue.
The second point can be addressed by extending the well known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point $t$, we can estimate how well a predictive model can distinguish subjects who will experience an event by time $t$ (sensitivity) from those who will not (specificity). The function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC for a given list of time points.
The first part of this post will illustrate the first issue with simulated survival data, while the second part will focus on the time-dependent area under the ROC applied to data from a real study.
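To make the definition above concrete, here is a from-scratch sketch of Harrell's estimator (my own minimal implementation, not the scikit-survival code; concordance_index_censored additionally returns pair counts and handles edge cases). Ties in the predicted risk are counted as 0.5, a common convention:

```python
def harrell_c(event, time, risk):
    """Harrell's concordance index for right-censored data.

    event[i]: 1 if subject i experienced the event, 0 if censored.
    time[i]:  observed time y_i.
    risk[i]:  predicted risk score f_i (higher = shorter expected survival).
    """
    concordant = 0.0
    comparable = 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # a pair is only comparable if the earlier subject had the event
        for j in range(n):
            if time[j] > time[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0      # correctly ordered pair
                elif risk[i] == risk[j]:
                    concordant += 0.5      # tied risk counts as half
    return concordant / comparable

# Two toy datasets: perfectly concordant and perfectly discordant risk scores
print(harrell_c([1, 1, 1], [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))  # 1.0 (perfect)
print(harrell_c([1, 1, 1], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0 (reversed)
```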
|
Edit My original answer was wrong. The pressure is in fact constant; if there were a gradient, then the layer between $z$ and $z+dz$ would have a net force on it, and the gas would not be in a steady state. This is in contrast to the fact that in a gravitational field, there must be a net force on such a layer that counteracts gravity in the steady state as you indicated. I should not get credit for this observation; see this question I just posted: Ideal gas temperature and pressure gradients?
On another note, however, are you sure that the temperature gradient would be linear as you have indicated? This would be true if ideal gases had a constant thermal conductivity, but as far as I can tell, according to these notes, the thermal conductivity of an ideal gas scales as the square root of temperature, $k=\alpha\sqrt{T}$, in which case by Fourier's law the temperature profile in the $z$-direction is$$ T(z) = \left[T_1^{3/2}+(T_2^{3/2}-T_1^{3/2})\frac{z}{L}\right]^{2/3}$$
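A quick numerical check of this profile (my own sketch; the values of $T_1$, $T_2$, $L$, and $\alpha$ are arbitrary): with $k=\alpha\sqrt{T}$, the heat flux $q=-k\,dT/dz$ computed from this $T(z)$ should be the same at every height, as Fourier's law requires in steady state, and the profile should recover the boundary temperatures:

```python
T1, T2, L, alpha = 300.0, 280.0, 1.0, 1.0  # arbitrary illustrative values

def T(z):
    """Steady-state profile for conductivity k = alpha * sqrt(T)."""
    return (T1**1.5 + (T2**1.5 - T1**1.5) * z / L) ** (2.0 / 3.0)

def flux(z, h=1e-6):
    """q = -k dT/dz with k = alpha * sqrt(T), via a central difference."""
    dTdz = (T(z + h) - T(z - h)) / (2.0 * h)
    return -alpha * T(z) ** 0.5 * dTdz

print(T(0.0), T(L))            # endpoints recover T1 and T2
print(flux(0.25), flux(0.75))  # equal: constant flux through the gas layer
```

Analytically the flux works out to $q = \frac{2\alpha}{3L}\,(T_1^{3/2}-T_2^{3/2})$, independent of $z$.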
Moreover, now I'm curious to know where my first argument about chemical potential breaks down; I guess the assumption about diffusive equilibrium doesn't hold in this case.
Original (Incorrect) Answer
Cool question! Anytime one wants to compute concentration gradients, the first thing that comes to my mind is the chemical potential, since it is associated with particle number; when diffusive equilibrium is achieved, each "infinitesimal" gas layer is in equilibrium with the next, which means that the chemical potential as a function of $z$ is a constant:$$ \mu(z) = \mu(0).$$On the other hand, the chemical potential of an ideal gas (according to Kittel & Kroemer) is$$ \mu =kT\ln\left(\frac{n}{n_Q}\right)$$where $n$ is the concentration, $n_Q$ is the so-called "quantum concentration" defined as$$ n_Q = \left(\frac{m k T}{2\pi\hbar^2}\right)^{3/2}$$and $m$ is the molecular mass. In your setup, the temperature is a function of $z$, which makes the quantum concentration a function of $z$, and so is the concentration. By plugging the expression for the chemical potential into the condition for diffusive equilibrium, we obtain the following equation for $n(z)$:$$ k T(z)\ln\left(\frac{n(z)}{n_Q(z)}\right) =\mu(0)$$whose solution, after plugging in the explicit expressions for $n_Q(z)$ and $\mu(0)$, is$$ \boxed{n(z) = \left(\frac{m k \,T(z)}{2\pi\hbar^2}\right)^{3/2}\exp\left[\frac{\mu(0)}{ k\,T(z)}\right]}$$Barring a massive conceptual error, I think this is all correct. It's also nice because it apparently applies to any temperature profile $T(z)$. Please tell me of any errors (conceptual or otherwise) if you think of any! Yay thermo!
Cheers!
|
It's been a while since I last used MacTeX + TeXstudio, which worked fine back then. Now with a fresh installation I'm getting the error below:
Package inputenc Error: Unicode character 〖 (U+3016)
(inputenc)              not set up for use with LaTeX. \end{align}
Package inputenc Error: Unicode character 〗 (U+3017)
(inputenc)              not set up for use with LaTeX. \end{align}
I've tried to:
Define the settings for UTF-8
Well this was not enough
\usepackage[utf8]{inputenc}
\DeclareUnicodeCharacter{3016}{\{}
\DeclareUnicodeCharacter{3017}{\}}
Which worked.
But I don't understand why I was getting this error; the specific file pointed to by the error holds a nearly identical "align" block:
No error block:
\begin{align}\label{eq:unixy}
\nonumber F_z(z)&=\mathbb{P}(\max(x,y)\leq z)\\
&=\mathbb{P}[(x\leq z,x>y)\cup(y\leq z,x\leq y)] \nonumber\\
&=\mathbb{P}[(x\leq z,x>y)+(y\leq z),x\leq y]
\end{align}
Error block:
\begin{align}\label{eq:minCDF}
F_Z (z)&=\mathbb{P}(\min(x,y)\le z)=\mathbb{P}(-\max(-x,-y)\le z)\nonumber\\
&=\mathbb{P}(\max〖(-x,-y)\geq -z〗 )\nonumber\\
&=1-\mathbb{P}(\max(-x,-y)\leq-z)\nonumber\\
&=1-\mathbb{P}(-x\le-z)\mathbb{P}(-y\le-z)
\end{align}
|
Tagged: determinant of a matrix

Problem 718
Let
\[ A= \begin{bmatrix} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{bmatrix} . \] Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square.
Compute the determinant of $A$.
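As a quick numerical check (a sketch I am adding, not part of the original problem set), a cofactor expansion along the first row gives the determinant directly:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[8, 1, 6],
     [3, 5, 7],
     [4, 9, 2]]
print(det3(A))  # -360
```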
Problem 686
In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not.
(a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$.
(b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i$, $\lambda=-i$, and $\lambda=-1$.

Problem 582
A square matrix $A$ is called nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix.
Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$.
Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample.

Problem 571
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 2 and contains Problem 4, 5, and 6.
Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let \[\mathbf{a}_1=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \mathbf{a}_2=\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix}, \mathbf{b}=\begin{bmatrix} 0 \\ a \\ 2 \end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of \[A=\begin{bmatrix} 0 & 0 & 2 & 0 \\ 0 &1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason.

Problem 6. Consider the system of linear equations \begin{align*} 3x_1+2x_2&=1\\ 5x_1+3x_2&=2. \end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)

Problem 546
Let $A$ be an $n\times n$ matrix.
The $(i, j)$ cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column.
Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\Adj(A)=C^{\trans}$.
The matrix $\Adj(A)$ is called the adjoint matrix of $A$.
When $A$ is invertible, its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\Adj(A).\]
For each of the following matrices, determine whether it is invertible, and if so, then find the invertible matrix using the above formula.
(a) $A=\begin{bmatrix} 1 & 5 & 2 \\ 0 &-1 &2 \\ 0 & 0 & 1 \end{bmatrix}$.

(b) $B=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &4 \\ 3 & 0 & 1 \end{bmatrix}$.

Problem 509
Using the numbers appearing in
\[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix} 3 & 14 &1592& 65358\\ 97932& 38462643& 38& 32\\ 7950& 2& 8841& 9716\\ 939937510& 5820& 974& 9 \end{bmatrix}.\]
Prove that the matrix $A$ is nonsingular.
Problem 505
Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix.
Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula: \[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\]
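As a numerical sanity check of this formula (my own sketch, using a singular test matrix of my choosing, not the one from the exercise: $A$ below has $\det(A)=0$ and trace $5\neq-1$):

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0],
     [2.0, 4.0]]            # det(A) = 0, tr(A) = 5
trA = A[0][0] + A[1][1]
I = [[1.0, 0.0], [0.0, 1.0]]

# Candidate inverse from the formula: (I + A)^{-1} = I - A / (1 + tr(A))
inv = [[I[i][j] - A[i][j] / (1.0 + trA) for j in range(2)] for i in range(2)]

IpA = [[I[i][j] + A[i][j] for j in range(2)] for i in range(2)]
prod = matmul2(IpA, inv)
print(prod)  # identity matrix (up to rounding)
```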
Using the formula, calculate the inverse matrix of $\begin{bmatrix} 2 & 1\\ 1& 2 \end{bmatrix}$.

Problem 486
Determine whether there exists a nonsingular matrix $A$ if
\[A^4=ABA^2+2A^3,\] where $B$ is the following matrix. \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 2 & 1 & -4 \end{bmatrix}.\]
If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$.
(The Ohio State University, Linear Algebra Final Exam Problem)

Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue

Problem 419

(a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$.
(b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue.
|
Prove that the Center of Matrices is a SubspaceLet $V$ be the vector space of $n \times n$ matrices with real coefficients, and define\[ W = \{ \mathbf{v} \in V \mid \mathbf{v} \mathbf{w} = \mathbf{w} \mathbf{v} \mbox{ for all } \mathbf{w} \in V \}.\]The set $W$ is called the center of $V$.Prove that $W$ is a subspace […]
Subspaces of Symmetric, Skew-Symmetric MatricesLet $V$ be the vector space over $\R$ consisting of all $n\times n$ real matrices for some fixed integer $n$. Prove or disprove that the following subsets of $V$ are subspaces of $V$.(a) The set $S$ consisting of all $n\times n$ symmetric matrices.(b) The set $T$ consisting of […]
The Centralizer of a Matrix is a SubspaceLet $V$ be the vector space of $n \times n$ matrices, and $M \in V$ a fixed matrix. Define\[W = \{ A \in V \mid AM = MA \}.\]The set $W$ here is called the centralizer of $M$ in $V$.Prove that $W$ is a subspace of $V$.Proof.First we check that the zero […]
The Intersection of Two Subspaces is also a SubspaceLet $U$ and $V$ be subspaces of the $n$-dimensional vector space $\R^n$.Prove that the intersection $U\cap V$ is also a subspace of $\R^n$.Definition (Intersection).Recall that the intersection $U\cap V$ is the set of elements that are both elements of $U$ […]
Determine the Values of $a$ so that $W_a$ is a SubspaceFor what real values of $a$ is the set\[W_a = \{ f \in C(\mathbb{R}) \mid f(0) = a \}\]a subspace of the vector space $C(\mathbb{R})$ of all real-valued functions?Solution.The zero element of $C(\mathbb{R})$ is the function $\mathbf{0}$ defined by […]
Sequences Satisfying Linear Recurrence Relation Form a SubspaceLet $V$ be a real vector space of all real sequences\[(a_i)_{i=1}^{\infty}=(a_1, a_2, \cdots).\]Let $U$ be the subset of $V$ defined by\[U=\{ (a_i)_{i=1}^{\infty} \in V \mid a_{k+2}-5a_{k+1}+3a_{k}=0, k=1, 2, \dots \}.\]Prove that $U$ is a subspace of […]
|
Let $F$ be a field and let $A\in M_n(F)$ be a matrix with $\det(A) = \pm 1$. How can I show that $A$ is a product of involutions? Of course the converse is true and clear.
By involution I mean a matrix whose square is the identity matrix.
This looks trivial. Using elementary row/column operations, one can always decompose a square matrix into a product of the form $PDQ$, where $P$ and $Q$ are products of shear matrices and row/column exchange matrices, and $D$ is a diagonal matrix. Clearly, every row/column exchange matrix is an involution. And every shear matrix is a product of two involutions: $$ \pmatrix{1&x\\ 0&1}=\pmatrix{1&-x\\ 0&-1}\pmatrix{1&0\\ 0&-1}. $$ So, we only need to show that every diagonal matrix with determinant $\pm 1$ is a product of involutions. It's easy to see that $D=D_0\prod_{j=1}^{n-1} D_j$, where $D_0$ is either the identity matrix or $\operatorname{diag}(1,\ldots,1,-1)$ (hence $D_0$ is an involution) and every other $D_j$ is a diagonal matrix of determinant $1$ with at most two diagonal entries (that are reciprocal to each other) unequal to $1$. Yet each $D_j$ is a product of two involutions because $$ \pmatrix{x&0\\ 0&1/x}=\pmatrix{0&x\\ 1/x&0}\pmatrix{0&1\\ 1&0}. $$
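The two factorizations used in this answer are easy to verify numerically (a sketch of mine; $x = 2$ is an arbitrary choice, picked so the floating-point arithmetic is exact):

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x = 2.0
I = [[1.0, 0.0], [0.0, 1.0]]

# Shear = product of two involutions: [[1, x], [0, 1]] = P @ Q
P = [[1.0, -x], [0.0, -1.0]]
Q = [[1.0, 0.0], [0.0, -1.0]]
assert matmul2(P, P) == I and matmul2(Q, Q) == I       # both are involutions
assert matmul2(P, Q) == [[1.0, x], [0.0, 1.0]]         # their product is the shear

# diag(x, 1/x) = product of two involutions: R @ S
R = [[0.0, x], [1.0 / x, 0.0]]
S = [[0.0, 1.0], [1.0, 0.0]]
assert matmul2(R, R) == I and matmul2(S, S) == I
assert matmul2(R, S) == [[x, 0.0], [0.0, 1.0 / x]]
print("both factorizations check out")
```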
|
I'm not able to come up with an argument that is fundamentally different from the "committee argument", but it is possible to cast the entire proof in geometric terms if one is willing to move into the third dimension.
Consider a three-dimensional grid analogous to your two-dimensional one. Your grid is a quadrant of an infinite two-dimensional grid; the three-dimensional analogue would be an octant of an infinite three-dimensional grid. In your grid, there are two directions in which one can move, left and right. A point in your grid is given by coordinates $(n,k),$ where $n\ge0$ is the total number of steps and $k$ ($0\le k\le n$) is the number of right steps. In the three-dimensional version, there are three directions in which one can move, which we will call left, right, and back. (Think of "back" as the direction into the page.) A point in the grid can be given coordinates $(n,k,a),$ where $n$ is the total number of steps, $k$ ($0\le k\le n$) is the number of left and right steps (that is, non-back steps), and $a$ ($0\le a\le k$) is the number of left steps.
The two sides of your identity both equal the number of three-dimensional grid paths joining $(0,0,0)$ to $(n,k,a),$ and correspond to different ways of decomposing a three-dimensional grid path into two two-dimensional grid paths. In two dimensions, the two directions of movement might be given coordinates$$L:\ \left(-\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\right),\qquad R:\ \left(\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\right).$$In three dimensions, the three directions might be given coordinates$$L:\ \left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{6}},-\frac{1}{\sqrt{3}}\right),\qquad R:\ \left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{6}},-\frac{1}{\sqrt{3}}\right),\qquad B:\ \left(0,-\sqrt{\frac{2}{3}},-\frac{1}{\sqrt{3}}\right).$$If we project out the $x$-coordinate in this coordinate system, then left and right steps look the same: both correspond to a move which we might call "front",$$F:\ \left(0,\frac{1}{\sqrt{6}},-\frac{1}{\sqrt{3}}\right).$$A path from $(0,0,0)$ to $(n,k,a)$ then consists of $k$ front steps and $n-k$ back steps. The number of such paths is $\binom{n}{k}.$ If we project out the back coordinate, that is, we project onto the plane of the left and right steps, then we effectively remove all back steps from the path. Such a path then consists of $k$ steps, $a$ of which are left steps, and $k-a$ of which are right steps. There are $\binom{k}{a}$ such paths. The number of three-dimensional paths is given by the product of the numbers of paths in these two projections, which is the left side of your identity.
To get the right side of the identity, we perform a projection so that right and back steps look like each other and left steps are unchanged. (It's not terribly relevant to the combinatorics, but this projection is achieved by the map $v\mapsto v-(n\cdot v)n$ where$$n=\left(\frac{1}{2},-\frac{\sqrt{3}}{2},0\right)$$is the unit vector orthogonal to the $L$ direction and the $z$ axis. Under this projection, the $R$ and $B$ directions both map to$$N:\ \left(-\frac{1}{2\sqrt{2}},-\frac{1}{2\sqrt{6}},-\frac{1}{\sqrt{3}}\right).$$Here $N$ stands for "non-left".) The three-dimensional path then projects to a path consisting of $a$ left steps and $n-a$ non-left steps. There are $\binom{n}{a}$ such paths. If we project onto the plane of the right and back steps, which effectively removes left steps, we are left with a path of $n-a$ steps, $k-a$ of which are right steps and $n-k$ of which are back steps. There are $\binom{n-a}{k-a}$ such paths. Again, the number of three-dimensional paths is given by the product of the numbers of paths in these two projections.
This is, of course, the committee argument: the left and right steps are the committee members, and the left steps are the special committee members. The back steps are the non-members of the committee.
Added: In my opinion, the best way to look at the identity was given in Marc van Leeuwen's answer here. Let $b=k-a$ and let $c=n-k$ so that $k=a+b$ and $n=a+b+c.$ Then the identity becomes$$\binom{n}{a+b}\binom{a+b}{a}=\binom{n}{a}\binom{n-a}{b},$$which can be rewritten as$$\binom{n}{c}\binom{n-c}{a}=\binom{n}{a}\binom{n-a}{b}.$$The latter consists of two pieces of the six-way identity$$\begin{aligned}\binom{n}{a}\binom{n-a}{b}=\binom{n}{a}\binom{n-a}{c}&=\binom{n}{b}\binom{n-b}{a}=\binom{n}{b}\binom{n-b}{c}\\&=\binom{n}{c}\binom{n-c}{a}=\binom{n}{c}\binom{n-c}{b}\end{aligned}$$in which the roles of $a,$ $b,$ and $c$ have been permuted in all possible ways. These correspond to six different ways of writing the trinomial coefficient$$\binom{n}{a,b,c}=\frac{n!}{a!\,b!\,c!}.$$The six-way identity is easy to show since all six expressions telescope to give the right hand side of the formula above. For example,$$\binom{n}{b}\binom{n-b}{a}=\frac{n!}{b!(n-b)!}\frac{(n-b)!}{a!(n-a-b)!}=\frac{n!}{a!\,b!\,c!},$$where we have used $n-a-b=c.$ The same proof works for all six expressions.
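A quick brute-force check of the six-way identity (a sketch with one arbitrary choice of $a,b,c$; `math.comb` supplies the binomial coefficients):

```python
from math import comb
from itertools import permutations

n, a, b, c = 10, 2, 3, 5                   # any split with a + b + c == n
target = comb(n, a + b) * comb(a + b, a)   # left-hand side of the identity
for x, y, z in permutations((a, b, c)):
    # all six telescoping expressions give the same trinomial coefficient
    assert comb(n, x) * comb(n - x, y) == target
```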
In terms of the geometric picture I outlined above, the first two of the six expressions correspond to projecting so that right and back steps look alike, and then projecting onto the plane of right and back steps; the second two correspond to projecting so that back and left steps look alike, and then projecting onto the plane of back and left steps; the last two correspond to projecting so that left and right steps look alike, and then projecting onto the plane of left and right steps. In each pair, both expressions count the same thing: for example, in the first pair, $\binom{n-a}{b}$ and $\binom{n-a}{c}$ both count the number of paths consisting of $b$ right steps and $c$ back steps.
Note that the $6=3!$ telescoping expressions for the trinomial coefficient become $\ell!$ expressions in the case of multinomial coefficients (with $n=a_1+a_2+\ldots+a_\ell$):$$\begin{aligned}&\binom{n}{a_1,a_2,\ldots,a_\ell}=\frac{n!}{a_1!\,a_2!\,\ldots\,a_\ell!}\\&\quad=\frac{n!}{a_1!(n-a_1)!}\frac{(n-a_1)!}{a_2!(n-a_1-a_2)!}\frac{(n-a_1-a_2)!}{a_3!(n-a_1-a_2-a_3)!}\ldots\frac{(n-a_1-\ldots-a_{\ell-1})!}{a_\ell!(n-a_1-\ldots-a_\ell)!}\\&\quad=\binom{n}{a_1}\binom{n-a_1}{a_2}\binom{n-a_1-a_2}{a_3}\ldots\binom{n-a_1-\ldots-a_{\ell-1}}{a_\ell}.\end{aligned}$$Observe that the $\ell^\text{th}$ binomial coefficient in the product always equals $1$ and can be omitted since $n-a_1-\ldots-a_{\ell-1}=a_\ell.$ This is what was done in your original identity. Permutations of the $a_j$ give $\ell!$ different expressions for the multinomial coefficient. Such expressions should be interpretable in terms of different series of projections of an $\ell$-dimensional grid.
Further comments: The difficulty with interpreting this identity geometrically is that it contains products of binomial coefficients. The product could represent concatenation of paths, but the set of paths described on the left doesn't look much like the set of paths described on the right. (They have different ending points, different numbers of steps, and so on.) One can devise a bijection between the set of paths counted by the left side and the set of paths counted by the right side, but this is essentially the committee argument, and isn't at all natural—at least the way I came up with isn't. My answer was an attempt to interpret multiplication in terms of combining different projections rather than in terms of concatenation. I haven't yet been able to come up with any other geometrical interpretation of multiplication.
|
Suppose you have a data set $Y_{1}, ..., Y_{n}$ from a continuous distribution with density $p(y)$ supported on $[0,1]$ that is not known, but $n$ is pretty large so a kernel density (for example) estimate, $\hat{p}(y)$, is pretty accurate. For a particular application I need to transform the observed data to a finite number of categories to yield a new data set $Z_{1}, ..., Z_{n}$ with an implied mass function $g(z)$.
A simple example would be $Z_{i} = 0$ when $Y_{i} \leq 1/2$ and $Z_{i} = 1$ when $Y_{i} > 1/2$. In this case the induced mass function would be
$$ \hat{g}(0) = \int_{0}^{1/2} \hat{p}(y) dy, \ \ \ \hat{g}(1) = \int_{1/2}^{1} \hat{p}(y)dy$$
The two "tuning parameters" here are the number of groups, $m$, and the $(m-1)$ length vector of thresholds $\lambda$. Denote the induced mass function by $\hat{g}_{m,\lambda}(z)$.
I'd like a procedure that answers, for example, "What is the best choice of $m, \lambda$ so that increasing the number of groups to $m+1$ (and choosing the optimal $\lambda$ there) would yield a negligible improvement?". I feel like perhaps a test statistic can be created (maybe with the difference in KL divergence or something similar) whose distribution can be derived. Any ideas or relevant literature?
Edit: I have evenly spaced temporal measurements of a continuous variable and am using an inhomogeneous Markov chain to model the temporal dependence. Frankly, discrete-state Markov chains are much easier to handle, and that is my motivation. The observed data are percentages. I'm currently using an ad hoc discretization that looks very good to me, but I think this is an interesting problem where a formal (and general) solution is possible.

Edit 2: Actually, minimizing the KL divergence would be equivalent to not discretizing the data at all, so that idea is totally out. I've edited the body accordingly.
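For concreteness, here is a minimal sketch of the discretization step, assuming `scipy.stats.gaussian_kde` as the density estimator and a hypothetical threshold vector (the data and thresholds below are made up):

```python
import numpy as np
from scipy.stats import gaussian_kde

def induced_mass(y, thresholds):
    """Mass function g implied by a KDE of y and cut points inside (0, 1)."""
    kde = gaussian_kde(y)
    edges = np.concatenate(([0.0], np.sort(thresholds), [1.0]))
    # g(z) = integral of the estimated density over the z-th bin
    return np.array([kde.integrate_box_1d(lo, hi)
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(0)
y = rng.beta(2, 5, size=500)           # toy data supported on [0, 1]
g = induced_mass(y, thresholds=[0.5])  # the m = 2 example above
```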
|
I assume that you are using the OLS estimator on this linear regression model. You can use the inequality constrained least-squares estimator, which will be the solution to a minimization problem under inequality constraints. Using standard matrix notation (vectors are column vectors) the minimization problem is stated as
$$\min_{\beta} (\mathbf y-\mathbf X\beta)'(\mathbf y-\mathbf X\beta) \\s.t.-\mathbf Z\beta \le \mathbf 0 $$
...where $\mathbf y$ is $n \times 1$, $\mathbf X$ is $n\times k$, $\beta$ is $k\times 1$ and $\mathbf Z$ is the $m \times k$ matrix containing the out-of-sample regressor series of length $m$ that are used for prediction. We have $m$ linear inequality constraints (and the objective function is convex, so the first order conditions are sufficient for a minimum).
The Lagrangean of this problem is
$$L = (\mathbf y-\mathbf X\beta)'(\mathbf y-\mathbf X\beta) -\lambda'\mathbf Z\beta = \mathbf y'\mathbf y-\mathbf y'\mathbf X\beta - \beta'\mathbf X'\mathbf y+ \beta'\mathbf X'\mathbf X\beta-\lambda'\mathbf Z\beta$$
$$= \mathbf y'\mathbf y - 2\beta'\mathbf X'\mathbf y+ \beta'\mathbf X'\mathbf X\beta-\lambda'\mathbf Z\beta $$
where $\lambda$ is an $m \times 1$ column vector of non-negative Karush-Kuhn-Tucker multipliers. The first order conditions are (you may want to review rules for matrix and vector differentiation)
$$\frac {\partial L}{\partial \beta}= \mathbf 0\Rightarrow - 2\mathbf X'\mathbf y +2\mathbf X'\mathbf X\beta - \mathbf Z'\lambda = \mathbf 0 $$
$$\Rightarrow \hat \beta_R = \left(\mathbf X'\mathbf X\right)^{-1}\mathbf X'\mathbf y + \frac 12\left(\mathbf X'\mathbf X\right)^{-1}\mathbf Z'\lambda = \hat \beta_{OLS}+ \left(\mathbf X'\mathbf X\right)^{-1}\mathbf Z'\xi \qquad [1]$$
...where $\xi = \frac 12 \lambda$, for convenience, and $\hat \beta_{OLS}$ is the estimator we would obtain from ordinary least squares estimation.
The method is fully elaborated in Liew (1976).
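In practice, rather than solving the KKT system by hand, one can hand the quadratic program to a numerical solver. A rough sketch of this (my own, not from Liew's paper) using SciPy's SLSQP, with made-up data and a hypothetical constraint matrix:

```python
import numpy as np
from scipy.optimize import minimize

def constrained_ls(X, y, Z):
    """Minimize ||y - X beta||^2 subject to Z beta >= 0 (elementwise)."""
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)      # OLS warm start
    obj = lambda b: np.sum((y - X @ b) ** 2)
    cons = [{"type": "ineq", "fun": lambda b: Z @ b}]
    return minimize(obj, beta0, constraints=cons, method="SLSQP").x

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = X @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=40)
Z = np.array([[0.0, 1.0]])        # hypothetical constraint: second coefficient >= 0
beta_r = constrained_ls(X, y, Z)  # restricted estimate
```

Note that the restricted fit can never beat OLS in-sample; the constraint simply pushes the offending coefficients to the boundary.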
|
I know that there are plenty of machine learning algorithms out there for prediction and classification tasks, so why do we even need neural networks?

Why neural nets?
The basic idea behind neural networks is to mimic the human brain, or simply to develop a model that works like a human brain. In machine learning techniques we rely on high-level statistics and do not model how a human brain actually works on the problem. Many problems that are difficult for machine learning models can be solved easily by deep learning, which is built on neural networks.

How do they actually work?
I know the term "neural networks" sounds strange, but the idea is simpler than many of the typical machine learning methods we follow. You don't believe me?
Okay, then consider a situation where you are driving to a movie with your friends. How would you control the speed of the vehicle? It depends on various factors such as temperature, traffic density, and remaining time. This task can be modelled as below.
Let's say the temperature, traffic density, and time remaining act like neurons in the brain that decide what speed you should maintain.
Seems easy, right? Okay, not so fast; there is more. From the image shown above we can say that the speed to maintain depends mostly on the time remaining and the traffic density, and very little on the temperature. So how do we capture that behaviour? We assign a weight to each neuron, and thereby we can say how much each neuron contributes to the output neuron.
If we let $X_1, X_2, X_3$ be the inputs, then $W_1, W_2, W_3$ are the weights associated with each of them respectively. Moreover, we need to make sure that all the inputs are feature-scaled for better performance, since time and temperature are not measured in the same units. To feature-scale the data we use two methods, as follows.

Standardizing:
$$ X=\frac{X-\mu}{\sigma}$$
where $\mu$ is the mean and $\sigma$ is the standard deviation.
Min-Max Scaling:
$$X = \frac{X-X_{min}}{X_{max}-X_{min}}$$
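Both scalings are one-liners in practice. A sketch with made-up temperature and time values:

```python
import numpy as np

def standardize(x):
    """Center to mean 0 and scale to standard deviation 1."""
    return (x - x.mean()) / x.std()

def min_max(x):
    """Rescale to the interval [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

temp = np.array([28.0, 31.0, 25.0, 30.0])       # degrees Celsius (made up)
time_left = np.array([45.0, 10.0, 90.0, 30.0])  # minutes (made up)
temp_scaled = standardize(temp)
time_scaled = min_max(time_left)
# after scaling, both features live on comparable ranges
```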
How to predict/classify?
Now that we have our data ready, we simply multiply each input feature by the weight associated with it, sum the results, and pass the total to an activation function ($g$).
Therefore,
$$Z = X_1W_1+X_2W_2+X_3W_3$$
or
$$Z = \sum_{i=1}^{n}X_iW_i$$
then applying activation gives output,
$$ O = g(Z) $$
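Putting the pieces together, a single neuron is just a dot product followed by an activation. A sketch using the sigmoid as $g$, with made-up (already scaled) inputs and weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, g=sigmoid):
    """Compute Z = sum_i X_i W_i, then apply the activation g."""
    return g(np.dot(x, w))

x = np.array([0.2, 0.9, 0.6])  # scaled temperature, traffic density, time remaining
w = np.array([0.1, 0.8, 0.7])  # temperature gets the smallest weight
speed_score = neuron(x, w)     # a value in (0, 1)
```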
What is an activation function?
Yeah! I know the term "activation function" is somewhat new to you, so let me explain it in an intuitive way. The value of $Z$ can be anything between $-\infty$ and $+\infty$, which is not good to proceed with, as we don't know what range the resulting output values will fall in. It would be better if the output were scaled to known limits. That is where the activation function comes in.
The activation function simply takes a value and outputs a value within a predetermined range. We use several activation functions, and selecting a particular activation function for a particular task is not trivial; it can be any of them.
The sigmoid non-linearity ranges between 0 and +1, whereas the hyperbolic tangent ranges between -1 and +1.
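These ranges (and the ReLU mentioned below) are easy to confirm numerically; a quick sketch:

```python
import numpy as np

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def tanh(z):    return np.tanh(z)
def relu(z):    return np.maximum(0.0, z)  # max(0, x)

z = np.linspace(-10.0, 10.0, 1001)
assert ((sigmoid(z) > 0) & (sigmoid(z) < 1)).all()  # sigmoid: (0, +1)
assert ((tanh(z) > -1) & (tanh(z) < 1)).all()       # tanh: (-1, +1)
assert (relu(z) >= 0).all()                         # ReLU: [0, infinity)
```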
The above table is enough to understand the different types of activation functions. It is also good to remember that we use another function called ReLU, short for Rectified Linear Unit, which gives great performance in image classification and processing techniques. It is simply defined as $\max(0,x)$.

Which activation function to use?
It purely depends on your application and the purpose of using neural networks. For classification tasks we mainly use the sigmoid, hyperbolic tangent, and ReLU, whereas in the context of regression analysis we use a linear activation.

Is that all about neural networks?
No, there is more. In a later post we will first dive into the basic perceptron algorithm, which mimics a single-layer neuron as shown above. And over the course of this series of posts we will discuss best practices and even feature extraction, with some examples.
|
NTS ABSTRACT
Return to NTS Spring 2017
Sept 7
David Zureick-Brown, Progress on Mazur's program B

I'll discuss recent progress on Mazur's "Program B", including my own recent work with Jeremy Rouse which completely classifies the possibilities for the 2-adic image of Galois associated to an elliptic curve over the rationals. I will also discuss a large number of other very recent results by many authors.

Sept 14
Solly Parenti, Unitary CM Fields and the Colmez Conjecture

Pierre Colmez conjectured a formula for the Faltings height of a CM abelian variety in terms of log derivatives of Artin L-functions arising from the CM type. We will study the relevant class functions in the case where our CM field contains an imaginary quadratic field and use this to extend the known cases of the conjecture.

Sept 21
Chao Li, Goldfeld's conjecture and congruences between Heegner points

Given an elliptic curve E over Q, a celebrated conjecture of Goldfeld asserts that a positive proportion of its quadratic twists should have analytic rank 0 (resp. 1). We show this conjecture holds whenever E has a rational 3-isogeny. We also prove the analogous result for the sextic twists of j-invariant 0 curves. For a more general elliptic curve E, we show that the number of quadratic twists of E up to twisting discriminant X of analytic rank 0 (resp. 1) is >> X/log^{5/6}X, improving the current best general bound towards Goldfeld's conjecture due to Ono--Skinner (resp. Perelli--Pomykala). We prove these results by establishing a congruence formula between p-adic logarithms of Heegner points based on Coleman's integration. This is joint work with Daniel Kriz.

Sept 28
Daniel Hast, Rational points on solvable curves over Q via non-abelian Chabauty

By Faltings' theorem, any curve over Q of genus at least two has only finitely many rational points—but the bounds coming from known proofs of Faltings' theorem are often far from optimal. Chabauty's method gives much sharper bounds for curves whose Jacobian has low rank, and can even be refined to give uniform bounds on the number of rational points. This talk is concerned with Minhyong Kim's non-abelian analogue of Chabauty's method, which uses the unipotent fundamental group of the curve to remove the restriction on the rank. Kim's method relies on a "dimension hypothesis" that has only been proven unconditionally for certain classes of curves; I will give an overview of this method and discuss my recent work with Jordan Ellenberg where we prove this dimension hypothesis for any Galois cover of the projective line with solvable Galois group (which includes, for example, any hyperelliptic curve).

Oct 12
Matija Kazalicki, Supersingular zeros of divisor polynomials of elliptic curves of prime conductor and Watkins' conjecture

For a prime number p, we study the mod p zeros of divisor polynomials of elliptic curves E/Q of conductor p. Ono made the observation that these zeros are often j-invariants of supersingular elliptic curves over F_p. We relate these supersingular zeros to the zeros of the quaternionic modular form associated to E, and using the latter partially explain Ono's findings. We notice a curious connection between the number of zeros and the rank of the elliptic curve.
In the second part of the talk, we briefly explain how a special case of Watkins' conjecture on the parity of modular degrees of elliptic curves follows from the methods previously introduced. This is a joint work with Daniel Kohen.
Oct 19
Andrew Bridy, Arboreal finite index for cubic polynomials

Let K be a global field of characteristic 0. Let f \in K[x] and b \in K, and set K_n = K(f^{-n}(b)). The projective limit of the groups Gal(K_n/K) embeds into the automorphism group of an infinite rooted tree. A major problem in arithmetic dynamics is to find conditions that guarantee the index is finite; a complete answer would give a dynamical analogue of Serre's celebrated open image theorem. I solve the finite index problem for cubic polynomials over function fields by proving a complete list of necessary and sufficient conditions. For number fields, the proof of sufficiency is conditional on both the abc conjecture and a form of Vojta's conjecture. This is joint work with Tom Tucker.

Oct 19
Jiuya Wang, Malle's conjecture for compositum of number fields

Abstract: Malle's conjecture is a conjecture on the asymptotic distribution of number fields with bounded discriminant. We propose a general framework to prove Malle's conjecture for compositum of number fields based on known examples of Malle's conjecture and good uniformity estimates. By this method, we prove Malle's conjecture for $S_n\times A$ number fields for $n = 3,4,5$ and $A$ in an infinite family of abelian groups. As a corollary, we show that Malle's conjecture is true for $C_3\wr C_2$ in its $S_9$ representation, whereas its $S_6$ representation is the first counterexample of Malle's conjecture, given by Klüners. By a sieve method, we further prove the secondary term for $S_3\times A$ extensions for infinitely many odd abelian groups $A$ over $\mathbb{Q}$.

Nov 2
Carl Wang-Erickson, The rank of the Eisenstein ideal

Abstract: In his landmark 1976 paper "Modular curves and the Eisenstein ideal", Mazur studied congruences modulo p between cusp forms and an Eisenstein series of weight 2 and prime level N. We use deformation theory of pseudorepresentations to study the corresponding Hecke algebra. We will discuss how this method can be used to refine Mazur's results, quantifying the number of Eisenstein congruences. Time permitting, we'll also discuss some partial results in the composite-level case. This is joint work with Preston Wake.

Nov 9
Masahiro Nakahara, Index of fibrations and Brauer-Manin obstruction

Abstract: Let X be a smooth projective variety with a fibration into varieties that either satisfy a condition on representability of zero-cycles or that are torsors under an abelian variety. We study the classes in the Brauer group that never obstruct the Hasse principle for X. We prove that if the generic fiber has a zero-cycle of degree d over the generic point, then the Brauer classes whose orders are prime to d do not play a role in the Brauer-Manin obstruction. As a result we show that the odd torsion Brauer classes never obstruct the Hasse principle for del Pezzo surfaces of degree 2, certain K3 surfaces, and Kummer varieties.

Nov 16
Joseph Gunther, Irrational points on random hyperelliptic curves

Abstract: Let d and g be positive integers with 1 < d < g. If d is odd, we prove there exists B_d such that a positive proportion of odd genus g hyperelliptic curves over Q have at most B_d points of degree d. If d is even, we similarly bound the degree d points not lazily pulled back from degree d/2 points of the projective line. The proofs use tropical geometry work of Park, as well as results of Bhargava and Gross on average ranks of hyperelliptic Jacobians. This is joint work with Jackson Morrow.
Time willing, we'll discuss rich, delicious interactions with work of next week's speaker.
Nov 30
Reed Gordon-Sarney, Zero-Cycles on Torsors under Linear Algebraic Groups

Abstract: In this talk, the speaker will discuss his thesis on the following question of Totaro from 2004: if a torsor under a connected linear algebraic group has index d, does it have a closed étale point of degree d? The d = 1 case is an open question of Serre from the '60s. The d > 1 case, surprisingly, has a negative answer.

Dec 7
Rafe Jones, How do you (easily) find the genus of a plane curve?

Abstract: If you've ever wanted to show a plane curve has only finitely many rational points, you've probably wished you could invoke Faltings' theorem, which requires the genus of the curve to be at least two. At that point, you probably asked yourself the question in the title of this talk. While the genus is computable for any given irreducible curve, it depends in a delicate way on the singular points. I'll talk about a much nicer formula that applies to irreducible "variables separated" curves, that is, those given by A(x) = B(y) where A and B are rational functions.
Then I’ll discuss how to use this to resolve the question that motivated me originally: given an integer m > 1 and a rational function f defined over a number field K, does f possess a K-orbit containing infinitely many mth powers of elements of K? The answer turns out to be no unless f has a very special form: for m > 4 the map f must essentially be the mth power of some rational function, while for smaller m other exceptions arise, including maps closely related to multiplication on elliptic curves. If time permits I’ll discuss a connection to an arithmetic dynamical analogue of the Mordell-Lang conjecture.
Dec 14
Robert J. Lemke Oliver, Selmer groups, Tate-Shafarevich groups, and ranks of abelian varieties in quadratic twist families

Abstract: We determine the average size of the $\phi$-Selmer group in any quadratic twist family of abelian varieties having an isogeny $\phi$ of degree 3 over any number field. This has several applications towards the rank statistics in such families of quadratic twists. For example, it yields the first known quadratic twist families of absolutely simple abelian varieties over $\mathbb{Q}$, of dimension greater than one, for which the average rank is bounded; in fact, we obtain such twist families in arbitrarily large dimension. In the case that $E/F$ is an elliptic curve admitting a 3-isogeny, we prove that the average rank of its quadratic twists is bounded; if $F$ is totally real, we moreover show that a positive proportion of these twists have rank 0 and a positive proportion have $3$-Selmer rank 1. We also obtain consequences for Tate-Shafarevich groups of quadratic twists of a given elliptic curve. This is joint work with Manjul Bhargava, Zev Klagsbrun, and Ari Shnidman.
|
We are always introduced to magnets by pictures illustrating a parallelepiped magnet, a horseshoe magnet, etc.
Anyway, the first time we actually calculate a magnetic field is for electromagnets. Indeed, I've rarely seen the magnetic field of a permanent magnet explained or calculated. I know formulas for the magnetic field of a moving charge, around a conductive wire, and inside a solenoid, but I have no idea how to go about calculating the field of a simple permanent magnet. It's as if, without the help of electricity, I'm stuck. I'd guess it would depend heavily on the magnetic material, its shape, and the medium in which it is placed. I hope that someone can provide some insight into why such formulas are rarely introduced in most physics courses, and lay down some examples of a permanent-magnet field calculation.
It is a textbook problem to compute the magnetic field of a uniformly magnetized permanent-magnet sphere. It turns out that the field outside looks exactly like the field of an ideal dipole.
The solution is a bit messy already, but if the magnet has a total magnetic moment of $\mathbf{m}$ and radius $a$, then the magnetic field for $r>a$ is:
$$\mathbf{B}(r>a)=\frac{\mu_0}{4\pi} \left[ -\frac{\mathbf{m}}{r^3}+\frac{3(\mathbf{m}\cdot\mathbf{r})\mathbf{r}}{r^5}\right]$$
If you look at large enough distances, $r \gg a$, most magnets roughly have fields like the one in the equation above. However, when you have a strangely shaped magnet and you look at the field very close to it, you can get more complex behavior.
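The dipole formula above is easy to evaluate numerically. Here is a sketch (SI units; the magnetic moment and field point are chosen arbitrarily):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def dipole_B(m, r):
    """Field of an ideal dipole with moment m (A m^2) at displacement r (m)."""
    m, r = np.asarray(m, float), np.asarray(r, float)
    rn = np.linalg.norm(r)
    return MU0 / (4 * np.pi) * (-m / rn**3 + 3 * np.dot(m, r) * r / rn**5)

# on-axis field 10 cm above a 1 A m^2 moment
B_axis = dipole_B([0.0, 0.0, 1.0], [0.0, 0.0, 0.1])
```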
See for example: http://farside.ph.utexas.edu/teaching/jk1/lectures/node61.html
|
Electronic Journal of Probability
Electron. J. Probab., Volume 20 (2015), paper no. 95, 35 pp.

Random walk on random walks

Abstract
In this paper we study a random walk in a one-dimensional dynamic random environment consisting of a collection of independent particles performing simple symmetric random walks in a Poisson equilibrium with density $\rho \in (0,\infty)$. At each step the random walk performs a nearest-neighbour jump, moving to the right with probability $p_{\circ}$ when it is on a vacant site and probability $p_{\bullet}$ when it is on an occupied site. Assuming that $p_\circ \in (0,1)$ and $p_\bullet \neq \tfrac12$, we show that the position of the random walk satisfies a strong law of large numbers, a functional central limit theorem and a large deviation bound, provided $\rho$ is large enough. The proof is based on the construction of a renewal structure together with a multiscale renormalisation argument.
Article information

Source: Electron. J. Probab., Volume 20 (2015), paper no. 95, 35 pp.

Dates: Accepted: 12 September 2015. First available in Project Euclid: 4 June 2016.

Permanent link to this document: https://projecteuclid.org/euclid.ejp/1465067201

Digital Object Identifier: doi:10.1214/EJP.v20-4437

Mathematical Reviews number (MathSciNet): MR3399831

Zentralblatt MATH identifier: 1328.60226

Subjects: Primary: 60F15: Strong theorems; 60K35: Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43]; 60K37: Processes in random environments. Secondary: 82B41: Random walks, random surfaces, lattice animals, etc. [See also 60G50, 82C41]; 82C22: Interacting particle systems [See also 60K35]; 82C44: Dynamics of disordered systems (random Ising systems, etc.)

Rights: This work is licensed under a Creative Commons Attribution 3.0 License.

Citation
Hilário, Marcelo; den Hollander, Frank; Sidoravicius, Vladas; Soares dos Santos, Renato; Teixeira, Augusto. Random walk on random walks. Electron. J. Probab. 20 (2015), paper no. 95, 35 pp. doi:10.1214/EJP.v20-4437. https://projecteuclid.org/euclid.ejp/1465067201
|
This was one of those big questions in the 19th century. It still causes some consternation. If you have a composite system, such as the nucleus of an atom, some other force is necessary. This force of course is the nuclear interaction. This keeps the protons from flying apart, though for some unstable nuclei there are transitions that eject charged particles, electrons or positrons, due to weak interactions. In the case of the proton it is composed of three quarks and these are bound to each other by the QCD (quantum chromodynamics) interaction. The gauge bosons called gluons interact most strongly at low energy and these keep the quarks, with charges $2/3,2/3,-1/3$ in a bound state.
Things are a bit more mysterious with point-like particles, such as the electron and other leptons and quarks. We generally do not regard such particles as composite, though this has not stopped people from proposing constituents called preons or rishons that make them up.
There is a problem with defining the mass of the electron or any point-like electrically charged particle. The mass of the electric field is$$m_\textrm{em}~=~\frac{1}{2}\int E^2~\mathrm d^3r~=~\frac{1}{2}\int_r^\infty\left(\frac{e}{4\pi r'^2}\right)^24\pi r'^2~\mathrm dr'~=~\frac{e^2}{8\pi r}.$$If the electron has zero radius, this is divergent. There is the classical radius of the electron, $r~=~\alpha\lambda_c~=~2.8\times10^{-13}~\mathrm{cm}$, for $\lambda_c~=~\hbar/mc$ the Compton wavelength. This raises some questions, for the classical radius suggests "structure," and it also has a relationship to something called Zitterbewegung.
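The self-energy integral above can be checked symbolically, e.g. with SymPy (a sketch; $r'$ is the integration variable running from the cutoff radius $r$ to infinity):

```python
import sympy as sp

e, r, rp = sp.symbols("e r rp", positive=True)
E = e / (4 * sp.pi * rp**2)  # Coulomb field of a point charge (Heaviside-Lorentz units)
m_em = sp.integrate(sp.Rational(1, 2) * E**2 * 4 * sp.pi * rp**2, (rp, r, sp.oo))
# m_em comes out as e**2 / (8*pi*r), which diverges as the cutoff r -> 0
```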
A more standard approach to this is renormalization. A sketch of this is to look at the integral above with the variable $p~=~1/r$, so that $\mathrm dr/r~\rightarrow~-\mathrm dp/p$. Here we are thinking of momentum and wavelength or position as reciprocally related. This integral is then evaluated for a finite $r$, equivalent to being evaluated with a finite momentum cutoff $\Lambda$,$$I(\Lambda)~=~\int_0^\Lambda\frac{\mathrm dp}{p}~\simeq~1~+~2^{-1}~+~3^{-1}~\dots$$which is equal to $$\lim_{\Lambda\rightarrow\infty}I(\Lambda)~=~-\zeta(1)$$In some ways this is a removal of infinities. Another curious way to look at this is with p-adic number theory. That is a topic that could consume a lot of bandwidth.
We have another way to look at this. This comes down to the question of what we mean by "composite." It also forces us to think about what we mean by the locality of field operators. The Dirac magnetic monopole is a solenoid with an opening to an infinite coil. The condition for the Dirac monopole is that the Aharonov-Bohm phase of a quantum system is zero as it passes the "tube" of the solenoid $\psi~\rightarrow~\exp\left(ie/\hbar\displaystyle\oint{\vec A}\cdot ~\mathrm d{\vec r}\right)\psi$. This might be compared to "cutting off the tail" on the magnetic monopole charge. The vanishing of this is equivalent to saying$$2\pi N~=~\frac{e}{\hbar}\displaystyle\oint{\vec A}\cdot ~\mathrm d{\vec r}~=~\frac{e}{\hbar}\iint\nabla\times{\vec A}\cdot{\vec a},$$for the integral evaluated over units of area of the opening. This is of course the magnetic field ${\vec B}~=~-\nabla\times{\vec A}$ evaluated in a Gauss' law that gives the magnetic monopole charge $g~=~\displaystyle\iint\nabla\times{\vec A}\cdot{\vec a}$, and we use this expression to see the S-duality relationship between the electric and magnetic monopole charge $$eg~=~2\pi N\hbar,$$sometimes called the Montonen-Olive relationship.
This means that if we have an electric charge we can use the renormalization machinery to illustrate how the vacuum around it is polarized with virtual particles according to $\alpha~=~\frac{e^2}{4\pi\epsilon_0\hbar c}$. The electric charge is comparatively weak in strength, with a modest polarization of the vacuum expanded in orders of $\alpha$ for $N$ internal lines or loops. This S-dual relationship tells us that while this is modest, the magnetic monopole is very strong and its vacuum is a "bee's nest" of lots of particles. This then means the dual of the electric field is a magnetic monopole field that in some ways appears composite.
This means in some ways there are questions that need to be asked about the locality of field operators. Something that appears local, point-like and "nice" may be dual to something that appears less local, more composite-like and not renormalizable. As a result there are still open questions here, and even Feynman agreed with Dirac that the situation with QED was not perfectly satisfactory.
|
The Character of a Group Representation
Definition: Let $G$ be a group and let $(V, \rho)$ be a group representation of $G$. The Character of $(V, \rho)$ is the function $\chi_V : G \to \mathbb{C}$ defined for all $g \in G$ by $\chi_V(g) = \mathrm{trace} (\rho(g))$. Recall that if $A$ is a square matrix then $\mathrm{trace} (A)$ is the sum of the main diagonal entries of $A$.

Example 1
Consider the symmetric group $S_3 = \langle g, r : g^3 = e, r^2 = e, rgr = g^2 \rangle$ and let us consider the $2$-dimensional representation $\rho$ of $S_3$ specified on the generators of $G$ by:(1)
Then we have that:(2)
So we see that:(3)
Proposition 1: Let $G$ be a finite group and let $(V, \rho_V)$ and $(W, \rho_W)$ be group representations of $G$. Then: a) $\chi_V$ is conjugation invariant, that is, $\chi_V(g) = \chi_V(hgh^{-1})$ for all $g, h \in G$. b) $\chi_V(e) = \mathrm{dim}(V)$ where $e \in G$ denotes the identity element of $G$. c) $\chi_V(g^{-1}) = \overline{\chi_V(g)}$ for all $g \in G$.
The above example is a $2$-dimensional representation of $S_3$ and we calculated that $\chi(e) = 2$, which was to be expected by Proposition 1.b above.
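The displayed matrices (1) did not survive extraction here; a standard choice for this $2$-dimensional representation (an assumption on my part) sends $g$ to a rotation by $120°$ and $r$ to a reflection, which gives $\chi(e) = 2$, $\chi(g) = -1$, $\chi(r) = 0$:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
rho_g = [[c, -s], [s, c]]          # g acts as rotation by 120 degrees
rho_r = [[1.0, 0.0], [0.0, -1.0]]  # r acts as a reflection

# The defining relation r^2 = e holds, so rho(e) = rho(r)^2.
rho_e = mat_mul(rho_r, rho_r)

# Characters: chi(e) = dim V = 2, chi(g) = 2 cos(120 deg) = -1, chi(r) = 0.
print(trace(rho_e), trace(rho_g), trace(rho_r))
```

Note that $\chi_V(e) = 2 = \dim V$, as Proposition 1.b predicts, and the character is constant on the conjugacy classes $\{g, g^2\}$ and $\{r, gr, g^2r\}$.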
Proposition 2: Let $G$ be a finite group and let $(V, \rho_V)$ and $(W, \rho_W)$ be group representations of $G$. Then: a) $\chi_{V^*}(g) = \chi_{V}(g^{-1})$ for all $g \in G$. b) $\chi_{V \oplus W} = \chi_V + \chi_W$. c) $\chi_{V \otimes W} = \chi_V \cdot \chi_W$. Recall that if $(V, \rho_V)$ is a group representation, then the corresponding dual group representation is $(V^*, \rho_V^*)$ where $V^* = \mathrm{Hom}(V, \mathbb{C})$ is the space of all linear functionals on $V$, and $\rho_V^*$ is defined for all $f \in V^*$ by $\rho_V^*(g)(f)(v) = [f \circ \rho_V(g^{-1})](v)$. Also recall that if $(V, \rho_V)$ and $(W, \rho_W)$ are group representations of $G$ then $(V \oplus W, \rho_V \oplus \rho_W)$ is another group representation of $G$ where $V \oplus W$ is the direct sum space of $V$ and $W$, and $\rho_V \oplus \rho_W$ is defined via the block matrix $(\rho_V \oplus \rho_W)(g) = \begin{bmatrix} \rho_V(g) & \mathbf{0} \\ \mathbf{0} & \rho_W(g) \end{bmatrix}$.
|
Consider a doublet of complex scalar fields $\mathbf{\Phi}$. The $SU(2)$ transformation is $\textrm{exp}\left(it^{i}\tau_{i}\right)$ with generators $\tau_{i}=\frac{\sigma_{i}}{2}$, whilst the $U(1)$ transformation is $\textrm{exp}\left(\frac{1}{2}i\alpha I\right)$. Hence we may write an $SU(2)\times U(1)$ transformation as $$\mathbf{\Phi}\rightarrow\textrm{exp}\left(\frac{1}{2}i\alpha I\right)\textrm{exp}\left(it^{i}\tau_{i}\right){\mathbf{\Phi}}\approx\left(I+\frac{1}{2}i\alpha I+it^{i}\tau_{i}\right)\mathbf{\Phi}$$ where the final approximation holds in the infinitesimal case.
I have been looking at vacuum solutions $\mathbf{v}=\left(0,v_{0}\right)^{T}$ for Lagrangians that are invariant under this transformation. In order to determine the Goldstone modes, one needs to find the generators of the group that do not map the vacuum to zero. The result of this process is to find that $$\tau_{1}\mathbf{v}\neq0$$ $$\tau_{2}\mathbf{v}\neq0$$ $$(\tau_{3}-\frac{1}{2}I)\mathbf{v}\neq0$$ $$(\tau_{3}+\frac{1}{2}I)\mathbf{v}=0$$ and so there are three massless bosons.
However I am unable to understand why the generators of $SU(2)\times U(1)$ have been chosen as the specific combinations $\tau_{1}$, $\tau_{2}$, $(\tau_{3}-\frac{1}{2}I)$ and $(\tau_{3}+\frac{1}{2}I)$.
Edit (due to comment below): There is free choice of Lie algebra basis, but I don't see how that is an answer. In the case of $SU(2)$ symmetry breaking I have found that the number of broken generators is basis independent, and I would like to think that this would hold generally. So, surely there must be some additional constraint here.
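The four generator conditions in the question can be checked directly; below is a sketch (notation my own) that acts each candidate generator on $\mathbf{v}=(0,v_{0})^{T}$ and counts the broken ones:

```python
# Pauli matrices as nested lists of complex numbers; tau_i = sigma_i / 2.
sigma = {
    1: [[0, 1], [1, 0]],
    2: [[0, -1j], [1j, 0]],
    3: [[1, 0], [0, -1]],
}
tau = {i: [[0.5 * x for x in row] for row in sigma[i]] for i in sigma}
half_I = [[0.5, 0], [0, 0.5]]

def act(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def add(M, N, s=1):
    return [[M[i][j] + s * N[i][j] for j in range(2)] for i in range(2)]

vac = [0.0, 1.0]   # the vacuum (0, v0)^T with v0 = 1

# tau_1, tau_2 and (tau_3 - I/2) fail to annihilate the vacuum: broken.
broken = 0
for gen in (tau[1], tau[2], add(tau[3], half_I, -1)):
    if any(abs(c) > 1e-12 for c in act(gen, vac)):
        broken += 1

# The combination (tau_3 + I/2) annihilates the vacuum: unbroken.
unbroken = act(add(tau[3], half_I, +1), vac)
print(broken, unbroken)
```

The count of three broken generators matches the three massless bosons claimed above, and the surviving combination $\tau_3 + \frac{1}{2}I$ is (up to normalization) the electric charge generator of the unbroken $U(1)$.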
|
The Annals of Statistics Ann. Statist. Volume 13, Number 2 (1985), 689-705. Additive Regression and Other Nonparametric Models Abstract
Let $(X, Y)$ be a pair of random variables such that $X = (X_1, \cdots, X_J)$ and let $f$ be a function that depends on the joint distribution of $(X, Y).$ A variety of parametric and nonparametric models for $f$ are discussed in relation to flexibility, dimensionality, and interpretability. It is then supposed that each $X_j \in \lbrack 0, 1\rbrack,$ that $Y$ is real valued with mean $\mu$ and finite variance, and that $f$ is the regression function of $Y$ on $X.$ Let $f^\ast,$ of the form $f^\ast(x_1, \cdots, x_J) = \mu + f^\ast_1(x_1) + \cdots + f^\ast_J(x_J),$ be chosen subject to the constraints $Ef^\ast_j = 0$ for $1 \leq j \leq J$ to minimize $E\lbrack(f(X) - f^\ast(X))^2\rbrack.$ Then $f^\ast$ is the closest additive approximation to $f,$ and $f^\ast = f$ if $f$ itself is additive. Spline estimates of $f^\ast_j$ and its derivatives are considered based on a random sample from the distribution of $(X, Y).$ Under a common smoothness assumption on $f^\ast_j, 1 \leq j \leq J,$ and some mild auxiliary assumptions, these estimates achieve the same (optimal) rate of convergence for general $J$ as they do for $J = 1.$
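The defining property of $f^\ast$ is easy to verify in a toy setting. The sketch below (not from the paper) uses a discrete uniform grid with independent coordinates, where the components reduce to centered conditional means $f^\ast_j(x_j) = E[f(X) \mid X_j = x_j] - \mu$; for an additive $f$ the reconstruction is exact, matching the claim that $f^\ast = f$:

```python
# On a discrete uniform grid with independent coordinates, the closest
# additive approximation has components f_j*(x_j) = E[f(X) | X_j = x_j] - mu.
grid = [i / 10 for i in range(11)]

def f(x1, x2):                        # an additive target function
    return 2 * x1 + (x2 - 0.5) ** 2

mu = sum(f(a, b) for a in grid for b in grid) / len(grid) ** 2
f1 = {a: sum(f(a, b) for b in grid) / len(grid) - mu for a in grid}
f2 = {b: sum(f(a, b) for a in grid) / len(grid) - mu for b in grid}

# The components are centered, and since f is additive, f* recovers f exactly.
err = max(abs(mu + f1[a] + f2[b] - f(a, b)) for a in grid for b in grid)
print(err)
```

A non-additive $f$ (say, one with an $x_1 x_2$ interaction) would leave a nonzero residual here, which is exactly the part that the additive model cannot capture.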
Article information Source Ann. Statist., Volume 13, Number 2 (1985), 689-705. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176349548 Digital Object Identifier doi:10.1214/aos/1176349548 Mathematical Reviews number (MathSciNet) MR790566 Zentralblatt MATH identifier 0605.62065 JSTOR links.jstor.org Citation
Stone, Charles J. Additive Regression and Other Nonparametric Models. Ann. Statist. 13 (1985), no. 2, 689--705. doi:10.1214/aos/1176349548. https://projecteuclid.org/euclid.aos/1176349548
|
Dynamical zeta functions and topology for negatively curved surfaces
Speaker: Semyon Dyatlov

Time: Tuesdays and Wednesdays, 14:00-15:30, from 2019-7-2

Venue: Conference Room 3, Jinchunyuan West Building, Tsinghua University

Abstract
For a negatively curved compact Riemannian manifold (or more generally, for an Anosov flow), the Ruelle zeta function is defined by $$\zeta(s)=\prod_\gamma (1-e^{-s\ell_\gamma} ),\quad \Re s\gg 1,$$ where the product is taken over all primitive closed geodesics $\gamma$ with $\ell_\gamma>0$ denoting their length. Remarkably, this zeta function continues meromorphically to all of $ \mathbb C$. Using recent advances in the study of resonances for Anosov flows and simple arguments from microlocal analysis, we prove that for an orientable negatively curved surface, the order of vanishing of $\zeta(s)$ at $s=0$ is given by the absolute value of the Euler characteristic. In constant curvature this follows from the Selberg trace formula and this is the first result of this kind for manifolds which are not locally symmetric. This talk is based on joint work with Maciej Zworski.
|
As mentioned in Wikipedia's biography, Shanks used Machin's formula$$ \pi = 16\arctan(\frac15) - 4\arctan(\frac1{239}) $$
The standard way to use that (and the various Machin-
like formulas found later) is to compute the arctangents using the power series
$$ \arctan x = x - \frac{x^3}3 + \frac{x^5}5 - \frac{x^7}7 + \frac{x^9}9 - \cdots $$
Getting $\arctan(\frac15)$ to 707 digits requires about 500 terms calculated to that precision. Each requires two long divisions -- one to divide the previous numerator by 25, another to divide it by the denominator.
The series for $\arctan(\frac1{239})$ converges faster and only needs some 150 terms.
(You can know how many terms you need because the series is
alternating (and absolutely decreasing) -- so once you reach a term that is smaller than your desired precision, you can stop).
The point of Machin-like formulas is that the series for $\arctan x$ converges faster the smaller $x$ is. We could just compute $\pi$ as $4\arctan(1)$, but the series converges
hysterically slowly when $x$ is as large as $1$ (and not at all if it is even larger). The trick embodied by Machin's formula is to express a straight angle as a sum/difference of the corner angles of (a small number of different sizes of) long and thin right triangles with simple integer ratios between the cathetes.
The arctangent gets easier to compute the longer and thinner each triangle is, and especially if the neighboring side is an integer multiple of the opposite one, which corresponds to angles of the form $\arctan\frac{1}{\text{something}}$. Then going from one numerator in the series to the next costs only a division, rather than a division
and a multiplication.
Machin observed that four copies of the $5$-$1$-$\sqrt{26}$ triangle make the same angle as a $1$-$1$-$\sqrt2$ triangle (whose angle is $\pi/4$) plus one $239$-$1$-$\sqrt{239^2+1}$ triangle. These facts can be computed exactly using the techniques displayed here.
Later workers have found better variants of Machin's idea, but if you're in prison without reference works, it's probably easiest to rediscover Machin's formula by remembering that some number of copies of $\arctan\frac1k$ for some fairly small $k$ adds up to something very close to 45°.
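The scheme described above is straightforward to carry out with integer arithmetic; the following sketch (function and variable names are my own) computes $\pi$ to 50 digits from Machin's formula, carrying guard digits to absorb truncation from the floor divisions:

```python
def atan_inv(k, digits, guard=10):
    """arctan(1/k), returned as an integer scaled by 10**(digits + guard)."""
    scale = 10 ** (digits + guard)
    num = scale // k          # numerator of the current term, scale / k**n
    total = num               # the n = 1 term
    kk, n, sign = k * k, 1, 1
    while num:
        num //= kk            # advance scale/k**n to scale/k**(n+2)
        n += 2
        sign = -sign
        total += sign * (num // n)
    return total

def machin_pi(digits):
    guard = 10
    total = 16 * atan_inv(5, digits, guard) - 4 * atan_inv(239, digits, guard)
    return total // 10 ** guard   # pi scaled by 10**digits

print(machin_pi(50))
```

Note how few terms are needed: each pass of the loop gains about $2\log_{10}k$ digits, which is the whole point of keeping $k$ large, exactly as described above for $k=239$.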
|
Scalar Multiples of Matrices
Definition: A scalar is a quantity that has a magnitude (size or length) but no direction.
Unless otherwise specified, scalars in the context of Linear Algebra will be a real number $k \in \mathbb{R}$. We are now ready to define scalar multiplication on a matrix.
Definition: If $A$ is an $m \times n$ matrix and $k \in \mathbb{R}$ a scalar, then the scalar multiple of $A$ by $k$ denoted $kA$ is an $m \times n$ matrix, all of whose entries are multiplied by $k$.
Determining a scalar multiple of a matrix is easy. For example, consider the matrix $A = \begin{bmatrix}1 & 2 & 3 & 4\\ 5 & 6 & 7 & 8 \end{bmatrix}$. If we wanted to figure out what the matrix $2A$ is, we would just take every entry in $A$ and multiply it by $2$ to get $2A = \begin{bmatrix}2\cdot1 & 2\cdot2 & 2\cdot3 & 2\cdot4\\ 2\cdot5 & 2\cdot6 & 2\cdot7 & 2\cdot8\end{bmatrix} = \begin{bmatrix}2 & 4 & 6& 8\\ 10 & 12 & 14 & 16\end{bmatrix}$
In general, if we have an $m \times n$ matrix $A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & & a_{2n}\\ \vdots & & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$ and a scalar $k \in \mathbb{R}$ then:(1)
We will now look at some properties of scalar multiples of matrices in the following theorem.
Theorem 1: Let $A$ and $B$ be $m \times n$ matrices, and let $k, l \in \mathbb{R}$ be scalars. Then: a) $k(A + B) = kA + kB$. b) $k(A - B) = kA - kB$. c) $(k + l)A = kA + lA$. d) $(k - l)A = kA - lA$. e) $k(lA) = (kl) A$.
We will prove (a) and (c) and leave the rest of the proofs to the reader as they follow the same format. Proof of (a): Proof of (c):

Example 1

Given the matrix $A = \begin{bmatrix}2 & 4\\ 1 & 0\\ 0 & 3\\ -3 & 2\\ 1 & 3\end{bmatrix}$, find the matrix $-3A$.
To determine $-3A$, we will multiply every entry in A by -3 to obtain $-3A = \begin{bmatrix} -6 & -12\\ -3 & 0\\ 0 & -9\\ 9 & -6\\ -3 & -9 \end{bmatrix}$.
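Both Example 1 and Theorem 1a are easy to check mechanically; a short sketch with nested lists (the sample matrix $B$ is my own choice):

```python
def scalar_mul(k, A):
    return [[k * entry for entry in row] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Example 1 from above:
A = [[2, 4], [1, 0], [0, 3], [-3, 2], [1, 3]]
print(scalar_mul(-3, A))   # [[-6, -12], [-3, 0], [0, -9], [9, -6], [-3, -9]]

# Theorem 1a, k(A + B) = kA + kB, checked on a sample B:
B = [[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]]
lhs = scalar_mul(2, mat_add(A, B))
rhs = mat_add(scalar_mul(2, A), scalar_mul(2, B))
print(lhs == rhs)   # True
```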
|
Question
Suppose you have a supply of inductors ranging from 1.00 nH to 10.0 H, and resistors ranging from $0.100 \textrm{ }\Omega$ to $1.00 \textrm{ M}\Omega$. What is the range of characteristic RL time constants you can produce by connecting a single resistor to a single inductor?
Final Answer
$\tau_{max} = 100 \textrm{ s}$
$\tau_{min} = 1.00 \times 10^{-15} \textrm{ s}$
Calculator Screenshots
Video Transcript
This is College Physics Answers with Shaun Dychko. We have inductors ranging from a minimum of 1 nanohenry to a maximum of 10 henrys and some resistors ranging from 0.100 ohms up to 1.00 mega-ohms and we are asked to figure out what is the range of possible time constants we could make? So, the time constant is going to be the inductance divided by resistance. So to get the maximum time constant we want to maximize the numerator and minimize the denominator and so we will take maximum inductance divided by minimum resistance. So that’s 10 henrys divided by 0.100 ohms which is 100 seconds will be the maximum time constant. And then for the minimum time constant, we want to minimize the numerator and maximize the denominator so we have one times ten to the minus nine henry is the minimum inductance, that’s 1 nanohenry and we will divide that by 1 megaohm resistance which is 1 times 10 to the minus 15 seconds so that’s one femto-second.
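The calculation in the transcript is a one-liner in code; a simple illustration of $\tau = L/R$:

```python
# tau = L / R: maximize with L_max / R_min, minimize with L_min / R_max.
L_min, L_max = 1.00e-9, 10.0    # henries
R_min, R_max = 0.100, 1.00e6    # ohms

tau_max = L_max / R_min
tau_min = L_min / R_max
print(tau_max, tau_min)   # 100 s and 1e-15 s (one femtosecond)
```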
|
Too long for a comment: Here's a little intuitive tip: What do $~\dfrac{\sin t}t~$ and $~\dfrac{\cos t}{t^2}~$ both have in
common? They are even functions. So, if you notice various series or integrals whose summand
or integrand belongs to this category having a nice closed form, that should not surprise you. For
instance, $~\displaystyle\int_{-\infty}^\infty\frac{\sin x}x~dx=\pi,~$ or $~\displaystyle\int_{-\infty}^\infty\frac{\cos x}{1+x^2}~dx=\frac\pi e,~$ or $~\displaystyle\sum_{n=-\infty}^\infty'\frac1{n^{2k}}=a_k~\pi^{2k},~$ where the
apostrophe represents the omission of the divergent term corresponding to $n=0$, and $a_k\in\mathbb Q$.
Obviously, if one were to sum or integrate odd functions over this entire interval, then the result
would be $0$ for integrals, and either $0$ or $f(0)$ for sums, since the values on $(-\infty,0)$ would cancel
those on $(0,\infty)$. So, in this sense, if one were to define odd $\zeta$ values as $\zeta(2k+1)=\displaystyle\sum_{n=-\infty}^\infty'\frac1{n^{2k+1}}$
then they would indeed possess a very beautiful closed form, namely $0$. Indeed, $~\displaystyle\int_0^\infty\frac{\sin x}{1+x^2}~dx~$
also lacks a known closed form, as does $~\displaystyle\sum_{n=1}^\infty\frac{\sin nx}{n^2}.~$ Please do not misunderstand me, there are
exceptions to every rule, and one might indeed find counter-examples of both kinds, but usually
they are trivial $($e.g., the odd integrand whose primitive can be expressed in closed form, and then
evaluated at the extremities of the integration interval, or, in the case of $~\displaystyle\sum_{n=1}^\infty\frac{\cos nx}n,~$ the famous
Mercator series for the natural logarithm; not to mention a whole infinity of even functions whose
summation or definite integral simply does not possess a closed form, for the trivial reason that the
overwhelming majority of functions simply do not have one, and those that do are the exception
rather than the rule$)$.
|
Some Metrics Defined on Euclidean Space
Recall from the Metric Spaces page that if $M$ is a nonempty set then a function $d : M \times M \to [0, \infty)$ is called a metric if for all $x, y, z \in M$ we have that the following three properties hold:
$d(x, y) = d(y, x)$. $d(x, y) = 0$ if and only if $x = y$. $d(x, y) \leq d(x, z) + d(z, y)$.
Furthermore, the set $M$ together with the metric $d$, denoted $(M, d)$, is called a metric space.
We will now look at some other metrics defined on the Euclidean space $\mathbb{R}^n$ specifically.
The first type of metric $d : \mathbb{R}^n \times \mathbb{R}^n \to [0, \infty)$ is defined for all $\mathbf{x} = (x_1, x_2, ..., x_n), \mathbf{y} = (y_1, y_2, ..., y_n), \mathbf{z} = (z_1, z_2, ..., z_n) \in \mathbb{R}^n$ by:(1)
Let's verify that $d$ is indeed a metric.
For the first condition, since $\mid x_k - y_k \mid = \mid y_k - x_k \mid$ for each $k$, we have for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ that:(2)
For the second condition, suppose that $d(\mathbf{x}, \mathbf{y}) = 0$. Then:(3)
We have that $\mid y_k - x_k \mid \geq 0$ for all $k \in \{1, 2, ..., n \}$ so for the sum above to equal to $0$, we must have that $\mid y_k - x_k \mid = 0$ for each $k$, so $y_k - x_k = 0$ and $y_k = x_k$ for each $k$. Hence $\mathbf{x} = \mathbf{y}$. Now suppose that $\mathbf{x} = \mathbf{y}$. Then $x_k = y_k$ for each $k \in \{ 1, 2, ..., n \}$ so $\mid x_k - y_k \mid = 0$ for each $k$ and:(4)
For the third condition we have by the triangle inequality that:(5)
Therefore $(\mathbb{R}^n, d)$ is a metric space.
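Equations (1)-(5) did not survive extraction here; judging from the proof steps, the intended metric appears to be the taxicab metric $d(\mathbf{x}, \mathbf{y}) = \sum_{k=1}^{n} \mid x_k - y_k \mid$ (an assumption on my part). A quick numerical spot-check of the three axioms on sample points:

```python
def d(x, y):
    # assumed form of equation (1): the taxicab metric on R^n
    return sum(abs(a - b) for a, b in zip(x, y))

pts = [(0.0, 0.0, 0.0), (1.0, -2.0, 3.0), (0.5, 0.5, 0.5), (-1.0, 4.0, 2.0)]

for x in pts:
    for y in pts:
        assert d(x, y) == d(y, x)                 # symmetry
        assert (d(x, y) == 0) == (x == y)         # identity of indiscernibles
        for z in pts:
            assert d(x, y) <= d(x, z) + d(z, y) + 1e-12   # triangle inequality
print("metric axioms hold on the sample points")
```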
|
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Three Linearly Independent Vectors in $\R^3$ Form a Basis. Three Vectors Spanning $\R^3$ Form a Basis. Problem 574
Let $B=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ be a set of three-dimensional vectors in $\R^3$.
(a) Prove that if the set $B$ is linearly independent, then $B$ is a basis of the vector space $\R^3$.
(b) Prove that if the set $B$ spans $\R^3$, then $B$ is a basis of $\R^3$. Problem 572
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. Problem 7. Let $A=\begin{bmatrix} -3 & -4\\ 8& 9 \end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix} -1 \\ 2 \end{bmatrix}$. (a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$. (b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$. Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. Problem 9. Determine whether each of the following sentences is true or false. (a) There is a $3\times 3$ homogeneous system that has exactly three solutions. (b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric. (c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ is also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$. (d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
(e) The vectors \[\mathbf{v}_1=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\] are linearly independent. Problem 571
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 2 and contains Problem 4, 5, and 6.
Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let \[\mathbf{a}_1=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \mathbf{a}_2=\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix}, \mathbf{b}=\begin{bmatrix} 0 \\ a \\ 2 \end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of \[A=\begin{bmatrix} 0 & 0 & 2 & 0 \\ 0 &1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason. Problem 6. Consider the system of linear equations \begin{align*} 3x_1+2x_2&=1\\ 5x_1+3x_2&=2. \end{align*} (a) Find the coefficient matrix $A$ of the system. (b) Find the inverse matrix of the coefficient matrix $A$. (c) Using the inverse matrix of $A$, find the solution of the system.
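As an illustration (not the official solution), Problem 6 can be worked numerically with the $2\times 2$ inverse formula:

```python
# Problem 6: A = [[3, 2], [5, 3]], b = (1, 2).
A = [[3.0, 2.0], [5.0, 3.0]]
b = [1.0, 2.0]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # 9 - 10 = -1
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]           # [[-3, 2], [5, -3]]

x = [A_inv[0][0] * b[0] + A_inv[0][1] * b[1],
     A_inv[1][0] * b[0] + A_inv[1][1] * b[1]]        # (1, -1)
print(x)
```

Substituting back: $3(1) + 2(-1) = 1$ and $5(1) + 3(-1) = 2$, so the solution checks out.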
(Linear Algebra Midterm Exam 1, the Ohio State University)

Problem 570
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. Problem 1. Determine all possibilities for the number of solutions of each of the systems of linear equations described below. (a) A consistent system of $5$ equations in $3$ unknowns and the rank of the system is $1$. (b) A homogeneous system of $5$ equations in $4$ unknowns and it has a solution $x_1=1$, $x_2=2$, $x_3=3$, $x_4=4$. Problem 2. Consider the homogeneous system of linear equations whose coefficient matrix is given by the following matrix $A$. Find the vector form for the general solution of the system. \[A=\begin{bmatrix} 1 & 0 & -1 & -2 \\ 2 &1 & -2 & -7 \\ 3 & 0 & -3 & -6 \\ 0 & 1 & 0 & -3 \end{bmatrix}.\] Problem 3.Let $A$ be the following invertible matrix.
\[A=\begin{bmatrix}
-1 & 2 & 3 & 4 & 5\\
6 & -7 & 8& 9& 10\\
11 & 12 & -13 & 14 & 15\\
16 & 17 & 18& -19 & 20\\
21 & 22 & 23 & 24 & -25
\end{bmatrix}
\] Let $I$ be the $5\times 5$ identity matrix and let $B$ be a $5\times 5$ matrix.
Suppose that $ABA^{-1}=I$.
Then determine the matrix $B$.
(Linear Algebra Midterm Exam 1, the Ohio State University)

Problem 569
For an $m\times n$ matrix $A$, we denote by $\mathrm{rref}(A)$ the matrix in reduced row echelon form that is row equivalent to $A$.
For example, consider the matrix $A=\begin{bmatrix} 1 & 1 & 1 \\ 0 &2 &2 \end{bmatrix}$ Then we have \[A=\begin{bmatrix} 1 & 1 & 1 \\ 0 &2 &2 \end{bmatrix} \xrightarrow{\frac{1}{2}R_2} \begin{bmatrix} 1 & 1 & 1 \\ 0 &1 & 1 \end{bmatrix} \xrightarrow{R_1-R_2} \begin{bmatrix} 1 & 0 & 0 \\ 0 &1 &1 \end{bmatrix}\] and the last matrix is in reduced row echelon form. Hence $\mathrm{rref}(A)=\begin{bmatrix} 1 & 0 & 0 \\ 0 &1 &1 \end{bmatrix}$.
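The row reduction above can be automated; a minimal exact-arithmetic rref over rationals (a sketch, not part of the original post) reproduces $\mathrm{rref}(A)$ for the example:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form, computed exactly over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # move the pivot row up
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]    # scale the pivot to 1
        for i in range(rows):
            if i != r and M[i][c] != 0:   # clear the rest of the column
                factor = M[i][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

print(rref([[1, 1, 1], [0, 2, 2]]))   # [[1, 0, 0], [0, 1, 1]]
```

Using `Fraction` avoids the floating-point roundoff that would otherwise make pivot tests like `M[i][c] != 0` unreliable.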
Find an example of matrices $A$ and $B$ such that
\[\mathrm{rref}(AB)\neq \mathrm{rref}(A) \mathrm{rref}(B).\] Problem 564
Let $A$ and $B$ be $n\times n$ skew-symmetric matrices. Namely $A^{\trans}=-A$ and $B^{\trans}=-B$.
(a) Prove that $A+B$ is skew-symmetric. (b) Prove that $cA$ is skew-symmetric for any scalar $c$. (c) Let $P$ be an $m\times n$ matrix. Prove that $P^{\trans}AP$ is skew-symmetric. (d) Suppose that $A$ is real skew-symmetric. Prove that $iA$ is an Hermitian matrix. (e) Prove that if $AB=-BA$, then $AB$ is a skew-symmetric matrix. (f) Let $\mathbf{v}$ be an $n$-dimensional column vector. Prove that $\mathbf{v}^{\trans}A\mathbf{v}=0$.
(g) Suppose that $A$ is a real skew-symmetric matrix and $A^2\mathbf{v}=\mathbf{0}$ for some vector $\mathbf{v}\in \R^n$. Then prove that $A\mathbf{v}=\mathbf{0}$. Problem 563
Let
\[\mathbf{v}_1=\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} 1 \\ a \\ 5 \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 0 \\ 4 \\ b \end{bmatrix}\] be vectors in $\R^3$.
Determine a condition on the scalars $a, b$ so that the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is linearly dependent.
Problem 562
An $n\times n$ matrix $A$ is called
nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements. (a) If $A$ and $B$ are $n\times n$ nonsingular matrix, then the product $AB$ is also nonsingular. (b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then: The matrix $B$ is nonsingular. The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.) Problem 561
Let $A$ be a singular $n\times n$ matrix.
Let \[\mathbf{e}_1=\begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \mathbf{e}_2=\begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \dots, \mathbf{e}_n=\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}\] be unit vectors in $\R^n$.
Prove that at least one of the following matrix equations
\[A\mathbf{x}=\mathbf{e}_i\] for $i=1,2,\dots, n$, must have no solution $\mathbf{x}\in \R^n$. The Matrix $[A_1, \dots, A_{n-1}, A\mathbf{b}]$ is Always Singular, Where $A=[A_1,\dots, A_{n-1}]$ and $\mathbf{b}\in \R^{n-1}$. Problem 560
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector.
Then the product $A\mathbf{b}$ is an $n$-dimensional vector. Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$.
Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$.
Prove $\mathbf{x}^{\trans}A\mathbf{x} \geq 0$ and determine those $\mathbf{x}$ such that $\mathbf{x}^{\trans}A\mathbf{x}=0$ Problem 559
For each of the following matrix $A$, prove that $\mathbf{x}^{\trans}A\mathbf{x} \geq 0$ for all vectors $\mathbf{x}$ in $\R^2$. Also, determine those vectors $\mathbf{x}\in \R^2$ such that $\mathbf{x}^{\trans}A\mathbf{x}=0$.
(a) $A=\begin{bmatrix} 4 & 2\\ 2& 1 \end{bmatrix}$. (b) $A=\begin{bmatrix} 2 & 1\\ 1& 3 \end{bmatrix}$.
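As a sanity check (an illustration, not the requested proof), both quadratic forms can be evaluated on a grid: (a) factors as $(2x_1+x_2)^2$, which vanishes exactly on the line $x_2=-2x_1$, while (b) equals $(x_1+x_2)^2+x_1^2+2x_2^2$, which vanishes only at the origin:

```python
def quad(A, x):
    """Evaluate the quadratic form x^T A x for a 2x2 matrix A."""
    return sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))

Aa = [[4, 2], [2, 1]]   # x^T Aa x = (2*x1 + x2)**2
Ab = [[2, 1], [1, 3]]   # x^T Ab x = (x1 + x2)**2 + x1**2 + 2*x2**2

grid = [i / 2 for i in range(-6, 7)]
assert all(quad(Aa, (u, v)) >= 0 for u in grid for v in grid)
assert all(quad(Ab, (u, v)) >= 0 for u in grid for v in grid)

# (a) vanishes on the line x2 = -2*x1; (b) vanishes only at the origin.
print(quad(Aa, (1.0, -2.0)), quad(Ab, (0.0, 0.0)))
```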
|
Most of the phenomena studied in the domain of Engineering and Science are periodic in nature; for instance, current and voltage in an alternating current circuit. These periodic functions can be analyzed into their constituent components (fundamentals and harmonics) by a process called Fourier analysis.
Periodic functions occur frequently in the problems studied during Engineering education. Their representation in terms of simple periodic functions such as the sine and cosine functions leads to the Fourier series (FS). The Fourier series is a very powerful tool in connection with various problems involving partial differential equations. In this article, let us discuss Fourier analysis with examples.
What is the Fourier Series?
A Fourier series is an expansion of a periodic function f(x) in terms of an infinite sum of sines and cosines. Fourier Series makes use of the orthogonality relationships of the sine and cosine functions.
Laurent Series yield Fourier Series
A difficult thing to understand and/or motivate is the fact that arbitrary periodic functions have Fourier series representations. In this section, we prove that periodic analytic functions have such a representation using Laurent expansions.
Fourier Analysis for Periodic Functions
The Fourier series representation of analytic functions is derived from Laurent expansions. Elementary complex analysis is used to derive additional fundamental results in harmonic analysis, including the representation of C∞ periodic functions by Fourier series, the representation of rapidly decreasing functions by Fourier integrals, and Shannon's sampling theorem. The ideas are classical and of transcendent beauty.
A function is periodic of period L if f(x+L) = f(x) for all x in the domain of f. The smallest positive value of L is called the fundamental period.
The trigonometric functions sin x and cos x are examples of periodic functions with fundamental period 2π and tan x is periodic with fundamental period π. A constant function is a periodic function with arbitrary period L.
It is easy to verify that if the functions f1, . . . , fn are periodic of period L, then any linear combination
\(c_{1}f_{1}\left ( x \right )+…+c_{n}f_{n}\left ( x \right )\)
is also periodic. Furthermore, if the infinite series
\(\frac{1}{2} a_{0}+ \sum_{n=1}^{\infty}\left(a_{n}\cos\frac{n\pi x}{L}+b_{n}\sin\frac{n\pi x}{L}\right)\)
consisting of 2L-periodic functions converges for all x, then the function to which it converges will be periodic of period 2L. There are two symmetry properties of functions that will be useful in the study of Fourier series.
Even and Odd Function
A function f(x) is said to be even if f(−x) = f(x).
The function f(x) is said to be odd if f(−x) = −f(x).
Graphically, even functions have symmetry about the y-axis, whereas odd functions have symmetry around the origin.
Examples:
Sums of odd powers of x are odd: \(5x^{3} - 3x\)
Sums of even powers of x are even: \(-x^{6} + 4x^{4} + x^{2} - 3\)
\(\sin x\) is odd, and \(\cos x\) is even
The product of two odd functions is even: \(x \sin x\) is even
The product of two even functions is even: \(x^{2}\cos x\) is even
The product of an even function and an odd function is odd: \(\sin x \cos x\) is odd
Note:
To find a Fourier series, it is sufficient to calculate the integrals that give the coefficients \(a_{0}\), \(a_{n}\), and \(b_{n}\) and plug them into the big series formula.
Typically, f(x) will be piecewise-defined.
Big advantage that Fourier series have over Taylor series: the function f(x) can have discontinuities.
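The coefficient integrals can be approximated numerically; a sketch (function and variable names are my own) using a midpoint rule for the sawtooth \(f(x) = x\) on \([-\pi, \pi]\), whose known coefficients are \(a_{n} = 0\) and \(b_{n} = 2(-1)^{n+1}/n\):

```python
import math

def fourier_coeffs(f, L, n_max, steps=20000):
    """Midpoint-rule estimates of a0, a_n, b_n for a 2L-periodic f on [-L, L]."""
    h = 2 * L / steps
    xs = [-L + (i + 0.5) * h for i in range(steps)]
    a0 = sum(f(x) for x in xs) * h / L
    a = [sum(f(x) * math.cos(n * math.pi * x / L) for x in xs) * h / L
         for n in range(1, n_max + 1)]
    b = [sum(f(x) * math.sin(n * math.pi * x / L) for x in xs) * h / L
         for n in range(1, n_max + 1)]
    return a0, a, b

# Sawtooth f(x) = x on [-pi, pi): an odd function, so every a_n vanishes
# and b_n = 2 * (-1)**(n+1) / n.
a0, a, b = fourier_coeffs(lambda x: x, math.pi, 4)
print(a0, a, b)
```

The vanishing \(a_{n}\) illustrate the even/odd rules above: the integrand \(x\cos nx\) is odd, so its integral over a symmetric interval is zero.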
|
Yes, except for the specific case in which the state is in an eigenstate of the measurement operator.

Measurements with deterministic outcomes

More specifically, for any quantum (pure) state $|\psi\rangle$, there is just one class of measurements that can be performed without changing the state, and these are the measurements which ask questions of the ...
In quantum mechanics in fact there are no isolated systems. Interactions with the environment are impossible to avoid, partly because of the real influence of small perturbations, partly because of quantum entanglement. Saying this, it is clear that the measurement process, based on interaction between a quantum system and a macroscopic measurement apparatus, is ...
Occam's razor favors simplicity, but simplicity is subjective. I've heard people use Occam's razor for exactly the opposite purpose, to argue that it's extravagant to talk about many worlds, so we should favor the Copenhagen interpretation (CI) over the many-worlds interpretation (MWI).I think wrangling over CI versus MWI is pretty pointless, because we ...
Consider the Many-Worlds approach.You have a wavefunction (an immensely complicated one, of course). Your amplitude for having heard a click steadily grows in magnitude.No paradox if you look at it like this.
Your statements treat the quantum mechanical distribution as physical, whereas it is a mathematical function fitting the boundary condition of your experiment, i.e. it is the mathematical function describing a particle's probability of decay.Probabilities are the same in classical mechanics, in economics in gambling, in population interactions. Take the ...
I think that “listening” even in the case of silence is already the measurement. You can only hope to hear something when there is a medium (air) that will carry the sound waves. This medium causes a continuous interaction between you and the Geiger counter. Only without the medium there is no interaction but then you can also not tell that the Geiger ...
My take on this is that in the original thought experiment, you don't get to monitor the detector. When the detector detects, it kills the cat. But it doesn't tell you then. You only find out when you open the box.If it tells you immediately, then you know immediately. And then there's the question whether the detector detects 100%.If the Geiger counter ...
Good question. The textbook formalism in Quantum Mechanics & QFT just doesn't deal with this problem (as well as a few others). It deals with cases where there is a well-defined moment of measurement, and a variable with a corresponding hermitian operator $x, p, H$, etc is measured. However there are questions which can be asked, like this one, which ...
No, the detector is not always collapsing the state.When the particle is in an undecayed state its wave function is physically localised with a vanishingly small amplitude in the region of the detector, so the detector doesn't interact with it and isn't 'always' measuring it. It is only when the particle's state evolves to the point at which it has a ...
The idea of the collapse of the state is not a fundamental part of quantum mechanics. It's a feature of the Copenhagen interpretation (CI). The CI is not the only way to think about quantum mechanics.Even within CI, it's not necessarily true that measurement must disturb the system, depending on what you mean by "measurement" and "disturb." In the Stern-...
Answering the question in the title: a measurement process is intrinsically non-unitary.One way to see this is to realise that the unitarity of a process is equivalent to its being reversible.A measurement process is intrinsically non-reversible, as some information gets lost. For example, measuring $(|0\rangle+|1\rangle)/\sqrt2$ in the computational ...
As the other answers point out, your question is very confusing and it's not entirely clear what you're asking. I think what you're asking is what happens if you measure whether the system is in any of the states $|\psi_i\rangle$ with $i \geq 3$, i.e. you measure the value of the observable that is the projection operator$$\hat{P} = \sum_{i \geq 3} |\psi_i\...
In QM, a measurement always amounts to a choice of a basis (or more generally, a set of projectors summing to the identity) with respect to which the wavefunction collapses. In other words, any measurement of a state $|\Psi\rangle$ can be described via a set of orthogonal projectors $P_k$ such that $\sum_k P_k=I$, by writing the state as $|\Psi\rangle=\sum_k ...
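The projective measurement just described can be sketched numerically. A minimal pure-Python illustration (the 5-dimensional state and its amplitudes are invented for the example):

```python
import math

# Illustrative normalized state |psi> in a 5-dimensional basis.
amplitudes = [0.5, 0.5, 0.5, 0.3, 0.4]
norm = math.sqrt(sum(a * a for a in amplitudes))
amplitudes = [a / norm for a in amplitudes]

# Probability of the "yes" outcome for the projector onto i >= 3.
p_yes = sum(a * a for a in amplitudes[3:])

# Post-measurement state on "yes": project onto the subspace, renormalise.
projected = [0.0, 0.0, 0.0] + amplitudes[3:]
post_state = [a / math.sqrt(p_yes) for a in projected]

print(round(p_yes, 3))  # -> 0.25
```

The "yes" probability is the squared norm of the projected component, and the collapsed state is that component rescaled to unit norm, exactly the $P_k|\Psi\rangle$ recipe above.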
|
Answer
Stone house: 9 hours Wood house: 3 hours
Work Step by Step
We divide the value of Q by the rate at which heat is absorbed. For the stone house: $ \Delta t = \frac{ \Delta Q}{10^5} = \frac{75 \times 2000 \times 0.2 \times 30}{10^5 } = 9 \ \text{hours}$. For the wood house: $\Delta t = \frac{ \Delta Q}{10^5} = \frac{15 \times 2000 \times 0.33 \times 30}{10^5 } \approx 3 \ \text{hours}$
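The arithmetic can be checked with a short script (variable names are mine; the numbers are those of the worked solution above):

```python
RATE = 1e5  # heat absorbed per hour, from the solution above

stone_hours = 75 * 2000 * 0.2 * 30 / RATE   # about 9 hours
wood_hours = 15 * 2000 * 0.33 * 30 / RATE   # about 3 hours
print(stone_hours, wood_hours)
```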
|
NTS Abstracts Spring 2019
Jan 23
Yunqing Tang Reductions of abelian surfaces over global function fields For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.

Jan 24
Hassan-Mao-Smith--Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and }4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial-time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.

Jan 31
Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindel\"of Hypothesis, and work of Bettin, Chandee, and Radziwi\l\l. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities.

Feb 7
Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$trace (\rho(g))/dim (\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).
Feb 14
Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant which parametrizes elliptic curves with two torsions (X_0(2)) has some interesting properties, some similar to that of the j-invariants, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of the properties. This is joint work with Hongbo Yin and Peng Yu.

Feb 28
Brian Lawrence Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh.

March 7
Masoud Zargar Sections of quadrics over the affine line Abstract: Suppose we have a quadratic form Q(x) in d\geq 4 variables over F_q[t] and f(t) is a polynomial over F_q. We consider the affine variety X given by the equation Q(x)=f(t) as a family of varieties over the affine line A^1_{F_q}. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over F_q((1/t)). Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T. Sardari

March 14
Elena Mantovan p-adic automorphic forms, differential operators and Galois representations A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator Theta. This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieties. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross.
This talk is based on joint work with Eishen, and also with Fintzen--Varma, and with Flander--Ghitza--McAndrew.
March 28
Adebisi Agboola Relative K-groups and rings of integers Abstract: Suppose that F is a number field and G is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer to both the inverse Galois problem for F and G and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of F. It also implies the weak Malle conjecture on counting tame G-extensions of F according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when G is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting. While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture.

April 4
Wei-Lun Tsai Hecke L-functions and $\ell$ torsion in class groups Abstract: The canonical Hecke characters in the sense of Rohrlich form a
set of algebraic Hecke characters with important arithmetic properties. In this talk, we will explain how one can prove quantitative nonvanishing results for the central values of their corresponding L-functions using methods of an arithmetic statistical flavor. In particular, the methods used rely crucially on recent work of Ellenberg, Pierce, and Wood concerning bounds for $\ell$-torsion in class groups of number fields. This is joint work with Byoung Du Kim and Riad Masri.
April 11
Taylor Mcadam Almost-prime times in horospherical flows Abstract: Equidistribution results play an important role in dynamical systems and their applications in number theory. Often in such applications it is desirable for equidistribution to be effective (i.e. the rate of convergence is known). In this talk I will discuss some of the history of effective equidistribution results in homogeneous dynamics and give an effective result for horospherical flows on the space of lattices. I will then describe an application to studying the distribution of almost-prime times in horospherical orbits and discuss connections of this work to Sarnak’s Mobius disjointness conjecture.

April 18
Ila Varma Malle's Conjecture for octic $D_4$-fields. Abstract: We consider the family of normal octic fields with Galois group $D_4$, ordered by their discriminant. In forthcoming joint work with Arul Shankar, we verify the strong Malle conjecture for this family of number fields, obtaining the order of growth as well as the constant of proportionality. In this talk, we will discuss and review the combination of techniques from analytic number theory and geometry-of-numbers methods used to prove these results.

April 25
Michael Bush Interactions between group theory and number theory Abstract: I'll survey some of the ways in which group theory has helped us understand extensions of number fields with restricted ramification and why one might care about such things. Some of Nigel's contributions will be highlighted. A good portion of the talk should be accessible to those other than number theorists.

April 25
Rafe Jones Eventually stable polynomials and arboreal Galois representations Abstract: Call a polynomial defined over a field K eventually stable if its nth iterate has a uniformly bounded number of irreducible factors (over K) as n grows. I’ll discuss some far-reaching conjectures on eventual stability, and recent work on various special cases. I’ll also describe some natural connections between eventual stability and arboreal Galois representations, which Nigel Boston introduced in the early 2000s.

April 25
Jen Berg Rational points on conic bundles over elliptic curves with positive rank Abstract: Varieties that fail to have rational points despite having local points for each prime are said to fail the Hasse principle. A systematic tool accounting for these failures is called the Brauer-Manin obstruction, which uses the Brauer group, Br X, to preclude the existence of rational points on a variety X. In this talk, we'll explore the arithmetic of conic bundles over elliptic curves of positive rank over a number field k. We'll discuss the insufficiency of the known obstructions to explain the failures of the Hasse principle for such varieties over a number field. We'll further consider questions on the distribution of the rational points of X with respect to the image of X(k) inside of the rational points of the elliptic curve E. In the process, we'll discuss results on a local-to-global principle for torsion points on elliptic curves over Q. This is joint work in progress with Masahiro Nakahara.

April 25
Judy Walker Derangements of Finite Groups Abstract: In the early 1990’s, Nigel Boston taught an innovative graduate-level group theory course at the University of Illinois that focused on derangements (fixed-point-free elements) of transitive permutation groups. The course culminated in the writing of a 7-authored paper that appeared in Communications in Algebra in 1993. This paper contained a conjecture that was eventually proven by Fulman and Guralnick, with that result appearing in the Transactions of the American Mathematical Society just last year.

May 2
Melanie Matchett Wood Unramified extensions of random global fields Abstract: For any finite group Gamma, I will give a "non-abelian-Cohen-Martinet Conjecture," i.e. a conjectural distribution on the "good part" of the Galois group of the maximal unramified extension of a global field K, as K varies over all Galois Gamma extensions of the rationals or rational function field over a finite field. I will explain the motivation for this conjecture based on what we know about these maximal unramified extensions (very little), and how we prove, in the function field case, as the size of the finite field goes to infinity, that the moments of the Galois groups of these maximal unramified extensions match our conjecture. This talk covers work in progress with Yuan Liu and David Zureick-Brown.

May 9
David Zureick-Brown Arithmetic of stacks Abstract: I'll discuss several diophantine problems that naturally lead one to study algebraic stacks, and discuss a few results.
|
This answer says
A helicopter uses a LOT more fuel hovering than it does in forward flight.
Is this correct? Why?
Yes it is correct that helicopters use more fuel when hovering: the engine needs to apply more power to overcome drag. Here is a graph of the engine power required for different airspeeds, from J. Gordon Leishman, Principles Of Helicopter Aerodynamics:
The line for total power goes down between 0 and 70 kts with increasing airspeed; this is caused by the line for induced power: the power required to overcome the induced drag of the helicopter blade. The total required engine power is the sum of induced power, rotor profile power, and airframe parasite power.
Induced power is dominant in the hover. Induced drag is caused by the backwards tilt of the lift vector: the higher the angle between blade and free stream, the more the vector is tilted backwards, which causes both loss of lift and increase of drag. The equation for lift L is:
$$ L = C_L \cdot \frac{1}{2} \cdot \rho \cdot V^2 \cdot S$$
and at a given altitude, the two variables here are $C_L$ (lift coefficient) and $V$ (airspeed at the blade). $C_L$ is an approximately linear function of angle of attack at the blade, so lift increases linearly with blade tilt-back and quadratically with increasing airspeed over the blade.
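To make the quadratic dependence concrete, here is a small sketch of the lift equation above (the density, area, and lift coefficient are illustrative, not from any particular rotor):

```python
def lift(C_L, rho, V, S):
    """Lift equation from the text: L = C_L * (1/2) * rho * V**2 * S."""
    return C_L * 0.5 * rho * V**2 * S

rho = 1.225  # sea-level air density, kg/m^3
S = 1.0      # blade reference area, m^2 (illustrative)
C_L = 0.5    # lift coefficient (illustrative)

# Doubling the airspeed over the blade quadruples the lift:
L1 = lift(C_L, rho, 100.0, S)
L2 = lift(C_L, rho, 200.0, S)
print(L2 / L1)  # -> 4.0
```

This is why the advancing blade, seeing rotational speed plus airspeed, needs so much less tilt-back than the retreating blade to deliver the same lift.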
Above graph from Leishman shows the velocity distribution over the blades when hovering, and at airspeed. Quite a complicated situation - when hovering, the airspeed reaching the blade is only the rotational speed of the rotor, at forward speed the blade going forward has rotational speed plus airspeed.
The helicopter does not roll over: both the advancing and the retreating blade deliver the same amount of lift, with the retreating blade tilted back more than it was in the hover. But the advancing blade is tilted back a lot less: airspeed has a quadratic influence.
Note that the circle in the plot at fwd airspeed is not stalled flow, but reverse flow: the air streams in at the back of the blade. So drag is now negative, the airstream helps to propel the blade! However there is loss of lift in the reverse flow area.
Induced power reduces with airspeed at first according to the simple 1-D impulse consideration (more air mass through the disk), and later increases as the disk is increasingly tilted forward and must do more work to overcome losses from rotor profile drag, airframe parasitic drag, and compressibility drag.
There is also an interference effect of the downwash over the fuselage: in the hover the air streams straight down, while in forward flight the rotor wash is more aligned with the fuselage, catching more of a streamline shape. Parasitic drag is of course dominant at top speed, while offloading the rotor by using fixed wing surfaces reduces the induced power at high speeds - but from hover to moderate forward speeds it is purely the reduction in lift induced power that creates translational lift.
Yes, it is correct, if the helicopter doesn’t fly too fast. A helicopter will produce the necessary lift most efficiently at a moderate forward speed.
In a hover all the airflow which is available for lift creation must be generated by the rotation of the main rotor. This means that a small amount of air must be accelerated by a lot. If the helicopter adds forward speed, it can achieve a higher mass flow through the rotor, and now less acceleration of air is needed to achieve the same lift. This improves the efficiency of lift creation. If the helicopter goes faster than its speed for maximum rate of climb, aerodynamic drag grows too high and reduces efficiency again.
At high speed, the tips of the advancing blades might reach transonic speeds, which produces a noticeable drag increase, and the inner part of the retreating blade will see very little airspeed; to still produce lift, the whole blade will pitch to a high angle of attack, causing the inner part to stall, which again produces a noticeable drag increase. There is a sweet spot between hover and fast speed where the required power reaches a minimum.
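The mass-flow argument can be sketched with simple momentum (actuator-disk) theory; this is a textbook idealisation, not taken from the answer above, and the numbers are illustrative:

```python
import math

def hover_induced_power(T, rho, A):
    """Ideal induced power from momentum (actuator-disk) theory:
    induced velocity v_i = sqrt(T / (2 * rho * A)), power P = T * v_i."""
    v_i = math.sqrt(T / (2.0 * rho * A))
    return T * v_i

T = 50_000.0  # thrust = weight of a medium helicopter, N (illustrative)
rho = 1.225   # air density, kg/m^3
A = 150.0     # rotor disk area, m^2 (illustrative)

# Working on twice the air mass flow (modelled here as twice the disk
# area) cuts the induced power for the same thrust by a factor sqrt(2).
P1 = hover_induced_power(T, rho, A)
P2 = hover_induced_power(T, rho, 2.0 * A)
print(P2 / P1)  # -> about 0.707
```

Forward flight raises the mass flow through the disk much like a larger disk would, which is why induced power drops as the helicopter translates.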
Yeah, I'm not a physics student, but I work on Black Hawks. If you conceptualise a helicopter as just a main rotor disc producing lift, then Peter Kampf's answer about mass-flow through the rotor disc is the greatest factor. (Remember the disc is tilted forward as the helicopter moves forward). However, your question actually asked why do they burn less fuel: well, thousands of little design features on the airframe each help save precious pounds of fuel in forward flight. (You might want to do a Google image search to look at while you read this.)
The Black Hawk has a cambered vertical fin which unloads the tail rotor above 60kts, and this torque is redirected into the main rotor. It has a variable stabilator which changes angle with fwd airspeed (= changing main rotor downwash angle) in order to provide lift, further unloading the main rotor. The tail rotor is canted at an angle and spins backwards into the main rotor wash, again to unload the main rotor, freeing up more power for forward speed. It has on-flight computers and a mixer unit which flattens out the airframe in flight, so that it doesn't present a flat cabin roof into the airstream at high forward airspeeds. The flatter you can keep the disc into the relative airflow, the smaller the pitching angles of the blades, and the less parasitic drag from the rotor disc.
The main rotor blade tips are swept backwards to delay the onset of transonic tip drag as the advancing blade sees higher relative airspeeds in forward flight. Other helicopters have airframe fairings that generate lift off the cabin body in forward flight. All of these aerodynamic savings are present in forward flight, but not in the hover. And lastly, your turbine engine air inlets will benefit from some ram-air effect in forward flight, which means burning less fuel for the same torque. Every helicopter in the world uses some or all of these features to save fuel in flight, and if you compare generations of helicopters (Bell 47, Bell UH-1, Bell 412, Black Hawk), you can see these features gradually develop.
There are other considerations when a helicopter is hovering just off the ground, but I've tried to list just some of the ways helicopters are designed to save fuel in flight. Hope some of this helps.
The concept is known as "translational lift". When moving in forward flight, a helicopter's rotor disc acts a lot like an airplane's wing - it has a significant lift-to-drag ratio. The required thrust to maintain level flight is reduced by that ratio, and therefore necessary engine power and fuel flow are also reduced. In hover, the engine+rotor system has to supply thrust fully equal to the weight of the helicopter.
When in a hover, the air has more time to set up into an induced wash from further upwards, which translates into a higher downflow speed by the time the induced wash reaches the plane of the rotor. When in translational flight, the rotor is continuously moving into clean air, so the downflow speed by the time the air reaches the plane of the rotor is less than in a hover. Power equals force times speed; in this case consider the power output to the air. In both cases the force is the same (equal to the weight of the helicopter), but in a hover the downwash speed through the plane of the rotor is greater than during translational flight, so the required power in a hover is greater than in translational flight, until the translational drag becomes an issue.
Another issue is tip vortices. In a hover, these can get quite large, again due to all the time for the vortices to set up and the rotor tips moving into the vortices induced by the other rotor tip(s). In translational flight, the vortices are "washed" off by the relative horizontal wind, reducing the size of the tip vortices.
Another point to consider is whether the helicopter has supplemental wings. A rather famous example is the Mi-24 family of attack helicopters, where the weapon pylons work as wings.
"At high speed, the wings provide considerable lift (up to a quarter of total lift)."
At high altitudes with a full load, the recommended lift-off procedure is to gain horizontal speed so the wings pick up some lift.
If gravity were the only force acting on an aircraft, then at each moment in time the aircraft would be gaining a certain amount of downward momentum. So to maintain altitude, the aircraft must transfer that momentum to some other mass (i.e. air). That is, there's going to be some air that starts with zero velocity (in the simplest case) and ends up with some downward velocity. Since momentum is mass times velocity, the velocity that the air has to be accelerated to will be inversely proportional to the mass of air accelerated: velocity = momentum/mass. However, the kinetic energy of that air is $mv^2/2$. When we substitute the velocity into that equation, we get energy = mass × (momentum/mass)²/2. One power of mass cancels out, giving energy = momentum²/(2 × mass). Thus, doubling the amount of air accelerated downward halves the energy required. When an airplane is traveling at high velocity, a large amount of air is coming into contact with its wings, meaning that it does not have to expend much energy to generate lift (of course, the faster it travels, the more drag it experiences, giving a lift-drag trade-off). A helicopter experiences something similar: when it is traveling horizontally, it naturally moves into new air. When it hovers, there's less air to accelerate downward, and what air there is has to be pulled towards the rotor by the rotor's own effort.
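The scaling argument above checks out numerically (arbitrary units):

```python
def energy_to_transfer(momentum, air_mass):
    """Kinetic energy given to air of mass m carrying momentum p:
    v = p / m, so E = m * v**2 / 2 = p**2 / (2 * m)."""
    v = momentum / air_mass
    return air_mass * v ** 2 / 2.0

p = 10.0  # required downward momentum (arbitrary units)
E_small = energy_to_transfer(p, 1.0)   # little air, high velocity
E_large = energy_to_transfer(p, 2.0)   # twice the air, half the velocity
print(E_small, E_large)  # -> 50.0 25.0
```

Same momentum delivered, half the energy spent when twice the air mass is worked on.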