You ask:
A broader question is what is the relationship between the size of the slits, the distance between the slits, and the observed interference pattern?
The answer here covers your question.
The de Broglie wavelength describes the effective wavelength that a particle has when it behaves as a wave.
To decide what wavelength an electron should have so as to be able to see the interference pattern, one has to consider the slit separation and the distance from the slits to the screen.
The interference pattern is characterized by the spacing from one bright fringe to the next:
$$\Delta y = \frac{\lambda D}{d}$$
where $D$ is the distance from the slits to the screen (or detector), $d$ is the spacing between the slits, and $\lambda$ is the de Broglie wavelength.
Let's assume we want to use electrons for our experiment. We build a setup with the screen placed 1 meter from the slits and the two slits 1 millimeter apart (maybe we found this equipment in a storage closet in the physics department...). With this setup, the fringe spacing on the screen is 1000 times the de Broglie wavelength of the incoming electrons. We want to be able to actually see the interference pattern in our detectors, so let's ask for a fringe spacing of about 1 millimeter (this would depend on the detectors, of course). That means the de Broglie wavelength of our electrons has to be about 1 micrometer. Now we go back to the de Broglie relation $\lambda = h/p$: we know $h$ and we now know $\lambda$, so we can calculate what $p$ should be. Since we know the mass of the electron, calculating the momentum is essentially the same as calculating the speed; for our experiment, the electron needs to be moving at only about 730 m/s. That is extremely slow for an electron (thermal electrons move at hundreds of kilometers per second), and preparing such a slow, monoenergetic beam is difficult.
So experiments are not easy with electrons.
For the buckyball experiment, the researchers used slits about 100 nanometers apart (a nanometer is one millionth of a millimeter) and shot the buckyballs through the slits at about 200 meters per second (roughly 450 mph), far slower than the speed of light.
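Working through the arithmetic above can be sketched in a few lines; this assumes the setup described in the text (D = 1 m, d = 1 mm, desired fringe spacing 1 mm):

```python
# Working through the arithmetic of dy = lam * D / d for the setup above.
h = 6.626e-34        # Planck constant, J*s
m_e = 9.109e-31      # electron mass, kg

D = 1.0              # slit-to-screen distance, m
d = 1e-3             # slit separation, m
dy = 1e-3            # desired fringe spacing, m

lam = dy * d / D     # required de Broglie wavelength
p = h / lam          # de Broglie relation: p = h / lam
v = p / m_e          # non-relativistic speed (fine at this scale)

print(f"wavelength = {lam:.1e} m")  # 1.0e-06 m
print(f"speed = {v:.0f} m/s")       # 727 m/s
```

The required wavelength comes out to about a micrometer, corresponding to electrons moving at roughly 730 m/s.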
|
If I have volatility smile quoted with respect to the delta of an option on the forward, how can I convert this delta into the moneyness or strike of the option?
Is there any built-in function in the Matlab Financial Toolbox?
The call delta in a Black framework is: $$\Delta = N(d_1)$$ with $d_1=\frac{\ln(F_t(T)/K)+(T-t)\frac{\sigma^2}{2}}{\sigma\sqrt{T-t}}$.
Then the strike of the option is: $$K=F_t(T) e^{-(N^{-1}(\Delta)+1/2) \sigma \sqrt{T-t}}$$
For a put, the delta is $\Delta = N(d_1)-1$, so the same inversion gives:
$$K=F_t(T)\, \exp\!\left(-N^{-1}(\Delta+1)\,\sigma\sqrt{T-t}+\tfrac{1}{2}\sigma^2 (T-t)\right)$$
In Matlab it can be solved numerically with fzero:
fzero(@(Strike) blsdelta(Price,Strike,Rate,Time,Volatility,Yield)-Delta, K0)
where the initial guess can be
K0 = Price
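Alternatively, the closed-form inversion can be coded directly without a root-finder. Here is a sketch in Python; the function name and the round-trip check are mine, and the Black model on the forward is assumed:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def strike_from_delta(F, delta, sigma, tau, is_call=True):
    """Invert the Black-76 forward delta N(d1) (call) or N(d1)-1 (put) for K.

    F: forward, delta: option delta, sigma: implied vol, tau: T - t in years.
    """
    d1 = NormalDist().inv_cdf(delta if is_call else delta + 1.0)
    return F * exp(-d1 * sigma * sqrt(tau) + 0.5 * sigma**2 * tau)

# Round-trip check: recover the delta from the computed strike
F, sigma, tau = 100.0, 0.2, 1.0
K = strike_from_delta(F, 0.25, sigma, tau)
d1 = (log(F / K) + 0.5 * sigma**2 * tau) / (sigma * sqrt(tau))
print(round(NormalDist().cdf(d1), 6))  # 0.25
```

The round trip reproduces the input delta, which is a quick way to validate the sign conventions.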
|
Coupling Heat Transfer with Subsurface Porous Media Flow
In the second part of our Geothermal Energy series, we focus on the coupled heat transport and subsurface flow processes that determine the thermal development of the subsurface due to geothermal heat production. The described processes are demonstrated in an example model of a hydrothermal doublet system.
Deep Geothermal Energy: The Big Uncertain Potential
One of the greatest challenges in geothermal energy production is minimizing the prospecting risk. How can you be sure that the desired production site is appropriate for, let’s say, 30 years of heat extraction? Usually, only very little information is available about the local subsurface properties and it is typically afflicted with large uncertainties.
Over the last decades, numerical models have become an important tool for estimating risks by performing parametric studies within reasonable ranges of uncertainty. Today, I will give a brief introduction to the mathematical description of the coupled subsurface flow and heat transport problem that needs to be solved in many geothermal applications. I will also show you how to use COMSOL software as an appropriate tool for studying and forecasting the performance of (hydro-)geothermal systems.
Governing Equations in Hydrothermal Systems
The heat transport in the subsurface is described by the heat transport equation:
(1)

$$(\rho C_p)_{eq}\frac{\partial T}{\partial t}+\rho C_p\,\mathbf{u}\cdot\nabla T=\nabla\cdot\left(k_{eq}\nabla T\right)+Q$$

Heat is balanced by conduction and convection processes and can be generated or lost through the source term, $Q$. A special feature of the Heat Transfer in Porous Media interface is the implemented Geothermal Heating feature, represented as a domain condition: $Q_{geo}$.
There is also another feature that makes the life of a geothermal energy modeler a little easier. It’s possible to implement an averaged representation of the thermal parameters, composed from the rock matrix and the groundwater using the matrix volume fraction, \theta, as a weighting factor. You may choose between volume and power law averaging for several immobile solids and fluids.
In the case of volume averaging, the volumetric heat capacity in the heat transport equation becomes:

(2)

$$(\rho C_p)_{eq}=\theta\,\rho_s C_{p,s}+(1-\theta)\,\rho C_p$$

and the thermal conductivity becomes:

(3)

$$k_{eq}=\theta\,k_s+(1-\theta)\,k$$

where the subscript $s$ denotes the properties of the solid rock matrix.
Solving the heat transport properly requires incorporating the flow field. Generally, there can be various situations in the subsurface requiring different approaches to describe the flow mathematically. If the focus is on the micro scale and you want to resolve the flow in the pore space, you need to solve the creeping flow or Stokes flow equations. In partially saturated zones, you would solve Richards’ equation, as it is often done in studies concerning environmental pollution (see our past Simulating Pesticide Runoff, the Effects of Aldicarb blog post, for instance).
However, the fully-saturated and mainly pressure-driven flows in deep geothermal strata are sufficiently described by Darcy’s law:
(4)

$$\mathbf{u}=-\frac{\kappa}{\mu}\nabla p$$
where the velocity field, \mathbf{u}, depends on the permeability, \kappa, and the fluid's dynamic viscosity, \mu, and is driven by the gradient of the pressure, p. Darcy's law is then combined with the continuity equation:
(5)

$$\frac{\partial}{\partial t}\left(\rho\,\epsilon_p\right)+\nabla\cdot\left(\rho\mathbf{u}\right)=Q_m$$
If your scenario concerns long geothermal time scales, the time dependence due to storage effects in the flow is negligible. Therefore, the first term on the left-hand side of the equation above vanishes because the density, \rho, and the porosity, \epsilon_p, can be assumed to be constant. Usually, the temperature dependencies of the hydraulic properties are negligible. Thus, the (stationary) flow equations are independent of the (time-dependent) heat transfer equations. In some cases, especially if the number of degrees of freedom is large, it can make sense to utilize the independence by splitting the problem into one stationary and one time-dependent study step.
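To give a feeling for the magnitudes involved, here is a short sketch evaluating Darcy's law for a generic hot-water aquifer. All parameter values are illustrative assumptions of mine, not parameters of the doublet model discussed below:

```python
# Darcy's law: u = -(kappa/mu) * grad(p).
# All parameter values below are illustrative assumptions for a generic
# hot-water aquifer, not the parameters of the doublet model.
kappa = 1e-12               # permeability, m^2 (typical sandstone)
mu = 3e-4                   # dynamic viscosity of hot water, Pa*s
rho = 1000.0                # water density, kg/m^3
g = 9.81                    # gravitational acceleration, m/s^2

grad_H = 0.01               # hydraulic head gradient, m/m
grad_p = rho * g * grad_H   # corresponding pressure gradient, Pa/m

u = (kappa / mu) * grad_p   # Darcy velocity magnitude, m/s
per_year = u * 3.156e7      # meters per year

print(f"u = {u:.2e} m/s, about {per_year:.0f} m per year")
```

Darcy velocities on the order of meters per year are typical of such settings, which is why the long simulation times (decades) in the examples below are needed.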
Fracture Flow and Poroelasticity
Fracture flow may locally dominate the flow regime in geothermal systems, such as in karst aquifer systems. The Subsurface Flow Module offers the Fracture Flow interface for a 2D representation of the Darcy flow field in fractures and cracks.
Hydrothermal heat extraction systems usually consist of one or more injection and production wells. In many cases, these are realized as separate boreholes, but a modern approach is to create one (or more) multilateral wells. There are even designs that consist of single boreholes with separate injection and production zones.
Note that artificial pressure changes due to water injection and extraction can influence the structure of the porous medium and produce hydraulic fracturing. To take these effects into account, you can perform poroelastic analyses, but we will not consider these here.
COMSOL Model of a Hydrothermal Application: A Geothermal Doublet
It is easy to set up a COMSOL Multiphysics model that features long time predictions for a hydro-geothermal application.
The model region contains three geologic layers with different thermal and hydraulic properties in a box with an edge length of about 500 m. The box represents a section of a geothermal production site that is bounded by a large fault zone. The layer elevations are interpolation functions from an external data set. The aquifer concerned is fully saturated and confined at top and bottom by aquitards (impermeable beds). The temperature distribution is generally a factor of uncertainty, but a good guess is to assume a geothermal gradient of 0.03 [°C/m], leading to an initial temperature distribution T_0(z) = 10 [°C] − z·0.03 [°C/m].

Hydrothermal doublet system in a layered subsurface domain, bounded by a fault zone. The edge is about 500 meters long. The left drilling is the injection well; the production well is on the right. The lateral distance between the wells is about 120 meters.
COMSOL Multiphysics creates a mesh that is perfectly fine for this approach, except for one detail — the mesh on the wells is refined to resolve the expected high gradients in that area.
Now, let’s crank the heat up! Geothermal groundwater is pumped (produced) through the production well on the right at a rate of 50 [l/s]. The well is implemented as a cylinder that was cut out of the geometry to allow inlet and outlet boundary conditions for the flow. The extracted water is, after using it for heat or power generation, re-injected by the left well at the same rate, but with a lower temperature (in this case 5 [°C]).
The resulting flow field and temperature distribution after 30 years of heat production are displayed below:
Result after 30 years of heat production: Hydraulic connection between the production and injection zones and temperature distribution along the flow paths. Note that only the injection and production zones of the boreholes are considered. The rest of the boreholes are not implemented, in order to reduce the meshing effort.
The model is a suitable tool for estimating the development of a geothermal site under varied conditions. For example, how is the production temperature affected by the lateral distance of the wells? Is it worthwhile to reach a large spread or is a moderate distance sufficient?
This can be studied by performing a parametric study by varying the well distance:
Flow paths and temperature distribution between the wells for different lateral distances. The graph shows the production temperature after reaching stationary conditions as a function of the lateral distance.
With this model, different borehole systems can easily be realized just by changing the positions of the injection/production cylinders. For example, here are the results of a single-borehole system:
Results of a single-borehole approach after 30 years of heat production. The vertical distance between the injection (up) and production (down) zones is 130 meters.
So far, we have only looked at aquifers without ambient groundwater movement. What happens if there is a hydraulic gradient that leads to groundwater flow?
The following figure shows the same situation as the figure above, except that now there is a hydraulic head gradient of $\nabla H = 0.01$ [m/m], leading to a superposed flow field:
Single borehole after 30 years of heat production and overlapping groundwater flow due to a horizontal pressure gradient.

Other Posts in This Series
Modeling Geothermal Processes with COMSOL Software
Geothermal Energy: Using the Earth to Heat and Cool Buildings

Further Reading
Download the Geothermal Doublet tutorial
Explore the Subsurface Flow Module
Related papers and posters presented at the COMSOL Conference:
Hydrodynamic and Thermal Modeling in a Deep Geothermal Aquifer, Faulted Sedimentary Basin, France
Simulation of Deep Geothermal Heat Production
Full Coupling of Flow, Thermal and Mechanical Effects in COMSOL Multiphysics® for Simulation of Enhanced Geothermal Reservoirs
Multiphysics Between Deep Geothermal Water Cycle, Surface Heat Exchanger Cycle and Geothermal Power Plant Cycle
Modelling Reservoir Stimulation in Enhanced Geothermal Systems
|
When I evaluate Solve[a==Sin[b*c], b] to rearrange the following for $ b $:
$$ a = \sin(bc) $$
I get the following result from Mathematica:
$$\begin{align*} \left\{\left\{b\to \text{ConditionalExpression}\left[\frac{-\sin ^{-1}(a)+2 \pi c_1+\pi }{c},c_1\in \mathbb{Z}\right]\right\},\right.\left.\left\{b\to \text{ConditionalExpression}\left[\frac{\sin ^{-1}(a)+2 \pi c_1}{c},c_1\in \mathbb{Z}\right]\right\}\right\} \end{align*}$$
It seems far too complicated. Unless I'm making a huge mistake, surely solving the equation for $ b $ would give:
$$ b = \frac{\sin ^{-1}(a)}{c} $$
Am I doing something wrong?
|
Faddeeva Package
Steven G. Johnson has written free/open-source C++ code (with wrappers for C, Matlab, GNU Octave, Python, Scilab, and Julia) to compute the various error functions of arbitrary complex arguments. In particular, we provide:
w, the Faddeeva function w(z) = exp(−z²) erfc(−iz), where erfc is the complementary error function
erf, the error function
erfc, the complementary error function
erfcx, the scaled complementary error function erfcx(z) = exp(z²) erfc(z)
erfi, the imaginary error function erfi(z) = −i erf(iz)
Dawson, the Dawson function Dawson(z) = (√π/2) exp(−z²) erfi(z)
Given the Faddeeva function w(z) and the other complex error functions, one can also easily compute Voigt functions, Fresnel integrals, and similar related functions as well. In benchmarks of our code, we find that it is comparable to or faster than most competing software for these functions in the complex plane (but we also have special-case optimizations for purely real or imaginary arguments), and we find that the accuracy is typically at least 13 significant digits in both the real and imaginary parts.
Because all of the algorithms are based on algorithms for the Faddeeva function, we call this the Faddeeva Package.
Download
Download the source code from:
http://ab-initio.mit.edu/Faddeeva.cc and http://ab-initio.mit.edu/Faddeeva.hh (updated 18 December 2012)
See also below for wrappers to call the Faddeeva package from other languages.
Usage
To use the code, include the Faddeeva.hh header file:
#include "Faddeeva.hh"
and compile and link the Faddeeva.cc source code. You can then call various functions. For example:
extern std::complex<double> Faddeeva::w(std::complex<double> z, double relerr=0);
This function Faddeeva::w(z, relerr) computes w(z) to a desired relative error relerr. Omitting the relerr argument, or passing relerr=0 (or any relerr less than machine precision ε ≈ 10^−16), corresponds to requesting machine precision, and in practice a relative error < 10^−13 is usually achieved. Specifying a larger value of relerr may improve performance for some z (at the expense of accuracy).
Similarly, the erf, erfc, erfcx, erfi, and Dawson functions are computed by calling:
extern std::complex<double> Faddeeva::erf(std::complex<double> z, double relerr=0); extern std::complex<double> Faddeeva::erfc(std::complex<double> z, double relerr=0); extern std::complex<double> Faddeeva::erfcx(std::complex<double> z, double relerr=0); extern std::complex<double> Faddeeva::erfi(std::complex<double> z, double relerr=0); extern std::complex<double> Faddeeva::Dawson(std::complex<double> z, double relerr=0);
Since these functions are purely real for real arguments z = x, we provide the following specialized interfaces for convenience (and a slight performance gain, although the complex functions above automatically execute specialized code for purely real arguments):

extern double Faddeeva::erf(double x);
extern double Faddeeva::erfc(double x);
extern double Faddeeva::erfcx(double x);
extern double Faddeeva::erfi(double x);
extern double Faddeeva::Dawson(double x);

(These functions always compute to maximum accuracy, usually near machine precision.)
It is also sometimes useful to compute Im[w(x)] for real x, since in that case Im[w(x)] = exp(−x²) erfi(x) (like the Dawson function but without the √π/2 factor). [Note that Re[w(x)] is simply exp(−x²) for real x.] Im[w(x)] can be computed efficiently to nearly machine precision by calling:

extern double Faddeeva::w_im(double x);

Wrappers: C, Matlab, GNU Octave, Python, Scilab, Julia
Wrappers are available for this function in other languages.
C: Download the files http://ab-initio.mit.edu/Faddeeva.c and http://ab-initio.mit.edu/Faddeeva.h (in addition to Faddeeva.cc from above) to obtain a pure C version (you do not need a C++ compiler), using C99 complex numbers. The complex functions are Faddeeva_erf(double complex z, double relerr) etc. instead of Faddeeva::erf, and the real-argument versions are Faddeeva_erf_re(double x) etc. (Note that in gcc you may need to compile with the -std=c99 flag to enable C99 support.)
Matlab (also available here): We provide source code for compiled Matlab plugins (MEX files) to interface all of the error functions above from Matlab. Download the code and documentation from: http://ab-initio.mit.edu/Faddeeva-MATLAB.zip (a zip file). The provided functions are called Faddeeva_w, Faddeeva_erf, Faddeeva_erfc, Faddeeva_erfi, Faddeeva_erfcx, and Faddeeva_Dawson, equivalent to the C++ functions above. All have usage of the form w = Faddeeva_w(z) [or w = Faddeeva_w(z, relerr) to pass the optional relative error], to compute the function value from an array or matrix z of complex (or real) inputs.

For convenience, a script to compile all of the plugins using the mex command in Matlab is included. Assuming you have a C++ compiler installed (and have run mex -setup to tell Matlab to use it), you can simply run the Faddeeva_build.m script in Matlab to compile all of the Faddeeva functions. Install the resulting *.mex* files, along with the *.m help files, into your Matlab path.
GNU Octave: Similar to Matlab, above, we provide source code for compiled GNU Octave plugins (.oct files) for all of the error functions above. (Note: our code for complex-argument erf, erfc, erfcx, erfi, and dawson functions has been merged into Octave and should be included in a future release.) Download the code and documentation from: http://ab-initio.mit.edu/Faddeeva-octave.tgz (a gzipped tar file). The provided functions are called Faddeeva_w, Faddeeva_erf, Faddeeva_erfc, Faddeeva_erfi, Faddeeva_erfcx, and Faddeeva_Dawson, with usage identical to the Matlab plugins above.

A Makefile is included. Assuming you have a C++ compiler and the mkoctfile command installed (mkoctfile comes with Octave, possibly in an octave-devel or similarly named package in GNU/Linux distributions), you can simply run make to compile the plugins, and sudo make install to install them system-wide (assuming you have system administrator privileges); otherwise put the compiled .oct files somewhere in your Octave path.
Python: Our code is used to provide scipy.special.erf, scipy.special.wofz, and the other error functions in SciPy starting in version 0.12.0 (see here).
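Since SciPy exposes these routines, here is a minimal usage sketch that also numerically checks two standard identities relating these functions:

```python
import numpy as np
from scipy.special import wofz, erf, dawsn

z = 1.0 + 1.0j
w = wofz(z)                      # the Faddeeva function w(z)

# Standard identity: erf(z) = 1 - exp(-z^2) * w(iz)
print(np.allclose(erf(z), 1.0 - np.exp(-z**2) * wofz(1j * z)))  # True

# For real x: Im[w(x)] = exp(-x^2)*erfi(x) = (2/sqrt(pi)) * Dawson(x)
x = 0.7
print(np.allclose(wofz(x).imag, 2.0 / np.sqrt(np.pi) * dawsn(x)))  # True
```

Both checks pass to machine precision, as expected since SciPy's routines are built on this package.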
Scilab has a patch to call the Faddeeva Package which should hopefully be incorporated into the next Scilab release.

Julia uses the Faddeeva Package to provide its complex erf, erfc, erfcx, erfi, and dawson functions.
Algorithms
Our implementation uses a combination of different algorithms, mostly centering around computing the Faddeeva function w(z).
To compute the Faddeeva function for sufficiently large |z|, we use a continued-fraction expansion for w(z) similar to those described in:

Walter Gautschi, "Efficient computation of the complex error function," SIAM J. Numer. Anal. 7(1), pp. 187–198 (1970).
G. P. M. Poppe and C. M. J. Wijers, "More efficient computation of the complex error function," ACM Trans. Math. Soft. 16(1), pp. 38–46 (1990); this is TOMS Algorithm 680.
Unlike those papers, however, we switch to a completely different algorithm for smaller |z| or for z close to the real axis:

Mofreh R. Zaghloul and Ahmed N. Ali, "Algorithm 916: Computing the Faddeyeva and Voigt Functions," ACM Trans. Math. Soft. 38(2), 15 (2011). Preprint available at arXiv:1106.0151.

(I initially used this algorithm for all z, but the continued-fraction expansion turned out to be faster for larger |z|. On the other hand, Algorithm 916 is competitive or faster for smaller |z|, and appears to be significantly more accurate than the Poppe & Wijers code in some regions, e.g. in the vicinity of |z|=1 [although comparison with other compilers suggests that this may be a problem specific to gfortran]. Algorithm 916 also has better relative accuracy in Re[z] for some regions near the real-z axis. You can switch back to using Algorithm 916 for all z by changing USE_CONTINUED_FRACTION to 0 in the code.)
Note that this is SGJ's independent re-implementation of these algorithms, based on the descriptions in the papers only. In particular, we did not refer to the authors' Fortran or Matlab implementations (respectively), which are under restrictive "semifree" ACM copyright terms and are therefore unusable in free/open-source software.
Algorithm 916 requires a complementary error function erfc(x) for real arguments x to be supplied as a subroutine. More precisely, it requires the scaled function erfcx(x) = exp(x²) erfc(x). Here, we use an erfcx routine written by SGJ that uses a combination of two algorithms: a continued-fraction expansion for large x and a lookup table of Chebyshev polynomials for small x. (I initially used an erfcx function derived from the DERFC routine in SLATEC, modified by SGJ to compute erfcx instead of erfc, but the new erfcx routine is much faster, and also seems to be faster than the calerf rational-Chebyshev code by W. J. Cody.)
Similarly, we also implement special-case code for real z, where the imaginary part of w is related to Dawson's integral. Similar to erfcx, this is also computed by a continued-fraction expansion for large |x|, a lookup table of Chebyshev polynomials for smaller |x|, and finally a Taylor expansion for very small |x|. (This seems to be faster than the dawsn function in the Cephes library, and is substantially faster than the gsl_sf_dawson function in the GNU Scientific Library.)
The other error functions can be computed in terms of w(z). The basic equations are:

erfcx(z) = exp(z²) erfc(z) = w(iz) (scaled complementary error function)
erfc(z) = exp(−z²) w(iz) (complementary error function)
erf(z) = 1 − exp(−z²) w(iz) (error function)
erfi(z) = −i erf(iz); for real x, erfi(x) = exp(x²) Im[w(x)] (imaginary error function)
Dawson(z) = (√π/2) exp(−z²) erfi(z); for real x, Dawson(x) = (√π/2) Im[w(x)] (Dawson function)

Note that we sometimes employ different equations for positive and negative Re(z) in order to avoid numerical problems arising from multiplying exponentially large and small quantities. For erfi and the Dawson function, there are simplifications that occur for real x as noted. In some cases, however, there are additional complications that require our implementation to go beyond these simple formulas. For erf, large cancellation errors occur in these formulas near |z|=0 where w(z) is nearly 1, as well as near the imaginary axis for Re[erf], and in these regimes we switch to a Taylor expansion. Similarly, for the Dawson function we switch to a Taylor expansion near the origin or near the real axis. (Similar problems occur for erfi, but our erfi implementation simply calls our erf code.)

Test program
To test the code, a small test program is included at the end of Faddeeva.cc which tests w(z) against several known results (from Wolfram Alpha) and prints the relative errors obtained. To compile the test program, #define TEST_FADDEEVA in the file (or compile with -DTEST_FADDEEVA on Unix) and compile Faddeeva.cc. The resulting program prints SUCCESS at the end of its output if the errors were acceptable.
License
The software is distributed under the "MIT License", a simple permissive free/open-source license:
Copyright © 2012 Massachusetts Institute of Technology Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
If only weak-ordering and continuity is assumed, ICs can definitely intersect.
This is not true. First, if you're speaking of indifference curves, you'd already be assuming either local non-satiation or monotonicity.

Let's speak of indifference sets instead. The analogue of two sets, $I_1$ and $I_2$, "crossing" each other can be formalized as $I_1\ne I_2$ and $I_1\cap I_2\ne \varnothing$.
Take two alternatives $x_1,x_2\in X$ and define two indifference sets as follows:\begin{align}I_1:=\{x\in X:x\sim x_1\}\quad\text{and}\quad I_2:=\{x\in X:x\sim x_2\}. \end{align}WLOG, assume that $x_1\succ x_2$, so that $I_1\ne I_2$. If we allow $I_1$ to "cross" $I_2$, then $I_1\cap I_2$ must be non-empty. Let $\bar x\in I_1\cap I_2$ be an element in the intersection. By the transitivity of the indifference relation $\sim$, we have $x_1\sim \bar x$ and $\bar x\sim x_2$, implying $x_1\sim x_2$. But this contradicts our assumption that $x_1\succ x_2$. The contradiction shows that crossing indifference sets violate transitivity, a property of the weak ordering.
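The argument can also be run mechanically: starting from the two indifference pairs the "crossing" provides and closing under symmetry and transitivity forces $x_1\sim x_2$. A small illustrative sketch (the element names are mine):

```python
from itertools import product

# Starting pairs given by the "crossing": x1 ~ xbar and xbar ~ x2
pairs = {("x1", "xbar"), ("xbar", "x2")}

# Close under symmetry and reflexivity first
rel = set(pairs)
rel |= {(b, a) for (a, b) in rel}
rel |= {(x, x) for x in ("x1", "x2", "xbar")}

changed = True
while changed:  # transitive closure by fixed-point iteration
    changed = False
    for (a, b), (c, d) in product(tuple(rel), repeat=2):
        if b == c and (a, d) not in rel:
            rel.add((a, d))
            changed = True

print(("x1", "x2") in rel)  # True: transitivity forces x1 ~ x2
```

The closure contains $(x_1, x_2)$, reproducing the contradiction with $x_1\succ x_2$ derived above.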
|
Voronoi summation method
A matrix summation method of sequences. It is defined by a numerical sequence $\{p_n\}$ and denoted by the symbol $(W,p_n)$. A sequence $\{s_n\}$ is summable by the method $(W,p_n)$ to a number $S$ if
$$\frac{s_0p_n+s_1p_{n-1}+\ldots+s_np_0}{p_0+\ldots+p_n}\to S\quad\text{as}\quad n\to\infty.$$
In particular, if $p_0=1$, $p_k=0$, $k\geq1$, the summability of a sequence by the $(W,p_n)$-method to a number $S$ means that the sequence converges to $S$. For $p_k=1$, $k\geq0$, one obtains the Cesàro summation method (cf. Cesàro summation methods). For $p_0>0$, $p_k\geq0$, $k\geq1$, the $(W,p_n)$-method is regular (cf. Regular summation methods) if and only if $p_n/(p_0+\ldots+p_n)\to0$. Any two regular methods $(W,p_n')$ and $(W,p_n'')$ are compatible (cf. Compatibility of summation methods).
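A minimal numerical sketch of the definition (the function name is mine), applied to the classic example of Grandi's series, whose partial sums $1,0,1,0,\ldots$ are Cesàro-summable to $1/2$:

```python
# The (W, p_n) mean of the first n+1 terms of a sequence s
def noerlund_mean(s, p):
    n = len(s) - 1
    return sum(s[k] * p[n - k] for k in range(n + 1)) / sum(p[: n + 1])

# Partial sums of Grandi's series 1 - 1 + 1 - ... : s_n = 1, 0, 1, 0, ...
s = [1 if n % 2 == 0 else 0 for n in range(1000)]

p_cesaro = [1] * 1000            # p_k = 1 for all k: the Cesaro method
print(noerlund_mean(s, p_cesaro))        # 0.5

p_conv = [1] + [0] * 999         # p_0 = 1, p_k = 0: ordinary convergence
print(noerlund_mean(s[:5], p_conv[:5]))  # 1.0 (= s_4)
```

The two special choices of $\{p_n\}$ reproduce the Cesàro mean and plain convergence, as stated above.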
The Voronoi summation method was first introduced by G.F. Voronoi [1] and was rediscovered by N.E. Nörlund in 1919. The method is therefore sometimes referred to in western literature as the Nörlund method and the symbol given to it is $(N,p_n)$ or $N(p_n)$.
References
[1] G.F. Voronoi, "Extension of the notion of the limit of the sum of terms of an infinite series", Ann. of Math. (2), 33 (1932), pp. 422–428 (with notes by J.D. Tamarkin)
[2] G.H. Hardy, "Divergent series", Clarendon Press (1949)

Comments

References
[a1] C.N. Moore, "Summable series and convergence factors", Dover, reprint (1966)
|
In several theories, space itself is discrete, somewhat in relation to the Planck length, $$l_p = \sqrt{\frac{\hbar G}{c^3}} \simeq 1.616199 \times 10^{-35}\quad m$$ .
More specifically in loop quantum gravity, Carlo Rovelli's 1998 overview paper states the following:
The spin-networks picture of space–time is mathematically precise and physically compelling: nodes of spin networks represent elementary grains of space, and their volume is given by a quantum number that is associated with the node in units of the elementary Planck volume $$V = \left( \frac{\hbar G}{c^3} \right)^{3/2}$$
So, from what I understand of LQG, space itself is discrete. However, mathematically, space being discrete does not imply that time is discrete as well (which it would have to be for spacetime as a whole to be discrete). A counterexample in 2D would be the floor and ceiling functions, which are defined on a continuous domain but take discrete values.
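For concreteness, the Planck-scale quantities quoted above can be evaluated directly from CODATA constants; a quick sketch:

```python
from math import sqrt

# CODATA values
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

l_p = sqrt(hbar * G / c**3)   # Planck length, m
V_p = l_p**3                  # elementary Planck volume (hbar*G/c^3)^(3/2), m^3

print(f"l_p = {l_p:.4e} m")    # ~1.6163e-35 m
print(f"V_p = {V_p:.4e} m^3")  # ~4.2e-105 m^3
```

This reproduces the value $l_p \simeq 1.616\times10^{-35}$ m quoted above and gives the corresponding elementary volume.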
Concerning the OPERA results, let's keep in mind that several explanations have been published which don't allow for superluminal neutrinos, cf. this Universe Today article or this Bad Astronomy article.
I am relatively new here, and I might not have fully answered your question, so feel free to post comments or even modify my answer to improve it. Thanks! (This post imported from StackExchange Physics at 2014-03-17 03:42 (UCT), posted by SE-user ChrisR.)
|
With which notation do you feel uncomfortable?
closed as not constructive by Loop Space, Chris Schommer-Pries, Qiaochu Yuan, Scott Morrison♦ Mar 19 '10 at 6:10
There is a famous anecdote about Barry Mazur coming up with the worst notation possible at a seminar talk in order to annoy Serge Lang. Mazur defined $\Xi$ to be a complex number and considered the quotient of the conjugate of $\Xi$ and $\Xi$: $$\frac{\overline{\Xi}}{\Xi}.$$ This looks even better on a blackboard since $\Xi$ is drawn as three horizontal lines.
My favorite example of bad notation is using $\textrm{sin}^2(x)$ for $(\textrm{sin}(x))^2$ and $\textrm{sin}^{-1}(x)$ for $\textrm{arcsin}(x)$, since this is basically the same notation used for two different things ($\textrm{sin}^2(x)$ should mean $\textrm{sin}(\textrm{sin}(x))$ if $\textrm{sin}^{-1}(x)$ means $\textrm{arcsin}(x)$).
It might not be horrible, since it rarely leads to confusion, but it is inconsistent notation, which should be avoided in general.
I personally hate the notation $x \mid y$, for "$x$ divides $y$". Of course, I'm used to reading it by now, but a general principle I follow and recommend is:
Never use a symmetric symbol to denote an asymmetric relation!
I never liked the notation ${\mathbb Z}_p$ for the ring of residue classes modulo $p$. At one point, it confused the hell out of me, and this confusion is easily avoided by writing $C_p$, $C(p)$ or ${\mathbb Z}/p$.
Mathematicians are really quite bad when it comes to notation. They should learn from programming languages people. Bad notation actually makes it difficult for students to understand the concepts. Here are some really bad ones:
- Using $f(x)$ to denote both the value of $f$ at $x$ and the function $f$ itself. Because of this, students in programming classes cannot tell the difference between $f$ (the function) and $f(x)$ (the function applied to an argument).
- When I was a student, nobody ever managed to explain to me why $dy/dx$ made sense. What is $dy$ and what is $dx$? They're not numbers, yet we divide them (I am just giving a student's perspective).
- In Lagrangian mechanics and the calculus of variations, people take the partial derivative of the Lagrangian $L$ with respect to $\dot q$, where $\dot q$ itself is the derivative of the coordinate $q$ with respect to time. That's crazy.
- The summation convention, e.g. that ${\Gamma^{ij}}_j$ actually means $\sum_j {\Gamma^{ij}}_j$, is useful but very hard to get used to.
- In category theory, I wish people sometimes used any notation at all, as opposed to nameless arrows which are introduced in the accompanying text as "the evident arrow".
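The first complaint is easy to make concrete in a language where functions are first-class values; a minimal Python sketch (names are illustrative):

```python
def f(x):
    # f applied to an argument: yields a value
    return x * x

g = f        # the function f itself, handled as a value
y = f(3)     # the value of f at 3

print(type(g).__name__, y)  # function 9
```

The language forces the distinction that the mathematical notation blurs: `f` and `f(3)` are objects of entirely different kinds.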
Physicists will hate me for this, but I never liked Einstein's summation convention, nor the famous bra ($\langle\phi|$) and ket ($|\psi\rangle$) notation. Both notations make easy things look unnecessarily complicated, and especially the bra-ket notation is no fun to use in LaTeX.
My candidate would be the (internal) direct sum of subspaces $U \oplus V$ in linear algebra. As an operator it is equivalent to sum but with the side effect of implying that $U \cap V = \lbrace 0\rbrace$. Whenever I had a chance to teach linear algebra I found this terribly confusing for students.
I think composition of arrows $f:X\to Y$ and $g:Y\to Z$ should be written $fg$, not $gf$. First of all, it would make the notation $\hom(X,Y)\to\hom(Y,Z)\to \hom(X,Z)$ much more natural: $\hom(E,X)$ should be a left $\hom(E,E)$ module because $E$ is on the left :) Secondly, diagrams are written from left to right (even stronger: almost anything in the Western world is written left to right). And I think the strange $(-1)$ needed when shifting complexes is an effect of this twisted notation.
The notation ]a,b[ for open intervals and its ilk. Sorry, Bourbaki.
Writing a finite field of size $q$ as $\mathrm{GF}(q)$ instead of as $\mathbf{F}_q$ always rubbed me the wrong way. I know where it comes from (Galois Field), and I think it is still widely used in computer science, and maybe in some allied areas of discrete math, but I still dislike it.
As Trevor Wooley used to always say in class, ``Vinogradov's notation sucks....the constants away."
For those who don't know, Vinogradov's notation in this context is $f(x)\ll g(x)$, meaning $f(x) = O(g(x))$ (if you prefer big-O notation, that is).
I rather dislike the notation $$\int_{\Omega}f(x)\,\mu(dx)$$ myself. I realize that just as the integral sign is a generalized summation sign, the $dx$ in $\mu(dx)$ would stand for some small measurable set of which you take the measure, but it still rubs me the wrong way. Is it only because I was brought up with the $\int\cdots\,d\mu(x)$ notation? The latter nicely generalizes the notation for the Stieltjes integral at least.
I get very frustrated when an author or speaker writes "Let $X\colon= A\sqcup B$..." to mean:
$A$ and $B$ are disjoint sets (in whatever the appropriate universe is), and let $X\colon= A\cup B$.
If they just meant "form the disjoint union of $A$ and $B$" this would be fine. But I've seen speakers later
use the fact that $A$ and $B$ are disjoint, which was never stated anywhere except as above. You should never hide an assumption implicitly in your notation.
The use of square brackets $\left[...\right]$ for anything. It's not bad per se, but unfortunately it is used both as a substitute for $\left(...\right)$ and as a notation for the floor function. And there are cases when it takes a while to figure out which of these is meant - I'm not making this up.
The word "character" meaning: a 1-dimensional representation, a representation, a trace form of a representation, a formal linear combination of representations, a formal linear combination of trace forms of representations.
The word "adjoint", and the corresponding notation $A\mapsto A^{\ast}$, having two completely unrelated meanings.
The term "symplectic group" used to mean the group $U(n,{\mathbb H})$. It's as if people called $U(n)$ and $GL(n,{\mathbb R})$ by some single name.
My personal pet peeve of notation HAS to be algebraists writing functions
on the right a la Herstein's "Topics In Algebra". I don't know why they do it when everyone else doesn't. I think one of them got up one day and decided they wanted to be cooler than everyone else, seriously...
I don't like (but maybe for a bad reason) the notation $F\vdash G$ for $F$ is left adjoint to $G$.
Any comments?
A cute idea, but one for which I have yet to find supporters, is D. G. Northcott's notation (used at least in [Northcott, D. G. A first course of homological algebra. Cambridge University Press, London, 1973. xi+206 pp. MR0323867]) for maps in a commutative diagram, which consists in enumerating the names of the objects at the vertices along the path of the composition. Thus, if there is only one map in sight from $M$ to $N$, he writes it simply $MN$, so he has formulas looking like $$A'A(ABB'') = A'ABB'' = A'B'BB'' = 0.$$ He also writes maps on the right, so his $$xMN=0$$ means that the image of $x$ under the map from $M$ to $N$ is zero.
I would not say this is among the worst notations ever, though.
Students have big difficulties when first confronted with the $o(\cdot)$ and $O(\cdot)$ notation. The term $o(x^3)$, e.g., does not denote a certain function evaluated at $x^3$, but a function of $x$, defined by the context, that converges to zero when divided by $x^3$.
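Spelling the definitions out often helps; as $x \to 0$, for example:

```latex
f(x) = o(x^{3}) \iff \lim_{x \to 0} \frac{f(x)}{x^{3}} = 0,
\qquad
f(x) = O(x^{3}) \iff \exists\, C, \delta > 0 :\ |f(x)| \le C\,|x|^{3} \ \text{for } 0 < |x| < \delta.
```

Neither statement defines a function "evaluated at $x^3$"; each is a property of $f$ relative to $x^3$, which is exactly what trips students up.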
I have struggled with 'dx'. I've spent years trying to study every approach to calculus I could find to make sense of it: the limit definitions in my first book, vector calculus with pullbacks of linear transformations or flows/flux, differential forms from the bridge project, k-forms, nonstandard analysis (which enlarges $\mathbb{R}$ to give you infinitesimals and unbounded numbers but the same first-order properties, and lets the integral be defined as a sum), constructive analysis using a monad to take the closure of the rationals to give the reals... but I am still just as confused as ever. I understand that the mathematical notation doesn't have a compositional semantics, but I still don't really get it. One of the problems is that, despite not really understanding it or having any abstract definition of it, I can still get correct answers, and I really hope this doesn't become a theme as I study more topics in mathematics.
p < q as in "the forcing condition p is stronger than q".
I hate the shortcut $ab$ for $a\cdot b$. Everyone gets used to it, BUT it creates very deep problems with all other notation; say, you can never be sure what $f(x+y)$ or $2\!\tfrac23$ might be...
Also, in modern mathematics people do not multiply things too often, so it does not make sense to have such a shortcut.
The shortcut $x^n$ is a really bad one too. One cannot use upper indexes after this. It would be easy to write $x^{\cdot n}$ instead.
|
I am trying to understand how the functional integral for Chern-Simons theory for a possibly non-compact 3-manifold with boundary is made gauge invariant.
For a compact 3-manifold, $M$, without boundary, it is well known (see, for example, section 2 of this reference), that for a compact simple Lie group $G$ and trivial principal G-bundle $P\rightarrow M$, one may define the Chern-Simons action\begin{equation}S[A]=\frac{k}{4\pi}\int_M\textrm{Tr}\bigg(A\wedge dA+\frac{2}{3}A\wedge A\wedge A\bigg).\end{equation}Here, the group, $\mathcal{G}$, of gauge transformations of $P$, is isomorphic to the group of smooth maps from $M$ to $G$. Under a gauge transformation $g\in \mathcal{G}$, the action changes by the sum of a boundary term and $\textrm{deg}(g)$, which labels the corresponding component of $\mathcal{G}$. The group, $\pi_0(\mathcal{G})$, of components is isomorphic to the group of homotopy classes of maps from $M$ to $G$, which for simply connected $G$ is isomorphic to $\pi_3(G)$. Since $G$ is simple, $\pi_3(G)\cong\mathbb{Z}$. Thus, upon requiring that $k$ is quantized, we find that the integrand of the functional integral, $e^{iS}$, is invariant under gauge transformations.
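For completeness, the gauge variation alluded to above can be written out (a standard computation; sign and normalization conventions vary between references). Under $A \mapsto A^g = g^{-1}Ag + g^{-1}dg$,

```latex
S[A^g] = S[A]
  - \frac{k}{4\pi}\int_{\partial M} \mathrm{Tr}\!\left( dg\, g^{-1} \wedge A \right)
  - \frac{k}{12\pi}\int_{M} \mathrm{Tr}\!\left( (g^{-1}dg)^{3} \right).
```

The last (winding) term equals $2\pi k\,\mathrm{deg}(g)$ for a suitably normalized trace, so on a closed $M$ the integrand $e^{iS}$ is invariant precisely when $k \in \mathbb{Z}$.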
My question is, how does one extend this to the case where $M$ has a boundary, and is possibly noncompact? The example I have in mind is $M=D\times \mathbb{R}$, where $D$ is the disk. In this paper, it is explained that for gauge invariance, one first chooses one of the boundary components of the connection, $A$, to be zero, and with such a boundary condition the functional integral is invariant
only under gauge transformations which are 1 at the boundary. This requirement is also alluded to below equation 3.18 of this paper by Dijkgraaf and Witten.
It is clear to me that the aforementioned boundary term that arises via gauge transformation will vanish via the boundary condition.
However, it is not clear to me why we require that $g$ be 1 at the boundary for gauge invariance.
Firstly, why do we need to impose another boundary condition on $g$, i.e., in addition to the constraint required to preserve the boundary condition on $A$ under gauge transformations? Secondly, why would another constant value for $g$ at the boundary not suffice? I would think that any common value for $g$ along the boundary would imply that we are effectively studying $M$ with boundary points identified, which is a closed 3-manifold, for which we can apply the arguments of the second paragraph above.This post imported from StackExchange MathOverflow at 2019-04-13 07:47 (UTC), posted by SE-user Mtheorist
|
Flavour anomalies after the $R_{K^*}$ measurement / D'Amico, Guido (CERN) ; Nardecchia, Marco (CERN) ; Panci, Paolo (CERN) ; Sannino, Francesco (CERN ; Southern Denmark U., CP3-Origins ; U. Southern Denmark, Odense, DIAS) ; Strumia, Alessandro (CERN ; Pisa U. ; INFN, Pisa) ; Torre, Riccardo (EPFL, Lausanne, LPTP) ; Urbano, Alfredo (CERN). The LHCb measurement of the $\mu/e$ ratio $R_{K^*}$ indicates a deficit with respect to the Standard Model prediction, supporting earlier hints of lepton universality violation observed in the $R_K$ ratio. We show that the $R_K$ and $R_{K^*}$ ratios alone constrain the chiralities of the states contributing to these anomalies, and we find deviations from the Standard Model at the $4\sigma$ level. [...] arXiv:1704.05438; CP3-ORIGINS-2017-014; CERN-TH-2017-086; IFUP-TH/2017. Published in: JHEP 09 (2017) 010.
Multi-loop calculations: numerical methods and applications / Borowka, S. (CERN) ; Heinrich, G. (Munich, Max Planck Inst.) ; Jahn, S. (Munich, Max Planck Inst.) ; Jones, S.P. (Munich, Max Planck Inst.) ; Kerner, M. (Munich, Max Planck Inst.) ; Schlenk, J. (Durham U., IPPP). We briefly review numerical methods for calculations beyond one loop and then describe new developments within the method of sector decomposition in more detail. We also discuss applications to two-loop integrals involving several mass scales. CERN-TH-2017-051; IPPP-17-28; MPP-2017-62; arXiv:1704.03832. Published in: J. Phys.: Conf. Ser. 920 (2017) 012003. In: 4th Computational Particle Physics Workshop, Tsukuba, Japan, 8 - 11 Oct 2016, pp. 012003.
Anomaly-Free Dark Matter Models are not so Simple / Ellis, John (King's Coll. London ; CERN) ; Fairbairn, Malcolm (King's Coll. London) ; Tunney, Patrick (King's Coll. London). We explore the anomaly-cancellation constraints on simplified dark matter (DM) models with an extra U(1)$^\prime$ gauge boson $Z'$. We show that, if the Standard Model (SM) fermions are supplemented by a single DM fermion $\chi$ that is a singlet of the SM gauge group, and the SM quarks have non-zero U(1)$^\prime$ charges, the SM leptons must also have non-zero U(1)$^\prime$ charges, in which case LHC searches impose strong constraints on the $Z'$ mass. [...] KCL-PH-TH-2017-21; CERN-TH-2017-084; arXiv:1704.03850. Published in: JHEP 08 (2017) 053.
Single top polarisation as a window to new physics / Aguilar-Saavedra, J.A. (Granada U., Theor. Phys. Astrophys.) ; Degrande, C. (CERN) ; Khatibi, S. (IPM, Tehran). We discuss the effect of heavy new physics, parameterised in terms of four-fermion operators, in the polarisation of single top (anti-)quarks in the $t$-channel process at the LHC. It is found that for operators involving a right-handed top quark field the relative effect on the longitudinal polarisation is twice larger than the relative effect on the total cross section. [...] CERN-TH-2017-013; arXiv:1701.05900. Published in: Phys. Lett. B 769 (2017) 498-502.
Colorful Twisted Top Partners and Partnerium at the LHC / Kats, Yevgeny (CERN ; Ben Gurion U. of Negev ; Weizmann Inst.) ; McCullough, Matthew (CERN) ; Perez, Gilad (Weizmann Inst.) ; Soreq, Yotam (MIT, Cambridge, CTP) ; Thaler, Jesse (MIT, Cambridge, CTP). In scenarios that stabilize the electroweak scale, the top quark is typically accompanied by partner particles. In this work, we demonstrate how extended stabilizing symmetries can yield scalar or fermionic top partners that transform as ordinary color triplets but carry exotic electric charges. [...] MIT-CTP-4897; CERN-TH-2017-073; arXiv:1704.03393. Published in: JHEP 06 (2017) 126.
Where is Particle Physics Going? / Ellis, John (King's Coll. London ; CERN). The answer to the question in the title is: in search of new physics beyond the Standard Model, for which there are many motivations, including the likely instability of the electroweak vacuum, dark matter, the origin of matter, the masses of neutrinos, the naturalness of the hierarchy of mass scales, cosmological inflation and the search for quantum gravity. So far, however, there are no clear indications about the theoretical solutions to these problems, nor the experimental strategies to resolve them [...] KCL-PH-TH-2017-18; CERN-TH-2017-080; arXiv:1704.02821. Published in: Int. J. Mod. Phys. A 32 (2017) 1746001. In: HKUST Jockey Club Institute for Advanced Study: High Energy Physics, Hong Kong, China, 9 - 26 Jan 2017.
Radiative symmetry breaking from interacting UV fixed points / Abel, Steven (Durham U., IPPP ; CERN) ; Sannino, Francesco (CERN ; U. Southern Denmark, CP3-Origins ; U. Southern Denmark, Odense, DIAS). It is shown that the addition of positive mass-squared terms to asymptotically safe gauge-Yukawa theories with perturbative UV fixed points leads to calculable radiative symmetry breaking in the IR. This phenomenon, and the multiplicative running of the operators that lies behind it, is akin to the radiative symmetry breaking that occurs in the Supersymmetric Standard Model. CERN-TH-2017-066; CP3-ORIGINS-2017-011; IPPP-2017-23; arXiv:1704.00700. Published in: Phys. Rev. D 96 (2017) 056028.
Continuum limit and universality of the Columbia plot / de Forcrand, Philippe (ETH, Zurich (main) ; CERN) ; D'Elia, Massimo (INFN, Pisa ; Pisa U.). Results on the thermal transition of QCD with 3 degenerate flavors, in the lower-left corner of the Columbia plot, are puzzling. The transition is expected to be first-order for massless quarks, and to remain so for a range of quark masses until it turns second-order at a critical quark mass. [...] arXiv:1702.00330; CERN-TH-2017-022. Published in: PoS LATTICE2016 (2017) 081. In: 34th International Symposium on Lattice Field Theory, Southampton, UK, 24 - 30 Jul 2016, pp. 081.
|
Astrophysics > Cosmology and Nongalactic Astrophysics. Title: Massive Photon and Dark Energy
(Submitted on 2 Dec 2015 (v1), last revised 7 Mar 2016 (this version, v2))
Abstract: We investigate cosmology of massive electrodynamics and explore the possibility whether massive photon could provide an explanation of the dark energy. The action is given by the scalar-vector-tensor theory of gravity which is obtained by non-minimal coupling of the massive Stueckelberg QED with gravity and its cosmological consequences are studied by paying a particular attention to the role of photon mass. We find that the theory allows cosmological evolution where the radiation- and matter-dominated epochs are followed by a long period of virtually constant dark energy that closely mimics $\Lambda$CDM model and the main source of the current acceleration is provided by the nonvanishing photon mass governed by the relation $\Lambda\sim m^2$. A detailed numerical analysis shows that the nonvanishing photon mass of the order of $\sim 10^{-34}$ eV is consistent with the current observations. This magnitude is far less than the most stringent limit on the photon mass available so far, which is of the order of $m \leq 10^{-27}$ eV.
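As a rough consistency check on the quoted scale (my own back-of-the-envelope estimate, not from the paper): the relation $\Lambda \sim m^2$ suggests a photon mass of the order of the Hubble scale $H_0$ expressed as an energy, and $10^{-34}$ eV indeed sits within about an order of magnitude of it.

```python
# Express H0 as an energy scale in eV and compare with m ~ 1e-34 eV.
H0_km_s_Mpc = 67.8        # assumed value of the Hubble constant
Mpc_m = 3.0857e22         # metres per megaparsec
hbar_eV_s = 6.582e-16     # reduced Planck constant in eV*s

H0_per_s = H0_km_s_Mpc * 1e3 / Mpc_m   # H0 in 1/s
H0_eV = H0_per_s * hbar_eV_s           # H0 as an energy in eV
print(f"H0 ~ {H0_eV:.1e} eV")          # ~1.4e-33 eV, within ~10x of 1e-34 eV
```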
|
L # 1
Show that
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
IPBLE: Increasing Performance By Lowering Expectations.
L # 2
If
Let
log x = x', log y = y', log z = z'. Then:
x'+y'+z'=0.
Rewriting in terms of x' gives:
Well done, krassi_holmz!
L # 3
If x²y³=a and log (x/y)=b, then what is the value of (logx)/(logy)?
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b) / (log a - 2b).
Very well done, krassi_holmz!
L # 4
You are not supposed to use a calculator or log tables for L # 4. Try again!
No, I didn't
I remember
You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again:
no calculators or log tables to be used (directly or indirectly) at all!!
log a = 2log x + 3log y
b = log x - log y
log a + 3 b = 5log x
loga - 2b = 3logy + 2logy = 5logy
logx / logy = (loga+3b) / (loga-2b)
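A quick numerical check of this result (the values of x and y are chosen arbitrarily):

```python
import math

x, y = 4.0, 2.0
a = x**2 * y**3          # a = x^2 y^3 = 128
b = math.log10(x / y)    # b = log(x/y)

lhs = math.log10(x) / math.log10(y)
rhs = (math.log10(a) + 3 * b) / (math.log10(a) - 2 * b)
print(lhs, rhs)  # both equal 2.0
```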
Hi ganesh
for L # 1: since $\log_b(a) = 1/\log_a(b)$ and $\log_a(a) = 1$, we have $$\frac{1}{\log_a(abc)}+\frac{1}{\log_b(abc)}+\frac{1}{\log_c(abc)} = \log_{abc}(a)+\log_{abc}(b)+\log_{abc}(c) = \log_{abc}(abc) = 1.$$ Best Regards Riad Zaidan
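The identity is easy to spot-check numerically (arbitrary bases a=2, b=3, c=5):

```python
import math

a, b, c = 2, 3, 5
abc = a * b * c
# 1/log_a(abc) = log_abc(a), so the three terms sum to log_abc(abc) = 1
total = sum(1 / math.log(abc, base) for base in (a, b, c))
print(total)  # 1.0 (up to rounding)
```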
Hi ganesh
for L # 2 I think that the following proof is easier. Assume $\log(x)/(b-c)=\log(y)/(c-a)=\log(z)/(a-b)=t$.
So $\log(x)=t(b-c)$, $\log(y)=t(c-a)$, $\log(z)=t(a-b)$.
So $\log(x)+\log(y)+\log(z)=tb-tc+tc-ta+ta-tb=0$.
So $\log(xyz)=0$, so $xyz=1$. Q.E.D. Best Regards Riad Zaidan
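Again a quick numerical spot-check, with arbitrary $a$, $b$, $c$ and common ratio $t$:

```python
a, b, c, t = 1.0, 2.0, 4.0, 0.7
x = 10 ** (t * (b - c))
y = 10 ** (t * (c - a))
z = 10 ** (t * (a - b))
print(x * y * z)  # 1.0: the exponents t(b-c) + t(c-a) + t(a-b) cancel
```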
Gentlemen,
Thanks for the proofs.
Regards.
$\log_2(16) = \log_2 \left( \frac{64}{4} \right) = \log_2(64) - \log_2(4) = 6 - 2 = 4$
$\log_2(\sqrt[3]{4}) = \frac{1}{3} \log_2(4) = \frac{2}{3}$
L # 4
I don't want a method that will rely on defining certain functions, taking derivatives,
noting concavity, etc.
Change of base:
Each side is positive, and multiplying by the positive denominator
keeps whatever direction of the alleged inequality the same direction:
On the right-hand side, the first factor is equal to a positive number less than 1,
while the second factor is equal to a positive number greater than 1. These facts are by inspection combined with the nature of exponents/logarithms.
Because of (log A)B = B(log A) = log(A^B), I may turn this into:
I need to show that
Then
Then 1 (on the left-hand side) will be greater than the value on the
right-hand side, and the truth of the original inequality will be established.
I want to show
Raise a base of 3 to each side:
Each side is positive, and I can square each side:
-----------------------------------------------------------------------------------
Then I want to show that when 2 is raised to a number equal to
(or less than) 1.5, then it is less than 3.
Each side is positive, and I can square each side:
Signature line:
I wish I had a more interesting signature line.
Hi reconsideryouranswer,
This problem was posted by JaneFairfax. I think it would be appropriate she verify the solution.
Hi all,
I saw this post today and the problems on logs. They are good. You can also try these problems posted by me (credit to a book):
http://www.mathisfunforum.com/viewtopic … 93#p399193
Practice makes a man perfect.
There is no substitute for hard work. All of us do not have equal talents, but everybody has equal opportunities to build their talents. - APJ Abdul Kalam
JaneFairfax, here is a basic proof of L4:
For all real a > 1, y = a^x is a strictly increasing function.
log(base 2)3 versus log(base 3)5
2*log(base 2)3 versus 2*log(base 3)5
log(base 2)9 versus log(base 3)25
2^3 = 8 < 9, so log(base 2)9 > 3
3^3 = 27 > 25, so log(base 3)25 < 3
So the left-hand side is greater than the right-hand side:
log(base 2)9 > 3 > log(base 3)25, hence log(base 2)3 > log(base 3)5.
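The chain of comparisons can be confirmed numerically:

```python
import math

log2_9 = math.log(9, 2)    # = 2 * log_2(3)
log3_25 = math.log(25, 3)  # = 2 * log_3(5)
print(log2_9, log3_25)     # ~3.17 and ~2.93, so log_2(3) > log_3(5)
```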
|
Mass balances are an important tool used to define the scale of production, the amount of raw materials entering the process, and the amounts of products and waste leaving it. Overall, a mass balance makes it possible to estimate the yield of a process.
Given a process:
The corresponding mass balance is the following:$$ input = output + accumulation $$
where $accumulation$ is the amount of material that accumulates in the reactor.
To design a new process, it is important to draw a flow diagram. This diagram must be clear and organized so that it is easily readable. A well-designed flow diagram should identify the main operations that characterize the process. Moreover, a flow diagram should show all the raw materials entering and the products leaving the process, as well as the waste streams. In addition, a flow diagram can be enriched with information about the composition of the raw ingredients, the amounts of raw materials, or any other processing variable that is important for understanding the process.
Given a process:
The overall mass balance is built by writing on the left the amounts of all raw materials entering the process or reactor and, on the right, the amounts of all the materials leaving it. In the example, letter "A" is the raw material, while letters "B" and "C" are products leaving the process. Thus, the overall mass balance can be written as:$$ A = B + C $$
The letters correspond to masses (in kg), volumes (in m$^3$), or rates (kg/s or m$^3$/s). Conversion between mass and volume requires knowledge of the density. The density of foods as a function of temperature can be calculated from information about the composition, by the following equations:
Density (kg/m$^3$) of each food component as a function of temperature $t$ (°C):
Protein: $\rho = 1.3299 \cdot 10^3 - 5.1840 \cdot 10^{-1} \cdot t$
Fat: $\rho = 9.2559 \cdot 10^2 - 4.1757 \cdot 10^{-1} \cdot t$
Carbohydrate: $\rho = 1.5991 \cdot 10^3 - 3.1046 \cdot 10^{-1} \cdot t$
Fibers: $\rho = 1.3115 \cdot 10^3 - 3.6589 \cdot 10^{-1} \cdot t$
Water: $\rho = 9.9718 \cdot 10^2 + 3.1439 \cdot 10^{-3} \cdot t - 3.7574 \cdot 10^{-3} \cdot t^2$
For instance, a fruit juice composed of 10% sugars and 90% water, at 25°C, has a calculated density of 1049 kg/m$^3$. This value is in close agreement with the values reported by A. M. Ramos and A. Ibarz (JFE, 1998, 57-63) for different juices.
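A sketch of the calculation in Python. The additive specific-volume mixing rule $1/\rho = \sum_i x_i/\rho_i$ is my assumption; the text does not state which rule it used, and the exact figure depends on that choice:

```python
def component_density(component, t):
    """Density (kg/m^3) of a food component at temperature t (deg C),
    from the thermal models tabulated above."""
    models = {
        "protein":      lambda t: 1.3299e3 - 5.1840e-1 * t,
        "fat":          lambda t: 9.2559e2 - 4.1757e-1 * t,
        "carbohydrate": lambda t: 1.5991e3 - 3.1046e-1 * t,
        "fiber":        lambda t: 1.3115e3 - 3.6589e-1 * t,
        "water":        lambda t: 9.9718e2 + 3.1439e-3 * t - 3.7574e-3 * t**2,
    }
    return models[component](t)

def mixture_density(mass_fractions, t):
    """Mixture density assuming additive specific volumes:
    1/rho = sum(x_i / rho_i)."""
    return 1.0 / sum(x / component_density(comp, t)
                     for comp, x in mass_fractions.items())

juice = {"carbohydrate": 0.10, "water": 0.90}   # 10% sugars, 90% water
print(round(mixture_density(juice, 25)))        # on the order of 1030-1050 kg/m^3
```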
The chemical composition of the materials "A", "B" and "C" of the previous example may be known. In that case, it is possible (and very useful) to report the mass fraction of the components. Mass fraction is defined as:
Mass fraction. $$ x_A = \frac{\text{mass of the component A}}{\text{total mass}} = \frac{A}{A + B + C}$$
Determine the amount of concentrated fruit juice containing 65% solids that should be mixed with fruit juice containing 15% solids in order to give 100 kg of product with 45% solids.
The flow diagram follows:
The solution of the problem requires the definition of at least two balance equations. One is the overall mass balance; the other is the mass balance of the components.$$ \begin{cases} A + B = C \\ x_A \cdot A + x_B \cdot B = x_C \cdot C \\ \end{cases} $$
Substituting the data into the two equations gives:$$ \begin{cases} A + B = 100 \\ 0.15 \cdot A + 0.65 \cdot B = 0.45 \cdot 100 \\ \end{cases} $$
If you substitute $A = 100 - B$ in the mass balance of the components, this gives:$$ 0.15 \cdot (100 - B) + 0.65 \cdot B = 45 $$
If you simplify, you get the solution:$$ B = 60 $$
Thus:$$ A = 40 $$
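The same two-equation system can be solved directly in code; a small sketch:

```python
def blend(x_a, x_b, x_c, total):
    """Solve  A + B = total,  x_a*A + x_b*B = x_c*total  for A and B."""
    # Substitute A = total - B into the component balance:
    b = total * (x_c - x_a) / (x_b - x_a)
    return total - b, b

a, b = blend(x_a=0.15, x_b=0.65, x_c=0.45, total=100.0)
print(a, b)  # 40.0 60.0 (kg of 15% juice and 65% concentrate, respectively)
```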
The process includes a washing step. As soon as the fruit enters the plant, it is washed and then moved onto a conveyor belt, where trained operators carry out the inspection. In this step, 20 kg of water are used per 100 kg of product. During washing, about 1% of the initial mass is lost. During the following inspection phase, the amount of product lost is about 2%. Afterwards, the cleaned and selected fruit moves to the hot areas, where it is crushed and immediately heated. This heating step is called hot break. The purpose of the hot break is to stabilize the fruit by denaturing enzymes such as polyphenoloxidase and pectinase. In detail, polyphenoloxidases are responsible for the enzymatic browning which leads to the change of color of the product. Pectinases, instead, are a family of enzymes responsible for the degradation of pectin, the formation of deposits, changes in the consistency of the product, changes in colloidal stability, etc. Since pectinases are thermostable, the hot-break process needs very high temperatures (higher than 85℃). The heated product passes through one or more pulpers, each equipped with a sieve that removes peels and other undesired components of the fruit. This step leads to a product loss of about 2%. At this point, the product must be concentrated. This phase occurs in an evaporator, where the fruit product enters, loses water by boiling under vacuum, and reaches a solids concentration of about 32%. Finally, the concentrate can be sterilized and aseptically packed in bag-in-box or stainless steel tanks.
The process should also take into account the management of the waste. Waste can be recovered: typically, fruit waste can be mechanically pressed (removing about 50% of the water) and then air dried, until a dry waste with a final moisture of 8% is achieved.
The most important phases for the mass balance are the following:
During the washing and inspection steps, about 3% of the initial product is removed. During washing, fruit (P) and water (W) enter, giving clean fruit (PP) and waste water (R). During the inspection phase, the cleaned fruit enters and is separated into selected fruit (PS) and removed product (S). The corresponding flow diagram follows:
Mass balance of the washing step:$$P + W = PP + R $$
Mass balance of the inspection phase:$$PP = PS + S $$
During washing, about 1% of the initial product is removed. Also, during inspection, about 2% of the fruit is removed:$$PP = P \cdot 0.99 $$
and$$PS = PP \cdot 0.98 $$
Assume that the initial mass of fruit is 100 kg; then:$$PP = 99 \si{\kilo\gram}$$
and$$PS \approx 97 \si{\kilo\gram}$$
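The chain of losses is easy to propagate in code; a minimal sketch of the washing and inspection steps:

```python
def process_chain(p0, losses):
    """Mass after each step, given the fractional loss at each step."""
    masses = [p0]
    for loss in losses:
        masses.append(masses[-1] * (1.0 - loss))
    return masses

P, PP, PS = process_chain(100.0, [0.01, 0.02])  # washing 1%, inspection 2%
print(PP, PS)  # 99.0 97.02
```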
Now, the flow diagram can be enriched with the following information:
The flow diagram follows, with all the information related to the mass balance reported.
|
The transfinite
tower of iterated automorphisms of a group $G$ is simply defined to be the following chain of groups, where $G_{\alpha+1}=Aut(G_{\alpha})$ for each ordinal $\alpha$ and the direct limit is taken at limit stages:
$G\rightarrow Aut(G)\rightarrow Aut(Aut(G))\rightarrow\cdots\rightarrow G_{\alpha}\rightarrow G_{\alpha+1}\rightarrow\cdots$
The tower
terminates when a fixed point is reached, namely when one of the groups in the chain is isomorphic to its automorphism group via the natural map. Simon Thomas proved that the automorphism tower of every centerless group eventually terminates. Later, Hamkins completed Thomas' result by showing that the automorphism tower terminates for every group: Hamkins, Joel David, Every group has a terminating transfinite automorphism tower, Proc. Am. Math. Soc. 126, No. 11, 3223-3226 (1998). ZBL0904.20027.
Hamkins' theorem gives a sense to the natural definition of the notion of
terminating number of a group, $\tau(G)$, that is the least ordinal where the automorphism tower of $G$ terminates.
My first question is about the minimum power of $ZFC$ that is needed to carry out Thomas-Hamkins' proof:
Question 1.How much $ZFC$ is needed to prove that the automorphism tower terminates for every group, $G$, and so $\tau(G)$ is well-defined? Particularly, is $AC$ used anywhere in Hamkins or Thomas' results (which Hamkins' proof is partially based on)? If so, is this use of $AC$ essential? If yes, are the following two statements equivalent?
The automorphism tower terminates for every group.
The Axiom of Choice.
My next question is about the relation between the terminating number of the direct product of two groups and the terminating number of each component:
Question 2.What is the relation between $\tau (G\times H)$ and $\tau (G)$, $\tau(H)$? Is there an upper bound for $\tau (G\times H)$ expressible in terms of $\tau (G)$, $\tau(H)$? For instance, is it true to say $\tau (G\times H)\leq Max (\tau (G), \tau(H))$ or $\tau (G)+\tau(H)$ or $\tau (G).\tau(H)$ ...?
The "Max" bound in the above question is inspired by the fact that for finite groups, $G, H$, whose orders are relatively prime, we have $Aut(G\times H)\cong Aut(G)\times Aut(H)$. If one somehow manages to keep this pattern through the entire chain then the automorphism tower of $G\times H$ terminates after $Max (\tau (G), \tau(H))$ steps.
In particular, computing $\tau(G^n)$ (and comparing it with $\tau(G)$) could be of interest as well. For instance, in the special case that $G$ is a cyclic group of order $p$, one has $Aut(G^n)\cong GL_{n}(\mathbb{F}_p)$ and so $\tau (G^{n})=\tau (GL_{n}(\mathbb{F}_p))+1$.
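For intuition in this special case, the order of $GL_n(\mathbb{F}_p)$ is easy to compute as $\prod_{k=0}^{n-1}(p^n-p^k)$; a quick sketch:

```python
# Order of GL_n(F_p): count ordered bases of F_p^n.
# The k-th row of an invertible matrix can be any vector outside
# the span of the previous k rows, giving p^n - p^k choices.

def gl_order(n, p):
    order = 1
    for k in range(n):
        order *= p**n - p**k
    return order

print(gl_order(2, 2))  # 6, i.e. GL_2(F_2) is isomorphic to S_3
print(gl_order(2, 3))  # 48
print(gl_order(3, 2))  # 168
```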
|
I would like to solve the following mechanics problem and would like to get few hints.
There is an inclined plane with angle $\alpha$. A ball of radius $r$ rolls on the inclined plane that has a spring connected to it at the bottom (first figure). The spring constant is $k_{1}$. Let the initial angular velocity of the ball be $\omega_{\circ}$. When the ball rolls down the inclined plane, it comes in contact with the spring (second figure) and it rebounds up the inclined plane (third figure). After it rebounds, it again rolls up the plane until the angular velocity is reduced to zero.
It starts rolling down the inclined plane again and gets rebounded by the spring up the inclined plane. This continues until the ball comes to rest in contact with the spring (last figure).
Problem: I would like to obtain the equation of motion for the rolling ball. After how many rebounds, let us say $n$, will the ball come to rest? Can this problem be solved using the Lagrangian formulation?
P.S. It may be possible that, if the mass of the rolling ball exceeds a certain limit, it comes to rest with the spring compressed.
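As a starting hint (not the full solution): away from the spring, rolling without slipping gives the ball a constant acceleration $a = g\sin\alpha/(1 + I/mr^2) = \tfrac{5}{7}g\sin\alpha$ for a uniform solid sphere. A minimal numeric check:

```python
import math

# Acceleration of a uniform solid sphere (I = 2/5 m r^2) rolling
# without slipping down an incline of angle alpha (no spring contact).
def rolling_acceleration(alpha_rad, g=9.8):
    I_over_mr2 = 2.0 / 5.0  # moment-of-inertia factor for a solid sphere
    return g * math.sin(alpha_rad) / (1.0 + I_over_mr2)

a = rolling_acceleration(math.radians(30))
print(a)  # ~3.5 m/s^2, i.e. (5/7) * 9.8 * sin(30 deg)
```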
|
The probability that a prime p does not divide a random integer n is (1-1/p), so for random n we could argue that the probability that n and φ(n) are coprime is
$\prod_{p|n} \left(1-1/p \right) = \phi(n)/n.$
The average order of φ(n)/n is given by
${ 1 \over N } \sum_{n=1}^N {\phi(n) / n} = 6/\pi^2 + O(\log N/N).$
Now the probability that a random integer is squarefree is $6/\pi^2$.
So my question is: does gcd(n,φ(n))=1 for almost all squarefree n? Or to put it another way, for random squarefree n is the probability that n and φ(n) are coprime one? (Of course we have gcd(55,φ(55))=5, etc.)
I have not been able to find anything about this on the internet and so would like to know if this has been considered before. Thanks.
EDIT: Take integer N and let f(N) = number of squarefree n<=N such that gcd(n,φ(n))>1 (e.g. 21 or 55). Does f(N)/q(N) tend to zero as N tends to infinity, where q(N) is the number of squarefree numbers <= N?
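A quick empirical check of the ratio $f(N)/q(N)$ for small $N$ (a sketch only; it says nothing about the limit, of course):

```python
from math import gcd

def totient(n):
    """Euler's phi via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result -= result // m
    return result

def squarefree(n):
    p = 2
    while p * p <= n:
        if n % (p * p) == 0:
            return False
        p += 1
    return True

N = 10000
sf = [n for n in range(2, N + 1) if squarefree(n)]
f = sum(1 for n in sf if gcd(n, totient(n)) > 1)
print(f / len(sf))  # empirical ratio f(N)/q(N)
```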
|
A remark on local well-posedness for nonlinear Schrödinger equations with power nonlinearity-an alternative approach
Department of Mathematics, Shimane University, Matsue 690-8504, Japan
We consider the nonlinear Schrödinger equation $\partial_t u +i \Delta u = i\lambda |u|^{p-1} u$ in $\mathit{\boldsymbol{R}}^{1+n}$, where $n\ge 3$, $p>1$, and $\lambda \in \mathit{\boldsymbol{C}}$, with initial data in $H^s$ for $1<s<\min\{4;n/2\}$ and $\max\{1;s/2\}< p< 1+4/(n-2s)$.
Mathematics Subject Classification: 35Q55, 35Q41.
Citation: Takeshi Wada. A remark on local well-posedness for nonlinear Schrödinger equations with power nonlinearity-an alternative approach. Communications on Pure & Applied Analysis, 2019, 18 (3): 1359-1374. doi: 10.3934/cpaa.2019066
|
I would like to solve a simple 2nd-order ODE with one of the boundary conditions defined at $ -\infty $. The ODE I am looking to solve is:
$$ w''(z)-2i\pi^2w(z)=0 $$
with the corresponding boundary conditions:
$$ w(z=-\infty)=0, \; w'(z=0)=0+i\dfrac{\tau_{0}}{\mu}. $$
My attempt at a solution using
DSolve is as follows:
DSolve[{-2 I \[Pi]^2 w[z] + (w^\[Prime]\[Prime])[z] == 0, w[-Infinity] == 0, w'[0] == 0 + I Subscript[\[Tau], 0]/\[Mu]}, w[z],z]
but I only get an empty set of curly brackets as output. I checked the rest of my snippet of code without the
w[-Infinity]==0 boundary condition, and that works as expected; therefore, I know that this is a problem with the boundary condition at $z=-\infty$. I am looking for methods with which I can solve simple ODE's with boundary conditions at infinity, and any help would be greatly appreciated.
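One workaround (a sketch, independent of Mathematica): solve the ODE by hand and impose the decay condition directly. Writing $w=Ce^{az}$ gives $a^2=2i\pi^2$, so $a=\pi(1+i)$ is the root with positive real part, and $e^{az}\to 0$ as $z\to-\infty$; the constant then follows from $w'(0)=i\tau_0/\mu$. A numeric check with assumed values $\tau_0=\mu=1$:

```python
import cmath

PI = cmath.pi
a = PI * (1 + 1j)        # root of a^2 = 2*i*pi^2 with Re(a) > 0

tau0, mu = 1.0, 1.0      # assumed parameter values for the check
C = (1j * tau0 / mu) / a  # from w'(0) = C*a = i*tau0/mu

def w(z):
    return C * cmath.exp(a * z)

# Check the ODE w'' - 2*i*pi^2*w = 0 (using w'' = a^2 w)
# and the decay condition as z -> -infinity.
print(abs(a**2 * w(0.3) - 2j * PI**2 * w(0.3)))  # ~0
print(abs(w(-30)))                                # ~0 (decays)
```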
|
I'm given a continuous-time analog signal $x_a(t) = \cos(2\pi f_1t)+\sin(2\pi f_2t)$, for some frequency $f_1, f_2$. I'm asked to sample $x_a(t)$ at $F_s=1024\textrm{ Hz}$, apply a 128-point Hamming Window, and take the DFT(Discrete Fourier Transform) of the resulting windowed, sampled signal.
The expression for a sampled signal is:
$$x[n] = x_a\left(\dfrac{n}{F_s}\right)$$
To apply the Hamming window, we multiply $x[n]$ element-wise by the 128-point window function (courtesy of MATLAB):
$$x_w[n]=x[n].*\mathrm{hamming}[128]$$
We then take the DFT of the windowed function $x_w[n]$:
$$X_w[k] = \mathrm{DFT}\left\{x_w[n]\right\}$$
I've read that the DTFT(Discrete-time Fourier transform) is a continuous spectrum of $x[n]$ and that the DFT of $x[n]$ (in a nutshell) is a sampling of that spectrum. When we take the $N$-point DFT of $x[n]$, we are taking $N$ samples of the DTFT of $x[n]$ where frequencies are unique (i.e., we sample within $2\pi$). That is:
\begin{align} \mathrm{DTFT}\left\{x[n]\right\} &= X(\omega)\\ \mathrm{DFT}\left\{x[n]\right\} &= X\left(\omega = \frac{2\pi k}{N}\right) = X[k],\quad \textrm{ for } k=0,1,...,N-1. \end{align}
My questions:
If we plot $X[k]$, we are still plotting the sample of the spectrum against frequency, right? Is the frequency, given $k$, denoted by $2\pi k/N $ radians?
My professor asked me to plot the FFT versus $k/F_s$ where $k=0,\ldots,127$. What does $k/F_s$ represent?
A homework problem asks (given application of the 128-point Hamming window, above) to find the "continuous-time frequency spacing $\Delta F$ between DFT samples"? What does this mean? Isn't this just $2\pi/N$?
My professor told me that $2\pi/N$ is the discrete-time frequency spacing in radians and that I have to multiply that by $Fs/(2 \pi)$. I'm not sure how to make sense of $Fs/N$.
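To make the spacing concrete (a sketch with an assumed tone $f_1=64$ Hz): with $F_s=1024$ Hz and $N=128$, the bin spacing is $\Delta F = F_s/N = 8$ Hz, so a 64 Hz tone lands exactly on bin $k=8$:

```python
import cmath, math

Fs, N, f1 = 1024, 128, 64  # sample rate, DFT length, assumed tone frequency
x = [math.cos(2 * math.pi * f1 * n / Fs) for n in range(N)]

# 128-point Hamming window
w = [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]
xw = [x[n] * w[n] for n in range(N)]

def dft(sig):
    Nn = len(sig)
    return [sum(sig[n] * cmath.exp(-2j * math.pi * k * n / Nn)
                for n in range(Nn)) for k in range(Nn)]

X = dft(xw)
dF = Fs / N  # continuous-time frequency spacing between DFT samples, in Hz
peak = max(range(N // 2), key=lambda k: abs(X[k]))
print(dF, peak, peak * dF)  # 8.0 8 64.0
```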
|
Abstract We investigate the existence of the least and greatest solutions to measure differential equations, as well as the relation between the extremal solutions and lower or upper solutions. Along…
This paper deals with integral equations of the form \begin{eqnarray*} x(t)=\tilde{x}+\int_a^t d[A]\,x+f(t)-f(a), \quad t\in[a,b], \end{eqnarray*} in a Banach space $X$, where $-\infty < a < b < \infty$,…
This contribution deals with systems of generalized linear differential equations of the form
Abstract Piezoelectricity of some materials has been shown to have many applications, in particular in energy harvesting. Due to the inherent hysteresis in the characteristic of such materials, a number…
We use the theory of generalized linear ordinary differential equations in Banach spaces to study linear measure functional differential equations with infinite delay. We obtain new results…
Our objective here is to prove that the uniform convergence of a sequence of Kurzweil integrable functions implies the convergence of the sequence formed by its corresponding integrals.
We use lower and upper solutions to investigate the existence of the greatest and the least solutions for quasimonotone systems of measure differential equations. The established results are then…
It is well known that functions of bounded variation, BV[a, b], and continuous functions, C[a, b], define two classes of functions which are adjoint with respect to the Riemann–Stieltjes integral.…
The concept of bounded variation has been generalized in many ways. In the frame of functions taking values in a Banach space, the concept of bounded semivariation is a very important generalization.…
|
The
Menger sponge is a delightfully pathological shape to soak up. Start with a cube of side-length 1, and “hollow it out” by slicing it into 27 sub-cubes (like a Rubik’s Cube) and removing the 7 central sub-cubes as illustrated below. Call this the level 1 sponge. To make the 2nd level, perform the same “hollowing out” operation on each of the 20 cubes in level 1. Keep going to make levels 3, 4, and so on. The end result, i.e., level \(\infty\), is a fractal known as the Menger Sponge. It has a crazy [1] network of tunnels and passageways running through it:
What is its volume? We made the Menger Sponge by cutting out smaller and smaller chunks from the original unit cube, so how much volume did we leave behind? Each “hollowing out” step reduces the volume by a factor of 20/27, so level
k has volume \((20/27)^k\) and the final Menger Sponge has volume 0. We removed all the volume!
Since the Menger Sponge doesn’t have enough “3D stuff” to be considered a truly 3-dimensional shape, maybe it behaves more like a 2-dimensional surface? This isn’t quite right either. The Menger Sponge has “too much 2D stuff”: its surface area is infinite.
[2] In fact, this shape falls squarely (hah!) in the middle: it has dimension 2.7268….
Wait, what does it mean to have non-integer dimension? One interpretation, formalized in the notion of
Hausdorff dimension, is to look at how the “stuff” it is made from behaves under scaling. If we take a square (which is made of “2-dimensional stuff”) and scale it by a factor of 1/3, its area changes by a factor of \((1/3)^2\). Similarly, scaling a cube (“3D stuff”) by 1/3 scales its volume by \((1/3)^3\). In general, scaling “ d-dimensional stuff” by a factor of 1/3 scales its “ d-dimensional volume” by \((1/3)^d\).
As shown in this image, a Menger Sponge can be chopped into 20 smaller Menger Sponges, each scaled by 1/3. So if the Menger Sponge is made of
m-dimensional stuff, its m-dimensional volume scales by 1/20 when the shape is scaled by 1/3, so we should have \((1/3)^m=1/20\), i.e., \(m=\log_3(20) = 2.7268\ldots\).
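The scaling argument can be checked in a couple of lines (the volume computation from earlier is included for good measure):

```python
import math

# Hausdorff (similarity) dimension: 20 copies, each scaled by 1/3,
# so (1/3)^m = 1/20 gives m = log_3(20).
m = math.log(20) / math.log(3)
print(m)  # 2.7268...

# Volume of the level-k sponge: each hollowing-out keeps 20/27 of it.
for k in (1, 5, 20):
    print(k, (20 / 27) ** k)  # tends to 0 as k grows

# The hexagonal cross-section's dimension, quoted below:
hex_dim = math.log((9 + math.sqrt(33)) / 2) / math.log(3)
print(hex_dim)  # 1.8184...
```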
As one more example of the quirkiness that can be squeezed out of the Menger Sponge, what happens when we slice this shape along one of its primary diagonal planes
[3]? The Menger Sponge’s crazy network of tunnels creates a stellar array of 6-pointed stars cut out of a regular hexagon:
What is the Hausdorff dimension of this hexagonal fractal? It turns out to be $$\log_3\left(\tfrac{9+\sqrt{33}}{2}\right) = 1.8184\ldots.$$ To find out where this ridiculous number comes from, you’ll have to come back next week!
|
[This is the 6th post in the current series about Wythoff’s game: see posts #1, #2, #3, #4, and #5.
Caveat lector: this post is a bit more difficult than usual. Let me know what you think in the comments!]
Our only remaining task from last week was to prove the mysterious Covering Theorem: we must show that there is exactly one dot in each row and column of the grid (we already covered the diagonal case). Since the rows and columns are symmetric, let’s focus on columns.
The columns really only care about the
x-coordinates of the points, so let’s draw just these x-coordinates on the number-line. We’ve drawn \(\phi,2\phi,3\phi,\ldots\) with small dots and \(\phi^2,2\phi^2,3\phi^2,\ldots\) with large dots. We need to show that there’s exactly one dot between 1 and 2, precisely one dot between 2 and 3, just one between 3 and 4, and so on down the line. For terminology’s sake, break the number line into length-1 intervals [1,2], [2,3], [3,4], etc., so we must show that each interval has one and only one dot:
Why is this true? One explanation hinges on a nice geometric observation: Take any small dot
s and large dot t on our number-line above, and cut segment st into two parts in the ratio \(1:\phi\) (with s on the shorter side). Then the point where we cut is always an integer! For example, the upper-left segment in the diagram below has endpoints at \(s=2\cdot\phi\) and \(t=1\cdot\phi^2\), and its cutting point is the integer 3:
In general, if
s is the jth small dot—i.e., \(s=j\cdot\phi\)—and \(t=k\cdot\phi^2\) is the kth large dot, then the cutting point between s and t is \(\frac{1}{\phi}\cdot s+\frac{1}{\phi^2}\cdot t = j+k\) (Why?! [1]). But more importantly, this observation shows that no interval has two or more dots: a small dot and a large dot can’t be in the same interval because they always have an integer between them! [2]
So all we have to do now is prove that no interval is
empty: for each integer n, some dot lies in the interval [ n, n+1]. We will prove this by contradiction. What happens if no dot hits this interval? Then the sequence \(\phi,2\phi,3\phi,\ldots\) jumps over the interval, i.e., for some j, the jth dot in the sequence is less than n but the ( j+1)st is greater than n+1. Likewise, the sequence \(\phi^2,2\phi^2,3\phi^2,\ldots\) jumps over the interval: its kth dot is less than n while its ( k+1)st dot is greater than n+1:
By our observation above on segment \(s=j\phi\) and \(t=k\phi^2\), we find that the integer
j+ k is less than n, so \(j+k\le n-1\). Similarly, \(j+k+2 > n+1\), so \(j+k+2 \ge n+2\). But together these inequalities say that \(n\le j+k\le n-1\), which is clearly absurd! This is the contradiction we were hoping for, so the interval [ n, n+1] is in fact not empty. This completes our proof of the Covering Theorem and the Wythoff formula!
It was a long journey, but we’ve finally seen exactly why the Wythoff losing positions are arranged as they are. Thank you for following me through this!
A Few Words on the Column Covering Theorem
Using the
floor function \(\lfloor x\rfloor\) that rounds x down to the nearest integer, we can restate the Column Covering Theorem in perhaps a more natural context. The sequence of integers $$\lfloor\phi\rfloor = 1, \lfloor 2\phi\rfloor = 3, \lfloor 3\phi\rfloor = 4, \lfloor 4\phi\rfloor = 6, \ldots$$ is called the Beatty sequence for the number \(\phi\), and similarly, $$\lfloor\phi^2\rfloor = 2, \lfloor 2\phi^2\rfloor = 5, \lfloor 3\phi^2\rfloor = 7, \lfloor 4\phi^2\rfloor = 8,\ldots$$ is the Beatty sequence for \(\phi^2\). Today we proved that these two sequences are complementary, i.e., together they contain each positive integer exactly once. We seemed to use very specific properties of the numbers \(\phi\) and \(\phi^2\), but in fact, a much more general theorem is true: Beatty’s Theorem: If \(\alpha\) and \(\beta\) are any positive irrational numbers with \(\frac{1}{\alpha}+\frac{1}{\beta}=1\), then their Beatty sequences \(\lfloor\alpha\rfloor, \lfloor 2\alpha\rfloor, \lfloor 3\alpha\rfloor,\ldots\) and \(\lfloor\beta\rfloor, \lfloor 2\beta\rfloor, \lfloor 3\beta\rfloor,\ldots\) are complementary sequences.
Furthermore, our same argument—using \(\alpha\) and \(\beta\) instead of \(\phi\) and \(\phi^2\)—can be used to prove the more general Beatty’s Theorem!
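Beatty’s Theorem is easy to test empirically; here is a quick check that the two Beatty sequences for \(\phi\) and \(\phi^2\) tile the positive integers:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # note 1/phi + 1/phi**2 = 1

N = 10000
A = {math.floor(k * phi) for k in range(1, N + 1)}
B = {math.floor(k * phi**2) for k in range(1, N + 1)}

# Within 1..N, every integer appears in exactly one of the two sequences.
assert A.isdisjoint(B)
assert all(n in A or n in B for n in range(1, N + 1))
print("complementary up to", N)
```

(Floating point is safe here: for $k\le 10^4$, $k\phi$ never comes close enough to an integer for rounding error to flip the floor.)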
|
I'm trying to study about modelling star from a journal. To get the characteristics of the star, I have to solve 4 differential equations which are coupled each other. I want to use Mathematica 9 to solve this system of equations but I still can't do this simply. This is the system of equations:
$\lambda'=\frac{1-e^{\lambda}}{r}+8\pi Gre^{\lambda}\left([m^2 +e^{-v}(\omega +qA)^2]\phi^2+ \frac{e^{-v-\lambda}{A'}^2}{2}+{\phi'}^2e^{-\lambda}\right)$
$v'=\frac{-1+e^{\lambda}}{r}+8\pi Gre^{\lambda}\left([-m^2 +e^{-v}(\omega +qA)^2]\phi^2 - \frac{e^{-v-\lambda}{A'}^2}{2}+{\phi'}^2e^{-\lambda}\right) $
$A''+\left(\frac{2}{r}-\frac{v' + \lambda '}{2}\right)A'-2qe^{\lambda}\phi^2(\omega + qA)=0$
$\phi'' + \left(\frac{2}{r}+\frac{v' - \lambda '}{2}\right)\phi'+e^{\lambda}[(\omega+qA)^2 e^{-v}-m^2]\phi=0$
The boundary conditions are:
$\phi(\infty)=0,\ \phi'(\infty)=0,\ \phi(0)=\text{constant},\ \phi'(0)=0$
$A(\infty)=0,\ A'(\infty)=0,\ A(0)=\text{constant},\ A'(0)=0$
$v(\infty)=0,\ \lambda(0)=0$
Can someone help me? The journal just says to use Mathematica and the Runge–Kutta method.
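A standard approach for boundary conditions at infinity is the shooting method: integrate outward from $r=0$ with a guessed central value and adjust the guess until the solution decays at a large cutoff radius. Below is a deliberately simplified sketch of the technique on the toy problem $u''=u$, $u(0)=1$, $u(\infty)=0$ (whose decaying solution is $e^{-r}$, i.e. $u'(0)=-1$); the same idea, with RK4 replaced by NDSolve and bisection over the central field values $\phi(0)$, $A(0)$, applies to the full system:

```python
def rk4(f, y0, r0, r1, steps=2000):
    """Integrate y' = f(r, y) for vector y with classical RK4."""
    h = (r1 - r0) / steps
    r, y = r0, list(y0)
    for _ in range(steps):
        k1 = f(r, y)
        k2 = f(r + h/2, [y[i] + h/2 * k1[i] for i in range(len(y))])
        k3 = f(r + h/2, [y[i] + h/2 * k2[i] for i in range(len(y))])
        k4 = f(r + h,   [y[i] + h   * k3[i] for i in range(len(y))])
        y = [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
             for i in range(len(y))]
        r += h
    return y

# Toy problem u'' = u, written as the first-order system y = (u, u').
f = lambda r, y: [y[1], y[0]]

def endpoint(slope, R=10.0):
    """u(R) for initial data u(0)=1, u'(0)=slope."""
    return rk4(f, [1.0, slope], 0.0, R)[0]

# Bisect on the initial slope so that u decays at the cutoff R.
lo, hi = -2.0, 0.0  # endpoint(lo) < 0 < endpoint(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if endpoint(mid) > 0:
        hi = mid
    else:
        lo = mid
print(lo)  # close to -1, the exact decaying slope
```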
|
What is the exact length of \stackrel{\LARGE{\frown}}{AB} on circle P?
Step 1: Find the circumference of the circle
NOTE: Circumference of the circle = 2 \pi r
EXAMPLE: Circumference = 2 \pi r
= 2 \pi(15)
= 30 \pi
Step 2: Identify the type of the triangle
NOTE: Sides PA and PB are congruent because they are radii of the same circle.
So, \triangle APB is isosceles, and the
angles opposite the congruent sides are congruent to each other.
EXAMPLE: \angle A = \angle B
Step 3: Use the fact that the interior angles of a triangle add up to 180\degree to find the unknown angle.
\angle A + \angle B + \angle P = 180\degree
38\degree + 38\degree + \angle P = 180\degree
\angle P = 104\degree
Step 4: Calculate the arc length.
Arc length:
Step 1: Note down the given values
Step 2: Set up the formula for arc length.
NOTE: The formula is arc length= 2 \pi (r)(\frac{\theta }{360}) ,
where {\displaystyle r} equals the radius of the circle and {\displaystyle \theta } equals the measurement of the arc’s central angle, in degrees.
or, when \theta is measured in radians,
Arc length = r \theta
Step 3: Plug the length of the circle’s radius into the formula.
Step 4: Plug the value of the arc’s central angle into the formula.
Step 5: Simplify the equation to find the arc length
NOTE: Use multiplication and division to simplify the equation.
Area of a sector of the circle.
Step 1: Identify the known or given information.
Step 2: Set up a formula for the sector area
NOTE: A ratio will need to be constructed. Recall that a circle is composed
of 360 degrees. Therefore, the following ratio can be made,
\frac{\theta}{360} = \frac{\text{sector area} (A_C)}{\text{Total area} (A_T)}
Sector area = \frac{\theta}{360} * \pi r^2 (since area = \pi r^2)
where, \theta = Central angle.
Step 3: Plug the sector’s central angle measurement into the formula.
Step 4: Plug the sector’s radius measurement into the formula.
Step 5: Solve for the area:
EXAMPLE: Sector area = \frac{\theta}{360} * \pi r^2 (since area = \pi r^2)
Sector area = \frac{60}{360}*\pi\left(5\right)^2
Sector area \approx 13.09 cm^2
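Putting the example numbers together (r = 15 from Step 1, central angle 104\degree from Step 3), a quick computation of the arc length:

```python
import math

def arc_length(r, theta_deg):
    """Arc length for a central angle given in degrees."""
    return 2 * math.pi * r * (theta_deg / 360)

L = arc_length(15, 104)
print(L)  # 26*pi/3, approximately 27.23
```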
|
Nagoya Math. J. Volume 185 (2007), 143-150. Some estimates for the Bergman kernel and metric in terms of logarithmic capacity. Abstract
For a bounded domain $\Omega$ on the plane we show the inequality $c_{\Omega}(z)^{2} \leq 2\pi K_{\Omega}(z)$, $z \in \Omega$, where $c_{\Omega}(z)$ is the logarithmic capacity of the complement $\mathbb{C} \setminus \Omega$ with respect to $z$ and $K_{\Omega}$ is the Bergman kernel. We thus improve a constant in an estimate due to T. Ohsawa but fall short of the inequality $c_{\Omega}(z)^{2} \leq \pi K_{\Omega}(z)$ conjectured by N. Suita. The main tool we use is a comparison, due to B. Berndtsson, of the kernels for the weighted complex Laplacian and the Green function. We also show a similar estimate for the Bergman metric and analogous results in several variables.
Article information Source Nagoya Math. J., Volume 185 (2007), 143-150. Dates First available in Project Euclid: 23 March 2007 Permanent link to this document https://projecteuclid.org/euclid.nmj/1174668918 Mathematical Reviews number (MathSciNet) MR2301462 Zentralblatt MATH identifier 1127.30006 Citation
Błocki, Zbigniew. Some estimates for the Bergman kernel and metric in terms of logarithmic capacity. Nagoya Math. J. 185 (2007), 143--150. https://projecteuclid.org/euclid.nmj/1174668918
|
Is there any way to tell whether a filter is high pass or low pass by observing only its time-domain samples or coefficients?
To elaborate a bit on Fat32's answer: the most straightforward thing to do is to compute (or estimate) the following two sums:
$$H(e^{j 0})=\sum_nh[n]\tag{1}$$
and
$$H(e^{j \pi})=\sum_n(-1)^nh[n]\tag{2}$$
where $(1)$ is the value of the frequency response at DC (i.e., $\omega=0$), and $(2)$ is the value of the frequency response at Nyquist (i.e., at $\omega=\pi$).
A low pass filter should have a relatively large value for $(1)$ and a very small value (ideally zero) for $(2)$. For a high pass filter the opposite is the case. If both values are small (and if $h[n]$ is not zero) then it's probably a band pass filter, and if both values are relatively large, it's probably a band stop filter. This of course only applies if you can assume that the filter approximates some standard frequency selective filter characteristic.
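As a sketch, here are the two sums evaluated for a 4-tap moving average (lowpass) and a simple first difference (highpass); the coefficients are illustrative, not taken from the question:

```python
def dc_gain(h):
    """H(e^{j0}): frequency response at DC, the plain sum of taps."""
    return sum(h)

def nyquist_gain(h):
    """H(e^{j*pi}): frequency response at Nyquist, the alternating sum."""
    return sum(((-1) ** n) * hn for n, hn in enumerate(h))

h_lowpass = [0.25, 0.25, 0.25, 0.25]  # moving average
h_highpass = [0.5, -0.5]              # first difference

print(dc_gain(h_lowpass), nyquist_gain(h_lowpass))    # 1.0 0.0
print(dc_gain(h_highpass), nyquist_gain(h_highpass))  # 0.0 1.0
```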
Yes. For example, if the sum of the filter coefficients is (close to) zero, then the filter will not pass any DC signal, hence it cannot be a low-pass filter. Such a filter will then be a highpass filter (it could also be band-pass, but I assume you only need a lowpass-or-highpass decision).
|
I am trying to simplify the following integral but getting no answer. Any help in how to get the resulting function as a function of t would be much appreciated?
t - Integrate[Abs[t - s]^(-1/2)*s, {s, 0, 1}]
Mathematica Stack Exchange is a question and answer site for users of Wolfram Mathematica. It only takes a minute to sign up.Sign up to join this community
You just need to give
Integrate an assumption:
t - Integrate[Abs[t-s]^(-1/2)*s,{s,0,1}, Assumptions->t ∈ Reals] //TeXForm
$$t-\begin{cases} \frac{2}{3} \left(2 (-t)^{3/2}+2 \sqrt{1-t}\, t+\sqrt{1-t}\right) & t\leq 0 \\ -\frac{2}{3} \left(-2 t^{3/2}+2 \sqrt{t-1}\, t+\sqrt{t-1}\right) & t\geq 1 \\ \frac{2}{3} \left(2 t^{3/2}+2 \sqrt{1-t}\, t+\sqrt{1-t}\right) & \text{otherwise} \end{cases}$$
|
A trajectory is the path taken by a moving object through space as a function of time. Mathematically, a trajectory is described as the position of an object at a given time. A simple example would be a ball or rock thrown upwards; the path taken by the stone is determined by gravitational forces and air resistance.
Some more common examples of trajectory motion would be a bullet fired from a gun, an athlete throwing a javelin, a satellite orbiting the Earth, etc.
Trajectory formula is given by
\[\large y=x\:tan\,\theta-\frac{gx^{2}}{2v^{2}\,cos^{2}\,\theta}\]
Where,
y is the vertical component, x is the horizontal component, g = acceleration due to gravity, v = initial velocity, $\theta$ = angle of inclination of the initial velocity from the horizontal axis,
Trajectory related equations are:
\[\large Time\;of\;Flight: t=\frac{2v_{0}\,sin\,\theta}{g}\]
\[\large Maximum\;height\;reached: H=\frac{v_{0}^{2}\,sin^{2}\,\theta}{2g}\]
\[\large Horizontal\;Range: R=\frac{V_{0}^{2}\,sin\,2\,\theta}{g}\]
Where,
$V_0$ is the initial velocity, sin $\theta$ gives the vertical (y-axis) component, cos $\theta$ gives the horizontal (x-axis) component. Solved example Question: Marshall throws a ball at an angle of $60^{\circ}$. If it leaves his hand at 6 m/s and Steve catches it after 4 s, calculate the vertical distance covered by it. Solution:
Given,
$\theta = 60^{\circ}$ $Initial\;velocity=v_{0} = 6m/sec$ time = 4 sec
The horizontal distance is given by:
$x=v_{0}\,cos\,\theta \cdot t=(6\;m/s)(\cos 60^{\circ})(4\;s)=12\;m$
The vertical distance then follows from the trajectory formula:
$y=x\,tan\,\theta -\frac{gx^{2}}{2v_{0}^{2}\,cos^{2}\,\theta}$
$=12\,m\;\tan 60^{\circ}-\frac{(9.8\,m/s^{2})(12\,m)^{2}}{2(6\,m/s)^{2}\,cos^{2}\,60^{\circ}}$
$\approx 20.8\,m-78.4\,m$
$\approx -57.6\,m$ (the negative sign means the catch point is about 57.6 m below the launch point).
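A quick sanity check of the trajectory formula (with assumed values $v_0 = 20$ m/s, $\theta = 45^{\circ}$): at the horizontal range $R = v_0^2\,\sin 2\theta/g$ the height should return to zero.

```python
import math

def trajectory_y(x, v0, theta, g=9.8):
    """Height y at horizontal distance x, for launch speed v0 and angle theta (radians)."""
    return x * math.tan(theta) - g * x**2 / (2 * v0**2 * math.cos(theta)**2)

v0, theta = 20.0, math.radians(45)
R = v0**2 * math.sin(2 * theta) / 9.8  # horizontal range, ~40.8 m
print(R, trajectory_y(R, v0, theta))    # height returns to ~0 at the range
```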
|
I've seen this question from 2 years ago:
Given $F$ is a $PRF$, we define $G$ for an input $x\in\{0,1\}^n$ as follows:
$$G(x) = F_k(x) \oplus F_k(x \oplus 1^n)$$
The question was whether $G$ is a $PRG$. I edited the question a bit to fit the answers given back then. The answers stated this isn't a $PRG$ because $$G(x\oplus 1^n)=F_k(x\oplus 1^n)\oplus F_k(x\oplus 1^n \oplus 1^n)=F_k(x\oplus 1^n)\oplus F_k(x)=G(x).$$ Now, since $x$, the seed, must be random and the adversary cannot affect the seed in any way, why wouldn't this be a $PRG$?
For a uniformly selected random $x$, shouldn't the output of $F_k$ on inputs $x$ and $x\oplus 1^n$ be pseudorandom, and thus $F_k(x) \oplus F_k(x \oplus 1^n)$ also pseudorandom?
Reply
I believe you misunderstood the linked question and probably the definition of PRG.
PRG maps a key $K$ (also called seed) of bit length $l$ into a bit sequence $x\in\{0,1\}^n$ of bit length $n$. PRG is secure if the generated bit sequence $x$ is computationally indistinguishable from truly random.
What the answer to the linked question has shown is that the given PRG's output is easily distinguishable from random because two bits in fixed positions of the PRG's output are equal for any key $K$.
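To see the distinguishing symmetry concretely, here is a toy demonstration using HMAC-SHA256 as a stand-in PRF (an assumption for illustration; any PRF exhibits the same symmetry under this construction):

```python
import hashlib, hmac, os

n = 16  # block size in bytes; 1^n becomes the all-ones mask below

def F(k, x):
    """Toy PRF: HMAC-SHA256 truncated to n bytes (illustrative stand-in)."""
    return hmac.new(k, x, hashlib.sha256).digest()[:n]

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def G(k, x):
    ones = b"\xff" * n  # x XOR 1^n flips every bit of x
    return xor(F(k, x), F(k, xor(x, ones)))

k = os.urandom(32)
x = os.urandom(n)
ones = b"\xff" * n
# The distinguisher's observation: G(x) == G(x XOR 1^n) for every x and key.
print(G(k, x) == G(k, xor(x, ones)))  # True
```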
PS: Probably the questions like the linked one implicitly assume the following construction that creates PRG from PRF:
Suppose we have a PRF $F:K\times D \to R$ which map keys $K=\{0,1\}^l$ and domain $D=\{0,1\}^n$ into range $R=\{0,1\}^m$. Now, we can build a PRG $G: K\to \{0,1\}^{n+m}$:
$$G(k)[x]=F(k,x)$$
Note that $x$ is an argument of $F$ but not of $G$; on the LHS $x\in\{0,1\}^n$ denotes a position of a bit sequence of length $m$ in PRG output.
In the simplest case $m=1$, that is PRF $F$ outputs just one bit: $R=\{0,1\}$, and $x\in\{0,1\}^n$ denotes bit position in $G$ output.
|
Session 34 - Real Instruments.
Display session, Tuesday, June 09
Atlas Ballroom,
The design of a long slit dual order spectrograph (DOS), being built for flight on a NASA sounding rocket by Johns Hopkins University (JHU) in the late fall -- early winter of 1999, is presented. The spectrograph is intended to survey nebular regions on intermediate angular scales in the far UV between 900 -- 1650 Å for atomic and molecular emissions, and to study the extinction and scattering of FUV radiation by dust. It has an instantaneous field of view of \approx 4'' \times 10'', a slit limited spectral resolution of 7.7 Å, and point source resolutions of 3'' to 5'' spatial and 3 to 3.5 Å spectral.
The DOS uses a concave holographic grating with a toroidal figure in a normal incidence Rowland mount (\alpha = 0^\circ), which produces symmetric efficiency and astigmatism in the positive and negative 1st orders with low scatter. Laminar (rectangular) groove profiles provide a theoretical peak groove efficiency of 41% in each order, for a combined groove efficiency of 82%, competitive with the 100% theoretical peak groove efficiency of a blazed grating. The use of both orders provides valuable redundancy in a mission critical component, important in space flight applications. Our design incorporates complementary UV sensitive MCP and CCD detectors at each order to provide high dynamic range.
The DOS design is effective for imaging spectroscopy in the photon starved far UV, utilizing slow f/ratios, where low scatter, high efficiency, moderate spectral, and high spatial resolution are desired. The sounding rocket DOS is the basis of an instrument that JHU recently proposed in answer to the NASA Announcement of Opportunity for University Explorers, which we call the Nebular Explorer, (NebEx).
|
I just asked this question concerning the application of Noether's theorem. Thinking about this got me wondering about the following. In the usual derivation of the Noether current the assumption is made that:
$$\mathcal{L}(\phi'(x'),\partial_\mu'\phi'(x'),x')=\mathcal{L}(\phi(x),\partial_\mu\phi(x),x)+\delta x^\mu\partial_\mu\mathcal{L}(\phi(x),\partial_\mu\phi(x),x).\tag{1}$$ This is usually shown by considering the Lagrangian to be a function of $x$ only then, the statement that:
$$\mathcal{L}(x')=\mathcal{L}(x)+\delta x^\mu\partial_\mu\mathcal{L}(x)\tag{2}$$
does indeed hold true by a trivial Taylor expansion. But as far as I can tell, this derivation makes the assumption that: $$\phi'(x')=\phi(x').\tag{3}$$ I have seen (1) used in cases where this does not hold. So can someone please explain why (1) holds for a general mapping $\phi(x) \mapsto \phi'(x')$?
|
Archimedes' constant π

Definiendum: $\pi:=V_2(1)$
The constant $\pi$ is defined as the volume of the disc of radius $1$, where the underlying metric space is taken to be the two dimensional Euclidean space $\mathbb E^2$.
$\pi = 3.14159\dots\approx \frac{22}{7}$
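A minimal numerical sketch of this definition: estimating the 2-dimensional volume $V_2(1)$ of the unit disc in $\mathbb E^2$ by Monte Carlo sampling (the sampling method and sample count are my choices, not part of the definition):

```python
import math
import random

def disc_volume_mc(radius=1.0, n=200_000, seed=0):
    """Estimate the area (2-D volume) of a disc of the given radius by
    uniformly sampling the bounding square [-r, r]^2 and counting hits."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            inside += 1
    # fraction of hits times the area of the bounding square
    return inside / n * (2 * radius) ** 2

estimate = disc_volume_mc()
print(estimate)  # close to math.pi = 3.14159...
```

With 200,000 samples the statistical error is of order $10^{-3}$, enough to recognize $3.14159\dots$ to a couple of digits.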
|
This is actually a very tricky question, mathematically. Physicists may consider it trivial, but it took me an hour in a math summer school to explain the notion of a gapped Hamiltonian.
To see why it is tricky, consider the following chain of statements. Any physical system has a finite number of degrees of freedom (assuming the universe is finite). Such a physical system is described by a Hamiltonian matrix of finite dimension. Any finite-dimensional Hamiltonian matrix has a discrete spectrum. So all physical systems (or all Hamiltonians) are gapped.
Certainly, the above is not what we mean by "gapped Hamiltonian" in physics. But what does it mean for a Hamiltonian to be gapped?
Since a gapped system may have gapless excitations at its boundary, to define a gapped Hamiltonian we need to put the Hamiltonian on a space with no boundary. Also, systems of certain sizes may contain non-trivial excitations (such as the spin liquid state of spin-1/2 spins on a lattice with an odd number of sites), so we have to specify that the system has a certain sequence of sizes as we take the thermodynamic limit.
So here is a definition of "gapped Hamiltonian" in physics: consider a system on a closed space; if there is a sequence of sizes of the system $L_i$, $L_i\to\infty$ as $i \to \infty$, such that the size-$L_i$ system on the closed space has the following "gap property", then the system is said to be gapped. Note that the notion of "gapped Hamiltonian" cannot even be defined for a single Hamiltonian. It is a property of a sequence of Hamiltonians in the large-size limit.
Here is the definition of the "gap property": there is a fixed $\Delta$ (i.e. independent of $L_i$) such that the size-$L_i$ Hamiltonian has no eigenvalue in an energy window of size $\Delta$; the number of eigenstates below the energy window does not depend on $L_i$; and the energy splitting of those eigenstates below the energy window approaches zero as $L_i\to \infty$.
The number of eigenstates below the energy window becomes the ground state degeneracy of the gapped system. This is how the ground state degeneracy of a topologically ordered state is defined. I wonder whether, if someone had considered the definition of a gapped many-body system very carefully, he/she might have discovered the notion of topological order mathematically. This post imported from StackExchange Physics at 2014-04-04 16:13 (UCT), posted by SE-user Xiao-Gang Wen
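As a toy illustration of checking the gap property numerically, here is a sketch that diagonalizes a transverse-field Ising chain on a ring for a sequence of sizes and prints the gap. The model, the paramagnetic parameters (h > J), and the sizes are my choices, not part of the answer above; in this phase the gap stays of order $2(h-J)$ as $L$ grows.

```python
import numpy as np

def tfim_hamiltonian(L, J=1.0, h=2.0):
    """Dense transverse-field Ising Hamiltonian on a ring of L spins:
    H = -J sum_i Z_i Z_{i+1} - h sum_i X_i, with periodic boundaries."""
    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.diag([1.0, -1.0])

    def site_op(op, i):
        # operator `op` acting on site i, identity elsewhere
        out = np.array([[1.0]])
        for j in range(L):
            out = np.kron(out, op if j == i else I)
        return out

    H = np.zeros((2**L, 2**L))
    for i in range(L):
        H -= J * site_op(Z, i) @ site_op(Z, (i + 1) % L)
        H -= h * site_op(X, i)
    return H

# gap for a sequence of sizes L_i on a closed space (a ring)
for L in range(4, 9):
    evals = np.linalg.eigvalsh(tfim_hamiltonian(L))
    print(L, evals[1] - evals[0])  # stays bounded away from zero
```

A gapless choice (h = J) would instead show the gap shrinking with L, which is exactly the distinction the sequence-of-sizes definition captures.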
|
I came across John Duffield's account on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason he has so little reputation despite a massively popular question is that he was suspended. May I ...
@Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation")
@Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable
Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags
@Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag
@glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :)
@Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work)
This is an elementary question, but a little subtle so I hope it is suitable for MO.Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$.The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin...
@Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension
@Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity
I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and may be confusing things in my head
@Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write
@Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all.
I've actually recently asked some questions on math.SE on related topics
@Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true
also probably even more generally without $i$ factors
so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal)
Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary
@Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t
Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check
If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ...
There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
|
I've spent four lectures on the logic of partitions; you may be wondering why. One reason was to give you examples illustrating this important fact:
Theorem. Left adjoints preserve joins and right adjoints preserve meets. Suppose \(f : A \to B\) and \(g : B \to A\) are monotone functions between posets. Suppose that \(f\) is the left adjoint of \(g\), or equivalently, \(g\) is the right adjoint of \(f\). If the join of \(a,a' \in A\) exists then so does the join of \(f(a), f(a') \in B\), and
$$ f(a \vee a') = f(a) \vee f(a'). $$ If the meet of \(b,b' \in B\) exists then so does the meet of \(g(b), g(b') \in A\), and
$$ g(b \wedge b') = g(b) \wedge g(b'). $$ The proof is very easy, so this deserves being called a "Theorem" only because it's so fundamental! I will prove it later, in more generality. Right now let's see how it's relevant to what we've been doing.
In Lecture 9 we saw something interesting about the subsets. Given any set \(X\) there's a poset \(P(X)\) consisting of all subsets of \(X\). Given any function \(f : X \to Y\) there's a monotone map
$$ f^* : P(Y) \to P(X) $$ sending any subset of \(Y\) to its preimage under \(f\). And we saw that \( f^{\ast} \) has both a left adjoint and a right adjoint. This means that \( f^{\ast} \)
is both a right adjoint and a left adjoint. (Remember: having a left adjoint means being a right adjoint, and vice versa.)
So by our Theorem, we see that \(f^* : P(Y) \to P(X)\) preserves both meets and joins! You can also see this directly - see Puzzle 41 in Lecture 13. But what matters here is the general pattern.
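The direct verification mentioned (Puzzle 41) can be spelled out for a tiny concrete example; the set and function below are made up for illustration:

```python
# Direct check that the preimage map f* preserves joins (unions) and
# meets (intersections) of subsets, for a small concrete f : X -> Y.
X = {1, 2, 3, 4}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}

def preimage(S):
    """f*(S): all x in X whose image lands in S."""
    return {x for x in X if f[x] in S}

A, B = {'a', 'b'}, {'b', 'c'}
assert preimage(A | B) == preimage(A) | preimage(B)   # join preserved
assert preimage(A & B) == preimage(A) & preimage(B)   # meet preserved
print("both identities hold")
```

This is the general pattern in miniature: since \(f^*\) on subsets is both a left and a right adjoint, both identities had to hold.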
In Lecture 13 we also saw something interesting about partitions. Given any set \(X\) there's a poset \( \mathcal{E}(X)\) consisting of all partitions of \(X\). Given any function \(f : X \to Y\) there's a monotone map
$$ f^* : \mathcal{E}(Y) \to \mathcal{E}(X) $$ sending any partition of \(Y\) to its pullback along \(f\). And we saw that while \( f^{\ast} \) preserves meets, it does not preserve joins!
So by our Theorem, we see that \(f^* : \mathcal{E}(Y) \to \mathcal{E}(X)\)
cannot be a left adjoint. On the other hand, it might be a right adjoint.
And indeed it is! So, this strange difference between the logic of subsets and the logic of partitions is really all about adjoints.
Puzzle 42. Given a function \(f : X \to Y\) there is a way to push forward any partition \(P\) on \(X\) and get a partition \(f_{!} (P)\) on \(Y\). In pictures it looks like this:
although this is not the most exciting example, since here \(f_{!}(P)\) has just one part. Figure out the precise general description of \(f_{!} (P)\). If you get stuck read Section 1.5.2 of
Seven Sketches. Puzzle 43. Show that for any function \(f : X \to Y\), pushing forward partitions along \(f\) gives a monotone map
$$ f_{!} : \mathcal{E}(X) \to \mathcal{E}(Y) . $$
Puzzle 44. Show that for any function \(f : X \to Y\), \(f^* : \mathcal{E}(Y) \to \mathcal{E}(X)\) is the right adjoint of \(f_{!}: \mathcal{E}(X) \to \mathcal{E}(Y)\).
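A small computational sketch of Puzzles 42-44. The representation of partitions as sets of frozensets, and the union-find construction of the pushforward, are my own choices (following the description of \(f_!\) in Section 1.5.2 of Seven Sketches); the example data is made up.

```python
def finer(P, Q):
    """P <= Q in the partition order: every block of P lies inside some block of Q."""
    return all(any(b <= c for c in Q) for b in P)

def pullback(f, X, Q):
    """f*(Q): x ~ x' iff f(x) and f(x') lie in the same block of Q."""
    block_of = {y: i for i, c in enumerate(Q) for y in c}
    out = {}
    for x in X:
        out.setdefault(block_of[f[x]], set()).add(x)
    return [frozenset(s) for s in out.values()]

def pushforward(f, Y, P):
    """f_!(P): the finest partition of Y in which the image of each block
    of P lands in a single block (transitive closure via union-find)."""
    parent = {y: y for y in Y}
    def find(y):
        while parent[y] != y:
            parent[y] = parent[parent[y]]
            y = parent[y]
        return y
    for b in P:
        img = [f[x] for x in b]
        for y in img[1:]:
            parent[find(img[0])] = find(y)
    out = {}
    for y in Y:
        out.setdefault(find(y), set()).add(y)
    return [frozenset(s) for s in out.values()]

# Checking the adjunction of Puzzle 44 on one example:
#   f_!(P) <= Q   iff   P <= f*(Q)
X, Y = [1, 2, 3, 4], ['a', 'b', 'c']
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}
P = [frozenset({1, 2}), frozenset({3, 4})]
Q = [frozenset({'a'}), frozenset({'b', 'c'})]
lhs = finer(pushforward(f, Y, P), Q)
rhs = finer(P, pullback(f, X, Q))
print(lhs, rhs)  # the two sides agree
```

Running this over all partitions of small sets would give a brute-force check of the adjunction, but a single instance already shows the shape of the statement.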
|
Is it possible to modify the PCA algorithm so that it actually implements factor analysis? We can assume that the uniquenesses are known.
I'm aware that for a $d$-dimensional data $x$, PCA takes the leading $k<d$ eigenvectors of $\Sigma_X$ and uses these to define the principal components.
In factor analysis, this data is modeled as $x=Af + \epsilon$, where $f$ is a matrix of factors, $A$ is the loading matrix between the latent factors and the observable data, and $\epsilon$ is the vector of uniquenesses.
I have read (see slide 18 of link) that we should construct $A$ as the leading $k$ eigenvectors of $\Sigma_X - \psi$ where $\psi=\text{diag}(\psi_{11}, \dots , \psi_{dd})$ is the covariance of the $\epsilon$ vector.
Is this true? What is the explanation? The slides are very superficial...
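A sketch of the recipe on slide 18 as I read it, applied to the population covariance: take the top-\(k\) eigenpairs of \(\Sigma_X - \psi\) and scale the eigenvectors by the square roots of their eigenvalues (that scaling is my assumption, not stated in the question). On data generated exactly from the model it recovers \(AA^\top\):

```python
import numpy as np

def fa_loadings(Sigma, psi, k):
    """Quasi-factor-analysis via PCA on the 'reduced' covariance
    Sigma - diag(psi), assuming the uniquenesses psi are known."""
    S = Sigma - np.diag(psi)
    vals, vecs = np.linalg.eigh(S)            # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:k]          # leading k eigenpairs
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

# sanity check on a covariance generated from the model x = A f + eps
rng = np.random.default_rng(0)
A_true = rng.normal(size=(5, 2))
psi = np.full(5, 0.1)
Sigma = A_true @ A_true.T + np.diag(psi)

A_hat = fa_loadings(Sigma, psi, k=2)
# loadings are only identified up to an orthogonal rotation,
# so compare the rotation-invariant quantity A A^T
print(np.allclose(A_hat @ A_hat.T, A_true @ A_true.T, atol=1e-8))  # True
```

The rotation ambiguity is the reason the check compares \( \hat A \hat A^\top \) rather than \( \hat A \) itself; any orthogonal rotation of the factors gives an equally valid loading matrix.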
|
I am attempting to create a Kalman filter to track a sine wave (I am using a linear Kalman filter example assuming I already know the frequency of the sine wave) - the example I am using is derived on pages 194-196 of
"Fundamentals of Kalman Filtering: A Practical Approach" 2nd edition by Paul Zarchan and Howard Musoff.
It is working to track the AC part of the signal; however, the offset of the sine wave from the $x$-axis is not correct, and it seems to be tied to my value of $R$. I'm guessing this is because no offset is included in the derivation of the model, but I cannot see where an offset term should be inserted. How should I go about this?
The matrices describing my system are like so:
$$\mathbf \Phi_k = \begin{bmatrix} \cos(\omega T_s) & \dfrac{\sin(\omega T_s)}{\omega} \\ -\omega \sin(\omega T_s) & \cos(\omega T_s) \\\end{bmatrix}$$
Where the state matrix is:
$$\mathbf X = \begin{bmatrix} x \\ 0 \end{bmatrix}$$
And I have set $\mathbf Q$ and $\mathbf P$ like so:
$$\mathbf Q = \begin{bmatrix} \dfrac{T_S^3}{3} & \dfrac{T_s^2}{2} \\ \dfrac{T_s^2}{2} & T_s \end{bmatrix}, \quad\mathbf P = \begin{bmatrix} 9999999999999 & 0 \\ 0 & 9999999999999 \end{bmatrix} $$
I am now attempting to use this to track a generated signal of a noisy sine wave of amplitude and frequency $1$ with an offset of $300$.
If $R$ is set to $0.1$ I get the following output of my filter:
Which is offset by $300/20$, as can be seen from this plot where I add an offset to the Kalman filter output:
Changing $R$ changes this offset (as one might expect as $R$ denotes the uncertainty in your data).
How can I add this offset into my model such that my Kalman filter can track the sine wave correctly regardless of offset?
EDIT: I've been thinking more about my question and realised that in a real signal with noise this offset could be considered a separate constant signal with frequency $0$ which could be extracted by using a Kalman filter fitting a constant value (or simply a moving average filter) and that the offset is not inherently part of the sine wave. In which case how might I remove the offset in the Kalman filter output entirely? (e.g. if I used a bandpass filter no offset would be present). Since my model does not include an offset I might expect no offset at all but I do see an offset of $300/20$, as discussed. Why does this occur if there is no such offset parameter in my model?
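One standard way to handle the situation in the question is to augment the state with a constant-offset component (this is my modeling choice, not from Zarchan & Musoff; the values of $\omega$, $T_s$, $\mathbf Q$, and $\mathbf R$ below are illustrative). The measurement is then modeled as sine plus offset, and the filter estimates both:

```python
import numpy as np

# Two-state sine model from the question, augmented with a third,
# constant "offset" state.
omega, Ts = 1.0, 0.01
c, s = np.cos(omega * Ts), np.sin(omega * Ts)
Phi = np.array([[c,          s / omega, 0.0],
                [-omega * s, c,         0.0],
                [0.0,        0.0,       1.0]])   # offset: constant dynamics
H = np.array([[1.0, 0.0, 1.0]])                  # measurement = sine + offset
Q = np.diag([1e-6, 1e-6, 1e-8])                  # illustrative process noise
R = np.array([[0.1]])

x = np.zeros((3, 1))
P = np.eye(3) * 1e6                              # large initial uncertainty

# simulate: noisy sine of amplitude/frequency 1 with offset 300
rng = np.random.default_rng(1)
t = np.arange(0.0, 20.0, Ts)
z = np.sin(t) + 300.0 + rng.normal(0.0, 0.1, t.size)

est = []
for zk in z:
    # predict
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[zk]]) - H @ x)
    P = (np.eye(3) - K @ H) @ P
    est.append(x.copy())

print(est[-1][2, 0])  # offset estimate, close to 300
```

Because the sine state oscillates at a known nonzero frequency while the third state is constant, the augmented system is observable and the filter separates the two, which removes the $R$-dependent bias described in the question.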
|
I read the definition of work as $$W ~=~ \vec{F} \cdot \vec{d}$$ $$\text{ Work = (Force) $\cdot$ (Distance)}.$$
If a book is there on the table, no work is done as no distance is covered. If I hold up a book in my hand and my arm is stretched, if no work is being done, where is my energy going?
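For concreteness, the dot-product definition can be evaluated for a few cases (the mass and distances are made up for illustration):

```python
import numpy as np

# Work as a dot product: only displacement along the force counts.
F = np.array([0.0, -9.8 * 1.0])    # gravity on a 1 kg book, in newtons
d_table = np.array([0.0, 0.0])     # book resting on the table: no displacement
d_carry = np.array([2.0, 0.0])     # carrying the book 2 m horizontally
d_lift = np.array([0.0, 1.5])      # lifting the book 1.5 m straight up

for d in (d_table, d_carry, d_lift):
    print(float(F @ d))
# 0.0  (no displacement)
# 0.0  (force perpendicular to displacement)
# -14.7 (gravity does negative work on the lifted book)
```

The held book matches the first case: zero displacement, hence zero work in the physics sense, which is exactly what makes the question puzzling.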
While you do spend some body energy to keep the book lifted, it's important to differentiate this from physical effort. They are connected but are not the same. Physical effort depends not only on how much energy is spent, but also on how that energy is spent. Holding a book in a stretched arm requires a lot of physical effort, but it doesn't take that much energy.

In the ideal case, if you managed to hold your arm perfectly steady, and your muscle cells managed to stay contracted without requiring energy input, there wouldn't be any energy spent at all, because there wouldn't be any distance moved.

In real scenarios, however, you do spend (chemical) energy stored within your body. But where is it spent? It is spent on a cellular level. Muscles are made of filaments which can slide relative to one another; these filaments are connected by molecules called myosin, which use up energy to move along the filaments but detach at time intervals to let them slide. When you keep your arm in position, myosins hold the filaments in position, but when one of them detaches, other myosins have to make up for the slight relaxation locally. The chemical energy stored within your body is released by the cell as both work and heat.*

In both the ideal and the real scenarios we are talking about the physical definition of energy. In your consideration, you ignore the movement of muscle cells, so you're considering the ideal case. A careful analysis of the real case leads to the conclusion that work is done and heat is released, even though the arm itself isn't moving.
* Ultimately, the work done by the cells is actually done on other cells, which eventually dissipates into heat due to friction and non-elasticity. So all the energy you spend is invested in keeping the muscle tension and is eventually dissipated as heat.
This is about how your muscles work -- they're an ensemble of small elements that, triggered by a signal from nerves, use chemical energy to go from a less energetic long state to a more energetic short one. Yet this obviously is not permanent, and there is a spontaneous return that must be compensated by another trigger. This way there are numerous stretches and releases that in sum give small oscillations which do macroscopic work on the weight.
Perhaps an analogy is in order. Let's hold up the book by using an electromagnet (say we put a piece of steel under it). If the coils were made of superconducting material it would take no energy input to maintain the position/field strength. But if we use ordinary wire, ohmic losses within the coil must be made up for by externally supplied electrical energy.
The reason is that you need to spend energy to keep the muscle stretched.
The first thing you need to know is that the work $W=F \Delta x$ is the energy transferred between objects. Hence, no work is done on the book when it is put on the table, because there is no movement.
When your arm muscle is stretched, however, it consumes energy continuously to keep this state, so you tire very fast. This energy comes from the chemical energy in your body, and most of it is converted into heat and lost to the surroundings. In this situation, no energy is transferred to the book, so no work is done.
You can feel the different energy consumption when your arm is stretched at different angles. A particular case is when you put the book on your leg while sitting on a chair: your muscle is relaxed and less energy is spent.
There is also a special type of muscle, smooth muscle, which requires very little energy to keep its state, so that it can stay stretched without you getting tired:
Tonic smooth muscle contracts and relaxes slowly and exhibits force maintenance such as vascular smooth muscle. Force maintenance is the maintaining of a contraction for a prolonged time with little energy utilization.
When contracted, the sarcomeres, the structures that actually do the work in a muscle, take turns doing the work. Only a third of them are engaged at any given moment.
This is because the sarcomere pumps blood as it contracts and relaxes, enabling it to get the energy it needs to do its work for longer periods. The temporary, superhuman strength some people experience may be some sort of override of this normal level of engagement.
This system doesn't have a different mechanism for holding a position, so the same thing goes on when trying to hold an object steady.
But if the muscle is contracted for a very long time and the energy in the blood being pumped becomes insufficient, sarcomeres will actually get stuck in their contracted position. This state doesn't require energy and the sarcomere will remain contracted until the load stops and normal circulation is restored.
I believe this is a survival mechanism that enables an animal to hang on, even when the load would otherwise be overwhelming.
It also can cause muscle stiffness when circulation through a muscle is impaired, a very common condition as people age.
The big difference between holding up a book in your hand (by holding it in the palm) and holding up a book by laying it on a table is that first equilibrium position is a dynamical one, while the book on the table is in static equilibrium.
I'll explain it qualitatively. You can compare the situation in which you hold up a book with the situation in which a book is held up by constantly bombarding it from below with particles, say marbles. In the extreme case of bombarding the book with only one marble at a time, the book falls a little, the marble hits it from below and thereby sends it back up again. The marble loses energy in the process, wich is given to the book (assuming an elastic collision). The book falls back again, and the next marble hits the book, sending it back up again, etc. You can use big marbles, little marbles, give them different velocities, and vary the amount of time between which the marbles hit the book. The best combination of these will hold the book in the best quasi-stable position. Even better would be to use many marbles, hitting the book at different places.
So each time a marble hits the book it loses some of its energy, which is given to the constantly falling book, and which makes it look like the book is in equilibrium. That is, a dynamical equilibrium.
Now, where is the connection with the muscles keeping up the book? I think it's easy to see, though I don't have too much understanding of the muscles workings. Alls muscle cells can be compared with the marbles and give the book constantly an upward change in motion during its fall. They relax, go tense, relax, go tense, etc. The fall and upward change are too small to notice, so the book looks in a steady state. That is, a dynamical steady state. Of course, there is no friction in the case of the marbles, who get their energy from "little canons".
Consider an analogy:

We get tired after standing for some time, without doing any work*. The reason behind this is the same as the reason why we don't do any work holding an object above our heads, but this case is easier to comprehend.

When we stand, we are actually resisting the tendency to fall to the ground; muscles are holding on to the structure of our body so that we don't collapse to the ground like some non-living thing. These muscles have fibers which have stretched themselves, which requires energy.

Similarly, when we hold something above our head we are doing the same thing: resisting that collapsing tendency, which causes elongation in the muscles, which requires energy.
When a physicist talks about work, they are using the word in the technical sense of the equation you quote. To a biologist, though, work might be defined as energy expended to carry out a task. In your example, your arm will not naturally stay in the position described. Your body (mostly your muscles) must expend energy to hold your arm (and the book) in a set position, unsupported by anything but your own physiology.
So, by the biologist's definition, your muscles are doing work to hold up the book and your arm (muscle fibers are contracting and relaxing based on a host of chemical processes at the cellular level). But by the physicist's technical definition, no work is being done.
$F=ma$ means that every force is applied to a mass and produces an acceleration. Okay. Acceleration is $a=\frac{\Delta v}{\Delta t}$. If you put this $\Delta v$ into ${\frac{1}{2}m(\Delta v)^2}$ you discover the energy which has been necessary to make that mass accelerate. Since energy is neither created nor destroyed, it is the energy burnt by the one who applied the force! His/her/its potential energy (e.g. from food) has become kinetic energy of the accelerated body. Now, what about holding up 5 kg with your arm? No energy? Of course you spend energy. It is the same as above: you apply a force, equal and opposite to the gravitational force, so the object doesn't fall and doesn't rise, and if you apply a force, for the reason above, you spend energy. Now one could object that there is no acceleration in this case. If no acceleration (opposite to the gravitational acceleration $g$) existed, the object would fall! We have two opposite accelerations (since there are two opposite forces) at stake ($\mathbf{F}=-\mathbf{F_g} \Rightarrow \mathbf{a}=-\mathbf{g}$), which cancel. But if they cancel, they both exist. So yes, you spend energy to hold the object up: to make this counter-acceleration exist. So you need energy to hold up a mass, but no work is done if the object is at rest on your hand, since its kinetic energy is NOT varying. If you stop a falling body with your hand you cause a negative $\Delta E_k$ (you do negative work on it), but once it is stopped there is no more work; your energy simply goes into cancelling $F_g$ and keeping the body at rest.
Energy is being expended maintaining it in position. Earth's gravity is applying a force downwards; the book is being accelerated down by the gravitational force.
A force is being applied to the hand and arm which must be resisted and thus energy expended.
The arm and book are not a closed system.
protected by Qmechanic♦ Nov 19 '13 at 7:19
|
Automatic Equation Numbering
The TeX input processing in MathJax can be configured to add equation numbers to displayed equations automatically. This functionality is turned off by default, but it is easy to configure MathJax to produce automatic equation numbers by adding:
window.MathJax = {
  tex: {
    tags: 'ams'
  }
};
to tell the TeX input processor to use the AMS numbering rules (where only certain environments produce numbered equations, as they would be in LaTeX). It is also possible to set the tagging to ‘all’, so that every displayed equation will get a number, regardless of the environment used.
You can use \notag or \nonumber to prevent individual equations from being numbered, and \tag{} can be used to override the usual equation number with your own symbol instead (or to add an equation tag even when automatic numbering is off).
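For instance, an illustrative display (not taken verbatim from the MathJax documentation) mixing a custom tag with a suppressed number:

```latex
\begin{align}
  a^2 + b^2 &= c^2 \tag{P} \\
  e^{i\pi} + 1 &= 0 \notag
\end{align}
```

Here the first line is labeled "(P)" regardless of the automatic counter, and the second line gets no number at all.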
Note that the AMS environments come in two forms: starred and unstarred. The unstarred versions produce equation numbers (when tags is set to 'ams') and the starred ones don't. For example
\begin{equation}
  E = mc^2
\end{equation}
will be numbered, while
\begin{equation*}
  e^{\pi i} + 1 = 0
\end{equation*}
will not be numbered (when tags is 'ams').
You can use \label to give an equation an identifier that you can use to refer to it later, and then use \ref or \eqref within your document to insert the actual equation number at that location, as a reference. For example,
In equation \eqref{eq:sample}, we find the value of an interesting integral:

\begin{equation}
  \int_0^\infty \frac{x^3}{e^x-1}\,dx = \frac{\pi^4}{15}
  \label{eq:sample}
\end{equation}
includes a labeled equation and a reference to that equation. Note that references can come before the corresponding formula as well as after them.
You can configure the way that numbers are displayed, and how references to them are handled, by including the tagFormat extension and setting options within the tagFormat block of your tex configuration. See the tagFormat extension documentation for more details.
If you are using automatic equation numbering and modifying the page dynamically, you can run into problems due to duplicate labels. See Resetting Automatic Equation Numbering for how to address this.
|
Research, Open Access. On some classes of difference equations of infinite order. Advances in Difference Equations, volume 2015, Article number: 211 (2015).
Abstract
We consider a certain class of difference equations on an axis and a half-axis, and we establish a correspondence between such equations and simpler kinds of operator equations. The last operator equations can be solved by a special method like the Wiener-Hopf method.
Introduction
Difference equations of finite order arise very often in various problems in mathematics and the applied sciences, for example in mathematical physics and biology. The theory for solving such equations is well developed for equations with constant coefficients [1, 2], but quite incomplete for the case of variable coefficients. Some equations of this kind were obtained by the second author while studying general boundary value problems for model elliptic pseudo-differential equations in canonical non-smooth domains, but there is no solution algorithm for all situations [3–5]. There is a certain intermediate case between the two mentioned above, namely difference equations with constant coefficients of infinite order. Here we will briefly describe these situations.
where the functions \(a_{k}(x)\), \(k=1,\ldots,n\), and \(v(x)\) are defined on M and are given, and \(u(x)\) is an unknown function. Since \(n\in{\mathbf{N}}\) is an arbitrary number and all the points \(x, x+1,\ldots,x+n\), \(\forall x\in M\), should lie in the set M, this set M may be a ray from a certain point or the whole of R.
A more general type of difference equation of finite order is the equation
where \(\{\beta_{k}\}_{k=0}^{n}\subset{\mathbb{R}}\).
Further, such equations can have a continuous variable or a discrete one, and this property separates them into properly difference equations and discrete equations. In this paper we will consider the case of a continuous variable x, and solutions and right-hand sides will be considered in the space \(L_{2}({\mathbb {R}})\) for all equations.

Difference equation of finite order with constant coefficients
This is an equation of the type
and it can easily be solved by the Fourier transform:
Indeed, applying the Fourier transform to (3) we obtain
or renaming
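On a finite periodic grid the same Fourier recipe can be tried numerically; the first-order equation \(b_0 u(x) + b_1 u(x+1) = v(x)\) and the coefficients below are an illustrative choice of mine, not from the paper. With \(|b_1| < |b_0|\) the symbol \(\sigma(\xi) = b_0 + b_1 e^{i\xi}\) never vanishes, so dividing by it in Fourier space is safe:

```python
import numpy as np

# Solve b0*u(x) + b1*u(x+1) = v(x) on an N-point periodic grid of
# unit spacing via the discrete Fourier transform.
N = 256
b0, b1 = 2.0, 1.0
x = np.arange(N)

u_true = np.sin(2 * np.pi * 3 * x / N)       # a test solution
v = b0 * u_true + b1 * np.roll(u_true, -1)   # roll by -1 realizes u(x+1)

xi = 2 * np.pi * np.fft.fftfreq(N)           # angular frequencies of the grid
sigma = b0 + b1 * np.exp(1j * xi)            # symbol of the difference operator
u = np.fft.ifft(np.fft.fft(v) / sigma).real  # divide by the symbol, invert

print(np.max(np.abs(u - u_true)))  # recovered to machine precision
```

The shift \(u(x) \mapsto u(x+1)\) becomes multiplication by \(e^{i\xi}\) under this Fourier convention, which is exactly the mechanism behind (3) and its solution formula.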
Difference equation of infinite order with constant coefficients
The same arguments are applicable for the case of an unbounded sequence \(\{\beta_{k}\}_{-\infty}^{+\infty}\). Then the difference operator with complex coefficients
has the following symbol:
Lemma 1. The operator \(\mathcal{D}\) is a linear bounded operator \(L_{2}({\mathbb {R}})\to L_{2}({\mathbb{R}})\) if \(\{a_{k}\}_{-\infty}^{+\infty}\in {l}^{1}\).

Proof. The proof of this assertion can be obtained immediately. □
If we consider the operator (4) for \(x\in{\mathbb{Z}}\) only
Difference and discrete equations
Obviously there are some relations between difference and discrete equations. In particular, if \(\{\beta_{k}\}_{-\infty}^{+\infty }={\mathbb{Z}}\), then the operator (5) is a discrete convolution operator. For studying discrete operators in a half-space the authors have developed a certain analytic technique [9–11]. Below we will try to extend this technique to more general situations.
General difference equations
We consider the equation
where \({\mathbb{R}}_{+}=\{x\in{\mathbb{R}}, x>0\}\).
For studying this equation we will use methods from the theory of multi-dimensional singular integral and pseudo-differential equations [3, 6, 12], which are unusual in the theory of difference equations. Our next goal is to study multi-dimensional difference equations, and this one-dimensional variant is a model for considering other, more complicated situations. This approach is based on the classical Riemann boundary value problem and the theory of one-dimensional singular integral equations [13–15].
Background
The first step is the following. We will use the theory of so-called paired equations [15] of the type
in the space \(L_{2}(\mathbb{R})\), where a, b are convolution operators with corresponding functions \(a(x)\), \(b(x)\), \(x\in{\mathbb{R}}\), and \(P_{\pm }\) are projectors onto the half-axes \({\mathbb{R}}_{\pm}\). More precisely,
where
P, Q are two projectors related to the Hilbert transform
Equation (8) is closely related to the Riemann boundary value problem [13, 14] for the upper and lower half-planes. We now recall the statement of the problem: find a pair of functions \(\Phi^{\pm}(\xi )\) which admit an analytic continuation into the upper (\({\mathbb{C}}_{+}\)) and lower (\({\mathbb{C}}_{-}\)) half-planes of the complex plane \(\mathbb {C}\), and whose boundary values on \(\mathbb{R}\) satisfy the following linear relation:
where \(G(\xi)\), \(g(\xi)\) are given functions on \(\mathbb{R}\).
Topological barrier
We suppose that the symbol \(G(\xi)\) is a continuous non-vanishing function on the compactification \(\dot{\mathbb{R}}\) (\(G(\xi)\neq0\), \(\forall\xi\in\dot{\mathbb{R}}\)) and
The last condition (10) is necessary and sufficient for the unique solvability of the problem (9) in the space \(L_{2}({\mathbb{R}})\) [13, 14]. Moreover, the unique solution of the problem (9) can be constructed with the help of a Cauchy type integral
where \(G_{\pm}\) are the factors of a factorization of \(G(t)\) (see below),
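As an aside (not from the paper), the index condition (10) can be explored numerically for a concrete symbol: the index is the winding number of \(G(\xi)\) as ξ runs over the compactified real line, i.e. the total change of its argument divided by 2π. The sample symbol below is my own illustrative choice; it is elliptic (it never vanishes) and has index 1.

```python
import cmath
import math

def symbol(xi: float) -> complex:
    """Sample elliptic symbol (xi - i)/(xi + i); |G| = 1 everywhere, winding number 1."""
    return (xi - 1j) / (xi + 1j)

def index_of_symbol(G, n: int = 50_000) -> float:
    """Estimate Ind G: accumulated phase change along R (via xi = tan(theta)) over 2*pi."""
    eps = 1e-6
    thetas = [-math.pi / 2 + eps + (math.pi - 2 * eps) * k / n for k in range(n + 1)]
    total = 0.0
    prev = G(math.tan(thetas[0]))
    for theta in thetas[1:]:
        cur = G(math.tan(theta))
        total += cmath.phase(cur / prev)  # small increments, so no branch-cut jumps
        prev = cur
    return total / (2 * math.pi)

print(round(index_of_symbol(symbol)))  # 1
```

The tangent substitution concentrates sample points near ξ = 0, where this particular symbol moves fastest along the unit circle.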
Difference equations on a half-axis
Equation (6) can easily be transformed into (7) in the following way. Since the right-hand side in (6) is defined on \(\mathbb{R}_{+}\) only, we continue \(v(x)\) to the whole of \({\mathbb{R}}\) so that this continuation satisfies \(\mathit{lv}\in L_{2}({\mathbb{R}})\). Further, we rename the unknown function \(u_{+}(x)\) and define the function
Thus, we have the following equation:
which holds for the whole space \({\mathbb{R}}\).
After the Fourier transform we have
where \(\sigma(\xi)\) is called a
symbol of the operator \(\mathcal{D}\). Definition
A factorization of an elliptic symbol \(\sigma(\xi)\) is its representation in the form
where the factors \(\sigma_{+}\), \(\sigma_{-}\) admit an analytic continuation into the upper and lower complex half-planes \({\mathbb{C}}_{\pm}\), and \(\sigma^{\pm1}_{\pm}\in L_{\infty}({\mathbb{R}})\).
Example 1
Let us consider the Cauchy type integral
It is well known that this construction plays a crucial role in the decomposition of \(L_{2}({\mathbb{R}})\) into two orthogonal subspaces, namely
where \(A_{\pm}({\mathbb{R}})\) consists of functions admitting an analytic continuation onto \({\mathbb{C}}_{\pm}\).
The boundary values of the integral \(\Phi(z)\) satisfy the Plemelj-Sokhotskii formulas [13, 14], and thus the projectors
P and Q are corresponding projectors on the spaces of analytic functions [15].
The simple example we need is
Theorem 2 Let \(\sigma(\xi)\in C(\dot{\mathbb{R}})\), \(\operatorname{Ind}\sigma =0 \). Then (6) has a unique solution in the space \(L_{2}({\mathbb {R}}_{+})\) for an arbitrary right-hand side \(v\in L_{2}({\mathbb{R}}_{+})\), and its Fourier transform is given by the formula Proof
We have
Further, since \(\sigma^{-1}_{-}(\xi)\widetilde{\mathit{lv}}(\xi)\in L_{2}({\mathbb{R}})\) we decompose it into two summands
and write
The left-hand side of the last equality belongs to the space \(A_{+}({\mathbb{R}})\), but the right-hand side belongs to \(A_{-}({\mathbb {R}})\); consequently both are zero. Thus,
and
or in the complete form
□
Remark 1
This result does not depend on the continuation
lv. Let us denote by \(M_{\pm}(x)\) the inverse Fourier images of the functions \(\sigma ^{-1}_{\pm}(\xi)\). Indeed, (12) leads to the following construction: Remark 2
The condition \(\sigma(\xi)\in C(\dot{\mathbb{R}})\) is not a strong restriction. Such symbols arise, for example, when \(\sigma (\xi)\) is represented by a finite sum and \(\beta_{k}\in{\mathbb {Q}}\); then \(\sigma(\xi)\) is a continuous periodic function.
General solution
Since \(\sigma(\xi)\in C(\dot{\mathbb{R}})\), and Ind
σ is an integer, we consider the case \(\ae\equiv\operatorname{Ind} \sigma\in{\mathbb{N}}\) in this section. Theorem 3 Let \(\operatorname{Ind} \sigma\in{\mathbb{N}}\). Then a general solution of (6) in the Fourier image can be written in the form and it depends on æ arbitrary constants. Proof
The function
has the index 0, and we can factorize this function
Further, we write after (11)
factorize \(\omega^{-\ae}(\xi)\sigma(\xi)\), and rewrite
Taking into account our notations we have
because
and
and we conclude from the last equality that the left-hand side and the right-hand side both equal a polynomial \(P_{\ae-1}(\xi)\) of order \(\ae-1\). This follows from the generalized Liouville theorem [13, 14], because the left-hand side has a single pole of order æ in \({\mathbb{C}}\) at the point \(z=i\). So, we have
□
Remark 3
This result does not depend on the choice of the continuation
l. Corollary 4 Let \(v(x)\equiv0\), \(\ae\in{\mathbb{N}}\). Then a general solution of the homogeneous equation (6) is given by the formula Solvability conditions Theorem 5 Let \(-\operatorname{Ind} \sigma\in{\mathbb{N}}\). Then (6) has a solution from \(L_{2}({\mathbb{R}}_{+})\) iff the following conditions hold: Proof
We argue as above and use the equality (14); we write it as
Since we work in \(L_{2}({\mathbb{R}})\), both the left-hand side and the right-hand side vanish at infinity; hence both are identically zero, and
There is, however, a subtlety. Indeed, this solution belongs to the space \(A_{+}({\mathbb{R}})\); more precisely, it belongs to its subspace \(A^{k}_{+}({\mathbb{R}})\), which consists of functions analytic in \({\mathbb{C}}_{+}\) with zeros of order −æ at the point \(z=i\). To obtain a solution from \(L_{2}({\mathbb{R}}_{+})\) we need some corrections in the last formula. Since the operator
P is related to the Cauchy type integral we will use certain decomposition formulas for this integral (see also [12–14]).
Let us denote \(\sigma^{-1}_{-}(\xi)\widetilde{\mathit{lv}}(\xi)\equiv g(\xi)\) and consider the following integral:
Using a simple formula for a kernel
we obtain the following decomposition:
So, we have the following property. If the conditions
hold, then we obtain
Hence the boundary values on \({\mathbb{R}}\) of the left-hand side and the right-hand side are equal, and thus
Substituting the last formula into the solution formula we write
□
Conclusion
It seems this approach to difference equations may be useful for studying the case that the variable
x is a discrete one. We have some experience in the theory of discrete equations [9–11], and we hope that we can be successful in this situation also. Moreover, in our opinion the developed methods might be applicable to multi-dimensional difference equations.
References
1. Milne-Thomson, LM: The Calculus of Finite Differences. Chelsea, New York (1981)
2. Jordan, C: Calculus of Finite Differences. Chelsea, New York (1950)
3. Vasil’ev, VB: Wave Factorization of Elliptic Symbols: Theory and Applications. Introduction to the Theory of Boundary Value Problems in Non-Smooth Domains. Kluwer Academic, Dordrecht (2000)
4. Vasilyev, VB: General boundary value problems for pseudo differential equations and related difference equations. Adv. Differ. Equ. 2013, 289 (2013)
5. Vasilyev, VB: On some difference equations of first order. Tatra Mt. Math. Publ. 54, 165-181 (2013)
6. Mikhlin, SG, Prößdorf, S: Singular Integral Operators. Akademie Verlag, Berlin (1986)
7. Sobolev, SL: Cubature Formulas and Modern Analysis: An Introduction. Gordon & Breach, Montreux (1992)
8. Dudgeon, DE, Mersereau, RM: Multidimensional Digital Signal Processing. Prentice Hall, Englewood Cliffs (1984)
9. Vasilyev, AV, Vasilyev, VB: Discrete singular operators and equations in a half-space. Azerb. J. Math. 3(1), 84-93 (2013)
10. Vasilyev, AV, Vasilyev, VB: Discrete singular integrals in a half-space. In: Current Trends in Analysis and Its Applications, Proc. 9th Congress, Krakow, Poland, August 2013, pp. 663-670. Birkhäuser, Basel (2015)
11. Vasilyev, AV, Vasilyev, VB: Periodic Riemann problem and discrete convolution equations. Differ. Equ. 51(5), 652-660 (2015)
12. Eskin, G: Boundary Value Problems for Elliptic Pseudodifferential Equations. Am. Math. Soc., Providence (1981)
13. Gakhov, FD: Boundary Value Problems. Dover, New York (1981)
14. Muskhelishvili, NI: Singular Integral Equations. North-Holland, Amsterdam (1976)
15. Gokhberg, I, Krupnik, N: Introduction to the Theory of One-Dimensional Singular Integral Equations. Birkhäuser, Basel (2010)
Acknowledgements
The authors are very grateful to the anonymous referees for their valuable suggestions. This work was supported by the Russian Foundation for Basic Research and the government of the Lipetsk region of Russia, project No. 14-41-03595-a.
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
|
I've spent four lectures on the logic of partitions; you may be wondering why. One reason was to give you examples illustrating this important fact:
Theorem. Left adjoints preserve joins and right adjoints preserve meets. Suppose \(f : A \to B\) and \(g : B \to A\) are monotone functions between posets. Suppose that \(f\) is the left adjoint of \(g\), or equivalently, \(g\) is the right adjoint of \(f\). If the join of \(a,a' \in A\) exists then so does the join of \(f(a), f(a') \in B\), and
$$ f(a \vee a') = f(a) \vee f(a'). $$ If the meet of \(b,b' \in B\) exists then so does the meet of \(g(b), g(b') \in A\), and
$$ g(b \wedge b') = g(b) \wedge g(b'). $$ The proof is very easy, so this deserves being called a "Theorem" only because it's so fundamental! I will prove it later, in more generality. Right now let's see how it's relevant to what we've been doing.
In Lecture 9 we saw something interesting about the subsets. Given any set \(X\) there's a poset \(P(X)\) consisting of all subsets of \(X\). Given any function \(f : X \to Y\) there's a monotone map
$$ f^* : P(Y) \to P(X) $$sending any subset of \(Y\) to its preimage under \(f\). And we saw that \( f^{\ast} \) has both a left adjoint and a right adjoint. This means that \( f^{\ast} \)
is both a right adjoint and a left adjoint. (Remember: having a left adjoint means being a right adjoint, and vice versa.)
So by our Theorem, we see that \(f^* : P(Y) \to P(X)\) preserves both meets and joins! You can also see this directly - see Puzzle 41 in Lecture 13. But what matters here is the general pattern.
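A brute-force check of this fact on a small example (my own choice of \(X\), \(Y\) and \(f\), not from the lecture): the preimage map \(f^*\) preserves both unions (joins) and intersections (meets) in the poset of subsets.

```python
# Verify that the preimage map f* : P(Y) -> P(X) preserves unions and intersections
# for every pair of subsets of Y, for one small illustrative function f.
from itertools import chain, combinations

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}

def preimage(S):
    """f*(S) = {x in X : f(x) in S}."""
    return frozenset(x for x in X if f[x] in S)

def subsets(S):
    S = list(S)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

ok = all(preimage(A | B) == preimage(A) | preimage(B) and
         preimage(A & B) == preimage(A) & preimage(B)
         for A in subsets(Y) for B in subsets(Y))
print(ok)  # True
```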
In Lecture 13 we also saw something interesting about partitions. Given any set \(X\) there's a poset \( \mathcal{E}(X)\) consisting of all partitions of \(X\). Given any function \(f : X \to Y\) there's a monotone map
$$ f^* : \mathcal{E}(Y) \to \mathcal{E}(X) $$ sending any partition of \(Y\) to its pullback along \(f\). And we saw that while \( f^{\ast} \) preserves meets, it does not preserve joins!
So by our Theorem, we see that \(f^* : \mathcal{E}(Y) \to \mathcal{E}(X)\)
cannot be a left adjoint. On the other hand, it might be a right adjoint.
And indeed it is! So, this strange difference between the logic of subsets and the logic of partitions is really all about adjoints.
Puzzle 42. Given a function \(f : X \to Y\) there is a way to push forward any partition \(P\) on \(X\) and get a partition \(f_{!} (P)\) on \(Y\). In pictures it looks like this:
although this is not the most exciting example, since here \(f_{!}(P)\) has just one part. Figure out the precise general description of \(f_{!} (P)\). If you get stuck read Section 1.5.2 of
Seven Sketches. Puzzle 43. Show that for any function \(f : X \to Y\), pushing forward partitions along \(f\) gives a monotone map
$$ f_{!} : \mathcal{E}(X) \to \mathcal{E}(Y) . $$
Puzzle 44. Show that for any function \(f : X \to Y\), \(f^* : \mathcal{E}(Y) \to \mathcal{E}(X)\) is the right adjoint of \(f_{!}: \mathcal{E}(X) \to \mathcal{E}(Y)\).
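For readers who want to experiment with Puzzle 42, here is one possible sketch of the pushforward (my own union-find-based formulation, not the book's): two elements of \(Y\) get merged whenever they are images of points lying in a common part of \(P\), and points outside the image stay as singletons.

```python
# Compute the pushforward partition f_!(P) on Y by merging images of each part of P.
def pushforward(f, P, Y):
    parent = {y: y for y in Y}
    def find(y):                       # union-find with path compression
        while parent[y] != y:
            parent[y] = parent[parent[y]]
            y = parent[y]
        return y
    def union(a, b):
        parent[find(a)] = find(b)
    for part in P:                     # x ~ x' in P forces f(x) ~ f(x') in f_!(P)
        part = list(part)
        for x in part[1:]:
            union(f[part[0]], f[x])
    blocks = {}
    for y in Y:
        blocks.setdefault(find(y), set()).add(y)
    return {frozenset(b) for b in blocks.values()}

# Illustrative data: f identifies nothing new except via the part {1, 2}.
f = {1: 'a', 2: 'b', 3: 'b', 4: 'c'}
P = [{1, 2}, {3}, {4}]                 # a partition of X = {1, 2, 3, 4}
print(pushforward(f, P, {'a', 'b', 'c'}))  # two blocks: {'a','b'} and {'c'}
```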
|
Diagonal construction Set
context $ f:C\to \mathcal {\mathcal P}(C) $ definiendum $ x\in D_f $ inclusion $ D_f \subseteq C $ postulate $ x \notin f(x) $ Discussion
We take an arbitrary set $C$ and argue about all the functions $f:C\to \mathcal {\mathcal P}(C)$ from $C$ to the powerset ${\mathcal P}(C)$. For any such $f$, we define $ D_f$ as the subset of $C$ containing the elements $x\in C$ for which $ x \notin f(x) $.
So for example we might consider
$C=\{0,1\}$
so that
${\mathcal P}(C)=\{\{\},\{0\},\{1\},\{0,1\}\}$
and if we were to consider the function $f$ that maps $0$ to $f(0)=\{0,1\}$ and $1$ to $f(1)=\{0\}$, then as $0\in \{0,1\}$ but $1\notin \{0\}$, we are left with $D_f=\{1\}$.
The point of $ D_f \subseteq C $ defined as above is this: given any set $C$, if one attempts to use its elements to index its subsets via a function $f:C\to \mathcal P(C)$, then the “flipped diagonal subset” $ D_f \in {\mathcal P}(C) $ will always be missed.
Proof of the above, which is also a proof of Cantor's theorem: For all $x,X$, we either have $x\in X$ or $x\notin X$. If $X=Y$, then $x\in X$ has the same truth value as $x\in Y$. So for all $x,X,Y$, we have
$(X=Y) \Rightarrow \neg(x\in X\land x\notin Y),$
which is the same as
$(x\in X\land x\notin Y) \Rightarrow \neg(X=Y).$
This always holds. Now, specifically, for any $x\in D_f$, taking $X=D_f$ and $Y=f(x)$, the left hand side reads $x\in D_f\land x\notin f(x)$ (which by definition of $D_f$ is the same as just $x\in D_f$) and the right hand side reads $\neg(f(x)=D_f)$. The same happens for $x\in C$ with $x\notin D_f$, where we then switch $X$ and $Y$. This means $\nexists x\ (f(x)=D_f)$. Since $D_f\subseteq C$, i.e. $D_f \in \mathcal P(C) = \mathrm{codom} (f)$, we see that no such $f$ is a surjection, let alone a bijection. So the cardinality of any set is less than that of its power set. $\Box$
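Since $C=\{0,1\}$ admits only finitely many functions $f:C\to\mathcal P(C)$, the argument can be checked exhaustively; the sketch below is my own illustration of the worked example above.

```python
# For C = {0, 1}, enumerate all 16 functions f : C -> P(C) and confirm that
# the diagonal set D_f = {x in C : x not in f(x)} is never in the image of f.
from itertools import product

C = [0, 1]
powerset = [frozenset(s) for s in ([], [0], [1], [0, 1])]

missed = True
for values in product(powerset, repeat=len(C)):   # all 4^2 = 16 functions
    f = dict(zip(C, values))
    D_f = frozenset(x for x in C if x not in f[x])
    if D_f in f.values():
        missed = False
print(missed)  # True: no f hits its own D_f
```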
If, in a similar spirit, $C,f$ are taken to be a sequence and an enumeration of sequences, then we see that there is at least one sequence which escapes enumeration. This further translates to the uncountability of the real numbers.
Notice the occurrence of a
negation of a formula in which $x$ appears twice in the comprehension part. This sort of set comprehension is typical for this sort of business. Continuum hypothesis
A set is smaller than another one, if you can embed the first into the second but not the other way around. This is more formally defined using injective functions.
If $ {\mathbb N} $ denotes the natural numbers (let’s rather write $ \omega_0 $ for it here) and $ {\mathbb R} $ denotes the real numbers, then you can find an injection from the former set into the latter. This means there is a function $ i : \omega_0 \to {\mathbb R} $ so that for any two distinct $ n,m \in \omega_0 $ you have $ i(n) \neq i(m) $. For example $ i(n) := e^n $ does the job.
However, Cantor proves that you can’t find such an injective function in $ {\mathbb R} \to \omega_0 $. So the set of reals is bigger than the set of natural numbers.
Btw., note that the space of functions $ \omega_0 \to {\mathbb R} $ can be represented internally to a category of sets. I.e. there are representations of this collection of functions from $\omega_0$ to $ {\mathbb R} $ as a set, and this set (up to bijection) is written $ {\mathbb R}^{\omega_0} $. If $ \{0,1\} $ is some set with only two elements, Cantor also shows that even $\{0,1\}^{\omega_0}$ is bigger than $\omega_0$.
Given that we have a notion of bigger now, call $\omega_1 $ the next big infinite set after the naturals and $\omega_2$ the set after that one.
Let’s now consider a large category of sets (or classes, or model, or however you want to call those collections) which obeys the laws of our standard theory of sets (ZFC). There is an object in this category which you denote by $\{0,1\}^{\omega_0}$, but what this thing actually contains depends on how you stuff it. The only rule is that its content doesn’t violate the laws of ZFC, but this still leaves some freedom. It’s now possible to find a category where the sets called $\{0,1\}^{\omega_0}$ and $\omega_1$ are in bijection. (By the standard construction of the real numbers from $\{0,1\}^{\omega_0}$, this also means the reals are the next big things after the naturals.)
BUT in the 60’s the mathematician Cohen showed you can define a category of sets that simultaneously fulfills the laws of ZFC, but where the sets are such that it’s possible to find an injective function from $\omega_2$ to $\{0,1\}^{\omega_0}$! This means taking $\omega_0$ to $\{0,1\}^{\omega_0}$ jumps in size over a whole kind of infinity, namely $\omega_1$. Since the category obeys ZFC, this implies that if ZFC is a consistent theory, you can’t use its formal laws to prove that $\{0,1\}^{\omega_0}$ and $\omega_1$ are in bijection. We say that the statement is independent of this theory of sets.
Cohen invented a new technique to construct the above injection $\omega_2 \to \{0,1\}^{\omega_0}$. It’s called „forcing“ and has been a big deal in logic ever since. Basically, he first notes that a function $\omega_2 \to \{0,1\}^{\omega_0}$ corresponds to a characteristic function $\omega_2 \times {\omega_0} \to \{0,1\} $. He uses ZFC to construct the big set of functions $ \{ f : X \to \{0,1\} \} $ where the X’s are finite subsets of $\omega_2 \times {\omega_0} $. Next comes some order theory and „filters“, which patch the f’s together and end up with one huge function $ F $, so that for different $ \alpha, \beta \in \omega_2 $, the functions $ n \mapsto F(\alpha, n) $ and $ n \mapsto F(\beta, n) $ are distinct. This then does the trick.
The continuum hypothesis being independent is somewhat ugly, I’d say. It implies, amongst other things, that given a big set B and a small set S, you cannot even prove from ZFC that B has more subsets than S. So even if your axioms of a theory of sets are in-your-face strong, like ZFC is, it’s hard to capture what you’d like to be true for sets.
Continuum hypothesis funfact
The continuum hypothesis $\nexists Y.\,|\mathbb N|<|Y|<|\mathbb R|$ is independent of ZFC. Now, in fact, there are models where the continuum hypothesis fails and where the witnessing $Y$ has $\mathcal P(Y)\cong\mathbb R$! The fact that of course also $\mathcal P(\mathbb N)\cong\mathbb R$ implies that ZFC (if consistent) doesn't prove “If a set $Y$ is bigger than another set $X$, then it has more subsets”. (Martin's axiom describes such a world, where there is a $Y$ which is of bigger cardinality than the set of natural numbers, but not significantly different anyway.)
|
This phrasing is only true for organic molecules. If for example, sodium hydride loses hydrogen, the sodium ion will get reduced. But since you seem to come from a biochemical background, this simplification is okay since you will be dealing with organic molecules primarily.
The idea behind that statement is that hydrogen atoms in biomolecules are typically only bound to carbon, nitrogen, oxygen and sulfur. Hydrogen is less electronegative than all these elements and therefore any $\ce{X-H}$ bond will be polarised towards $\ce{X}$; the non-hydrogen atom.
When determining oxidation states, bonds are formally cleaved heterolytically in such a way that the more electronegative partner gets both electrons. Then, the electrons on the formal atomic ions created this way are counted and subtracted from the number of valence electrons the free atom would have. Bonds between atoms of the same element are cleaved homolytically and the same procedure is applied. Thus, if we take ethene ($\ce{C2H4}$, structure see below) the $\ce{C-H}$ bonding electrons are formally attributed to carbon entirely while the $\ce{C=C}$ double bond is split equally. Doing this, each carbon atom ends up with six electrons, formally, and therefore a formal charge of $2-$ or an oxidation state of $\mathrm{-II}$. If we do the same for ethyne ($\ce{C2H2}$) — a compound with two hydrogen atoms removed — we arrive at five formal electrons and thus an oxidation state of $\mathrm{-I}$. Therefore, going from ethene to ethyne is an oxidation.
$$\begin{align}\ce{H2C=&CH2} & \ce{H-C&#C-H}\\\text{ethe}&\text{ne} &\text{eth}&\text{yne}\end{align}$$
Having gotten the theory out of the way, what does this mean for the simplified rule? Well, as I mentioned, any $\ce{X-H}$ bond is polarised
away from the hydrogen and thus effectively adds an additional electron to $\ce{X}$. Carbon, the most common bonding partner of hydrogen, is less electronegative than all the other typical elements of biomolecules, and thus will formally lose electrons to them. When removing hydrogen from a compound chemically, we will always replace at least one $\ce{C-H}$ bond with a $\ce{C-C, C-O, C-N}$ or $\ce{C-S}$ bond. Thus, the carbon atom will formally lose electrons and we will have an oxidation.
This can also be rationalised in another way to determine oxidation states: the additive method. Starting from a neutral compound, assign each hydrogen atom a $\mathrm{+I}$, each oxygen $\mathrm{-II}$. Add that up and invert the sign; this is the oxidation state of all the carbons (assuming a molecule of the formula $\ce{C_mH_nO_x}$). Divide by the number of carbons to get an average oxidation state of carbon.
$$\chi(\ce{C}) = -\frac{n - 2x}{m} = \frac{2x - n}{m}$$
As you see, each hydrogen atom in a compound enters into the formula with a
negative sign. Thus, removing a hydrogen must increase the average oxidation state of the carbons. Unfortunately, there is no trivial way to extend this formula to compounds that also contain nitrogen or sulfur.
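As a small sketch (my own, to make the arithmetic concrete), the additive method for neutral $\ce{C_mH_nO_x}$ molecules can be encoded directly:

```python
# Additive method for neutral C_m H_n O_x molecules: each H counts +I, each O -II,
# and the carbons absorb the balance, giving chi(C) = (2x - n) / m on average.

def mean_carbon_oxidation_state(m: int, n: int, x: int = 0) -> float:
    """Average oxidation state of carbon in a neutral C_m H_n O_x molecule."""
    return (2 * x - n) / m

print(mean_carbon_oxidation_state(2, 4))     # ethene  C2H4   -> -2.0
print(mean_carbon_oxidation_state(2, 2))     # ethyne  C2H2   -> -1.0
print(mean_carbon_oxidation_state(2, 6, 1))  # ethanol C2H6O  -> -2.0
```

Removing two hydrogens (ethene to ethyne) raises the average oxidation state by one, matching the bond-cleavage count above.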
|
Another way:
mat = {{4, 11, 14}, {8, 7, -2}};
sph[{x_, y_, z_}] := x^2 + y^2 + z^2;
ContourPlot[sph[PseudoInverse[mat].{u, v}] == 1,
{u, -20, 20}, {v, -20, 20},
Epilog -> {Red, PointSize@Large, Point[{{18, 6}, {3, -9}}],
Line[{{{18, 6}, -{18, 6}}, {{3, -9}, -{3, -9}}}]},
PlotLabel -> Simplify[sph[PseudoInverse[mat].{u, v}] == 1]]
(If a set of points $\{p\}$ satisfies $F(p)=0$, then the transformed set $\{q=T(p)\}$ satisfies $F(T^{-1}(q))=0$.)
Aside: You can get the semi-major axes of the transformed ellipse from
SingularValueDecomposition.
{U, Σ, V} = SingularValueDecomposition[mat];
U.Σ // MatrixForm
The zero vector could be interpreted as one of the axes of the sphere collapsing under the transformation.
The axes of the sphere, represented by the columns of
V, are mapped to the above by the transformation:
mat.V // MatrixForm
Some references:
David Austin, "We Recommend a Singular Value Decomposition",
Feature Column, AMS (undated?, online)
D. Kalman, "A singularly valuable decomposition: The svd of a matrix",
The College Mathematics Journal 27 (1996), no. 1, 2-23; revised 2002
More on the SVD:
If $(x,y)$ denotes the inner product, then for a unit vector $x$,$$\|Ax\|^2=(Ax,Ax)=(x,A^TAx) \le \lambda \cdot(x,x) = \lambda\,,$$where $\lambda$ is the greatest eigenvalue of $A^TA$, which has nonnegative real eigenvalues. Consequently $\sqrt{\lambda}$ is the greatest singular value of $A$ and the maximum of $\|Ax\|$, achieved when $x$ is a unit eigenvector of $A^TA$ corresponding to the eigenvalue $\lambda$.
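To make this concrete outside Mathematica (a Python sketch of mine, not part of the original answer): for the 2×3 matrix `mat` above, $AA^T$ is 2×2 and symmetric, so its eigenvalues — and hence the singular values — have a closed form, and $\|Ax\|$ never exceeds the largest one on unit vectors.

```python
# Singular values of A = {{4,11,14},{8,7,-2}} from the 2x2 eigenvalues of A A^T,
# plus a random check that ||Ax|| <= sigma1 for unit vectors x in R^3.
import math
import random

A = [[4, 11, 14], [8, 7, -2]]

# A A^T (2x2, symmetric), then its eigenvalues via trace/determinant.
AAt = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(2)] for i in range(2)]
tr = AAt[0][0] + AAt[1][1]
det = AAt[0][0] * AAt[1][1] - AAt[0][1] * AAt[1][0]
disc = math.sqrt(tr * tr - 4 * det)
sigma1 = math.sqrt((tr + disc) / 2)
sigma2 = math.sqrt((tr - disc) / 2)
print(round(sigma1, 4), round(sigma2, 4))  # 18.9737 9.4868 (= 6*sqrt(10), 3*sqrt(10))

random.seed(0)
for _ in range(1000):
    v = [random.gauss(0, 1) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    x = [c / norm for c in v]
    Ax = [sum(A[i][k] * x[k] for k in range(3)) for i in range(2)]
    assert math.hypot(*Ax) <= sigma1 + 1e-9
```

These are the semi-axis lengths of the ellipse drawn by the `ContourPlot` at the top of the answer.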
SVD Demo: An ${\bf R}^3 \rightarrow {\bf R}^2$ version of the illustration in the Austin article (ref. above).
Note that in the SVD below,
V is an orthogonal matrix and represents a rigid motion that aligns the standard coordinate vectors with the eigenvectors of $A^TA$. In this case it is a reflection and
-V is a rotation that accomplishes the same alignment. The demo allows the user to rotate the unit sphere from the pole aligned with the z-axis to a sphere with a pole aligned with the eigenvector corresponding to the singular value $0$. In fact, it may be rotated so that the pole is aligned with any of the three eigenvectors.
{U, Σ, V} = SingularValueDecomposition[mat]; (* from above *)
rot[θ_] = RotationMatrix[θ, First@Pick[##, 1] & @@ Reverse@Eigensystem[-V]];
axesplot[
xform_, (* transformation = mat or IdentityMatrix[3] *)
angle_ (* rotation angle for rot[] *)
] := With[{
axes = Transpose[rot[angle]],
colors = ColorData[97, "ColorList"][[{1, 3, 4}]]},
{Red, PointSize@Large, Thick,
MapThread[ (* rotated axes *)
Function[{axis, color}, {color, Point[axis], Line[{-axis, axis}]}],
{Transpose[xform.axes], colors}],
Gray, Thickness[Medium], (* axes transformed by V *)
InfiniteLine[{{0, 0, 0}, #}.Transpose[xform]] & /@ Transpose[V]}
];
sphParam[θ_, ϕ_] = CoordinateTransform["Spherical" -> "Cartesian", {1, ϕ, θ}];
Manipulate[
GraphicsRow[{
(* sphere *)
Show[
ParametricPlot3D[rot[t].sphParam[θ, ϕ],
{θ, 0, 2 Pi}, {ϕ, 0, Pi}, PlotStyle -> None, Mesh -> 15],
Graphics3D[axesplot[IdentityMatrix[3], -t]],
AxesLabel -> {"x", "y", "z"}, Ticks -> None,
ViewPoint -> Dynamic@vp, ViewVertical -> Dynamic@vv, (* preserves view as t changes *)
SphericalRegion -> True
],
(* mat.sphere *)
ParametricPlot[mat.rot[t].sphParam[θ, ϕ],
{θ, 0, 2 Pi}, {ϕ, 0, Pi}, PlotStyle -> None,
Mesh -> 15, BoundaryStyle -> {Gray, Thin},
Epilog -> axesplot[mat, -t], Axes -> None
]
}],
{t, 0., 2. Pi, TrackingFunction -> (* sets "stops" at critical angles *)
(t = Nearest[Pi Range[1, 5, 2]/3, #, {1, 0.07}] /. {{} -> #, {tt_} :> tt}; &),
Appearance -> "Labeled"},
(* ViewPoint, ViewVertical variables, without controls *)
{{vp, {1.3, -2.4, 2}}, None}, {{vv, {0, 0, 1}}, None}
]
Aligned with the eigenvectors:
|
I thought this result was a bit interesting. Mahlon M. Day showed in the paper [1] that the amenable groups are precisely the groups for which the Markov–Kakutani fixed-point theorem holds.
If $(X,\mathcal{M})$ is an algebra of sets, then a function $\mu:\mathcal{M}\rightarrow[0,1]$ is said to be a finitely additive probability measure if $\mu(\emptyset)=0,\mu(X)=1$ and $\mu(A\cup B)=\mu(A)+\mu(B)$ whenever $A,B\in\mathcal{M}$ and $A\cap B=\emptyset$. If $G$ is a group, then a finitely additive probability measure $\mu:P(G)\rightarrow[0,1]$ on the algebra of sets $(G,P(G))$ is said to be left-invariant if $\mu(aR)=\mu(R)$ for each $a\in G$ and each $R\subseteq G$.
A group $G$ is said to be amenable if there exists a left-invariant finitely additive probability measure $\mu:P(G)\rightarrow[0,1]$. For example, every finite group is amenable, and every abelian group is amenable. Furthermore, the class of amenable groups is closed under taking quotients, subgroups, direct limits, and finite products.
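For instance (my own illustrative check, not from [1] or [2]), on a finite group the normalized counting measure $\mu(R)=|R|/|G|$ is a left-invariant finitely additive probability measure, which is exactly why every finite group is amenable. This can be verified exhaustively on the symmetric group $S_3$:

```python
# Check left-invariance of mu(R) = |R|/|G| on S3: mu(aR) = mu(R) for all a and R.
from itertools import chain, combinations, permutations

G = list(permutations(range(3)))                 # S3 as permutation tuples
compose = lambda a, b: tuple(a[b[i]] for i in range(3))

def mu(R):
    return len(R) / len(G)

def subsets(S):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

invariant = all(
    mu(frozenset(compose(a, r) for r in R)) == mu(R)
    for a in G for R in subsets(G)
)
print(invariant)  # True
```

The check succeeds because left translation by any $a$ is a bijection of $G$, so $|aR| = |R|$.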
Let $C$ be a convex subset of a real vector space. Then a function $f:C\rightarrow C$ is said to be an affine map if $f(\lambda x+(1-\lambda)y)=\lambda f(x)+(1-\lambda)f(y)$ for each $\lambda\in[0,1]$ and $x,y\in C$.
$\textbf{Theorem}$(Day) Let $G$ be a group. Then the following are equivalent.
1. $G$ is amenable.
2. Let $X$ be a Hausdorff topological vector space and let $C\subseteq X$ be a compact convex subset. Let $\phi:G\rightarrow C^{C}$ be a group action such that each $\phi(g)$ is a continuous affine map. Then there is a point in $C$ fixed by every element of $G$.
3. Let $X$ be a locally convex topological vector space and let $C\subseteq X$ be a compact convex subset. Let $\phi:G\rightarrow C^{C}$ be a group action such that each $\phi(g)$ is a continuous affine map. Then there is a point in $C$ fixed by every element of $G$.
[1] Fixed-point theorems for compact convex sets. Mahlon M. Day. Illinois J. Math. Volume 5, Issue 4 (1961), 585-590.
[2] Ceccherini-Silberstein, Tullio, and M. Coornaert. Cellular Automata and Groups. Heidelberg: Springer, 2010.
|
(11C) Tree Heights
11-02-2018, 01:24 PM (This post was last modified: 11-02-2018 01:26 PM by Gamo.)
Post: #1
(11C) Tree Heights
This program was adapted from the Hand-Held-Calculator Programs
for the Field Forester.
More detailed information is attached here.
Procedure:
1. Enter slope distance to base of tree [A] -->Display known distance
2. Enter Slope percent to tip, [R/S] --> Display 0 (Slope to tip Stored)
3. Enter Slope percent to Base.
I. If Positive [B] --> display 0 // slope base entered
II. If Negative [C] --> display 0 // slope base entered
4. [D] ---> Tree Heights
-----------------------------------------------
Example: FIX 1
Slope Percent to tip = 40
Negative slope percent to base = 20
Distance to tree = 56
What is the tree height?
56 [A] display 56
40 [R/S] display 0
20 [C] display 0
[D] 32.9
The tree height is 32.9
-------------------------------------------------
Slope Percent to tip = 40
Positive slope percent to base = 20
Distance to tree = 56
What is the tree height?
56 [A] display 56
40 [R/S] display 0
20 [B] display 0
[D] 10.9
The tree height is 10.9
Program:
Code:
Gamo
11-02-2018, 02:43 PM
Post: #2
RE: (11C) Tree Heights
An excellent read:
Hand-Held-Calculator Programs for the Field Forester
Wayne D. Shepperd, Associate Silviculturist
General Technical Report RM-76 (July 1980)
Rocky Mountain Forest and Range Experiment Station
Forest Service
U.S. Department of Agriculture
Abstract
A library of programs written for hand-held, programmable
calculators is described which eliminates many of the computations
previously done by hand in the field. Programs for scaling aerial
photos, variable plot cruising, basal area factor gauge calibration,
and volume calculations are included.
Contents
Introduction............................ 1
Slope to Horizontal Distance....... 2
Basal Area Computation............ 3
Tree Heights.......................... 4
Adequacy of Sample Test.......... 5
Multispecies Board Foot Volumes 7
BAF Gauge Calibration.............. 9
Limiting Distance................... 10
Photo Work Program............... 12
Spruce Variable Plot Cruising.... 14
Literature Cited..................... 17
BEST!
SlideRule
11-02-2018, 06:15 PM (This post was last modified: 11-02-2018 09:48 PM by Dieter.)
Post: #3
RE: (11C) Tree Heights
(11-02-2018 01:24 PM)Gamo Wrote: This program was adapted from the Hand-Held-Calculator Programs
Thank you very much. The attached program description seems to refer to a TI program: enter three values with four (!) label keys, finally press another key for the result. But this is HP, the 11C uses RPN; here all this can be done much shorter and more straightforwardly, even without using a single data register. A direct translation, on the other hand, duplicates the clumsy original procedure:
(11-02-2018 01:24 PM)Gamo Wrote: 1. Enter slope distance to base of tree [A] -->Display known distance
We can do better. ;-)
First of all, mathematically there is no need to distinguish positive or negative base angles and handle them separately. The same formula will work for both cases, as tan(–x) = –tan(x). Also there is no need to calculate sin(90°–B1) as this is equivalent to cos(B1).
Converting the slope values to angles is done in a subroutine. But on the 11C this is merely four steps,*) so two calls require (2x GSB, LBL, 4 steps, RTN) eight lines altogether. This does not save any program steps, compared to having the same four steps twice in the program. So a subroutine has no advantage, and without it the program would even run slightly faster. I left it in there anyway so that the user may do the slope-to-angle conversion with f[E], independently from the rest of the program.
Here is my attempt at realizing all this in a compact 10/11/15C program, but it should run just as well on many other HPs. If your calculator does not feature LBL A or LBL E, simply replace them with numeric ones.
Code:
01 LBL A
Enter base distance [ENTER] tip slope percent [ENTER] base slope percent.
Press f[A] to get the tree height.
Additional feature:
Enter slope percent, press f[E] and get the equivalent angle.
Examples, using your above data:
56 [ENTER] 40 [ENTER] –20 f[A] => 32,95
56 [ENTER] 40 [ENTER] 20 f[A] => 10,98
What is the equivalent angle for a slope of 30% ?
30 f[E] => 16,70°
Edit: here is a version for the HP25(C) which may also run on other calculators without labels and subroutines:
Code:
01 ENTER
Dieter
__________
*) In your original program you could even do it with 3 steps: 1 % TANˉ¹
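For reference, the height formula Dieter uses, b = a·cos(B2)·tan(B1) − a·sin(B2) with the angles recovered from slope percents via atan(s/100), can be sketched in Python (an illustration of the math, not a replacement for the RPN listings):

```python
# Tree height from slope distance to the base and slope percents to tip and base.
import math

def tree_height(distance, tip_slope_pct, base_slope_pct):
    b1 = math.atan(tip_slope_pct / 100)    # angle to tip
    b2 = math.atan(base_slope_pct / 100)   # angle to base; its sign carries through
    return distance * math.cos(b2) * math.tan(b1) - distance * math.sin(b2)

print(round(tree_height(56, 40, -20), 2))  # 32.95
print(round(tree_height(56, 40, 20), 2))   # 10.98
```

As in the RPN version, no case split for positive versus negative base slopes is needed, since tan(−x) = −tan(x).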
11-02-2018, 07:03 PM
Post: #4
RE: (11C) Tree Heights
11-02-2018, 07:41 PM
Post: #5
RE: (11C) Tree Heights
Ah, thank you very much.
But I don't see much of a real program. It's more like a "program outline", as stated in the attachment, a kind of recipe for writing your own program.
BTW the result for the second example, rounded to one decimal, should be 11,0 instead of 10,9.
Dieter
11-03-2018, 01:35 AM (This post was last modified: 11-03-2018 01:38 AM by Gamo.)
Post: #6
RE: (11C) Tree Heights
Dieter thanks for the better program update.
This book only shows program guidelines, to be adapted to any programmable
calculator, as stated at the beginning of the book.
Personally I programmed this Tree Height to be as simple to operate as possible, so I put
all input operations separately on each label, like so
[A] For Known Distance and Slope Tip
[B] For known Positive Slope Base
[C] For Known Negative Slope Base
[D] Compute Tree Height
------------------------------------------------------
SlideRule Thanks for the program guide line page.
Remark: At second page of this book there are marked for the typo error
On Page 5 Example on the first line:
Should be: Positive Slope Percent to Tip=40
----------------------------------------------------
Gamo
11-03-2018, 12:47 PM (This post was last modified: 11-03-2018 02:13 PM by Dieter.)
Post: #7
RE: (11C) Tree Heights
See below. ;-)
(11-03-2018 01:35 AM)Gamo Wrote: Personally I program this Tree Height as simple to operate as possible
Does it get simpler than entering the three values on the stack?
(11-03-2018 01:35 AM)Gamo Wrote: so I put all input operation separately on each labels like so
Again: there is no need for separate calculations for positive or negative slope values. Try it: simply enter –20 at [B]. You may also use two separate labels for the distance and the slope percent to the tip.
Finally here is another version:
In many cases it is a good idea not to follow a given path but to try a new approach instead. This is also the case here. The tree height can also be calculated this way:
b = a·cos(B2) · tan(B1) – a·sin(B2)
The point here is that the sine and cosine term can be simultaneously calculated by means of the P–>R command. And the tangent simply is the tip slope divided by 100.
This leads to the following even shorter program:
Code:
01 LBL A
And here is a version that uses the label keys:
Code:
01 LBL A
f[USER]
Enter base distance [A]
Enter base slope percent [B] (may be positive or negative)
Enter tip slope percent [C]
Calculate tree height with [D]
56 [A] => 56,00
20 [B] => 20,00
40 [C] => 40,00
[D] => 10,98
-20 [B] => -20,00
[D] => 32,95
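As a cross-check, the formula b = a·cos(B2)·tan(B1) − a·sin(B2) can be evaluated in Python (a sketch with the slopes given in percent, not an HP listing):

```python
import math

def tree_height(base_dist, tip_pct, base_pct):
    """b = a*cos(B2)*tan(B1) - a*sin(B2), with B1 = atan(tip_pct/100)
    and B2 = atan(base_pct/100); base_pct may be negative."""
    b1 = math.atan(tip_pct / 100)   # angle to the tip
    b2 = math.atan(base_pct / 100)  # angle to the base (sign matters)
    return base_dist * math.cos(b2) * math.tan(b1) - base_dist * math.sin(b2)

print(round(tree_height(56, 40, -20), 2))  # 32.95
print(round(tree_height(56, 40, 20), 2))   # 10.98
```

The two calls reproduce the [D] results above.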
Addendum:
I was playing around a bit with a TI59 emulator, so here also is a version for the TI58/59.
Code:
000 76 LBL
Usage is the same as above.
The final steps round the result to two decimals.
Dieter
11-04-2018, 03:32 PM
Post: #8
RE: (11C) Tree Heights
We don't really need trigonometric functions here.
Good old Pythagoras is good enough:
Code:
01 LBL A
Examples:
56 ENTER
20 ENTER
40 A
10.9825
56 ENTER
-20 ENTER
40 A
32.9475
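The same numbers come out without any trigonometry at all, since cos(atan(s)) = 1/√(1+s²) and sin(atan(s)) = s/√(1+s²); a Python sketch of the idea (slopes in percent):

```python
import math

def tree_height(a, tip_pct, base_pct):
    # cos(atan(s)) = 1/sqrt(1+s^2) and sin(atan(s)) = s/sqrt(1+s^2),
    # so the trig formula collapses to arithmetic plus one square root
    s1, s2 = tip_pct / 100, base_pct / 100
    return a * (s1 - s2) / math.sqrt(1 + s2 * s2)

print(round(tree_height(56, 40, 20), 4))   # 10.9825
print(round(tree_height(56, 40, -20), 4))  # 32.9475
```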
Cheers
Thomas
11-04-2018, 04:57 PM (This post was last modified: 11-04-2018 05:35 PM by Dieter.)
Post: #9
RE: (11C) Tree Heights
(11-04-2018 03:32 PM)Thomas Klemm Wrote: We don't really need trigonometric functions here.
Great! This way it can also be done on the 12C and other calculators without trigs or polar/rectangular conversion:
Code:
01 X<>Y
Since no →P is required this may even run slightly faster than Thomas' original version. If available, replace "ENTER ×" with x².
(11-04-2018 03:32 PM)Thomas Klemm Wrote: Examples:
Same for the above version. Press [R/S] instead of [A]. ;-)
If you, like me, prefer to enter base distance [ENTER] tip slope [ENTER] base slope, simply remove the first line.
Gamo, if you want to implement this for the 11C using the label keys A...D, here is an adapted version:
Code:
01 LBL A
This thread shows once again how a new approach and a bit of better mathematical insight can substantially improve a given solution. So don't adapt programs or algorithms, rethink the problem and realize your own solution. Or "dare to think for yourself", as others have put it.
Dieter
11-05-2018, 12:52 AM
Post: #10
RE: (11C) Tree Heights
Thanks Thomas Klemm and Dieter
The updated programs are more streamlined now and even work on the HP-12C.
Excellent Idea !!
Gamo
11-22-2018, 05:27 PM (This post was last modified: 11-22-2018 05:35 PM by ijabbott.)
Post: #11
RE: (11C) Tree Heights
(11-04-2018 04:57 PM)Dieter Wrote:(11-04-2018 03:32 PM)Thomas Klemm Wrote: We don't really need trigonometric functions here.
That's a neat solution! It's also worth mentioning that if you know the tangent, sine or cosine of an angle between 0 and 90 degrees, you can derive the others with standard arithmetic and the square root function.
\( \tan(x) = \frac{\sqrt{1-\cos^2(x)}}{\cos(x)} \), or: \( \tan(x) = \sqrt{\frac{1}{\cos^2(x)} - 1} \)
Code:
ENTER
\( \cos(x) = \frac{1}{\sqrt{1 + \tan^2(x)}} \)
Code:
ENTER
\( \tan(x) = \frac{\sin(x)}{\sqrt{1 - \sin^2(x)}} \)
Code:
ENTER
\( \sin(x) = \frac{\tan(x)}{\sqrt{1 + \tan^2(x)}} \)
Code:
ENTER
\( \sin(x) = \sqrt{1 - \cos^2(x)} \), and: \( \cos(x) = \sqrt{1 - \sin^2(x)} \)
Code:
ENTER
Of course, "ENTER", "×" can be replaced by "x²" in all of the above, if available.
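These identities are easy to spot-check numerically; a quick Python sketch (not part of the original post):

```python
import math

# spot-check the identities at a few angles strictly between 0 and 90 degrees
for deg in (10, 37, 62, 80):
    x = math.radians(deg)
    s, c, t = math.sin(x), math.cos(x), math.tan(x)
    assert math.isclose(t, math.sqrt(1 - c*c) / c)
    assert math.isclose(t, math.sqrt(1/(c*c) - 1))
    assert math.isclose(c, 1 / math.sqrt(1 + t*t))
    assert math.isclose(t, s / math.sqrt(1 - s*s))
    assert math.isclose(s, t / math.sqrt(1 + t*t))
    assert math.isclose(s, math.sqrt(1 - c*c))
    assert math.isclose(c, math.sqrt(1 - s*s))
print("all identities hold")
```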
— Ian Abbott
|
Given a parametric family of distributions $\{p_\theta\mid\theta \in \Theta\}$, one can show that under some regularity conditions, the following approximation is valid
$$\operatorname{KL}(p_\theta\parallel p_{\theta + d \theta}) = \frac{1}{2}\, d \theta^T F(\theta) \, d\theta + \mathcal O(\|d\theta\|^3), $$ where $$F(\theta)_{ij} := -\mathbb E_{x \sim p_\theta}\left[\frac{\partial^2}{\partial \theta_i \, \partial \theta_j} \log(p_\theta(x))\right] $$ is the Fisher information matrix of $p_\theta$. A very rough sketch of the proof can be found on wikipedia.
Question 1
Is there such an approximation formula for the Wasserstein distance or other measures of discrepancy between probability distributions?
Question 2
Same question, specialized to $f$-divergences (of which KL is a particular case).
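Not an answer, but a concrete sanity check of the quoted expansion may help. For the Gaussian scale family p_θ = N(0, θ²) the KL divergence has a closed form and the Fisher information is F(θ) = 2/θ², so the second-order approximation KL ≈ ½ dθᵀ F dθ (note the factor ½ in the standard statement) can be verified numerically; values below are illustrative:

```python
import math

# p_theta = N(0, theta^2).  Closed form:
#   KL(p_theta || p_{theta+d}) = ln((theta+d)/theta) + theta^2/(2(theta+d)^2) - 1/2
# Fisher information of the scale parameter: F(theta) = 2/theta^2.
theta = 1.3
F = 2 / theta**2
for d in (0.1, 0.01, 0.001):
    kl = math.log((theta + d) / theta) + theta**2 / (2 * (theta + d)**2) - 0.5
    quad = 0.5 * F * d**2
    # the gap shrinks like d^3, consistent with the O(||dtheta||^3) remainder
    assert abs(kl - quad) < 4 * (d / theta)**3
print("KL matches (1/2) d^T F d up to third order")
```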
|
Event detail Paris/Berkeley/Bonn/Zürich Analysis Seminar: Planar Sobolev extension domains
Seminar | February 22 | 9:10-10 a.m. | 238 Sutardja Dai Hall
Yi Zhang, University of Bonn
A domain $\Omega \subset \mathbb R^2$ is called a $W^{1,\,p}$-extension domain if it admits an extension operator $E\colon W^{1,\,p}(\Omega ) \to W^{1,\,p}(\mathbb R^2)$ with controlled norm. A full geometric characterization of these domains for $p=2$ was given around 1980. The case $p >2$ was finally solved by P. Shvartsman in 2010. We discuss the remaining cases, and give some new understandings of the geometric characterizations from the point of view of (classical) complex analysis.
|
I think you have the inequality sign wrong. The inequality as you write it cannot hold. To see this, consider the numerical case where $y=4 > y'=3$, $d=1 < d'=2$ and $u(x) = \sqrt{x}$. I'll let you do the calculation yourself.
I suppose the correct inequality is: \begin{equation} u(y-d)-u(y-d') \leq u(y'-d)-u(y'-d') \end{equation}
This equation holds true due to the curvature of a concave function. If you're on the upward sloping part of the function, the closer you are to the summit, the flatter the slope. So the vertical distance between the images of the two points in the domain on the left side of the equation is smaller than the same on the right side if $y>y'$ and $d \leq d'$, and equal if $y=y'$. On the downward sloping part it's the other way around. The set of inequality conditions imply each other. Changing one inequality requires changing the others. You can check this for yourself by picking points on a concave curve that satisfy the inequalities.
It is really just an elaborate version of $|x| \leq |y|$, where if $x \leq 0, y \leq x$ and if $x \geq 0, y \geq x$. I hope this answers your question.
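Here is a quick numerical spot-check of the inequality for two concave functions (values chosen so all arguments stay positive):

```python
import math

# check u(y-d) - u(y-d') <= u(y'-d) - u(y'-d') with y > y' and d < d'
for u in (math.sqrt, lambda x: math.log(1 + x)):
    for y, yp, d, dp in [(4, 3, 1, 2), (10, 5, 0, 3), (7, 6.5, 2, 2.5)]:
        assert y > yp and d < dp
        assert u(y - d) - u(y - dp) <= u(yp - d) - u(yp - dp)
print("inequality holds on all test cases")
```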
|
Vacuum Solutions to Einstein’s Field Equations¶ Einstein’s Equation¶
Einstein's Field Equations (EFE) are ten coupled tensor equations which relate local space-time curvature to local energy and momentum. In short, they determine the metric tensor of a spacetime given an arrangement of stress-energy in space-time. The EFE is given by
\(R_{\mu\nu} - \frac{1}{2}R g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}\)
Here, \(R_{\mu\nu}\) is the Ricci Tensor, \(R\) is the curvature scalar (contraction of the Ricci Tensor), \(g_{\mu\nu}\) is the metric tensor, \(\Lambda\) is the cosmological constant and lastly, \(T_{\mu\nu}\) is the stress-energy tensor. All the other variables hold their usual meaning.
Metric Tensor¶
The metric tensor gives us the differential length element for each direction of space. A small distance in an N-dimensional space is given by:
\(ds^2 = g_{ij}dx_{i}dx_{j}\)
The tensor is constructed when each \(g_{ij}\) is put in its position in a rank-2 tensor. For example, the metric tensor in a spherical coordinate system is given by:
\(g_{00} = 1\)
\(g_{11} = r^2\)
\(g_{22} = r^2 \sin^2\theta\)
\(g_{ij} = 0\) when \(i{\neq}j\)
We can see that the off-diagonal components of the metric are equal to 0, as this is an orthogonal coordinate system, i.e. all the axes are perpendicular to each other. However, this is not always the case. For example, a Euclidean space defined by the vectors i, j and j+k is a flat space, but its metric tensor would certainly contain off-diagonal components.
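As a numerical illustration (a sketch, not from the original text), the spherical metric components above reproduce the ordinary Euclidean distance for a small displacement:

```python
import math

def cart(r, th, ph):
    # physics convention: th is the polar angle measured from the z-axis
    return (r * math.sin(th) * math.cos(ph),
            r * math.sin(th) * math.sin(ph),
            r * math.cos(th))

r, th, ph = 2.0, 0.8, 1.1
dr, dth, dph = 1e-6, 2e-6, -1e-6

# ds^2 from the metric components g_00 = 1, g_11 = r^2, g_22 = r^2 sin^2(th)
ds2_metric = dr**2 + r**2 * dth**2 + (r * math.sin(th))**2 * dph**2

# the same small distance measured directly in Cartesian coordinates
p1, p2 = cart(r, th, ph), cart(r + dr, th + dth, ph + dph)
ds2_cart = sum((b - a)**2 for a, b in zip(p1, p2))

assert math.isclose(ds2_metric, ds2_cart, rel_tol=1e-4)
print(ds2_metric, ds2_cart)
```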
Notion of Curved Space¶
Imagine a bug travelling across a 2-D sheet of paper folded into a cone. The bug can't see up and down, so it lives in a 2-D world; still, it can experience the curvature, since after a long enough journey it would come back to the position where it started. For the bug, space is not infinite.
Mathematically, the curvature of a space is given by the Riemann Curvature Tensor, whose contraction is the Ricci Tensor; taking its trace yields a scalar called the Ricci Scalar or Curvature Scalar.
Straight lines in Curved Space¶
Imagine driving a car on a hilly terrain keeping the steering absolutely straight. The trajectory followed by the car, gives us the notion of geodesics. Geodesics are like straight lines in higher dimensional(maybe curved) space.
Mathematically, geodesics are calculated by solving a set of differential equations, one for each space(time) component, using the equation:
\(\ddot{x}_i+\frac{1}{2}g^{im}\left(\partial_{l}g_{mk}+\partial_{k}g_{ml}-\partial_{m}g_{kl}\right)\dot{x}_k\dot{x}_l = 0\)
which can be re-written as
\(\ddot{x}_i+\Gamma_{kl}^i \dot{x}_k\dot{x}_l = 0\)
where \(\Gamma\) is Christoffel symbol of the second kind.
Christoffel symbols can be encapsulated in a rank-3 array which is symmetric over its lower indices. Coming back to the Riemann Curvature Tensor: it is derived from the Christoffel symbols using the equation
\(R_{abc}^i=\partial_b\Gamma_{ca}^i-\partial_c\Gamma_{ba}^i+\Gamma_{bm}^i\Gamma_{ca}^m-\Gamma_{cm}^i\Gamma_{ba}^m\)
Of course, Einstein's summation convention applies everywhere.
Contraction of the Riemann Tensor gives us the Ricci Tensor, and taking its trace gives the Ricci or Curvature Scalar. A space with no curvature has a vanishing Riemann Tensor.
Exact Solutions of EFE¶ Schwarzschild Metric¶
It is the first exact solution of the EFE, given by Karl Schwarzschild for the limited case of a single spherical non-rotating mass. The metric is given as:
\(d\tau^2 = (1-r_s/r)dt^2-(1-r_s/r)^{-1}dr^2/c^2-r^2d\theta^2/c^2-r^2\sin^2\theta \, d\phi^2/c^2\)
where \(r_s=2*G*M/c^2\)
and is called the Schwarzschild Radius, the boundary inside which the roles of space and time are interchanged and any object would require a speed greater than the speed of light to escape the singularity, where the curvature of space becomes infinite, as do the tidal forces. Letting \(r\to\infty\), we see that the metric reduces to the metric of a flat space described by spherical coordinates.
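For a sense of scale, the Schwarzschild radius of the Sun comes out to roughly 3 km (a quick sketch with rounded constants):

```python
# r_s = 2GM/c^2 for the Sun, in SI units (constants rounded)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2
print(round(r_s))  # about 2950 m
assert 2900 < r_s < 3000
```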
\(\tau\) is the proper time, the time experienced by the particle in motion in the space-time while \(t\) is the coordinate time observed by an observer at infinity.
Using the metric in the above discussed geodesic equation gives the four-position and four-velocity of a particle for a given range of \(\tau\). The differential equations can be solved by supplying the initial positions and velocities.
Kerr Metric and Kerr-Newman Metric¶
The Kerr-Newman metric is also an exact solution of the EFE. It deals with a spinning, charged, massive body, and the solution has axial symmetry. The full expression for the metric is quite lengthy; a quick web search will turn it up.
Kerr-Newman metric is the most general vacuum solution consisting of a single body at the center.
The Kerr metric is a specific case of Kerr-Newman where the charge on the body is \(Q=0\). The Schwarzschild metric can be derived from the Kerr-Newman solution by setting both charge and spin to zero: \(Q=0\), \(a=0\).
|
Writing Mathematics for MathJax¶ Putting mathematics in a web page¶
To put mathematics in your web page, you can use TeX and LaTeX notation, MathML notation, AsciiMath notation, or a combination of all three within the same page; the MathJax configuration tells MathJax which you want to use, and how you plan to indicate the mathematics when you are using TeX/LaTeX or AsciiMath notation. These three formats are described in more detail below.
TeX and LaTeX input¶
Mathematics that is written in TeX or LaTeX format is indicated using math delimiters that surround the mathematics, telling MathJax what part of your page represents mathematics and what is normal text. There are two types of equations: ones that occur within a paragraph (in-line mathematics), and larger equations that appear separated from the rest of the text on lines by themselves (displayed mathematics).
The default math delimiters are $$...$$ and \[...\] for displayed mathematics, and \(...\) for in-line mathematics. Note in particular that the $...$ in-line delimiters are not used by default. That is because dollar signs appear too often in non-mathematical settings, which could cause some text to be treated as mathematics unexpectedly. For example, with single-dollar delimiters, "… the cost is $2.50 for the first one, and $2.00 for each additional one …" would cause the phrase "2.50 for the first one, and" to be treated as mathematics since it falls between dollar signs. See the section on TeX and LaTeX Math Delimiters for more information on using dollar signs as delimiters.
Here is a complete sample page containing TeX mathematics (see the MathJax Web Demos Repository for more).
<!DOCTYPE html>
<html>
<head>
<title>MathJax TeX Test Page</title>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script type="text/javascript" id="MathJax-script" async
  src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"></script>
</head>
<body>
When \(a \ne 0\), there are two solutions to \(ax^2 + bx + c = 0\) and they are
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
</body>
</html>
Since the TeX notation is part of the text of the page, there are some caveats that you must keep in mind when you enter your mathematics. In particular, you need to be careful about the use of less-than signs, since those are what the browser uses to indicate the start of a tag in HTML. Putting a space on both sides of the less-than sign should be sufficient, but see TeX and LaTeX support for more details.
If you are using MathJax within a blog, wiki, or other content management system, the markup language used by that system may interfere with the TeX notation used by MathJax. For example, if your blog uses Markdown notation for authoring your pages, the underscores used by TeX to indicate subscripts may be confused with the use of underscores by Markdown to indicate italics, and the two uses may prevent your mathematics from being displayed. See TeX and LaTeX support for some suggestions about how to deal with the problem.
There are a number of extensions for the TeX input processor that are loaded by combined components that include the TeX input format (e.g., tex-chtml.js), and others that are loaded automatically when needed. See TeX and LaTeX Extensions for details on TeX extensions that are available.
MathML input¶
For mathematics written in MathML notation, you mark your mathematics using standard <math> tags, where <math display="block"> represents displayed mathematics and <math display="inline"> or just <math> represents in-line mathematics.
MathML notation will work with MathJax in HTML files, not just XHTML files, even in older browsers, and the web page need not be served with any special MIME-type. Note, however, that in HTML (as opposed to XHTML), you should not include a namespace prefix for your <math> tags; for example, you should not use <m:math> except in an XHTML file where you have tied the m namespace to the MathML DTD by adding the xmlns:m="http://www.w3.org/1998/Math/MathML" attribute to your file's <html> tag.
In order to make your MathML work in the widest range of situations, it is recommended that you include the xmlns="http://www.w3.org/1998/Math/MathML" attribute on all <math> tags in your document (and this is preferred to the use of a namespace prefix like m: above, since those are deprecated in HTML5), although this is not strictly required.
Here is a complete sample page containing MathML mathematics (see the MathJax Web Demos Repository for more).
<!DOCTYPE html>
<html>
<head>
<title>MathJax MathML Test Page</title>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script type="text/javascript" id="MathJax-script" async
  src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/mml-chtml.js"></script>
</head>
<body>
<p>When
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>a</mi><mo>≠</mo><mn>0</mn>
</math>,
there are two solutions to
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>a</mi><msup><mi>x</mi><mn>2</mn></msup>
  <mo>+</mo> <mi>b</mi><mi>x</mi>
  <mo>+</mo> <mi>c</mi>
  <mo>=</mo> <mn>0</mn>
</math>
and they are
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>x</mi> <mo>=</mo>
  <mrow>
    <mfrac>
      <mrow>
        <mo>−</mo> <mi>b</mi>
        <mo>±</mo>
        <msqrt>
          <msup><mi>b</mi><mn>2</mn></msup>
          <mo>−</mo>
          <mn>4</mn><mi>a</mi><mi>c</mi>
        </msqrt>
      </mrow>
      <mrow>
        <mn>2</mn><mi>a</mi>
      </mrow>
    </mfrac>
  </mrow>
  <mtext>.</mtext>
</math>
</p>
</body>
</html>
When entering MathML notation in an HTML page (rather than an XHTML page), you should not use self-closing tags, as these are not part of HTML, but should use explicit open and close tags for all your math elements. For example, you should use <mspace width="5pt"></mspace> rather than <mspace width="5pt" /> in an HTML document. If you use the self-closing form, some browsers will not build the math tree properly, and MathJax will receive a damaged math structure, which will not be rendered as the original notation would have been. Typically, this will cause parts of your expression to not be displayed. Unfortunately, there is nothing MathJax can do about that, since the browser has incorrectly interpreted the tags long before MathJax has a chance to work with them.
See the MathML page for more on MathJax’s MathML support.
AsciiMath input¶
MathJax v2.0 introduced a new input format, AsciiMath notation, by incorporating ASCIIMathML. This input processor has not been fully ported to MathJax version 3 yet, but there is a version of it that uses the legacy version 2 code to patch it into MathJax version 3. None of the combined components currently include it, so you would need to specify it explicitly in your MathJax configuration in order to use it. See the AsciiMath page for more details.
By default, you mark mathematical expressions written in AsciiMath by surrounding them in "back-ticks", i.e., `...`.
Here is a complete sample page containing AsciiMath notation:
<!DOCTYPE html>
<html>
<head>
<title>MathJax AsciiMath Test Page</title>
<script>
MathJax = {
  loader: {load: ['input/asciimath', 'output/chtml']}
}
</script>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script type="text/javascript" id="MathJax-script" async
  src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/startup.js"></script>
</head>
<body>
<p>When `a != 0`, there are two solutions to `ax^2 + bx + c = 0` and
they are</p>
<p style="text-align:center">
  `x = (-b +- sqrt(b^2-4ac))/(2a) .`
</p>
</body>
</html>
See the AsciiMath support page for more on MathJax’s AsciiMath support and how to configure it.
Putting Math in Javascript Strings¶
If you are using javascript to process mathematics, and need to put a TeX or LaTeX expression in a string literal, you need to be aware that javascript uses the backslash (\) as a special character in strings. Since TeX uses the backslash to indicate a macro name, you often need backslashes in your javascript strings. In order to achieve this, you must double all the backslashes that you want to have as part of your javascript string. For example,
var math = '\\frac{1}{\\sqrt{x^2 + 1}}';
This can be particularly confusing when you are using the LaTeX macro \\, which must itself be doubled, as \\\\. So you would do
var array = '\\begin{array}{cc} a & b \\\\ c & d \\end{array}';
to produce an array with two rows.
|
If we have a rod uniformly charged with $Q$ stretching from $-a$ to $a$ on the $x$-axis as shown in the picture
And we want to calculate electric field in point $2a$ on the $x$-axis, we know that the electric field of a point charge $dQ$ on the rod in point $2a$ is: $$\vec{dE}=\frac{dQ}{4\pi\epsilon_0r^2}\vec{i_x}$$ where $r$ is the distance between the charge and the point $2a$.
We know that $dQ=Q'dl$ and, in this case, $Q'=\frac{Q}{2a}$, so $dQ=\frac{Q}{2a}dl$ and finally $$\vec{dE}=\frac{Qdl}{8a\pi\epsilon_0r^2}\vec{i_x}$$
Now, to calculate the total field, we need to sum all the fields of all point charges on the charged rod in point $2a$ and we do that by integration. $$E=\int \limits_{a}^{3a}dE=\frac{Q}{8a\pi\epsilon_0}\int \limits_{a}^{3a}\frac{dr}{r^2}=\frac{Q}{8a\pi\epsilon_0}\bigg(\frac{1}{a}-\frac{1}{3a}\bigg)=\frac{Q}{12\pi\epsilon_0a^2}$$
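As a numerical cross-check of this closed form (illustrative values for Q and a, not from the original question):

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
Q, a = 1e-9, 0.5   # illustrative charge and rod half-length

# E = Q/(8*a*pi*eps0) * integral of dr/r^2 from a to 3a, via the midpoint rule
n = 100_000
h = 2 * a / n
integral = sum(h / (a + (k + 0.5) * h)**2 for k in range(n))

E_numeric = Q / (8 * a * math.pi * eps0) * integral
E_closed = Q / (12 * math.pi * eps0 * a**2)
assert math.isclose(E_numeric, E_closed, rel_tol=1e-6)
print(E_numeric, E_closed)
```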
That is where I get confused. Why can't we integrate from $3a$ to $a$? We would get the same field intensity, but with a minus sign, which would be correct for the point $-2a$, or would we? I think I'm missing something here regarding the integration direction and the vectors.
|
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$.Let's denote the width of a sample by $h$ where $$h\rightarrow0$$Now, for finding the area under the curve between the bounds $a ~\& ~b $ we can a...
@Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes that I don't know the identities of $\pi (n)$ well
@Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0$ mod $3$.
However, you have two cases to consider. The first where $\binom{2}{p}=-1$ and $\binom{3}{p}=-1$ (In which case what does $\binom{6}{p}$ equal?) and the case where one or the other of $\binom{2}{p}$ and $\binom{3}{p}$ equals 1.
Also, probably something useful for congruence, if you didn't already know: If $a_1\equiv b_1\text{mod}(p)$ and $a_2\equiv b_2\text{mod}(p)$, then $a_1a_2\equiv b_1b_2\text{mod}(p)$
Is there any book or article that explains the motivations of the definitions of group, ring , field, ideal etc. of abstract algebra and/or gives a geometric or visual representation to Galois theory ?
Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician.== Life and work ==Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760 - about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son...
I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc. Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying.
UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton.
hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is (24*x_2 + 5cos(x_1)*x_2)/abs(x_2). The fixed point is x_1=0, x_2=0
Consider the following integral: $\int \frac{1}{4}\cdot\frac{1}{1+(u/2)^2}\,dx$ Why does it matter if we put the constant 1/4 behind the integral versus keeping it inside? The solution is $\frac{1}{2}\arctan{(u/2)}$. Or am I overlooking something?
*it should be du instead of dx in the integral
**and the solution is missing a constant C of course
Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$?
My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical.
My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction.
Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on.
"... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \text{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.)
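That identity, together with the observation that x/√(1+x²) is exactly sin(atan(x)) (which is what connects atan2 to asin here), can be spot-checked numerically:

```python
import math

for x in (-3.2, -0.5, 0.0, 0.7, 10.0):
    lhs = x / math.sqrt(1 + x * x)
    # x*sqrt(y) = sgn(x)*sqrt(x^2 * y): move the x inside the radical
    rhs = math.copysign(math.sqrt(x * x / (1 + x * x)), x)
    assert math.isclose(lhs, rhs, abs_tol=1e-15)
    # and the whole expression is just sin(atan(x))
    assert math.isclose(lhs, math.sin(math.atan(x)), abs_tol=1e-15)
print("ok")
```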
Ignore my question. I'm coming of the realization it's just not working how I would've hoped, so I'll just go with what I had before.
|
If you were to flip a coin 150 times, what is the probability that it would land tails 7 times in a row? How about 6 times in a row? Is there some forumula that can calculate this probability?
Here are some details; I will only work out the case where you want $7$ tails in a row, and the general case is similar. I am interpreting your question to mean "what is the probability that, at least once, you flip at least 7 tails in a row?"
Let $a_n$ denote the number of ways to flip $n$ coins such that at no point do you flip more than $6$ consecutive tails. Then the number you want to compute is $1 - \frac{a_{150}}{2^{150}}$. The last few coin flips in such a sequence of $n$ coin flips must be one of $H, HT, HTT, HTTT, HTTTT, HTTTTT$, or $HTTTTTT$. After deleting this last bit, what remains is another sequence of coin flips with no more than $6$ consecutive tails. So it follows that
$$a_{n+7} = a_{n+6} + a_{n+5} + a_{n+4} + a_{n+3} + a_{n+2} + a_{n+1} + a_n$$
with initial conditions $a_k = 2^k, 0 \le k \le 6$. Using a computer it would not be very hard to compute $a_{150}$ from here, especially if you use the matrix method that David Speyer suggests.
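Carrying out this computation in Python with exact integer arithmetic (a sketch of the recurrence above, not from the original answer):

```python
from fractions import Fraction

# a_n = number of n-flip sequences with no run of 7 consecutive tails:
# a_k = 2^k for 0 <= k <= 6, then a_{n+7} = a_{n+6} + ... + a_n
a = [2**k for k in range(7)]
for _ in range(7, 151):
    a.append(sum(a[-7:]))

p = 1 - Fraction(a[150], 2**150)  # P(at least one run of 7 tails)
print(float(p))                   # ≈ 0.442
```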
In any case, let's see what we can say approximately. The asymptotic growth of $a_n$ is controlled by the largest positive root of the characteristic polynomial $x^7 = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1$, which is a little less than $2$. Rearranging this identity gives $2 - x = \frac{1}{x^7}$, so to a first approximation the largest root is $r \approx 2 - \frac{1}{128}$. This means that $a_n$ is approximately $\lambda \left( 2 - \frac{1}{128} \right)^n$ for some constant $\lambda$, which means that $\frac{a_{150}}{2^{150}}$ is roughly
$$\lambda \left( 1 - \frac{1}{256} \right)^{150} \approx \lambda e^{ - \frac{150}{256} } \approx 0.56 \lambda$$
although $\lambda$ still needs to be determined.
Edit: So let's approximate $\lambda$. I claim that the generating function for $a_n$ is
$$A(x) = 1 + \sum_{n \ge 1} a_{n-1} x^n = \frac{1}{1 - x - x^2 - x^3 - x^4 - x^5 - x^6 - x^7}.$$
This is because, by iterating the argument in the second paragraph, we can decompose any valid sequence of coin flips into a sequence of one of seven blocks $H, HT, ...$ uniquely, except that the initial segment does not necessarily start with $H$. To simplify the above expression, write $A(x) = \frac{1 - x}{1 - 2x + x^8}$. Now, the partial fraction decomposition of $A(x)$ has the form
$$A(x) = \frac{\lambda}{r(1 - rx)} + \text{other terms}$$
where $\lambda, r$ are as above, and it is this first term which determines the asymptotic behavior of $a_n$ as above. To compute $\lambda$ we can use l'Hopital's rule; we find that $\lambda$ is equal to
$$\lim_{x \to \frac{1}{r}} \frac{r(1 - rx)(1 - x)}{1 - 2x + x^8} = \lim_{x \to \frac{1}{r}} \frac{-r(r+1) + 2r^2x}{-2 + 8x^7} = \frac{r^2-r}{2 - \frac{8}{r^7}} \approx 1.$$
So my official guess at the actual value of the answer is $1 - 0.56 = 0.44$. Anyone care to validate it?
Sequences like $a_n$ count the number of words in objects called regular languages, whose enumerative behavior is described by linear recurrences and which can also be analyzed using finite state machines. Those are all good keywords to look up if you are interested in generalizations of this method. I discuss some of these issues in my notes on generating functions, but you can find a more thorough introduction in the relevant section of Stanley's Enumerative Combinatorics.
I'll sketch a solution; details are left to you.
As you flip your coin, think about what data you would want to keep track of to see whether $7$ heads have come up yet. You'd want to know: Whether you have already won and what the number of heads at the end of your current sequence was. In other words, there are $8$ states:
$A$: We have not flipped $7$ heads in a row yet, and the last flip was $T$.
$B$: We have not flipped $7$ heads in a row yet, and the last two flips was $TH$.
$C$: We have not flipped $7$ heads in a row yet, and the last three flips were $THH$.
$\ldots$
$G$: We have not flipped $7$ heads in a row yet, and the last seven flips were $THHHHHH$.
$H$: We've flipped $7$ heads in a row!
If we are in state $A$ then, with probability $1/2$ we move to state $B$ and with probability $1/2$ we stay in state $A$. If we are in state $B$ then, with probability $1/2$ we move to state $C$ and with probability $1/2$ we move back to state $A$. $\ldots$ If we are in state $G$, with probability $1/2$ we move forward to state $H$ and with probability $1/2$ we move back to state $A$. Once we are in state $H$ we stay there.
In short, define $M$ to be the matrix $$\begin{pmatrix} 1/2 & 1/2 & 1/2 & 1/2 & 1/2 & 1/2 & 1/2 & 0 \\ 1/2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1/2 & 1 \end{pmatrix}$$
Then the entries of $M^n$ give the probability of transitioning from one given state to another in $n$ coin flips. (Please, please, please, do not go on until you understand why this works! This is one of the most standard uses of matrix multiplication.) You are interested in the lower left entry of $M^{150}$.
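A sketch of this computation in plain Python, iterating the state distribution instead of forming the 150th matrix power explicitly (state i < 7 encodes a current run of i heads, state 7 is absorbing):

```python
# v[i] = probability of being in state i; start in state A (run length 0)
v = [1.0] + [0.0] * 7
for _ in range(150):
    w = [0.0] * 8
    for i in range(7):          # states A..G: current head-run length i
        w[0] += 0.5 * v[i]      # tails: back to state A
        w[i + 1] += 0.5 * v[i]  # heads: advance one state (G -> H wins)
    w[7] += v[7]                # state H is absorbing
    v = w

print(round(v[7], 4))           # the lower-left entry of M^150, ≈ 0.442
```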
Of course, a computer algebra system can compute this number for you quite rapidly. Rather than do this, I will discuss some interesting math which comes out of this.
(1) The Perron-Frobenius theorem tells us that $1$ is an eigenvalue of $M$ (with corresponding eigenvector $(0,0,0,0,0,0,0,1)^T$, in this case) and all the other eigenvalues are less then $1$. Let $\lambda$ be the largest eigenvalue less than $1$, then probability of getting $7$ heads in a row, when we flip $n$ times, is approximately $1-c \lambda^n$ for some constant $c$.
(2) You might wonder what sort of configurations of coin flips can be answered by this method. For example, could we understand the case where we flip $3$ heads in a row before we flip $2$ tails in a row? (Answer: Yes.) Could we understand the question of whether the first $2k$ flips are a palindrome for some $k$? (Answer: No, not by this method.) In general, the question is which properties can be recognized by finite state automata, also called the regular languages. There is a lot of study of this subject.
(3) See chapter $8.4$ of
Concrete Mathematics, by Graham, Knuth and Patashnik, for many more coin flipping problems.
With respect to the exact answer, the recursion is related to the Fibonacci n-Step Numbers (look at the phrase below the table).
With respect to an approximate/asymptotic solution: it can also be obtained by probabilistic reasoning.
Instead of throwing N (150) coins, let's consider the (thought) experiment of throwing M alternate runs (a ‘run’ is a sequence of consecutive tails/heads). That is, instead of throwing N iid Bernoulli random variables (two values with prob=1/2), we throw M iid geometric random variables ( p(1)=1/2 p(2)=1/4 p(3)=1/8 ...) which we interpret as the lengths of the alternating runs of consecutive tails/heads in the coin sequence.
The expected length of the runs (in both experiments) is 2. If we choose M=N/2, then the expected total number of coins (in the second experiment) will be N, and so we can expect (informally) that the two experiments are asymptotically equivalent (in the original experiment the number of coins is fixed, the number of runs is a random variable; in the second, the reverse; this could be related to the use of different ensembles in statistical physics).
Now, in the modified experiment it’s easy to compute the probability that no run exceeds a length L: it’s just $(1-1/2^{L})^{N/2}$. If we consider just tails, the number of "trials" would be N/4 instead. This, in our case (L=7, N=150), gives P=0.44160 for at least one run.
The probability of NO run in $n$ flips is the coefficient of $x^n$ in the power series of $$G := (p,r,x) \to \frac{1-p^r x^r} {1-x+(1-p)\,p^r\,x^{r+1}}$$ For $p=0.5$, $r=7$, the coefficient of $x^{150}$ is $0.558041257197$.
The probability of one run or more is $0.441958742803$ (44.19%).
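A minimal sketch of extracting that coefficient numerically, using the linear recurrence implied by the denominator of $G$ (solve $\text{den} \cdot c = \text{num}$ term by term):

```python
# Coefficient extraction for G(p, r, x) = num(x) / den(x)
p, r, n = 0.5, 7, 150
num = [1.0] + [0.0] * (r - 1) + [-(p ** r)]               # 1 - p^r x^r
den = [1.0, -1.0] + [0.0] * (r - 1) + [(1 - p) * p ** r]  # 1 - x + (1-p) p^r x^(r+1)

c = []
for k in range(n + 1):
    s = num[k] if k < len(num) else 0.0
    for j in range(1, min(k, len(den) - 1) + 1):
        s -= den[j] * c[k - j]
    c.append(s / den[0])

print(c[n])  # ≈ 0.558041257197, the probability of no run of 7 heads
```

As a sanity check, $c_7 = 127/128$, the probability of avoiding seven straight heads in exactly seven flips.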
Here's a way to get an approximation for the answer via a simulation. It's kinda fun to tweak the numbers and see the probability change.
Run this in
Tools -> Javascript Console in Chrome or
Tools -> Web Developer -> Web Console in FireFox:
function experiment(streak, tosses) {
  var i, ctr = 0;
  for (i = 0; i < tosses; i++) {
    if (Math.random() < 0.5) ctr = 0;
    else if (++ctr === streak) return 1;
  }
  return 0;
}

function run(streak, tosses, runs) {
  var i, total = 0;
  for (i = 0; i < runs; i++) {
    total += experiment(streak, tosses);
  }
  return total / runs;
}

console.log('\nP(A) = %s\n', run(7, 150, 1e6));
I get the following result:
P(A) = 0.441693
Which is close to the other answers here.
I would've thought a better approximation would be:
$$\lambda\left(1-\frac{1}{256}\right)^{144} \approx \lambda e^{-144/256} \approx 0.57\lambda$$
$$P = 1 - 0.57 = 0.43$$
Since it's not possible to get 7 consecutive heads within the first 6 flips, there are only 144 factors. However, this is clearly a worse approximation, assuming the Monte Carlo is accurate.
Here is some code in R to get the answer using the transition matrix method
a=c(0.5,0.5,0.5,0.5,0.5,0.5,0.5,0)
b=c(0.5,0,0,0,0,0,0,0)
c=c(0,0.5,0,0,0,0,0,0)
d=c(0,0,0.5,0,0,0,0,0)
e=c(0,0,0,0.5,0,0,0,0)
f=c(0,0,0,0,0.5,0,0,0)
g=c(0,0,0,0,0,0.5,0,0)
h=c(0,0,0,0,0,0,0.5,1)
M=rbind(a,b,c,d,e,f,g,h)
library(expm)
Mn <- M %^% 150
Mn[8,1]
Poisson approximation. There are $N-L+1$ possible runs of length $L=7$ - each of probability $p=(1/2)^L$. If a run happens, it is followed by expected $\lessapprox1$ more runs - due to positive association. Density of runs is $p$ hence density of run "clumps" (that is locally maximal runs of length at least $7$) is $$\lambda\gtrapprox \frac p 2$$ as runs on average come in pairs and hence $$P(\text{no clumps})\lessapprox e^{-\frac p 2(N-L+1)}\lessapprox 0.570$$ which overestimates $\approx 0.558$ by $2.1\%$. We can improve slightly by noting that the average size of a clump is $\approx L+1$, hence a better estimate is $$\lambda\approx \frac p 2\frac {N-L+1}{N-L}$$ which yields $$P(\text{no clumps})\approx e^{-\frac p 2(N-L+2)}\approx 0.568$$ which overestimates by $\approx 1.7\%$.
Alternatively, note that expected time to see a run of $L$ ones is $T=2(2^L-1)$, hence probability to see a run with $N-L+1$ possible starts is approximately $$(1-\frac 1 T)^{N-L+1}\approx e^{-\frac p 2(N-L+1)(1+p)}$$
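A quick numerical check of these estimates against the exact value $0.558$ (plain Python):

```python
import math

# Clump-heuristic estimates for P(no run of L heads in N flips)
N, L = 150, 7
p = 0.5 ** L
est1 = math.exp(-p / 2 * (N - L + 1))               # first clump estimate
est2 = math.exp(-p / 2 * (N - L + 2))               # refined clump estimate
est3 = (1 - 1 / (2 * (2 ** L - 1))) ** (N - L + 1)  # expected-time estimate
print(est1, est2, est3)  # ≈ 0.570, 0.568, 0.567
```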
|
Assume that we have a general one-period market model consisting of d+1 assets and N states.
Using a replicating portfolio $\phi$, determine $\Pi(0;X)$, the price of a European call option, with payoff $X$, on the asset $S_1^2$ with strike price $K = 1$ given that
$$S_0 =\begin{bmatrix} 2 \\ 3\\ 1 \end{bmatrix}, S_1 = \begin{bmatrix} S_1^0\\ S_1^1\\ S_1^2 \end{bmatrix}, D = \begin{bmatrix} 1 & 2 & 3\\ 2 & 2 & 4\\ 0.8 & 1.2 & 1.6 \end{bmatrix}$$
where the columns of D represent the states for each asset and the rows of D represent the assets for each state
What I tried:
We compute that:
$$X = \begin{bmatrix} 0\\ 0.2\\ 0.6 \end{bmatrix}$$
If we solve $D'\phi = X$, we get:
$$\phi = \begin{bmatrix} 0.6\\ 0.1\\ -1 \end{bmatrix}$$
It would seem that the price of the European call option $\Pi(0;X)$ is given by the value of the replicating portfolio
$$S_0'\phi = 0.5$$
On one hand, if we were to try to see if there is arbitrage in this market by seeing if a state price vector $\psi$ exists by solving $S_0 = D \psi$, we get
$$\psi = \begin{bmatrix} 0\\ -0.5\\ 1 \end{bmatrix}$$
Hence there is no strictly positive state price vector $\psi$ s.t. $S_0 = D \psi$. By 'the fundamental theorem of asset pricing' (or 'the fundamental theorem of finance' or '1.3.1' here), there exists arbitrage in this market.
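Both computations are easy to verify numerically (a sketch with NumPy, using the numbers from the question):

```python
import numpy as np

# Market data from the question: rows of D are assets, columns are states
S0 = np.array([2.0, 3.0, 1.0])
D = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 4.0],
              [0.8, 1.2, 1.6]])
X = np.maximum(D[2] - 1.0, 0.0)   # call payoff on asset S^2 with K = 1

phi = np.linalg.solve(D.T, X)     # replicating portfolio: D' phi = X
psi = np.linalg.solve(D, S0)      # candidate state prices: D psi = S0

print(phi)       # ≈ [0.6, 0.1, -1.0]
print(S0 @ phi)  # ≈ 0.5, the replication price
print(psi)       # ≈ [0.0, -0.5, 1.0] -- not strictly positive
```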
On the other hand the price of 0.5 seems to be confirmed by:
$$\Pi(0;X) = \beta E^{\mathbb Q}[X]$$
where $\beta = \sum_{i=1}^{3} \psi_i = 0.5$ (sum of elements of $\psi$) and $\mathbb Q$ is supposed to be the equivalent martingale measure given by $q_i = \frac{\psi_i}{\beta}$.
Thus we have
$$E^{\mathbb Q}[X] = q_1X(\omega_1) + q_2X(\omega_2) + q_3X(\omega_3)$$
$$ = 0 + \color{red}{-1} \times 0.2 + 2 \times 0.6 = 1$$
$$\to \Pi(0;X) = 0.5$$
I guess, therefore, that we cannot determine the price of the European call using $\Pi(0;X) = \beta E^{Q}[X]$, because there is no equivalent martingale measure $\mathbb Q$.
So what's the verdict? Can we say the price is 0.5? How can we price even if there is arbitrage?
Edit: I noticed that one of the probabilities, in what was attempted to be the equivalent martingale measure, is negative. I remember reading about
negative probabilities, but these links 1 2 mentioned by Wiki seem to assume absence of arbitrage so I think they are not applicable. Or are they?
Is it perhaps that this market can be considered to be arbitrage-free under some quasiprobability measure that allows negative probabilities?
Edit (to address a deleted answer):
Thanks BKay.
1 So you mean there is no unique price for $X$ but we can find upper bounds? Like in your example the least upper bound so far is 0.3 then we can continue to find lower upper bounds $u_1, u_2, ...$ (or even higher lower bounds $l_1, l_2, ...$) to say the price of $X$ is in $[0,\inf_n u_n]$ (or $[\sup_n l_n,\inf_n u_n]$)?
2 Re stochastic domination, I haven't heard that term in classes, but I think I read about that before. Might that depend on the (quasi)probability measure? Under this probability measure $0.5 S_1^2$ dominates $X$ but what about under some quasiprobability measure?
3 the $q_i$'s, not the $\psi_i$'s are the probabilities
|
This calculator can be used to design either low-pass filters or high-pass filters. Choose your filter type, enter a value for the capacitor, enter a value for the potentiometer, and then select the taper for the potentiometer. Click and hold to rotate the knob and vary the resistance. As the resistance changes with the sweep of the pot, the cutoff frequency (~f_c~) will change; this value is displayed on the Bode plot directly below the knob. A Bode plot is a graph of the frequency response of a system.
Below the Bode plot is another graph displaying the selected guitar chord. Altering the values of the low/high pass filter will show the effects on the multiple frequencies of the chord in this graph.
The frequencies of these guitar chords are filtered based on the high/low pass filter above. On a high pass filter, values lower than the frequency cutoff (~f_c~) point will be filtered out - you will see the magnitude of their waveforms decrease as they pass the frequency cutoff. In a low pass filter, frequency values higher than the frequency cutoff (~f_c~) point will be filtered out. The amount of gain in the frequency waveform (the magnitude of the wave) will be reduced as the frequency is filtered. Unfiltered frequencies will show the full gain (1).
Passive low-pass and high-pass filters are found in a multitude of circuits - including the tone knob on a guitar, the tone stack in amplifiers, and tone controls in pedals. Even voltage controlled OTA low-pass filters found in synthesizers are derived from these simple circuits. Low frequencies are allowed to pass in a low-pass filter whereas high frequencies are allowed to pass in a high-pass filter. The cutoff sets the point where the frequencies are reduced, resulting in attenuation. Everything below the cutoff point in a low-pass filter is considered within the pass band and everything above it is within the stop band. With a high-pass filter it’s just the opposite. Everything above the cutoff point is considered within the pass band and everything below it is within the stop band.
The most common versions of these circuits are RC networks comprised of a single resistor and a single capacitor. A potentiometer used as a variable resistor is often used in place of the resistor to vary the cutoff frequency.
RC Low-Pass Filter with Variable Cutoff RC High-Pass Filter with Variable Cutoff
These can be combined in different ways as well. The Big Muff Pi’s tone control famously uses a low-pass filter and a high-pass filter with a potentiometer mixing between the two.
High-pass / Low-pass Mix
When designing a filter for audio, we’ll want to know the frequency of the cutoff point. This is calculated using the same formula for both low-pass filters and high-pass filters:$$f_c = \frac{1}{2\pi RC}$$
~f_c~ is the cutoff frequency in hertz. ~R~ is the value of the resistor in ohms. ~C~ is the value of the capacitor in farads.
Suppose we have a circuit in which we want to filter out frequencies above 5,000Hz. Let’s also say that we have a 500kΩ resistor. We need to find the capacitor value to achieve the cutoff point of 5kHz. So ~f_c = \text{5,000}~ and ~R = \text{500,000}~. Solving the equation for ~C~ we find:$$C = \frac{1}{2\pi R f_c}$$$$C = \frac{1}{2 \pi \times 500{,}000\text{Ω} \times 5{,}000\text{Hz}}$$$$C \approx 63.66 \times 10^{-12}\text{F}$$
After converting our answer to picofarads we find that we will need a 63.66pF capacitor to use with the 500k resistor to get a cutoff frequency of 5kHz. This isn’t a common capacitor value so we can see what happens when we use the more common 62pF value with the formula.$$f_c = \frac{1}{2\pi RC}$$$$f_c = \frac{1}{2 \pi \times 500{,}000\text{Ω} \times (62 \times 10^{-12}\text{F})}$$$$f_c = 5{,}134.03\text{Hz}$$
In this case, the closest common value gets the cutoff frequency fairly close to the desired 5kHz. If it needed to be exactly at 5kHz we could always solve for the resistor value and use a trimmer to dial in the exact resistance.$$R = \frac{1}{2 \pi C f_c}$$$$R = \frac{1}{2 \pi \times (62 \times 10^{-12}\text{F}) \times 5{,}000\text{Hz}}$$$$R = 513{,}403.04\text{ Ω}$$
We could either use a 1 Meg trimmer to get this resistance or use resistors in series/parallel.
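The three worked calculations above can be reproduced in a few lines of Python (a sketch; component names are illustrative only):

```python
import math

def cutoff(R, C):
    """Cutoff frequency in Hz of a first-order RC filter."""
    return 1.0 / (2.0 * math.pi * R * C)

C_needed = 1 / (2 * math.pi * 500e3 * 5e3)    # capacitor for R = 500k, fc = 5 kHz
f_common = cutoff(500e3, 62e-12)              # fc with the common 62 pF part
R_exact = 1 / (2 * math.pi * 62e-12 * 5e3)    # resistor for C = 62 pF, fc = 5 kHz
print(C_needed, f_common, R_exact)  # ≈ 63.66 pF, 5134.03 Hz, 513403.04 Ω
```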
RC filters have some key characteristics that you may want to consider before choosing them for your design. They are first-order filters because they have one pole; this is due to the fact that they only have one reactive component, the capacitor. With a single pole, the filter will always have a -6dB/octave or -20dB/decade slope. If the amount of poles increases, the slope will also increase. This can be seen with the famous Moog filter, which has 4 poles and a -24dB/octave or -80dB/decade slope. While our RC filter and the Moog filter both function very similarly, the sound is very different.
Low-pass Bode Plot High-pass Bode Plot
Another interesting aspect of the RC filters is their effect on the phase angle of different frequencies. At the cutoff frequency the phase is 45° out of phase. For a low-pass filter the phase shift is -45° and for a high-pass filter the phase shift is +45°.
Low-pass Frequency Waveform (-45° phase shift) High-pass Frequency Waveform (+45° phase shift)
Using the following we can find the phase angle of a given frequency in a low-pass filter. ~\Phi~ is the phase shift in radians. ~ƒ~ is the frequency in hertz. ~R~ is the resistor value in ohms. ~C~ is the capacitor value in farads.$$ \Phi_{\text{Phase Shift}} = -\arctan{(2 \pi f R C)}$$
If we use the component values we solved for above (~R = \text{513,403.04Ω}~, ~C = 62 \times 10^{-12}~) and use the cutoff frequency in the formula (~f_c = \text{5,000Hz}~) we get the following:$$\Phi_{\text{Phase Shift}} = -\arctan{(2 \pi \times 5{,}000 \times 513{,}403.04 \times (62 \times 10^{-12}))}$$$$\Phi_{\text{Phase Shift}} = -\arctan{(2 \pi \times 0.1592)}$$$$\Phi_{\text{Phase Shift}} = -\arctan{(0.9999)}$$$$\Phi_{\text{Phase Shift}} = -0.7853 \text{ radians}$$
We’ll use the following to convert the radians to degrees$$\text{degrees} = \text{radians} \times \frac{180}{\pi}$$$$\text{degrees} = -0.7853 \times \frac{180}{\pi}$$$$\text{degrees} = -45$$
Suppose we used the same components in a high pass filter and wanted to check the phase angle at our cutoff point. We’ll use the following to find the phase shift in high-pass filters:$$ \Phi_{\text{Phase Shift}} = \arctan{\left(\frac{1}{2 \pi f R C}\right)}$$
Using the same values as above$$\Phi_{\text{Phase Shift}} = \arctan{\left(\frac{1}{2 \pi \times 5{,}000 \times 513{,}403.04 \times (62 \times 10^{-12})}\right)}$$$$\Phi_{\text{Phase Shift}} = \arctan{\left(\frac{1}{0.9999}\right)}$$$$\Phi_{\text{Phase Shift}} = 0.7854 \text{ radians}$$$$\text{degrees} = \text{radians} \times \frac{180}{\pi}$$$$\text{degrees} = 0.7854 \times \frac{180}{\pi}$$$$\text{degrees} = 45$$
These equations give us the expected -45° and +45° results for the phase shift at the cutoff frequency for the low-pass and high-pass filters. You can use these equations to check the phase shift of any frequency in your circuit.
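A quick check in Python, assuming the standard first-order responses (the high-pass phase goes as ~\arctan(1/(2\pi fRC))~, which likewise gives +45° at the cutoff):

```python
import math

def phase_deg(f, R, C, highpass=False):
    # First-order RC phase: -atan(2*pi*f*R*C) for low-pass,
    # +atan(1/(2*pi*f*R*C)) for high-pass; both are 45° in magnitude at cutoff.
    w = 2 * math.pi * f * R * C
    return math.degrees(math.atan(1 / w) if highpass else -math.atan(w))

print(phase_deg(5e3, 513403.04, 62e-12))                  # ≈ -45.0
print(phase_deg(5e3, 513403.04, 62e-12, highpass=True))   # ≈ +45.0
```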
Note that the information presented in this article is for reference purposes only. Amplified Parts makes no claims, promises, or guarantees about the accuracy, completeness, or adequacy of the contents of this article, and expressly disclaims liability for errors or omissions on the part of the author. No warranty of any kind, implied, expressed, or statutory, including but not limited to the warranties of non-infringement of third party rights, title, merchantability, or fitness for a particular purpose, is given with respect to the contents of this article or its links to other resources.
|
Intro (you may skip this if you're an expert, I'm including this for completeness):
Say I have two bases for two systems,
The first is a spin-1/2 system $|+\rangle = \left(\begin{array}{c} 1\\0 \end{array}\right),|-\rangle=\left(\begin{array}{c} 0\\1 \end{array}\right)$
The second is a spin-1 system, with $|1_+\rangle=\left(\begin{array}{c} 1\\0\\0 \end{array}\right),|1_0\rangle=\left(\begin{array}{c} 0\\1\\0 \end{array}\right),|1_-\rangle=\left(\begin{array}{c} 0\\0\\1 \end{array}\right)$
Now for the first system, I can use the Pauli matrix
$$\hat{S_z}=\frac{1}{2}\hbar\hat{\sigma}_z = \left( \begin{array}{cc} \frac{\hbar }{2} & 0 \\ 0 & -\frac{\hbar }{2} \\ \end{array} \right)$$
in order to get the projection of my state on the z-axis. Likewise, I could use the projection matrix $$\hat{J}_z=\hbar\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \\ \end{array} \right)$$
to project the other state on the z-axis. These operators will act on my basis in the following way:
$$\hat{S}_z|+\rangle=\frac{\hbar}{2}|+\rangle$$ $$\hat{J}_z|1_+\rangle=\hbar|1_+\rangle$$
Problem: (here comes the question)
So far everything is good! Now the problem comes when I introduce a space for the composite system, so I'm getting the basis
$$|S_z\rangle\otimes |J_z\rangle\rightarrow\left\{ |+,1_+\rangle,|+,1_0\rangle,|+,1_-\rangle,|-,1_+\rangle,|-,1_0\rangle,|-,1_-\rangle\right\}$$
Now the question is: how do I use the matrix formalism to have such operations just like I had them before in the single systems:
$$S_z|+,1_+\rangle=\frac{\hbar}{2}|+,1_+\rangle$$ $$S_z|+,1_-\rangle=\frac{\hbar}{2}|+,1_-\rangle$$ $$J_z|+,1_0\rangle=0|+,1_0\rangle$$ $$J_z|+,1_-\rangle=-\hbar|+,1_-\rangle$$
In other words, how do I write the state-kets and the operators in the composite system in matrix formalism (just like I showed in the beginning) to give results compatible with what I would expect in the examples?
Is this wrong in some way?
Every time I try to do this with Kronecker Product (like $\hat{S}_z \otimes \hat{J}_z$) I arrive at a mess, and I get terms proportional to $\hbar^2$, and I don't get the eigen-values I expect, and I'm not sure what I'm doing wrong. Could you please show me how to do this?
Thank you.
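(For what it's worth, the usual resolution is that each single-system operator is extended with the identity on the *other* factor, i.e. $\hat S_z \otimes \hat I_3$ and $\hat I_2 \otimes \hat J_z$, rather than $\hat S_z \otimes \hat J_z$; the latter is a product of two observables, which is where the $\hbar^2$ terms come from. A minimal NumPy sketch:)

```python
import numpy as np

hbar = 1.0                                # work in units of hbar
Sz = (hbar / 2) * np.diag([1.0, -1.0])    # spin-1/2 operator
Jz = hbar * np.diag([1.0, 0.0, -1.0])     # spin-1 operator

# On the composite space each operator acts on its own factor only,
# so it is padded with the identity of the other factor:
Sz_full = np.kron(Sz, np.eye(3))          # S_z ⊗ I_3  (6x6)
Jz_full = np.kron(np.eye(2), Jz)          # I_2 ⊗ J_z  (6x6)

ket = np.kron([1.0, 0.0], [1.0, 0.0, 0.0])   # |+, 1_+>
print(Sz_full @ ket)   # = (hbar/2)|+, 1_+>
print(Jz_full @ ket)   # = hbar |+, 1_+>
```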
|
Data Types for Control & DSP
There's a lot of information out there on what data types to use for digital signal processing, but there's also a lot of confusion, so the topic bears repeating.
I recently posted an entry on PID control. In that article I glossed over the data types used by showing "double" in all of my example code. Numerically, this should work for most control problems, but it can be an extravagant use of processor resources. There ought to be a better way to determine what precision you need out of your arithmetic, and what sorts of arithmetic you can use on your processor.
This blog post seeks to answer two questions: "what data types must I use to get the job done?" and "what data types are fast enough on my processor to get the job done?" If you're lucky, there is some overlap in the answers to those two questions and you can go out and design a controller that works. If you're not that lucky, at least you know that you either need to seek out a new, faster processor, or perhaps a new, more forgiving job.
All of these issues are discussed in more depth in my book, Applied Control Theory for Embedded Systems.
For the purposes of this discussion the world of data representation is divided into three: floating point types, integer types, and fixed-point types. You should know what floating point and integer representation are. Fixed-point representation is just a generalization of integer representation, with the weights of the bits (possibly) scaled such that the least-significant bit is a fraction. The notion of non-integer fixed-point arithmetic is discussed in this Wikipedia article: the short story is that if I talk about a number in Q0.31 notation I mean a fixed-point number in the range $-1 < x < 1$.
Each of these three types has advantages and disadvantages. Floating point arithmetic is conceptually easy, but on all but desktop-class processors it is slower than integer or fixed-point arithmetic, and it has some subtle "gotchas" that can bite you if you're not careful. Integer arithmetic uses familiar data types, but in general the scaling for signal processing (and, hence, control systems) is all wrong. Non-integer fixed-point types are not directly supported in common languages, and can be hard for a beginner to wrap their heads around, but are close to optimal for a wide range of problems.
For all of these data types, you have to worry about quantization, for floating point numbers you have to worry about varying quantization effects, and for integer and other fixed-point data types you need to worry about overflow.
For the purposes of this post, I'll make an example PID controller. I'll use $u$ to mean the measurement of the controlled variable, $ut$ to be the target value of the controlled variable, and $y$ to mean the controller output. For all variables, $x_n$ means the value of $x$ at sample time $n$. The variables $k_i, k_p, k_d, k_{dp}$ are the integrator gain, the proportional gain, the derivative gain, and the derivative band-limiting factor, respectively. The math for this controller is
$$xi_n = xi_{n-1} + k_i \left ( ut_n - u_n \right )$$
$$xd_n = xd_{n-1} + k_{dp} \left ( \left ( ut_n - u_n \right ) - xd_{n-1} \right )$$
$$y_n = xi_n + k_p u_n + k_d k_{dp} \left ( \left ( ut_n - u_n \right ) - xd_{n-1} \right )$$
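The three equations transcribe directly into floating point (a sketch; the gains here are hypothetical placeholders, and $xd$ on the right-hand sides is taken as the previous state $xd_{n-1}$):

```python
# Direct floating-point transcription of the controller equations above.
class PID:
    def __init__(self, ki, kp, kd, kdp):
        self.ki, self.kp, self.kd, self.kdp = ki, kp, kd, kdp
        self.xi = 0.0   # integrator state
        self.xd = 0.0   # band-limited derivative state

    def step(self, ut, u):
        err = ut - u
        self.xi += self.ki * err           # xi_n = xi_{n-1} + ki*(ut_n - u_n)
        d = self.kdp * (err - self.xd)     # kdp*((ut_n - u_n) - xd_{n-1})
        self.xd += d                       # update the derivative state
        return self.xi + self.kp * u + self.kd * d

pid = PID(ki=0.0002, kp=0.5, kd=0.1, kdp=0.2)   # hypothetical gains
print(pid.step(ut=0.5, u=0.25))
```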
The first problem that crops up with this algorithm is the integrator gain. For most control systems, the integrator gain is much less than 1. This means that the factor $k_i \left ( ut_n - u_n \right )$ is, in general, small. Moreover, as you increase the sampling rate, you need to adjust the integrator gain downward. It is up to you to ensure that for the smallest possible value of $u_n$, the factor $k_i \left ( ut_n - u_n \right )$ fits in the data type that you have chosen for the integrator state, $xi$.
As a concrete example, consider a system that uses a 16-bit ADC to measure the plant's output variable. Further, let's assume that we scale this output variable to a range $0 \le u_n < 1$. If $n_{ADC}$ is the ADC reading that ranges from 0 to 65535, then we calculate $u_n = \frac{n_{ADC}}{65536}$. Now, further assume that the integrator gain is a not-unreasonable $k_i = 0.0002$, and that the integrator state can fall in the range $-1 < xi < +1$.
With this example, the smallest increment of the ADC can be $\frac{1}{65536}$. This, in turn, means that the smallest increment of the factor $k_i \left ( ut_n - u_n \right )$ can be $\frac{0.0002}{65536}$, or about $3 \cdot 10^{-9}$.
If you store $xi$ in a 32-bit IEEE floating-point variable, then the mantissa has an effective length of 25 bits. When $xi$ has an absolute value greater than $\frac{1}{2}$, the smallest increment that can be added into $xi$ is $2^{-26}$, or about $15 \cdot 10^{-9}$. That's about five times larger than the smallest increment that may occur.
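This loss is easy to demonstrate, using NumPy's float32 as a stand-in for a 32-bit C float (the 0.6 is just an arbitrary state value above $\frac{1}{2}$):

```python
import numpy as np

xi = np.float32(0.6)        # integrator state held in a 32-bit float
inc = np.float32(3e-9)      # smallest update term from the example above
print(xi + inc == xi)       # True -- the update is silently lost
```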
What does all this razz-matazz with numbers mean? It means that in this circumstance, the integrator term of your PID controller is missing out on potentially important information in the feedback. This, in turn, could result in your system getting "stuck" at the wrong value until the error grows to objectionable amounts. In a real-world system, this would mean that you might see a small amount of random drift around your desired target point, or a small oscillation around your desired target point.
To make sure this doesn't happen, you should make sure that the smallest increment that will register in your integrator state is as small or smaller than the smallest increment that can be presented to it. Better yet, make sure that the smallest increment that will register on your integrator state is smaller than about $\frac{1}{8}$ of the smallest increment that will be presented to it.
1. Determine the smallest increment that your integrator state will absorb. For an integer, this increment is 1. For a signed fractional number that ranges from -1 to 1, with $n$ bits this increment is $2^{-(n-1)}$, or $2^{-31}$ for a 32-bit number. For a 32-bit IEEE floating point number ("float" in most C compilers) that ranges from -1 to 1, this increment can be as high as $2^{-25}$. The increment isn't a constant -- this is one of the lovely things about using floating point. For a 64-bit IEEE floating point number ("double" in most C compilers) that ranges from -1 to 1, this increment can be as high as $2^{-54}$. Again, the increment isn't a constant.
2. Determine the smallest increment that you will present to your integrator. This will be the smallest increment of your sensor (usually an ADC, but you know your system), multiplied by any pre-scaling factors you may apply, then multiplied by the integrator gain.
3. Check which number is bigger, and by how much -- if the smallest increment you'll ever present to the integrator state is eight times bigger than the smallest increment it can register, then you're probably OK.
Astute readers will notice that there's a problem with the controller equation that I show if you're using integers -- when the smallest increment that an integrator can register is $\pm 1$, then you need to swap things around. In this case, you should refrain from scaling the ADC output: let $u_n = n_{ADC}$. Then, move the integrator gain:
$$xi_n = xi_{n-1} + \left ( ut_n - u_n \right )$$
$$xd_n = xd_{n-1} + k_{dp} \left ( \left ( ut_n - u_n \right ) - xd_{n-1} \right )$$
$$y_n = k_i xi_n + k_p u_n + k_d k_{dp} \left ( \left ( ut_n - u_n \right ) - xd_{n-1} \right )$$
Now, your integrator state will always register the smallest change in the ADC. You will have to scale all of your variables for the convenience of the mathematics rather than your own convenience, but it'll work.
With fixed-point numbers, quantization is fixed -- either the smallest increment you're presenting to the integrator state is too small to be registered, or it's not. Life isn't so easy with floating point. With floating points, if the value of a state (such as $xi$) is small then the smallest increment that you can add in to it is also small. But as the value of the state grows the smallest increment you can add in also grows -- so if you're dealing with floating point numbers you need to do your calculations based off of the maximum value that the state can take (or the maximum value that you allow the state to take).
Floating point numbers only have problems with getting too big when they grow past the point where small changes in the system inputs can affect them properly. Fixed point numbers, however, can have much more dramatic problems. The problem is called overflow.
Consider the C code snippet:
int a = 30000;
printf("The number is %d\n", a + 2768);
Can you say what the output will be? You can't, really.
If you try this on a normal PC, the output will be "32768". However, if you can find a system that uses 16-bit integers and 2's complement notation (and that has a working printf), the output will most likely be "-32768". The reason for this is that 32768 does not fit into a 2's complement signed 16-bit integer, and because C tends to be pretty bone-headed about handling this situation. The phenomenon that we've just seen is called overflow.
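The wraparound can be emulated in plain Python to see what a 16-bit machine would do:

```python
# Emulate 16-bit two's-complement addition the way a small CPU would do it
val = (30000 + 2768) & 0xFFFF   # keep only the low 16 bits
if val >= 0x8000:               # reinterpret as a signed 16-bit value
    val -= 0x10000
print(val)  # -32768
```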
If you are designing a digital control system (or any digital signal processing system) you need to either design your data paths so that overflow simply cannot happen, or you need to make sure that overflow is handled gracefully.
Designing data paths so that overflow cannot happen is beyond the scope of this paper. If you understand the relevant signal processing theory, you can start from the control system as designed and the maximum possible ranges of all the inputs, and you can compute the largest value for each of the states and intermediate values in the system. If those largest values are all smaller than anything that will overflow, then you don't have to worry.
Designing data paths that handle overflow gracefully is conceptually more simple: at each step in the computation that might result in an increased value (whether it's a state variable or an intermediate value), you test for overflow, and you deal with it gracefully. I have found that in C and C++, the way to do this is to test for overflow and if it happens, let the result take on the greatest value allowed by the data type.
This overflow handling is detailed in my book, but I can give an example using integer math. In this case I'm defining that "overflow" is anything that results in a value greater than INT_MAX/2.
int add(int a, int b)
{
    // Assume that a and b are both smaller than INT_MAX/2
    int x = a + b;

    if (x > INT_MAX / 2)
    {
        x = INT_MAX / 2;
    }
    else if (x < -(INT_MAX / 2))
    {
        x = -(INT_MAX / 2);
    }

    return x;
}
There are more sophisticated ways of handling this, especially if you're willing to do some assembly-language programming, but the above code shows the idea.
So far we've dealt with the "what does my data type need to do?" side of the question. The other side of the question is "what can my processor do?" I would like to be able to give you some firm guidelines on how to find this out before the fact -- but I can't. I can give you some rules of thumb to narrow your choices down, but in the end analysis you'll need to write a sample controller, with your intended data types, and then benchmark its performance to figure out how much of the available processor resources it consumes.
The rules of thumb that I can give you are:
- Doing the work in fixed-point math is almost always faster than floating-point math done in software. If you have floating-point hardware, it may or may not be faster than fixed-point math (and if it's slower, it'll be a lot closer).
- Be careful when a processor claims to have floating-point hardware. 32-bit floating point hardware is much more common than 64-bit, and you often have to do some strenuous digging to figure out that you're only going to get 32-bit.
- 16-bit fixed point will work for a few systems. 32-bit floating point gives more precision than 16-bit fixed point, but less than 32-bit fixed point.
- 64-bit floating point will probably give you more precision than you'll need -- if this isn't the case please hire me, I want to work on your project!
- It always takes more clock cycles than you think -- benchmark.
It's a good idea to do your benchmarking early in a project -- you'll often find yourself either needing to adopt fixed-point math, or needing to buy an entirely different processor. If you are a manager, drive your team to choose a processor candidate early, then buy an evaluation board and drive them to benchmark their candidate software. If you are a line engineer, then do everything you can to get your hands on an appropriate evaluation board and give it a whirl. In all cases, try to build time into the project to absorb a false start on the processor selection.
In a perfect world -- at least for the lazy programmer -- you'll always be able to use a processor that can keep up with the system sampling rate while using 64-bit floating point math. While this can happen (sometimes even when you're using a slow 8-bit processor!), you'll often be stuck with using fixed point math of some type, to make your speed numbers.
Previous post by Tim Wescott:
PID Without a PhD
Next post by Tim Wescott:
Fibonacci trick
Say, I'm not understanding your PID equations exactly, so please enlighten.
The way I see it, the Proportional constant "Kp" in the last equation should be multiplied by the error ( "Utn - Un" ) and not simply the measured process variable "Un". Also, the derivative state equation seem odd to me, especially in its use of "Xdn" on both sides of the equation. Shouldn't the one on the right side of the derivative state equation be "Xd(n-1)" instead of Xdn ?
Anyway, great read---thanks!
-Myles
Enjoy!
I don't think you really get this until you write some code and something horrible happens. I saw a picture once of a log smashed into the operator's console of a saw mill. The tree was larger than 65.535 inches -- you can probably fill in the blanks, or at least guess at the bug.
Jerry
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
> No, that wasn't my reasoning. Right or wrong, my position in Lecture 6 was this: I wasn't assuming the posets in question have all infs and sup, I was *claiming* that they *must* have the infs and sups in question, given my assumptions.
> ...
> That said, if someone gave me a puzzle whose answer was \\(a\\), and I said the answer was \\( \bigvee \\{b \in A: \; a \ge b \\} \\), we'd have to say my answer wasn't the best available, because I failed to simplify it as much as possible.
Okay.
I was thinking like this: the *most general* way to think about Galois connections is on preorders. But this is annoying because they don't obey the anti-symmetry rule, and they don't come with natural infima and suprema.
However, I'm arguing there's a place we can go: *The Dedekind-MacNeille Completion Functor*. If we embed our preorder up there, now we've got a real partial order like we've always wanted. We've even got sets, which is nice. And we've got suprema and infima. And, when I can get around to it, I think I can prove a transfer theorem for adjunctions and fixed points.
(Transfer is my idea, but I got the idea of using it to transform preorders from [Erné (1991)](https://link.springer.com/article/10.1007/BF00383401).)
Here's a parallel: the *textbook* way to think about derivatives in calculus is with the \\(\delta-\epsilon\\) formulation on a real closed Archimedean field. But this is annoying because there are a lot of quantifiers and those are hard. Also, we don't have infinitesimals or their reciprocals, which are natural (for Euler and Leibniz, anyway). Even Archimedes found it natural to use infinitesimals and break the rules that are his namesake in his [lost palimpsest](https://en.wikipedia.org/wiki/Archimedes_Palimpsest). And we can have it all with Robinson's ultraproduct construction, and we have the transfer theorem for first order propositions.
Now, I can see why maybe it's annoying. Nobody really uses nonstandard analysis for much because it's hard to motivate and ultraproducts are clumsy. But for some, it validates their intuition. And I say Dedekind-MacNeille completions do the same for preorders. But that's just my opinion.
|
The MathJax Processing Model
The purpose of MathJax is to bring the ability to include mathematics easily in web pages to as wide a range of browsers as possible. Authors can specify mathematics in a variety of formats (e.g., MathML, LaTeX, or AsciiMath), and MathJax provides high-quality mathematical typesetting even in those browsers that do not have native MathML support. This all happens without the need for special downloads or plugins, but rendering will be enhanced if high-quality math fonts (e.g., STIX) are available to the browser.
MathJax is broken into several different kinds of components: page preprocessors, input processors, output processors, and the MathJax Hub that organizes and connects the others. The input and output processors are called jax, and are described in more detail below.
When MathJax runs, it looks through the page for special tags that hold mathematics; for each such tag, it locates an appropriate input jax which it uses to convert the mathematics into an internal form (called an element jax), and then calls an output jax to transform the internal format into HTML content that displays the mathematics within the page. The page author configures MathJax by indicating which input and output jax are to be used.
Often, and especially with pages that are authored by hand, the mathematics is not stored (initially) within the special tags needed by MathJax, as that would require more notation than the average page author is willing to type. Instead, it is entered in a form that is more natural to the page author, for example, using the standard TeX math delimiters $...$ and $$...$$ to indicate what part of the document is to be typeset as mathematics. In this case, MathJax can run a preprocessor to locate the math delimiters and replace them by the special tags that it uses to mark the formulas. There are preprocessors for TeX notation, MathML notation, AsciiMath notation, and the jsMath notation that uses span and div tags.
For pages that are constructed programmatically, such as HTML pages that result from running a processor on text in some other format (e.g., pages produced from Markdown documents, or via programs like tex4ht), it would be best to use MathJax's special tags directly, as described below, rather than having MathJax run another preprocessor. This will speed up the final display of the mathematics, since the extra preprocessing step would not be needed. It also avoids the conflict between the use of the less-than sign, <, in mathematics and as an HTML special character (that starts an HTML tag), and several other issues involved in having the mathematics directly in the text of the page (see the documentation on the various input jax for more details on this).
How mathematics is stored in the page
In order to identify mathematics in the page, MathJax uses special <script> tags to enclose the mathematics. This is done because such tags can be located easily, and because their content is not further processed by the browser; for example, less-than signs can be used as they are in mathematics, without worrying about them being mistaken for the beginnings of HTML tags. One may also consider the math notation as a form of "script" for the mathematics, so a <script> tag makes at least some sense for storing the math.
Each <script> tag has a type attribute that identifies the kind of script that the tag contains. The usual (and default) value is type="text/javascript", and when a script has this type, the browser executes the script as a JavaScript program. MathJax, however, uses the type math/tex to identify mathematics in the TeX and LaTeX notation, math/mml for mathematics in MathML notation, and math/asciimath for mathematics in AsciiMath notation. When the tex2jax, mml2jax, or asciimath2jax preprocessors run, they create <script> tags with these types so that MathJax can process them when it runs its main typesetting pass.
For example,
<script type="math/tex">x+\sqrt{1-x^2}</script>
represents an in-line equation in TeX notation, and
<script type="math/tex; mode=display"> \sum_{n=1}^\infty {1\over n^2} = {\pi^2\over 6}</script>
is a displayed TeX equation.
Alternatively, using MathML notation, you could use
<script type="math/mml">
  <math>
    <mi>x</mi>
    <mo>+</mo>
    <msqrt>
      <mn>1</mn>
      <mo>−<!-- − --></mo>
      <msup>
        <mi>x</mi>
        <mn>2</mn>
      </msup>
    </msqrt>
  </math>
</script>
for in-line math, or
<script type="math/mml">
  <math display="block">
    <mrow>
      <munderover>
        <mo>∑<!-- ∑ --></mo>
        <mrow>
          <mi>n</mi>
          <mo>=</mo>
          <mn>1</mn>
        </mrow>
        <mi mathvariant="normal">∞<!-- ∞ --></mi>
      </munderover>
    </mrow>
    <mrow>
      <mfrac>
        <mn>1</mn>
        <msup>
          <mi>n</mi>
          <mn>2</mn>
        </msup>
      </mfrac>
    </mrow>
    <mo>=</mo>
    <mrow>
      <mfrac>
        <msup>
          <mi>π<!-- π --></mi>
          <mn>2</mn>
        </msup>
        <mn>6</mn>
      </mfrac>
    </mrow>
  </math>
</script>
for displayed equations in MathML notation. As other input jax are created, they will use other types to identify the mathematics they can process.
Page authors can use one of MathJax's preprocessors to convert from math delimiters that are more natural for the author to type (e.g., TeX math delimiters like $$...$$) to MathJax's <script> format. Blog and wiki software could extend their own markup languages to include math delimiters, which they could convert to MathJax's <script> format automatically.
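As a rough sketch of the kind of conversion such blog or wiki software might perform (the function name and regular expression here are illustrative, not part of MathJax's API), a display-math pass over a page could look like this:

```python
import re

# Illustrative sketch: rewrite $$...$$ display-math delimiters into the
# <script> form described above before the page is served.
DISPLAY_MATH = re.compile(r"\$\$(.+?)\$\$", re.DOTALL)

def convert_display_math(html):
    # A callable replacement avoids backslash-escape issues with TeX source.
    return DISPLAY_MATH.sub(
        lambda m: '<script type="math/tex; mode=display">%s</script>' % m.group(1),
        html,
    )
```

A real implementation would also need to skip code blocks and already-converted regions; MathJax's own tex2jax preprocessor handles such cases in the browser.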
Note, however, that Internet Explorer has a bug that causes it to remove the space before a <script> tag if there is also a space after it, which can cause serious spacing problems with in-line math in Internet Explorer. There are three possible solutions to this in MathJax. The recommended way is to use a math preview (an element with class MathJax_Preview) that is non-empty and comes right before the <script> tag. Its contents can be just the word [math], so it does not have to be specific to the mathematics script that follows; it just has to be non-empty (though it could have its style set to display:none). See also the preJax and postJax options in the Core Configuration Options document for another approach.
The components of MathJax
The main components of MathJax are its preprocessors, its input and output jax, and the MathJax Hub, which coordinates the actions of the other components.
Input jax are associated with the different script types (like math/tex or math/mml), and the mapping of a particular type to a particular jax is made when the various jax register their abilities with the MathJax Hub at configuration time. For example, the MathML input jax registers the math/mml type, so MathJax will know to call the MathML input jax when it sees math elements of that type. The role of the input jax is to convert the math notation entered by the author into the internal format used by MathJax (called an element jax). This internal format is essentially MathML (represented as JavaScript objects), so an input jax acts as a translator into MathML.
Output jax convert that internal element jax format into a specific output format. For example, the NativeMML output jax inserts MathML tags into the page to represent the mathematics, while the HTML-CSS output jax uses HTML with CSS styling to lay out the mathematics so that it can be displayed even in browsers that don't understand MathML. MathJax also has an SVG output jax that will render the mathematics using scalable vector graphics. Output jax could be produced that render the mathematics using HTML5 canvas elements, for example, or that speak an equation for blind users. The MathJax contextual menu can be used to switch between the output jax that are available.
Each input and output jax has a small configuration file that is loaded when that jax is included in the jax array in the MathJax configuration, and a larger file that implements the core functionality of that particular jax. The latter file is loaded the first time the jax is needed by MathJax to process some mathematics. Most of the combined configuration files include only the small configuration portion for the input and output jax, making the configuration file smaller and faster to load for those pages that don't actually include mathematics; the combined configurations that end in -full include both parts of the jax, so there is no delay when the math is to be rendered, but at the expense of a larger initial download.
The MathJax Hub keeps track of the internal representations of the various mathematical equations on the page, and can be queried to obtain information about those equations. For example, one can obtain a list of all the math elements on the page, or look up a particular one, or find all the elements with a given input format, and so on. In a dynamically generated web page, an equation where the source mathematics has changed can be asked to re-render itself, or if a new paragraph is generated that might include mathematics, MathJax can be asked to process the equations it contains.
The Hub also manages issues concerning mouse events and other user interaction with the equation itself. Parts of equations can be made active so that mouse clicks cause event handlers to run, or activate hyperlinks to other pages, and so on, making the mathematics as dynamic as the rest of the page.
|
1. Measurement of the top quark mass with lepton+jets final states using pp collisions at $\sqrt{s}=13\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11, pp. 1 - 27
The mass of the top quark is measured using a sample of $\mathrm{t\overline{t}}$ events collected by the CMS detector using proton-proton...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
2. Measurement of prompt and nonprompt J/ψ production in pp and p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 04/2017, Volume 77, Issue 4, pp. 1 - 27
This paper reports the measurement of J/ψ meson production in proton–proton (pp) and...
Journal Article
3. Recombinant P-selectin glycoprotein ligand–immunoglobulin, a P-selectin antagonist, as an adjunct to thrombolysis in acute myocardial infarction. The P-Selectin Antagonist Limiting Myonecrosis (PSALM) trial
American Heart Journal, ISSN 0002-8703, 2006, Volume 152, Issue 1, pp. 125.e1 - 125.e8
Inflammatory responses induced by reperfusion of previously ischemic myocardial tissue may lead to further damage of the microvascular structures. A group of...
RPSGL-IG | CARDIAC & CARDIOVASCULAR SYSTEMS | OXYGEN RADICALS | NO-REFLOW | SIZE | DISEASE | INJURY | RANDOMIZED-TRIAL | ACUTE CORONARY SYNDROMES | MODEL | ISCHEMIA-REPERFUSION | Recombinant Proteins - therapeutic use | Coronary Vessels - physiology | Humans | Middle Aged | Tomography, Emission-Computed, Single-Photon | Male | Myocardial Infarction - diagnostic imaging | Positron-Emission Tomography | Tissue Plasminogen Activator - therapeutic use | Regional Blood Flow | Recombinant Proteins - administration & dosage | Stroke Volume | Coronary Angiography | Membrane Glycoproteins - administration & dosage | Membrane Glycoproteins - therapeutic use | Myocardial Infarction - drug therapy | Fibrinolytic Agents - therapeutic use | Thrombolytic Therapy | Adolescent | Adult | Female | Myocardial Infarction - physiopathology | Aged | Drug Therapy, Combination | PET imaging | Cardiology | Analysis | Cardiac patients | Heart attack | Endothelium | Ligands | Heart attacks | Glucose | Cell adhesion & migration | Index Medicus | Abridged Index Medicus
Journal Article
4. Study of the underlying event in top quark pair production in pp collisions at 13 TeV
The European Physical Journal C, ISSN 1434-6044, 02/2019, Volume 79, Issue 2
Journal Article
European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11
Journal Article
6. Observation of Charge-Dependent Azimuthal Correlations in p-Pb Collisions and Its Implication for the Search for the Chiral Magnetic Effect
Physical Review Letters, ISSN 0031-9007, 03/2017, Volume 118, Issue 12, pp. 122301 - 122301
Charge-dependent azimuthal particle correlations with respect to the second-order event plane in p-Pb and PbPb collisions at a nucleon-nucleon center-of-mass...
PARITY VIOLATION | SEPARATION | PHYSICS, MULTIDISCIPLINARY | FIELD | Hadrons | Correlation | Large Hadron Collider | Searching | Correlation analysis | Collisions | Solenoids | Atomic collisions | NUCLEAR PHYSICS AND RADIATION PHYSICS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
European Physical Journal C, ISSN 1434-6044, 04/2017, Volume 77, Issue 4
Journal Article
8. Measurement of the top quark mass with lepton+jets final states using pp collisions at $\sqrt{s}=13\,\text{TeV}$
European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 11/2018, Volume 78, Issue 11
The mass of the top quark is measured using a sample of $\mathrm{t\overline{t}}$ events containing one isolated muon or electron and at least four jets in the...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
European Physical Journal A, ISSN 1434-6001, 2014, Volume 50, Issue 4, pp. 1 - 27
Journal Article
European Physical Journal A, ISSN 1434-6001, 04/2014, Volume 50, Issue 4
Photoproduction off protons of the $p\pi^{0}\eta$ three-body final state was studied with the Crystal Barrel/TAPS detector, at the electron stretcher accelerator...
CHIRAL DYNAMICS | LIGHT-BARYON SPECTRUM | MESON | PHYSICS, NUCLEAR | RESTORATION | RELATIVISTIC QUARK-MODEL | BEAM ASYMMETRY | PION | NUCLEON RESONANCES | I-S | PHOTOPRODUCTION | PHYSICS, PARTICLES & FIELDS
Journal Article
11. Measurement of the weak mixing angle using the forward–backward asymmetry of Drell–Yan events in pp collisions at 8 TeV
The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1 - 30
A measurement is presented of the effective leptonic weak mixing angle ($\sin^2\theta^{\ell}_{\text{eff}}$) using the forward–backward...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
|
Haar's condition and joint polynomiality of separate polynomial functions
Abstract
For systems of functions $F = \{ f_n \in K^X : n \in N\}$ and $G = \{ g_n \in K^Y : n \in N\}$ we consider an $F$-polynomial $f = \sum^n_{k=1}\lambda_k f_k$, a $G$-polynomial $g = \sum^n_{k=1}\mu_k g_k$, and an $F \otimes G$-polynomial $h = \sum^n_{k,j=1} \lambda_{k,j}\, f_k \otimes g_j$, where $(f_k\otimes g_j)(x, y) = f_k(x)g_j(y)$. By using the well-known Haar's condition from approximation theory, we study the following question: under what assumptions is every function $h : X \times Y \rightarrow K$, such that all $x$-sections $h^x = h(x, \cdot)$ are $G$-polynomials and all $y$-sections $h_y = h(\cdot, y)$ are $F$-polynomials, an $F \otimes G$-polynomial? A similar problem is investigated for functions of $n$ variables.
Citation Example: Kosovan V. M., Maslyuchenko V. K., Voloshyn H. A. Haar's condition and joint polynomiality of separate polynomial functions // Ukr. Mat. Zh. - 2017. - 69, № 1. - pp. 17-27.
|
No
Your idea is good, but you are misunderstanding what 'any dimension' means.
When you think of a 1-foot cube, you are thinking that, since the cube is no longer than 1 foot to a side, it has a length of 1 foot in each direction. This is not true. In fact, the cube has an extent of no less than one foot in each direction and no more than √3 feet (along a space diagonal).
In order for total extent of the cloak to be no longer than 1 foot in any dimension, you actually need, at best (in terms of volume), a 1-foot diameter sphere. If you are using a cube it would need to be a 1/√3-foot cube.
That said, even this is not enough.
Imagine your tear is a simple 2-foot line. Let's say you fold it in half, so now you have a line that goes 1 foot in a direction and then another 1 foot back to where it started. Being able to rely on the volumetric simplification described above means being able to rely on the folded tear being considered to be 1-foot in length. This is by no means the most likely interpretation of 'length'-- that would be that the total distance traveled along that axis is 2-feet, so the length of the path in that dimension is, similarly, 2 feet. Otherwise the intuitive theorem
$$\sum_{i=0}^n \text{length}_i = \text{total length}$$
is, in fact, false.
If the length of a line folded exactly upon itself does not change, then there is a maximum to the length of a single-line tear that can be mended, since folding can only accomplish so much, and travel in non-cardinal directions is not free either: travel in any direction counts as travel in any other direction with the length reduced by multiplying by the cosine of the angle between them, which makes the math complicated.
You can mend a tear of pi/2 feet in length by folding it into a perfect circle, and this is the theoretical maximum in two dimensions. You can add more on for each extra dimension.
But that's too hard/long for playing!
I mean, that depends on the group. But if it doesn't work, just remember wizards can use origami+complicated math to increase by a substantial but less-than-doubling amount how much they can mend, and if they can go to planes with more spatial dimensions then they can fix more with even more complicated folds and accompanying geometric analyses. The exact amount doesn't need to be calculated any more than how big the hole in the wizard's cloak needed to be when you put it there.
Proof:
The theoretical maximum is achieved when the total length in each dimension exactly equals 1 foot, if such a curve exists.
An interesting property of circles is that they are perfectly identical regardless of how you rotate them; a circle is the same shape with the same relevant properties regardless of what orientation you pick to use as an axis. Because of this property, the 'length' of a circle in each and every dimension is the same.
Another helpful property is that, for any given dimension, we can break a circle up into two always-increasing or always-decreasing half-circles. This lets us sum the length for each and then combine them into a total length in that dimension, rather than having to take a line integral (it's an easy line integral though, and good practice if you are learning such things).
For always-increasing (or decreasing) curves, the 'length' as we have defined it is identical to the magnitude of the displacement because:
$$ \Delta x = \sum_{i=0}^n dx_i,$$ where $dx_i$ is the signed infinitesimal displacement at step $i$, and the length is $$ s_x=\sum_{i=0}^n |dx_i|, $$ but for always-positive $dx_i$ in the always-increasing case we can remove the absolute value, leaving $$s_x=\sum_{i=0}^n dx_i = \Delta x.$$
So the 'length' of a circle in a single dimension is twice its diameter. Since we can't exceed 1 foot, that gives $d = 1/2$, and the perimeter of the circle is $\pi d = \pi/2$. We know we've used up all our possible length in both dimensions, so this result must be optimal.
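If you want to sanity-check the claim that a circle's 'length' along any single axis is twice its diameter, here's a quick numerical sketch (the step count is arbitrary):

```python
import numpy as np

d = 0.5                                    # circle diameter in feet (the optimum above)
t = np.linspace(0.0, 2.0 * np.pi, 200_001)
x = (d / 2.0) * np.cos(t)                  # x-coordinate as we walk around the circle
length_x = np.abs(np.diff(x)).sum()        # total travel along the x axis: should be 2*d

# By symmetry the same holds for every other axis, and the perimeter is pi*d = pi/2.
```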
"Wait!" You might think. "What if we used many circles? Each circle still uses the maximum amount of 'length' in each direction, so we should be able to combine them to make a thing, too!"
Yep! That's accurate, but the thing isn't any bigger-- it's just equally optimal. You can fold your tear into one big circle or a very circular infinity sign or the olympic rings or whatever you want, as long as it's all circles, and you'll still get exactly pi/2 length at best. Yay math!
You *do* get extra length in three dimensions (and additional length for each dimension you add after that), but the math is way more complicated so I need to work on making a good, simple explanation for that. Feel free to edit one in if so inclined!
|
Rotational Equilibrium If an object is in rotational equilibrium, the resultant turning force or torque (force times distance from the axis, for each force) is zero about any given axis. This is known as the principle of moments. The horizontal bar below is in equilibrium: about the axis shown there is no net turning force - no net moment - so the sum of the clockwise moments equals the sum of the anticlockwise moments. Taking moments about the axis:
\[30 \times 4=20 \times 3+5 \times x\]
\[120=60+5x \rightarrow x=\frac{120-60}{5}=12\text{ m}\]
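As a trivial cross-check of the arithmetic (units assumed consistent, e.g. newtons and metres):

```python
# Principle of moments: clockwise moments = anticlockwise moments.
# 30 * 4 = 20 * 3 + 5 * x  =>  x = (120 - 60) / 5
x = (30 * 4 - 20 * 3) / 5
```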
Add comment
|
Consider a signal $x(t)$, which is input to a pulse shaping filter with transfer function $g_t(t)$:
$$x(t) = \sum_n d_n \delta(t-nT_s)$$
with $n$ an index from negative to positive infinity and $d_n$ random, equiprobable binary symbols such as 1 and -1 (antipodal) or 0 and 1 (on-off keying).
Well, the power spectral density of the filtered output is given by: $$S_{ss} = \frac{1}{T_s} E\left[|d_n|^2 \right] |G_t(f)|^2 $$ where $E[\cdot]$ denotes the expected value.
I'm confused by this formula since, up to now, I thought the power spectral density of the output is the PSD of the input signal multiplied by the absolute value squared of the transfer function. Why do I have to take an expectation in this case, and why is the result scaled by the symbol period $T_s$?
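For concreteness, here is a quick simulation sketch with a rectangular pulse and antipodal symbols (all parameter values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
Ts, sps = 1.0, 8                      # symbol period and samples per symbol (arbitrary)
dt = Ts / sps
d = rng.choice([-1.0, 1.0], 40_000)   # equiprobable antipodal symbols d_n
s = np.repeat(d, sps)                 # rectangular pulse shaping: g_t(t) = rect(t/Ts)

# Block-averaged periodogram estimate of the PSD of s(t)
blk = 100 * sps                       # samples per block
x = s[: (len(s) // blk) * blk].reshape(-1, blk)
X = np.fft.rfft(x, axis=1) * dt
psd = (np.abs(X) ** 2).mean(axis=0) / (blk * dt)

# The formula predicts S(f) = (1/Ts) E[|d_n|^2] |G(f)|^2 with G(f) = Ts sinc(f Ts),
# so the estimate at f = 0 should come out close to Ts = 1 here.
```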
Thx for any help!
|
I am working my way through a paper proving the prime number theorem and I have come across the following (to use ${2n \choose n}$ was apparently due to Chebyshev, hence the title):
Lemma: Define $\vartheta(x) = \displaystyle\sum_{p \leq x} \log(p)$ (where $p$ is a prime number). Then, $\vartheta(x)=O(x)$. Proof: Let $n \in \mathbb{N}$. Then by the binomial theorem,
$$2^{2n} = (1+1)^{2n} = {2n \choose 0} + {2n \choose 1} + \ldots + {2n \choose 2n} \geq {2n \choose n}.$$
But
$${2n \choose n} = \frac{(2n)!}{(n!)^2} \geq \displaystyle\prod_{n < p \leq 2n} p,$$
since ${2n \choose n}$ is an integer and no prime greater than $n$ can divide $n!$.
(I have no trouble following the rest of the proof, so I omit it.)
My question is how does that last inequality follow from the fact that no such $p$ divides $(n!)$? I don't see how it follows directly, so I have tried to approach it using induction which led me to something like this:
Assuming the inductive hypothesis $\frac{(2k)!}{(k!)^2} \geq \displaystyle\prod_{k < p \leq 2k} p$, I can get
$$\frac{(2(k+1))!}{((k+1)!)^2} \geq \frac{(2k+2)(2k+1)}{(k+1)^2} \displaystyle\prod_{k < p \leq 2k} p.$$
To complete the induction, I'd want to get the statement
$$\frac{(2(k+1))!}{((k+1)!)^2} \geq \displaystyle\prod_{k+1 < p \leq 2(k+1)} p.$$
This makes me think either that induction is not the way to proceed or the step in question is a direct "obvious" step that I am simply not seeing.
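(For what it's worth, the inequality itself is easy to confirm numerically for small $n$ - a naive check, just to rule out a misprint:)

```python
from math import comb

def is_prime(m):
    # Naive trial division; fine for small ranges
    return m > 1 and all(m % k for k in range(2, int(m**0.5) + 1))

def prime_product(a, b):
    """Product of all primes p with a < p <= b."""
    prod = 1
    for p in range(a + 1, b + 1):
        if is_prime(p):
            prod *= p
    return prod

# C(2n, n) >= prod_{n < p <= 2n} p, checked for small n
ok = all(comb(2 * n, n) >= prime_product(n, 2 * n) for n in range(1, 201))
```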
|
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
|
I'm doing some classical field theory exercises with the Lagrangian $$\mathscr{L} = -\frac{1}{4}F_{\mu \nu}F^{\mu \nu}$$ where $F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$. To find the conjugate momenta $\pi^\mu_{\ \ \ \nu} = \partial \mathscr{L} / \partial(\partial_\mu A^\nu)$, I can use two methods.
First method: directly apply this to $\mathscr{L}$. We get a factor of $2$ since there are two $F$'s, and another factor of $2$ since each $F$ contains two $\partial_\mu A_\nu$ terms, giving $$\pi^\mu_{\ \ \ \nu} = -F^\mu_{\ \ \ \nu}.$$
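Explicitly, using $\partial(\partial_\alpha A_\beta)/\partial(\partial_\mu A^\nu) = \delta^\mu_\alpha\, g_{\beta\nu}$, the two factors of $2$ come from the symmetry of $F_{\alpha\beta}F^{\alpha\beta}$ and the antisymmetry of $F$:

$$\pi^\mu_{\ \ \ \nu} = \frac{\partial}{\partial(\partial_\mu A^\nu)}\left(-\tfrac{1}{4}F_{\alpha\beta}F^{\alpha\beta}\right) = -\tfrac{1}{2}F^{\alpha\beta}\frac{\partial F_{\alpha\beta}}{\partial(\partial_\mu A^\nu)} = -\tfrac{1}{2}\left(F^\mu_{\ \ \ \nu} - F_\nu^{\ \ \ \mu}\right) = -F^\mu_{\ \ \ \nu}.$$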
Second method: get $\mathscr{L}$ in terms of $A$ by expanding and integrating by parts, yielding $$\mathscr{L} = \frac{1}{2}(\partial_\mu A^\mu)^2 - \frac{1}{2}(\partial_\mu A^\nu)^2.$$ Differentiating this gets factors of $2$ and gives $$\pi^\mu_{\ \ \ \nu} = \partial_\rho A^\rho \delta^\mu_\nu - \partial^\mu A_\nu.$$
These two answers are different! (They do give the same equations of motion, at least.) I guess that means doing the integration by parts changed the canonical momenta.
Is this something I should be worried about? In particular, I have another exercise that wants me to show that one of the canonical momenta vanishes -- this isn't true for the ones I get from the second method! Plus, my stress-energy tensor is changed too. When a problem asks for "the" canonical momenta, am I forbidden from integrating by parts?
|
You could always break the question into two perpendicular directions
Your velocity is subtracted from the horizontal component of your friend's velocity; the vertical component remains the same. The resultant of the vertical and the new horizontal velocity gives the final answer for how you see your friend.
Method 1
Velocity of friend in horizontal direction will be $(v_2 \cos A)$
Velocity of friend in vertical direction will be $(v_2 \sin A)$
Horizontal Direction: $ (v_2 \cos A -v_1) $ is the velocity of your friend in the horizontal direction relative to you.
Vertical Direction: $ (v_2 \sin A) $ is the velocity of your friend in the vertical direction relative to you. Nothing changes as your entire velocity is in the horizontal direction.
Combining the two directions:$ \sqrt{(v_2 \cos A -v_1)^2 + (v_2 \sin A)^2} $ is the velocity of your friend relative to you.
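The two-step computation above can be sketched in code (a minimal illustration; the function name and the sample numbers are my own):

```python
import math

def relative_speed(v1, v2, angle_deg):
    """Speed of the friend (moving at v2 at angle A above the horizontal)
    as seen by an observer moving horizontally at v1."""
    a = math.radians(angle_deg)
    rel_x = v2 * math.cos(a) - v1   # horizontal component relative to you
    rel_y = v2 * math.sin(a)        # vertical component is unchanged
    return math.hypot(rel_x, rel_y)

# Example: friend at 10 m/s at 60 degrees, you at 4 m/s horizontally
print(relative_speed(4.0, 10.0, 60.0))  # sqrt((5-4)^2 + (10 sin 60)^2) ≈ 8.718
```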
Method 2
Take the resultant of your friend's velocity vector with the negative of your velocity vector, i.e. (friend's velocity vector) - (your velocity vector).
|
Measurement of the rate of a moving clock is carried out in the following way:
A stationary observer in reference frame $S$ (the observer on Earth) places clock $C_1$ at coordinate $x_1$ of his frame and clock $C_2$ at coordinate $x_2$ of his frame.
Then this observer sends a beam of light from clock $C_1$ towards clock $C_2$. He assumes that the one-way speed of light is $c$ (the Einstein synchrony convention). Since he knows the distance and the speed of light, he synchronizes these clocks so that they show "the same time" in reference frame $S$.
https://en.wikipedia.org/wiki/Einstein_synchronisation
Then this observer can measure the rate of a clock that moves in his reference frame $S$.
Imagine that the moving clock $C'$ (an observer in a spaceship) passes by clock $C_1$ at moment of time $t_1$ first, and by clock $C_2$ at moment $t_2$ somewhat later. At these moments, the readings of the moving clock and of the corresponding fixed clock of reference frame $S$ next to it are compared.
Let the moving clock measure the time interval $\tau _ {0}$ during the movement from the point $x_ {1}$ to the point $x_ {2}$, and let the clocks $C_1$ and $C_2$ of the fixed, or "rest", frame $S$ measure the time interval $\tau$. This way,
$$\tau '=\tau _{0} =t'_{2} -t'_{1},$$
$$\tau =t_{2} -t_{1} \quad (1)$$
But according to the inverse Lorentz transformations we have
$$t_{2} -t_{1} ={(t'_{2} -t'_{1} )+{v\over c^{2} } (x'_{2} -x'_{1} )\over \sqrt{1-v^{2} /c^{2} } } \quad (2)$$
Substituting (1) into (2) and noting that the moving clock is always at the same point in the moving reference frame $S'$, that is,
$$x'_{1} =x'_{2} \quad (3)$$
We obtain
$$\tau ={\tau _{0} \over \sqrt{1-v^{2} /c^{2} } } ,\qquad (\tau_{0} =\tau ') \quad (4) $$
This formula means that the time interval measured by the fixed clocks is greater than the time interval measured by the single moving clock. Time in reference frame S is running $\gamma$ times faster from the point of view of moving clock $C'$. This means that the moving clock lags behind the fixed ones, that is, it slows down.
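Equation (4) is easy to check numerically; here is a small sketch (the function name and the 0.8c example are my own):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def rest_frame_interval(tau0, v):
    """Eq. (4): interval measured by the synchronized clocks C1, C2 of frame S
    when the single moving clock C' records the proper interval tau0."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * tau0

# A clock moving at 0.8c that records 3 s is measured to take 5 s in frame S
print(rest_frame_interval(3.0, 0.8 * C))  # ≈ 5.0
```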
The animation below depicts the rest frame (a row of synchronized clocks) and the moving clock (single clock).
Chapter on time dilation: http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapter039.htm
NOTE
Proper time is the time on your wristwatch. However, if you are stationary in the reference frame $S$, all clocks in this reference frame (billions of clocks, let's say) show the same time (according to SR). These clocks do not move relative to you.
By means of these clocks you can record time and spatial coordinate of an event (when and where it takes place).
Time is the same in the whole reference frame, or your rest frame.
An observer in SR never admits that he moves. Every observer is at rest in his own reference frame and all other stuff (clocks, rods, etc.) moves around. Observers in SR each have their own frame: Tom has his own, Ben has his own, Herb has his own, and so on. It is never mutual.
Thus, an observer in the spaceship does the same trick as the observer on Earth. He assumes that he is at rest and that the Earth moves. He puts out two spatially separated clocks and measures time intervals the same way as the Earthman. That is shown in the picture below.
I don't know what to say if these proper times are the same. Maybe they are rather personal.
Good to note: as soon as an observer assumes that he is not at rest but in motion in a certain reference frame, he will find that all clocks in that frame tick faster than his own and that its measuring rods are longer than his own.
|
In a case of free Dirac field we have
$$ \hat {H} = \int \epsilon_{\mathbf p}\left( \hat {a}^{+}_{s}(\mathbf p )\hat {a}_{s}(\mathbf p ) - \hat {b}_{s}(\mathbf p )\hat {b}_{s}^{+}(\mathbf p ) \right)d^{3}\mathbf p, $$
$$ \hat {\mathbf P} = \int \mathbf p \left( \hat {a}^{+}_{s}(\mathbf p )\hat {a}_{s}(\mathbf p ) - \hat {b}_{s}(\mathbf p )\hat {b}_{s}^{+}(\mathbf p ) \right)d^{3}\mathbf p, $$
$$ \hat {Q} = \int \left( \hat {a}^{+}_{s}(\mathbf p )\hat{a}_{s}(\mathbf p ) + \hat{b}_{s}(\mathbf p )\hat{b}^{+}_{s}( \mathbf p )\right)d^{3}\mathbf p. $$
So, if the operator $\hat {b}^{+}$ acts on an energy eigenstate, it decreases the energy. This is commonly described by saying that it creates an antiparticle with negative energy, because $\hat {b}^{+}$ is interpreted as a creation operator (only by analogy with the scalar field, if I understand correctly). The solution to this problem is to postulate anticommutation relations between the operators (in addition, the total energy of the field then becomes a positive-definite quantity).
But why don't we call $\hat {b}$ the creation operator, and $\hat {b}^{+}$ the destruction operator? Then, requiring the positivity condition for the integrand (if that is possible), we'd get a physically correct result without postulating anticommutation relations.
Where did I make the mistake?
|
We throw the word around so much without knowing what it actually means. Why are the physics based definition and the general definition of beats so different?
Colloquial beats and physics based definition of beats are not that different from each other!
In colloquial terms, beats are rhythmic repetition of sound.
In physics' terms, beats are the rhythmic rise and fall of the amplitude of the resultant wave when two waves of very similar frequencies interfere with each other. This follows from the sine-addition formula: $\sin \omega_1 t + \sin \omega_2 t = 2 \sin \frac{(\omega_1 + \omega_2)t}{2} \cos \frac{(\omega_1 - \omega_2)t}{2}$.
This shows that the amplitude of the resultant wave varies as $\cos \frac{(\omega_1 - \omega_2)t}{2}$. Since the loudness peaks twice per envelope cycle, the beat frequency is $\frac{\omega_1 - \omega_2}{2 \pi} = f_1 - f_2$.
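A quick numerical check of the sum-to-product identity; note the perceived beat rate is $f_1 - f_2$ because the envelope $|\cos|$ peaks twice per cycle (the 440 Hz and 442 Hz frequencies are my own example):

```python
import math

f1, f2 = 442.0, 440.0                      # two close frequencies in Hz
w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2

def superposition(t):
    return math.sin(w1 * t) + math.sin(w2 * t)

def product_form(t):
    # 2 sin((w1 + w2)/2 t) cos((w1 - w2)/2 t)
    return 2 * math.sin((w1 + w2) / 2 * t) * math.cos((w1 - w2) / 2 * t)

# The sum and product forms agree at every instant
for t in (0.0, 0.001, 0.0137, 0.5, 1.0):
    assert math.isclose(superposition(t), product_form(t), abs_tol=1e-9)

# The audible beat frequency is f1 - f2 = 2 Hz
print((w1 - w2) / (2 * math.pi))  # ≈ 2.0
```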
Read more about Physics based definition of beats at: https://byjus.com/physics/beats/
|
Bezout rings of stable rank 1.5 and the decomposition of a complete linear group into its multiple subgroups
Abstract
A ring $R$ is called a ring of stable rank 1.5 if, for any triple $a, b, c \in R, c \not = 0$, such that $aR + bR + cR = R$, there exists $r \in R$ such that $(a + br)R + cR = R$. It is proved that a commutative Bezout domain has stable rank 1.5 if and only if every invertible matrix $A$ can be represented in the form $A = HLU$, where $L, U$ are elements of the groups of lower and upper unitriangular matrices (triangular matrices with 1 on the diagonal) and the matrix $H$ belongs to the group $$\bf{G} \Phi = \{ H \in \mathrm{G}\mathrm{L}_n(R) \mid \exists H_1 \in \mathrm{G}\mathrm{L}_n(R) : H\Phi = \Phi H_1\},$$ where $\Phi = \mathrm{d}\mathrm{i}\mathrm{a}\mathrm{g} (\varphi_1, \varphi_2, \ldots, \varphi_n)$, $\varphi_1 \mid \varphi_2 \mid \ldots \mid \varphi_n$, $\varphi_n \not = 0$.
Citation Example: Shchedrik V. P. Bezout rings of stable rank 1.5 and the decomposition of a complete linear group
into its multiple subgroups // Ukr. Mat. Zh. - 2017. - 69, № 1. - pp. 113-120.
|
> To prove that \\(r(x,y) = x \wedge y\\) it's therefore enough to show
>
> $$ a \le_A x \wedge y \textrm{ if and only if } a \le_A x \textrm{ and } a \le_A y. $$
>
> **MD Puzzle 2'.** Can someone show this?
Let's look directly at the definition of \\(\wedge\\) from Fong and Spivak, pg. 17:
> Definition 1.60. Let \\((P, \leq)\\) be a preorder, and let \\(A \subseteq P\\) be a subset. We say that an element
> \\(p \in P\\) is the meet of \\(A\\) if
>
> 1. for all \\(a \in A\\), we have \\(p \leq a\\), and
> 2. for all \\(q\\) such that \\(q \leq a\\) for all \\(a \in A\\), we have that \\(q \leq p\\).
>
> We write \\(p = \bigwedge A\\), or \\(p = \bigwedge_{a \in A} a\\). If \\(A\\) just consists of two elements, say \\(A = \\{a, b\\}\\), we
can denote \\(\bigwedge A\\) simply by \\(a \wedge b\\).
So let's assume \\( a \le_A x\\) and \\(a \le_A y\\). We want to show \\(a \le_A x \wedge y \\). By assumption we have \\(\forall z \in \\{x,y\\}. a \leq z\\). Then by (2) in Definition 1.60 we have \\(a \leq \bigwedge \\{x,y\\}\\), which can be rewritten as \\(a \leq x \wedge y\\) according to Spivak and Fong's short hand.
Next let's assume \\(a \le_A x \wedge y \\). We want to show \\( a \le_A x\\) and \\(a \le_A y\\). Our assumption \\(a \le_A x \wedge y \\) is shorthand for \\(a \leq \bigwedge \\{x,y\\}\\). By (1) we have \\(\forall z \in \\{x,y\\}. a \leq z\\). But that's just the same as \\( a \le_A x\\) and \\(a \le_A y\\) as desired.
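The equivalence can also be machine-checked on a small example. Here is a sketch in Python, using the divisibility order on a set of integers as the preorder (the example poset is my own choice, not from the book):

```python
from itertools import product

# A small poset: divisibility on the divisors of 12
P = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0  # a <= b iff a divides b

def meet(x, y):
    """Greatest lower bound per Definition 1.60: a lower bound p of {x, y}
    such that every other lower bound q satisfies q <= p."""
    lower = [p for p in P if leq(p, x) and leq(p, y)]
    for p in lower:
        if all(leq(q, p) for q in lower):
            return p
    return None

# Verify the puzzle's equivalence: a <= x ∧ y  iff  a <= x and a <= y
for a, x, y in product(P, repeat=3):
    assert leq(a, meet(x, y)) == (leq(a, x) and leq(a, y))

print(meet(4, 6))  # for divisibility the meet is the gcd: 2
```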
|
Wikipedia credits this to Maxwell. This derivation can be found in Maxwell's
Treatise on Electricity and Magnetism vol. 2, part 4, ch. 2 (§§502-527). I went through the derivation and found two self cancelling mistakes made by Maxwell. Those mistakes were corrected by J.J. Thomson in his edition of Maxwell's treatise.
The fact that Maxwell made some mistakes and that the final result is unaffected is itself an indication that someone else before Maxwell originally made that derivation and Maxwell simply took this derivation from someone else (although he doesn't mention in his treatise from where he took this derivation).
So was it really Maxwell who originally made this derivation of general force law equation or was it someone else?
EDIT: (@Geremia)
Ampere derived in the 1820s the force between current elements. He assumed that it was along the line joining the two elements. A few scientists after Ampere pointed out that current elements have direction as well, and hence there is no solid reason why the force between current elements should behave like other forces in nature. In 1845, Hermann Grassmann derived another force law (the one taught today in schools). Grassmann's force law doesn't obey Newton's third law even in the weak form.
In the year 1873, Maxwell published his treatise. In that treatise, there is a derivation (Vol 2, Article 502-527) of general force law equation. The general force law equation says there can be infinite valid force laws as far as source circuit is closed. The general force law equation is:
$$d^{2}\vec{F}=kII'dsds'\left[\left(\frac {1}{r^{2}}\left({\frac {\partial r}{\partial s}}{\frac {\partial r}{\partial s'}}-2r{\frac {\partial ^{2}r}{\partial s\partial s'}}\right) +r{\frac {\partial ^{2}Q}{\partial s\partial s'}}\right)(\hat{r})-{\frac {\partial Q}{\partial s'}}(\hat{s})+{\frac {\partial Q}{\partial s}}(\hat{s'})\right]$$
where:
$\hat{r}$ is a unit vector pointing from $s$ to $s'$ (field circuit to source circuit)
$Q$ is any function of $r$. But we must be careful when choosing $Q$, because we should give it the same form in the $r$ and $s'$ coordinates.
This general force law equation can be found in article 525 of Maxwell's treatise. Thus I am only asking who originally wrote down this "general force law equation" (and not Ampere's original force law).
Wikipedia calls it Maxwell's 1873 derivation. As explained in my original post, it doesn't seem to me that it was Maxwell who originally wrote it down.
Please tell if it was really Maxwell or someone else?
|
Non-Commutative Ring with Unity and 2 Ideals not necessarily Division Ring
Theorem
Let $\struct {R, +, \circ}$ specifically not be commutative.
Let $\struct {R, +, \circ}$ be such that the only ideals of $\struct {R, +, \circ}$ are $\set {0_R}$ and $R$ itself.
Then it is not necessarily the case that $\struct {R, +, \circ}$ is a division ring.
Proof
Consider the ring $S$ of $2 \times 2$ matrices over the real numbers. $S$ is not a division ring, as for example:
$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$
and so both $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ and $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ are proper zero divisors of $S$.
Let $E_{i j}$ denote the element of $S$ whose entries $e_{a b}$ are given by:
$e_{a b} = \begin{cases} 1 & : a = i, b = j \\ 0 & : \text {otherwise} \end{cases}$
Thus for example:
$E_{1 2} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$
Let $A \in J$ be non-zero, and let $\lambda \ne 0$ be its entry in row $r$, column $s$. Then for all $i, j \in \set {1, 2}$:
$\dfrac 1 \lambda E_{i r} A E_{s j} = E_{i j}$
It follows that:
$\forall i, j \in \set {1, 2}: E_{i j} \in J$
It remains to be shown that $J$ is the whole of $S$.
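The key computation, that $\frac 1 \lambda E_{i r} A E_{s j} = E_{i j}$ recovers every matrix unit from a single non-zero element of the ideal, can be verified numerically (a sketch with a concrete $A$ of my own choosing):

```python
def E(i, j, n=2):
    """Matrix unit: 1 in position (i, j) (0-indexed here), zeros elsewhere."""
    return [[1.0 if (a, b) == (i, j) else 0.0 for b in range(n)] for a in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[a][k] * Y[k][b] for k in range(n)) for b in range(n)] for a in range(n)]

def scale(c, X):
    return [[c * v for v in row] for row in X]

def close(X, Y, tol=1e-12):
    return all(abs(x - y) < tol for rx, ry in zip(X, Y) for x, y in zip(rx, ry))

# Any non-zero A in the ideal J; its (r, s) entry is lam != 0
A = [[0.0, 0.0], [3.0, 5.0]]
r, s, lam = 1, 1, A[1][1]

# (1/lam) E_ir A E_sj equals E_ij, so every matrix unit lies in J
for i in range(2):
    for j in range(2):
        assert close(scale(1.0 / lam, matmul(matmul(E(i, r), A), E(s, j))), E(i, j))

print("all matrix units recovered from A")
```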
|
First of all, the inequality you end up with is wrong: you need either$$a \leq -\frac{2\sqrt{39}}{13} \;\;\;\text{ or }\;\;\; a \geq \frac{2\sqrt{39}}{13}$$
Since the root $$x_2 = -\frac{a}{6} - \sqrt{\frac{13a^2 - 12}{36}} = \frac{1}{6}\left(-a - \sqrt{13a^2 - 12}\right)$$is always smaller than $x_1$ (when both are real, of course), it suffices to find the largest $a$ such that $x_2$ is positive.
Hence we are looking for the largest $a$ such that$$-a - \sqrt{13a^2 - 12} \geq 0.$$
Now, as the root is always positive and minus the root is therefore negative, obviously we'll need $a \leq 0$. Rearrange as$$-a \geq \sqrt{13a^2 - 12}$$and square both sides to find$$a^2 \geq 13a^2 - 12,$$and rearrange again to find $$12 \geq 12a^2$$or hence$$1\geq a^2.$$Since $a$ must be negative, this means that $-1 \leq a \leq 0$.
Therefore, the values of $a$ such that the equation has $2$ (not necessarily distinct) real roots which are both positive, are$$a \in \left[-1,-\frac{2\sqrt{39}}{13}\right]$$hence the largest such $a$ is $-\frac{2\sqrt{39}}{13}$.
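As a numerical sanity check: assuming the original quadratic was $3x^2 + ax + (1 - a^2) = 0$ (my reconstruction; it is one equation reproducing the root formulas above), the boundary value behaves as claimed:

```python
import math

def roots(a, tol=1e-12):
    """Roots x = (-a ± sqrt(13a² - 12)) / 6, matching the formulas above
    (consistent with the assumed quadratic 3x² + ax + (1 - a²) = 0)."""
    disc = 13 * a * a - 12
    if disc < -tol:
        return None  # no real roots
    d = math.sqrt(max(disc, 0.0))
    return (-a + d) / 6, (-a - d) / 6

a_max = -2 * math.sqrt(39) / 13      # claimed largest admissible value
x1, x2 = roots(a_max)
assert x1 >= 0 and x2 >= 0           # both roots non-negative
assert abs(x1 - x2) < 1e-6           # double root: discriminant vanishes here

assert roots(a_max + 1e-6) is None   # slightly larger a: roots turn complex
print(round(x1, 4))                  # ≈ 0.1601 (= sqrt(39)/39)
```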
|
Ok so there's a lot of literature about the nearby cycles functor since it was introduced by Grothendieck and Deligne, but I couldn't find any clear answer to the following natural question:
Problem: Let $X$ be a reduced complex analytic space, $f = (f_1,f_2) : X \to \mathbb{C}^2$ a couple of functions and $K \in D^b_c(A_X)$ a constructible complex. When do we have a natural isomorphism between iterated nearby cycles: $$ \psi_{f_1}\psi_{f_2}(K) \simeq \psi_{f_2}\psi_{f_1}(K)$$
This is well known as part of the hypercube description of perverse sheaves when $f = id$ and $K$ is constructible with respect to the strict normal crossings divisor defined by the coordinates hyperplanes.
In general, I don't think a natural map between the two sides even exists but one might look for something like morphisms$$ \psi_{f_1}\psi_{f_2}(K) \leftarrow \psi_f(K) \to \psi_{f_2}\psi_{f_1}(K).$$
where $\psi_f(K)$ would be some sort of global or simultaneous nearby cycles that would induce the iterated nearby cycles under suitable hypothesis.
Actually I'm more interested in the algebraic case where $K$ is a regular holonomic D-module, but this kind of problem seems to have been studied a lot more by topologists in the spirit of Thom's isotopy lemmas, so I'm trying to understand the Milnor fibration viewpoint. Please correct me if I'm wrong.
Dimension 1 base: In the case $(X,x)$ is a germ of analytic space inside $U \subset \mathbb{C}^N$ and $g: X \to \mathbb{C}$ a single analytic function, the Milnor-Lê fibration theorem states that for $0<\eta \ll \varepsilon \ll 1$ $$ g: \bar{B}(x,\varepsilon) \cap g^{-1}(D^*(g(x),\eta)) \to D^*(g(x),\eta) $$ is a locally trivial fibration over the punctured disc $D^*(g(x),\eta)$. The fiber $F_{g,x} = \bar{B}(x,\varepsilon) \cap g^{-1}(\eta)$ is the local Milnor fiber of $g$ at $x$.
Almost by definition, we have $\psi_g(K)_x = R\Gamma(F_{g,x},K)$.
First question: is the fibration independent of the local embedding $X \subset \mathbb C^N$? This seems to be well known but I've never seen an actual proof. It seems to me it could be proved quite easily if one can replace the ball $\bar{B}(x,\varepsilon)$ by a polydisk as in Lê's "La monodromie n'a pas de point fixe", but I haven't written it down yet.
Dimension > 1 base: Consider $f = (f_1,f_2): X \to \mathbb{C}^2$.
In this case, Milnor's fibration theorem fails in general (classical examples include simple blow-ups or Whitney's umbrella).
But, by iterating the usual one-function construction for $f_i: X \to \mathbb{C}$, one can still define a Milnor fibration $X_{(f_1;f_2),x}(\varepsilon,\eta) \to S_{\eta_1}^1 \times S_{\eta_2}^1$, independent of $0 < \eta_1 \ll \eta_2 \ll \varepsilon$, with fiber $F_{(f_1,f_2),x}$. This is done for example in McCrory and Parusiński's "Complex monodromy and the topology of real algebraic sets". We have $$ \psi_{f_1}\psi_{f_2}(K)_x = R\Gamma(F_{(f_1,f_2),x};K) $$ But this fibration depends on the ordering we chose: $F_{(f_1;f_2),x} \neq F_{(f_2;f_1),x}$.
I expect the problem to disappear with Thom's $a_f$ condition. More precisely, in "Morphismes sans éclatement et cycles évanescent" Sabbah defines a morphism $f:X \to Y$ between stratified analytic spaces as being "sans éclatement" ("without blowup") if
- the stratification on $Y$ satisfies Whitney's conditions;
- for each stratum $Y_\beta \subset Y$, the stratification on $X$ induces a Whitney stratification on $f^{-1}(Y_\beta)$;
- Thom's $a_f$ condition is satisfied.
Let's stratify $\mathbb{C}^2$ by the coordinates hyperplanes and suppose there is a stratification $S$ of $X$ so that $K\in D^b(A_X)$ is $S$-constructible and $f:X\to \mathbb{C}^2$ is without blow-up. Then we have a locally trivial topological fibration $f: B(x,\varepsilon) \cap f^{-1}((\mathbb{C}^*)^2) \to (\mathbb{C}^*)^2$ with stratified fiber $F_{f,x}$.
Question: Am I right in thinking that the above fibration induces the iterated Milnor fibrations so that $$ \psi_{f_1}\psi_{f_2}(K)_x = R\Gamma(F_{f,x}, K) = \psi_{f_2}\psi_{f_1}(K)_x $$
Thanks
|
Complex Down-Conversion Amplitude Loss
This blog illustrates the signal amplitude loss inherent in a traditional complex down-conversion system. (In the literature of signal processing, complex down-conversion is also called "quadrature demodulation.")
The general idea behind complex down-conversion is shown in Figure 1(a). And the traditional hardware block diagram of a complex down-converter is shown in Figure 1(b).
Let's assume the input to our down-conversion system is an analog radio frequency (RF) signal, \(R_F(t)\), defined as:$$ R_F(t) = A \cdot cos(\omega_ct + \phi_c)\tag{1} $$
where \(A\) is the cosine wave's peak amplitude, \(\omega_c = 2\pi f_c\) is the carrier frequency in radians/sec., \(\phi_c\) is the cosine wave's initial phase angle measured in radians, and \(t\) is our time variable measured in seconds.
In Figure 1(b) the subscript LO means 'local oscillator' where \(\omega_{LO} = 2\pi f_{LO}\) is the local oscillator's frequency in radians/sec. and \(\phi_{LO}\) is the oscillator's initial phase angle measured in radians.
Algebraic Form of Output c(t)
To begin determining the amplitude loss of the output \(c(t)\) relative to the \(R_F(t)\) input, we analyze Figure 1(b) as follows:$$ x_i(t) = R_F(t) \cdot cos(\omega_{LO}t + \phi_{LO})\\\ = A \cdot cos(\omega_ct + \phi_c) \cdot cos(\omega_{LO}t + \phi_{LO}).\tag{2} $$
Using the \(cos(\alpha)cos(\beta)\) trigonometric
product identity, we can write Eq. (2) as:$$ x_i(t) = {A \over 2} cos((\omega_c - \omega_{LO})t + \phi_c - \phi_{LO}) \\ + {A \over 2} cos((\omega_c + \omega_{LO})t + \phi_c + \phi_{LO}). \tag{3} $$
Notice that \(x_i(t)\) comprises a low-frequency sinusoid and a high-frequency sinusoid. And both sinusoids have peak amplitudes that are one half the peak amplitude of \(R_F(t)\) in Eq. (1).
In a similar way, we can represent Figure 1(b)'s \(x_q(t)\) as:$$ x_q(t) = R_F(t) \cdot [-sin(\omega_{LO}t + \phi_{LO})] \\ = A \cdot cos(\omega_ct + \phi_c) \cdot [-sin(\omega_{LO}t + \phi_{LO})]. \tag{4} $$
Using the \(cos(\alpha)sin(\beta)\) trigonometric
product identity, and keeping Eq. (4)'s minus sign in mind, we can write Eq. (4) as:$$ x_q(t) = {A \over 2} sin((\omega_c - \omega_{LO})t + \phi_c - \phi_{LO}) \\ - {A \over 2} sin((\omega_c + \omega_{LO})t + \phi_c + \phi_{LO}). \tag{5} $$
Like \(x_i(t)\), \(x_q(t)\) comprises a low-frequency sinusoid and a high-frequency sinusoid where both sinusoids have peak amplitudes equal to one half the \(A\) peak amplitude of \(R_F(t)\) in Eq. (1).
Assuming the lowpass filters in Figure 1(b) have passband gains of one (unity), and they completely eliminate the high-frequency sinusoids in \(x_i(t)\) and \(x_q(t)\), we can describe the filters' outputs as:$$ i(t) = A \cdot cos((\omega_c -\omega_{LO})t + \phi_c -\phi_{LO})/2 \tag{6} $$ and $$ q(t) = A \cdot sin((\omega_c -\omega_{LO})t + \phi_c -\phi_{LO})/2. \tag{7} $$
Both \(i(t)\) and \(q(t)\) are real-valued signals. Again, the above derivation is based on the assumption that the lowpass filters in Figure 1(b) have infinite attenuation in their stopbands!
Complex Down-Conversion Output Notations
In rectangular notation, our down-converter's \(c(t)\) output is represented by:$$ c(t) = i(t) + jq(t) \\ = (A/2)[cos((\omega_c -\omega_{LO})t + \phi_c -\phi_{LO}) \\ + jsin((\omega_c -\omega_{LO})t + \phi_c -\phi_{LO})]. \tag{8} $$
In polar notation, our down-converter's \(c(t)\) output is represented by:$$ c(t) = {A\over 2} e^{j[(\omega_c-\omega_{LO})t + \phi_c - \phi_{LO}]} \tag{9} $$
So our final output is a complex exponential whose magnitude is always \(A/2\), its frequency is \(\omega_c-\omega_{LO}\) radians/sec., and its phase angle (at t = 0) is \(\phi_c-\phi_{LO}\) radians.
• Scenario #1: When \(\omega_{LO} = \omega_c \) and \(\phi_{LO} = \phi_c\), the \(c(t)\) output is a real-valued constant equal to A/2. (\(i(t) = A/2\) and \(q(t) = 0\).)
• Scenario #2: When \(\omega_{LO} = \omega_c\) and \(\phi_{LO} ≠ \phi_c\), the \(c(t)\) output is a complex-valued constant whose magnitude is equal to A/2, and whose phase angle is a constant \(\phi_c-\phi_{LO}\) radians.
• Scenario #3: When \(\omega_{LO} < \omega_c\) the \(c(t)\) output is a complex exponential, whose magnitude is equal to A/2, rotating counterclockwise on the complex plane at a rate of \(\omega_c-\omega_{LO}\) radians/second. (Output signals \(i(t)\) and \(q(t)\) are quadrature-related real-valued sinusoids, having peak amplitudes of A/2, whose frequencies are both \(\omega_c-\omega_{LO}\) radians/second.)
Don't Be Fooled When Estimating Signal Amplitude Loss
It's common, the first time we use software to model our down-conversion process, to plot and examine the amplitudes of discrete versions of the \(R_F(t)\), \(i(t)\), and \(q(t)\) signals. When \(\omega_{LO} < \omega_c\) our time-domain plots will show that both \(i(t)\) and \(q(t)\) have peak amplitudes that are half the peak amplitude of \(R_F(t)\). Seeing that, it's easy to fall into the trap of thinking, "Ah ha. Output amplitude losses by a factor of two. The down-converter's output amplitude loss measured in decibels (dB) must be -6 dB."
That approach is incorrect. The following shows the correct way to determine the dB loss of our down-converter.
Complex Down-Conversion Signal Loss
The dB loss of output \(c(t)\) relative to the input \(R_F(t)\) in our Figure 1(b) down-conversion process is:
dB loss of \(c(t) = \)$$ 10 log_{10}{\left({Power \, of \, c(t)}\over {Power \, of \, R_F(t)}\right)}\tag{10} $$
From Eq. (9)$$ Power \, of \, c(t) = c(t) \cdot c^*(t) \\ = {A \over 2}e^{j[(\omega_c-\omega_{LO})t + \phi_c - \phi_{LO}]} \cdot {A \over 2}e^{-j[(\omega_c-\omega_{LO})t + \phi_c - \phi_{LO}]} \\ = (A/2)^2 \cdot e^{j0} = (A/2)^2 = A^2/4 $$
where '*' means conjugate. And we know that the power of the \(R_F(t)\) input cosine wave is:
Power of \(R_F(t) = A^2/2\).
Then from Eq.(10), our complex down-converter's true signal loss is:
$$ dB \, loss \, of \, c(t) = 10 log_{10} \left({A^2/4}\over {A^2/2} \right) = 10 log_{10} \left(1\over 2 \right) = -3 \, dB. \tag{11} $$
To depict the average signal powers, P, within our down-converter, I present Figure 2. (A dash-line ellipse indicates a dual-path complex signal.) There we see that the average power of the \(c(t)\) output is one half the average power of the \(R_F(t)\) input.
Figure 2 tells us that the complex down-converter's signal power loss of 3 dB is caused by the lowpass filtering and not by the complex frequency translation.
Software Modeling a Down-Conversion System
If you decide to model the Figure 2 down-conversion process in software, know that your \(c(t)\) output magnitude results will not correspond exactly with the above equations. That's due to the difficulty in implementing ideal lowpass filters having infinite attenuation in their stopbands. In addition, real-world filters suffer from "start up transient" behavior until their output sequences reach a steady state time-domain response.
When you model a down-conversion system using software it's common to plot the discrete Fourier transform (DFT) spectra of various signals within the system's signal paths. You do that to ensure that your frequency-translated, or filtered, signals have the expected spectral content.
But comparing the average power of various signals is a bit tricky using spectral data. For example, Figure 3(a) shows the spectral power, measured in dB, of an arbitrary real-valued sinusoidal \(x(n)\) time sequence. (Ten times the log10 of the squared spectral magnitudes.) Figure 3(b) shows the spectral power of an arbitrary \(y(n)\) complex exponential time sequence. Because the spectral peaks are all the same in that figure you might naively assume that the average powers of \(x(n)\) and \(y(n)\) are equal to each other. They are not! (The average power of \(y(n)\) is one half the average power of \(x(n)\).)
Rather than trying to compare signal average powers based on spectral dB data, I suggest you plot your various down-converters' signals in the time-domain. And then examine, and compare, their instantaneous amplitudes or magnitudes.
Of course, the most reliable way to compare the average power of two time signals is to compute each signal's average signal power, over N time samples, using:$$ Average \, power \, of \, x(n) = {1 \over N} \sum_{n=0}^{N-1}\lvert x(n)\rvert^2 \tag{12} $$
where time index n is: n = 0,1,2,...N-1. Equation (12) is valid for both real-valued or complex-valued \(x(n)\) signal sequences. Once you have the average power values of two signals you can determine their power ratio, measured in dB, using Eq. (10).
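Here is how Eq. (12) and Eq. (10) play out in a short simulation. The ideal-filter outputs are generated directly from Eqs. (6) and (7); the sample rate, frequencies, and phases below are arbitrary choices of mine:

```python
import math

def avg_power(x):
    """Eq. (12): average power of a real- or complex-valued sequence."""
    return sum(abs(v) ** 2 for v in x) / len(x)

N, fs = 4096, 1.0e6                  # number of samples, sample rate in Hz
fc, flo = 200.0e3, 190.0e3           # carrier and local-oscillator frequencies
A, phi_c, phi_lo = 1.0, 0.3, 0.1     # amplitude and initial phases

t = [k / fs for k in range(N)]
rf = [A * math.cos(2 * math.pi * fc * tk + phi_c) for tk in t]

# Ideal lowpass-filter outputs per Eqs. (6) and (7), combined as c(t) = i(t) + jq(t)
dw, dphi = 2 * math.pi * (fc - flo), phi_c - phi_lo
c = [complex(A * math.cos(dw * tk + dphi) / 2,
             A * math.sin(dw * tk + dphi) / 2) for tk in t]

p_rf, p_c = avg_power(rf), avg_power(c)       # ≈ A²/2 and exactly A²/4
print(round(10 * math.log10(p_c / p_rf), 1))  # -3.0 dB, per Eq. (11)
```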
Lyons Ranting and Raving
There's one last point I want to make here. More than once I've seen documents graphically depicting complex down-conversion as shown in Figure 4.
It's simply incorrect to show the implementation of the 'j' operator as a hardware multiplication. In addition, it makes no sense to show a hardware adder summing a purely real-valued signal with what appears to be a purely imaginary-valued signal to produce a complex-valued signal.
So, ...if you ever write a document presenting a traditional complex down-conversion hardware block diagram I strongly suggest you use the dual-output Figure 2 rather than Figure 4.
Previous post by Rick Lyons:
A Complex Variable Detective Story – A Disconnect Between Theory and Implementation
Next post by Rick Lyons:
Why Time-Domain Zero Stuffing Produces Multiple Frequency-Domain Spectral Images
May I ask you about the LO equation in Figure 1? Why do you use the negative sine (conjugate of exp(jwt)) rather than the positive? Does it make a difference? I noticed many texts use this notation and it kinda drives me crazy as I'm still a student and hate to see different things LOL!
I apologize, I just saw your comment this morning. You are correct, many times in the literature people show a down-converter's sine oscillator as being 'sin()'. That is badly misleading because it implies the sine oscillator is +sin(). Well, ...using +sin() would result in frequency up-conversion. So if we really mean a down-conversion process we need to show the sine oscillator as -sin().
There seems to be something I am missing conceptually at xi(t) and xq(t) in Figure 2. My hand derivation shows the power of (A^2)/4 only applies to either of the two tones at xi(t) or xq(t). There is a A/2 amplitude for the lower frequency and an A/2 amplitude for the upper frequency at the mixer output. But since xi(t) (or xq(t)) is BEFORE the low-pass filter, shouldn't the power at each net equal to (A^2)/4 + (A^2)/4 = (A^2)/2 (ref. to B.P. Lathi's textbook Modern Digital & Analog Communication Systems (Eq 2.6b)? And thus AFTER the LPF, the power of i(t) (or q(t)) should each be equal to (A^2)/4 since we have taken out the power of the upper frequency instead of (A^2)/8?
Can someone enlighten me please? Thanks so much!
Hello chiwang_shun. I don't have a copy of Lathi's book.
Looking at a single mixer, you wrote: "There is a A/2 amplitude for the lower frequency and an A/2 amplitude for the upper frequency at the mixer output."
To quote Rocky Balboa, "This is very true." Because a 1/2 amplitude loss is equal to a 1/4 factor in power loss (-6 dB), we can say, "The upper sideband power and the lower sideband power are each 1/4 the total power of the RF input signal. The sum of the upper sideband power plus the lower sideband power (the total power of the mixer output) is 1/2 the total power of the RF input signal."
You wrote, "...power at each net equal to... ." I do not know what your word "net" means.
Chiwang_shun, I hope I have answered you question. If not, please let me know.
|
A mapping $\varphi:D\to D'$ possesses Luzin's $\mathcal N$-property if the image of every set of measure zero is a set of measure zero. A mapping $\varphi$ possesses Luzin's $\mathcal N{}^{-1}$-property if the preimage of every set of measure zero is a set of measure zero. Briefly\begin{equation*}\mathcal N\text{-property:}\quad \Sigma\subset D, |\Sigma| = 0 \Rightarrow |\varphi(\Sigma)|=0,\end{equation*}\begin{equation*}\mathcal N{}^{-1}\text{-property:} \quad M \subset D, |M| = 0 \Rightarrow |\varphi^{-1}(M)|=0.\end{equation*}
$\mathcal N$-property of a function $f$ on an interval $[a,b]$
Let $f:[a,b]\to \mathbb R$ be a measurable function. In this case the definition is following:
For any set $E\subset[a,b]$ of measure zero ($|E|=0$), the image of this set, $f(E)$, also has measure zero. It was introduced by N.N. Luzin in 1915 (see [1]). The following assertions hold.
- A function $f\not\equiv \operatorname{const}$ on $[a,b]$ such that $f'(x)=0$ almost-everywhere on $[a,b]$ (see for example the Cantor ternary function) does not have the Luzin $\mathcal N$-property.
- If $f$ does not have the Luzin $\mathcal N$-property, then on $[a,b]$ there is a perfect set $P$ of measure zero such that $|f(P)|>0$.
- An absolutely continuous function has the Luzin $\mathcal N$-property.
- If $f$ has the Luzin $\mathcal N$-property and has bounded variation on $[a,b]$ (as well as being continuous on $[a,b]$), then $f$ is absolutely continuous on $[a,b]$ (the Banach-Zaretskii theorem).
- If $f$ does not decrease on $[a,b]$ and $f'$ is finite on $[a,b]$, then $f$ has the Luzin $\mathcal N$-property.
- In order that $f(E)$ be measurable for every measurable set $E\subset[a,b]$ it is necessary and sufficient that $f$ have the Luzin $\mathcal N$-property on $[a,b]$.
- A function $f$ that has the Luzin $\mathcal N$-property has a derivative $f'$ on a set for which any non-empty portion of it has positive measure.
- For any perfect nowhere-dense set $P\subset[a,b]$ there is a function $f$ having the Luzin $\mathcal N$-property on $[a,b]$ and such that $f'$ does not exist at any point of $P$.
The concept of Luzin's $\mathcal N$-property can be generalized to functions of several variables and functions of a more general nature, defined on measure spaces.
References
[1] N.N. Luzin, "The integral and trigonometric series" , Moscow-Leningrad (1915) (In Russian) (Thesis; also: Collected Works, Vol. 1, Moscow, 1953, pp. 48–212) Comments
There is another property intimately related to the Luzin $\mathcal N$-property. A function $f$ continuous on an interval $[a,b]$ has the Banach $S$-property if for all Lebesgue-measurable sets $E \subset [a,b]$ and all $\varepsilon > 0$ there is a $\delta > 0$ such that $|E| < \delta$ implies $|f(E)| < \varepsilon$.
This is clearly stronger than the $\mathcal N$-property. S. Banach proved that a function $f$ has the $S$-property (respectively, the $\mathcal N$-property) if and only if (respectively, only if; see below for the missing "if") the inverse image $f^{-1}(y)$ is finite (respectively, is at most countable) for almost-all $y$ in $f([a,b])$. For classical results on the $\mathcal N$- and $S$-properties, see [a3].
Recently a powerful extension of these results has been given by G. Mokobodzki (cf. [a1], [a2]), allowing one to prove deep results in potential theory. Let $X$ and $Y$ be two compact metrizable spaces, $Y$ being equipped with a probability measure $\lambda$. Let $F$ be a Borel subset of $X\times Y$ and, for any Borel subset $B$ of $Y$, define the subset $F(B)$ of $X$ by $F(B)=\{x\in X : \exists y\in B,\ (x,y)\in F\}$ (if $F$ is the graph of a mapping $f:Y\to X$, then $F(B)=f(B)$). The set $F$ is said to have the property (N) (respectively, the property (S)) if there exists a measure $\mu$ on $X$ (here $\mu$ depending on $F$) such that for all $B$,
$$\lambda(B)=0 \implies \mu(F(B))=0$$
(respectively, for all $\varepsilon>0$ there is a $\delta>0$ such that for all $B$ one has
$$\lambda(B)<\delta \implies \mu(F(B))<\varepsilon).$$
Now $F$ has the property (N) (respectively, the property (S)) if and only if the section $F_x=\{y\in Y : (x,y)\in F\}$ of $F$ is at most countable (respectively, is finite) for almost-all $x$.
References
[a1] C. Dellacherie, D. Feyel, G. Mokobodzki, "Intégrales de capacités fortement sous-additives", Sém. Probab. Strasbourg XVI, Lect. Notes in Math., 920, Springer (1982) pp. 8–28
[a2] A. Louveau, "Minceur et continuité séquentielle des sous-mesures analytiques fortement sous-additives", Sém. Initiation à l'Analyse, 66, Univ. P. et M. Curie (1983–1984)
[a3] S. Saks, "Theory of the integral", Hafner (1952) (Translated from French)
[a4] E. Hewitt, K.R. Stromberg, "Real and abstract analysis", Springer (1965)
|
Step 1: Note down the given values
Step 2: Set up the formula for arc length.
NOTE: The formula is arc length $= 2 \pi r \left(\frac{\theta}{360}\right)$,
where $r$ equals the radius of the circle and $\theta$ equals the measure of the arc's central angle, in degrees.
or
Arc length $= r\theta$, where $\theta$ is in radians.
Step 3: Plug the length of the circle’s radius into the formula.
Step 4: Plug the value of the arc’s central angle into the formula.
Step 5: Simplify the equation to find the arc length
NOTE: Use multiplication and division to simplify the equation.
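The steps above translate directly into a short function; the radius and angle values below are made-up examples, and both the degree and radian forms of the formula are shown:

```python
import math

def arc_length_degrees(radius, theta_deg):
    """Arc length for a central angle given in degrees: 2*pi*r*(theta/360)."""
    return 2 * math.pi * radius * (theta_deg / 360)

def arc_length_radians(radius, theta_rad):
    """Arc length for a central angle given in radians: r*theta."""
    return radius * theta_rad

# Example: a 90-degree arc on a circle of radius 4 has length 2*pi
L = arc_length_degrees(4, 90)
assert math.isclose(L, arc_length_radians(4, math.pi / 2))
```

The two functions agree because $\theta$ radians corresponds to $\theta \cdot \frac{180}{\pi}$ degrees.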
|
Pullback . category theory Collection
context $F:({a\rightarrow z\leftarrow b})\longrightarrow{\bf C}$ definition $\langle Fa\times_{Fz} Fb, \pi\rangle := \mathrm{lim}\,F$
Here we consider a functor $F$ from the category ${a\rightarrow z\leftarrow b}$, consisting of three objects and two non-identity arrows $f_a$ and $f_b$, to a category ${\bf C}$.
Universal property
For readability, let's write $A\equiv{Fa}, B\equiv{Fb}, Z\equiv{Fz}, \alpha\equiv{f_a}$ and $\beta\equiv{f_b}$.
(In the picture we have $X\equiv{Fa}, Y\equiv{Fb}, Z\equiv{Fz}, f\equiv{f_a}, g\equiv{f_b}$ and the pullback object is $P\equiv X\times_Z Y$.)
Consider two arrows $\gamma:{\bf C}[X,A]$ and $\delta:{\bf C}[X,B]$, which fulfill the structural condition $\alpha\circ\gamma=\beta\circ\delta$. I.e. when forwarded to $Z$ via $\alpha$, resp. $\beta$, they collapse into a single arrow.
Such two arrows $\gamma,\delta$ can be partially glued together, in the sense that they can both be written as a unique arrow $u$ (with codomain $A\times_Z B$) followed by the projections $\pi_a,\pi_b$.
Discussion
The pullback object $A\times_Z B$ is the full solution to the equation posed by $\alpha$ and $\beta$. In ${\bf{Set}}$, it's literally the set of pairs $\langle x,y\rangle\in A\times_Z B\subseteq A\times B$, for which $\alpha(x)=\beta(y)$.
When the category contains a terminal object $1$ (where $\alpha$ and $\beta$ are trivial arrows and form a trivial condition), we have $A\times_1 B\cong A\times B$.
The universal property says that all other solutions embed in this object, and this is what is meant by "full solution".

Special cases

If $\pi_a$ is an iso, then $A\times_Z B\cong A$. As $A$ is already the pullback, it alone fully determines the "full solution". If moreover $\pi_b$ is an iso too, we can consider the equivalent pullback with $\pi_b=\pi_a=1_A$. The universal property now says that the arrows $\gamma,\delta$ can be wholly glued together: up to iso, $\alpha\circ\gamma=\beta\circ\delta\implies\gamma=\delta$.

In ${\bf{Set}}$, if $\alpha=\beta$, the pullback definition says that its elements $\langle x,y\rangle$ fulfill $\alpha(x)=\alpha(y)$, i.e. here the pullback object is the full collection of pairs of terms which give the same $\alpha$ value. If moreover $\pi_a$ is iso, any $x$ determines an $\langle x,y\rangle$ and hence a $y$, and the universal property translates to $\alpha(x)=\alpha(y)\implies x=y$. This is just the definition of an injection.

Back to a general category. If the pullback of $\alpha$ along itself ($\alpha=\beta$) is such that a projection $\pi_a$ is iso, we call $\alpha$ a monomorphism. The associated condition reads $\alpha\circ\gamma=\alpha\circ\delta\implies\gamma=\delta$.
(In the picture, $f$ is $\alpha$ and $g,f$ are our $\gamma, \delta$.)
Examples
A finite pullback in ${\bf{Set}}$ that I just made up:
Generally: If $F(f_b)$ is the inclusion of a subset $Fb\subseteq{Fz}$ in ${Fz}$, the pullback is iso to (i.e. in bijection with) the preimage $F(f_a)^{-1}(Fb)$. Further, if $F(f_a)$ is an inclusion too, this is in bijection with $Fa\cap{Fb}$. If the subset-interpretation doesn't apply, the function $F(f_b)$ from $Fb$ to $Fz$ should be viewed as defining a fibre bundle over $Fz$, and the pullback gives a fibre bundle from $Fa\times_{Fz} Fb$ to $Fa$. A concrete example: Let
$Fa=\{2,4,6,8\},\ Fb=\{10,20\},\ Fz=\{77,88,99\}$
$F(f_a)(2)=77,\ F(f_a)(4)=77,\ F(f_a)(6)=88,\ F(f_a)(8)=99$
$F(f_b)(10)=88,\ F(f_b)(20)=77$
Then
$Fa\times_{Fz} Fb=\{\langle 2,20\rangle,\langle 4,20\rangle,\langle 6,10\rangle\}$
and $\pi$ are projections like for the product.
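The concrete example can be checked mechanically: in ${\bf Set}$ the pullback is just the subset of the product on which the two functions agree. A small Python sketch, reusing the data above:

```python
# Pullback in Set: pairs (x, y) in Fa x Fb with F(f_a)(x) == F(f_b)(y).
Fa = {2, 4, 6, 8}
Fb = {10, 20}
f_a = {2: 77, 4: 77, 6: 88, 8: 99}   # F(f_a): Fa -> Fz
f_b = {10: 88, 20: 77}               # F(f_b): Fb -> Fz

pullback = {(x, y) for x in Fa for y in Fb if f_a[x] == f_b[y]}
# pullback == {(2, 20), (4, 20), (6, 10)}

# The projections pi_a, pi_b are those of the product, restricted to the pullback:
pi_a = {p: p[0] for p in pullback}
pi_b = {p: p[1] for p in pullback}
```

Note that $8$ appears in no pair, since no element of $Fb$ maps to $99$.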
Two more prominent examples:
If $M$ is a manifold, $p:B\to M$ is a fibre bundle over $M$ and $f:X\to M$ is an embedding of another manifold $X$ into $M$, then the pullback object is a fibre bundle over $X$, namely the differential geometric pullback. In $\bf{Set}$, if the “central object” is $\Omega=\{0,1\}$, the right function comes from a singleton $1$ and if the left function $\chi:X\to \Omega$ is a characteristic function, then a pullback object is a subset of $X$: It's defined as the collection of arguments where the characteristic function agrees that the value is $1$. This works also for more general “fuzzy” $\Omega$. These $\Omega$ are the “objects of truth values” and are called subobject classifiers.
Digression: The exponential object $B^A$ is a prominent example of an object which isn't a limit, but it can be specified via a universal morphism construction. For sets or types, that's the function space $A\to B$ and for propositions it's the implication. If a category has products, exponential objects and a terminal object, then it's called Cartesian closed. A Cartesian closed category with a subobject classifier is a topos. We see now how a topos is a general kind of set theory, and simultaneously defines an internal logic.
|
Yes, the two are intimately related. One way, as in QMechanic's answer, is via Wick rotations, but in general there is a lot more freedom once you allow integration contours to go over into the complex plane. In my area, strong field physics, the use of complex time to understand tunnelling problems is everyday bread and butter for many people, and it is the only way to use semiclassical models for tunnelling situations.
Tunnelling ionization is what happens when you hit an atom with a very strong laser field of very low frequency. The frequency $\omega$ of the field needs to be much smaller than the ionization potential $I_p=\tfrac12\kappa^2$ of the atom, which means that you need many photons to ionize it, but for such slowly-varying fields the physical picture is somewhat different. If the (so-called) Keldysh parameter$$\gamma=\frac{\kappa \omega}{E_0}$$(where $E_0$ is the peak electric field of the laser, and atomic units are assumed) is smaller than one, then it is more useful to think in terms of a quasistatic picture. That means that you consider the dipole potential of the laser, $V_L=-\mathbf E·\mathbf r$, as a fixed potential which is added to the atomic potential, and which varies slowly in time.
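As a quick numerical illustration of the regime check (the field parameters below are made-up but typical of near-infrared strong-field experiments, in atomic units):

```python
import math

def keldysh(Ip, omega, E0):
    """Keldysh parameter gamma = kappa*omega/E0 with kappa = sqrt(2*Ip),
    all quantities in atomic units."""
    kappa = math.sqrt(2 * Ip)
    return kappa * omega / E0

# Hydrogen-like ground state (Ip = 0.5 a.u., so kappa = 1) in a strong,
# low-frequency field; the omega and E0 values here are illustrative.
gamma = keldysh(Ip=0.5, omega=0.057, E0=0.1)
# gamma = 0.57 < 1: the quasistatic/tunnelling picture applies
```

For $\gamma \gtrsim 1$ one is instead in the multiphoton regime, where the quasistatic picture breaks down.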
At the peak of the field, this added linear potential bends the total potential surface deep enough to make a barrier which atomic electrons (particularly, the ones on the highest occupied atomic orbital) can tunnel through.
Tunnelling rates depend very sensitively on the height and width of the barrier, which essentially means that the field needs to be very strong (i.e. on the order of $0.01\:\text{a.u.} \approx 5\times 10^9 \text V/\text m$) for this to happen.
The first to realize this were Keldysh,
L. V. Keldysh, Ionization in the field of a strong electromagnetic wave.
Sov. Phys. JETP 20 no. 5, 1307-1314 (1965) (pdf) [ Zh. Eksp. Teor. Fiz. 47, 1945 (1964)].
and the guys now known as PPT,
A.M. Perelomov, V.S. Popov, M.V. Terent'ev, Ionization of Atoms in an Alternating Electric Field.
Sov. Phys. JETP 20 no. 5, 924-934 (1966) (pdf) [ Zh. Eksp. Teor. Fiz. 50, 1393 (1966)].
Their work doesn't make for particularly easy reading, but it's fairly along the semiclassical WKB lines you point out in your question.
More recently, though, this understanding has crystallized as the picture known as the
quantum orbit view of strong-field phenomena. A good review is
P. Salières
et al., Feynman's Path-Integral Approach for Intense-Laser-Atom Interactions. Science 292 no. 5518, 902-905 (2001).
and I'll try and give a taster for what the overall feel of the field is.
Consider, then, an atom that's initially in its ground state $|g⟩$ with energy $E_g=-I_p=-\tfrac12\kappa^2$, which is subjected to an oscillating potential $V_L=-E_0z\cos(\omega t)$, which is slow (so $\hbar\omega\ll I_p$) and strong enough to be in the tunnelling regime (so $\gamma=\kappa\omega/E_0<1$). In this situation one can usually ignore multi-electron effects and work in the Single Active Electron approximation, at least as a first treatment.
The problem, then, is to solve the time-dependent Schrödinger equation$$i\frac{\partial}{\partial t}|\psi(t)⟩=\left[\frac{\mathbf p^2}{2m}+V_a(\mathbf r) +V_L\right]|\psi(t)⟩$$under the initial condition that $|\psi⟩=|g⟩$ before the pulse starts. This is unfortunately impossible to do analytically in its full form, but one can separate the two pieces of the hamiltonian to get a pretty workable solution. This is known as the Strong Field Approximation, and it essentially means neglecting the effect of the ion's attraction once the electron has been ionized, and the influence of deeper orbitals is neglected. It means that you have two fairly good approximate solutions depending on whether your electron is still in the ground state,$$|\psi(t)⟩=e^{-iE_g t}|g⟩,\quad\text{with}\quad i\frac{\partial}{\partial t}|\psi(t)⟩=\left[\frac{\mathbf p^2}{2m}+V_a(\mathbf r)\right]|\psi(t)⟩,$$or has been ionized into a Volkov state,$$|\psi(t)⟩=e^{\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}|\mathbf p+\mathbf A(t)⟩,\quad\text{with}\quad i\frac{\partial}{\partial t}|\psi(t)⟩=\left[\frac{\mathbf p^2}{2m}+V_L\right]|\psi(t)⟩,$$where $\mathbf A$ is the vector potential of the field and $|\mathbf p +\mathbf A(t)⟩$ is a plane wave with kinetic momentum $\mathbf k=\mathbf p+\mathbf A(t)$. I will calculate the ionization amplitude to an asymptotic drift momentum $\mathbf p$, so the quantity of interest is $⟨\mathbf p |\psi(\infty)⟩$.
In general, the electron's state will be some sort of superposition of these two solutions, so that you can write$$|\psi(t)⟩=a(t)e^{-iE_g t}|g⟩+\int\text d\mathbf p \,b(\mathbf p,t) e^{\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}|\mathbf p+\mathbf A(t)⟩.$$You then substitute this into the TDSE, and cancel out the obvious terms, which leaves you with the equivalent form$$\left\{\begin{align}i\frac{d}{dt}a(t)&=a⟨g|V_L|g⟩+\int\text d\mathbf p \,b(\mathbf p,t) e^{+iE_g t}e^{\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨g|V_a|\mathbf p+\mathbf A(t)⟩\\i\frac{\partial}{\partial t}b(\mathbf p,t) & =a(t)e^{-iE_g t}e^{-\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t)|V_L|g⟩\\&\qquad +\int\text d\mathbf p'\,b(\mathbf p',t) e^{-\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}e^{\frac i2 \int_t^\infty (\mathbf p'+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t)|V_a|\mathbf p'+\mathbf A(t)⟩.\end{align}\right.$$This can be further simplified by neglecting continuum-continuum transitions (i.e. the integral on the second equation) and ground state depletion (i.e. setting $a(t)=1$ in the second equation). (Both of these can be lifted, but it just makes everything uglier.) If you do that, the TDSE finally becomes something doable,$$i\frac{\partial}{\partial t}b(\mathbf p,t) =e^{-iE_g t}e^{-\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t)|V_L|g⟩$$and you can integrate it to get$$b(\mathbf p,\infty)=⟨\mathbf p|\psi(\infty)⟩ =-i\int_{-\infty}^\infty\text dte^{iI_p t}e^{+\frac i2 \int^t_\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t)|V_L(t)|g⟩$$Now, this integral is perfectly fine and it can be done numerically if needed, but doing that is pretty painful because it is highly oscillatory. A typical example looks like this:
(Reasonable parameters are $E_0=0.05$, $\omega=0.055$ and $I_p=0.5$ in atomic units. This is for $p_{||}=1$ over 3/2 of a laser cycle.)
This is bad because you need very high accuracy on each of the positive and negative lobes of the integrand to get only mediocre accuracy on their difference, so even in this simplified version the problem is numerically tough. This oscillatory behaviour is driven by the fact that the $e^{iI_p t}$ term oscillates much faster than the laser-cycle timescales (~$2\pi/\omega$) at which the integration takes place.
The way to get out of this is to use the saddle point method, which is where complex times come in. The idea is to deform the integration contour into the complex plane to look for something which is numerically nicer, by turning the oscillating imaginary exponential into nice, decaying real exponentials. If this is done well enough, one can even skip the integration entirely, and just use the contributions from the top of the resulting gaussian-like bumps.
The way to do this is to look for times $t_s$ where the derivative of the exponent vanishes:$$0=\frac d{dt}\left[I_p t+\frac 12 \int^t_\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau\right]_{t_s}=I_p+\frac12(\mathbf p+\mathbf A(t_s))^2.$$This evidently cannot happen for real times, so you need a complex saddle-point time for this to work.
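For a monochromatic field the saddle-point equation can be solved in closed form with the complex arcsine. A minimal sketch, assuming $E(t)=E_0\cos(\omega t)$ so that $A(t)=-(E_0/\omega)\sin(\omega t)$ (sign and branch conventions vary across the literature, so treat the root selection below as one common choice):

```python
import cmath, math

# Monochromatic field E(t) = E0*cos(w*t), vector potential A(t) = -(E0/w)*sin(w*t).
# Parameters from the text, in atomic units.
E0, w, Ip = 0.05, 0.055, 0.5
kappa = math.sqrt(2 * Ip)
p = 1.0                                # asymptotic drift momentum (parallel)

def A(t):
    return -(E0 / w) * cmath.sin(w * t)

# Saddle-point equation: Ip + (p + A(t_s))**2 / 2 = 0, i.e. p + A(t_s) = ±i*kappa,
# so sin(w*t_s) = (w/E0)*(p ± i*kappa). Solve with the complex arcsine and keep
# the root in the upper half-plane (Im t_s > 0, decaying amplitude).
candidates = [cmath.asin((w / E0) * (p + s * 1j * kappa)) / w for s in (+1, -1)]
ts = next(t for t in candidates if t.imag > 0)

residual = Ip + (p + A(ts)) ** 2 / 2   # vanishes at the saddle, up to roundoff
```

The real part of `ts` plays the role of the ionization time and the imaginary part that of the "time under the barrier", as discussed below.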
The final expression for the ionization amplitude, then, is of the form$$b(\mathbf p,\infty)=-i\sum_j\sqrt{\frac{2\pi}{i(\mathbf p+\mathbf A(t_s^{(j)}))\cdot\mathbf E(t_s^{(j)})}}e^{-\frac i2 \int_{t_s^{(j)}}^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t_s^{(j)})|V_L(t_s^{(j)})|g⟩e^{iI_p t_s^{(j)}},$$where you sum over all the relevant saddle points - typically one for every field maximum.
The upshot of all this is that the ionization amplitude can now be intuitively understood in a semiclassical picture:
The electron sits happily in the ground state, accumulating phase, until the saddle-point time $t_s$, and it accumulates a phase $e^{iI_p t_s}$ until then.
The saddle-point time is easily interpreted as the ionization time, at which the electron makes a dipole transition to the continuum state $|\mathbf p+\mathbf A(t_s)⟩$, with a transition amplitude $$\sqrt{\frac{2\pi}{i(\mathbf p+\mathbf A(t_s))\cdot\mathbf E(t_s)}}⟨\mathbf p+\mathbf A(t_s)|V_L(t_s)|g⟩.$$
After that, the electron is free in the laser field, and it goes on to accumulate the phase $e^{-\frac i2 \int_{t_s}^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}$.
Even better, once it's liberated the electron simply whisks away from the origin along the semiclassical trajectory$$\mathbf r_\text{cl}(t)=\int_{t_s}^t (\mathbf p+\mathbf A(\tau))\,\text d\tau.$$
So everything is nice and shiny, and it works perfectly, except that... the barrier has mysteriously disappeared. Even though this is a tunnelling problem, the electron seems to simply skip past the region where the barrier should be.
The solution is exactly what you describe in the question: at the tunnelling time $t_s$, and for some time afterwards, the kinetic energy $\tfrac12(\mathbf p+\mathbf A(t))^2$ is negative (and equal to $-I_p$ at $t_s$ itself), which means that the velocity is imaginary, but the time is also imaginary and the two combine to make a (mostly) real displacement. Once the time gets down to the real axis, you are essentially out of the barrier.
One thing to notice is that when I say "phase" in the bullet points above I'm mostly lying through my teeth. Because the saddle-point time $t_s$ is complex, the 'phases' $e^{iI_p t_s}$ and $e^{-\frac i2 \int_{t_s}^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}$ are not purely complex exponentials, so their exponents have sizable and negative real parts, which makes them very small in absolute value. This is where the unlikeliness of tunnelling is expressed in this formalism, and the main controlling factor on the ionization rate.
Now, as has been pointed out in the comments, this use of complex time can definitely be seen simply as a mathematical trick, without any physical significance. This is certainly the view of parts of the strong field community, and there is a healthy debate over the matter; at the least one can say that we don't really understand this as well as we'd like.
However, there is a certain niceness about it, and it does seem to sort-of fit. What does the complex time mean? If you split it into its real and imaginary parts as $t_s=t_0+i\tau_T$, then they each have a separate and distinct role. If you integrate from $t_s$ down to its real part $t_0$, it turns out that the semiclassical position $\mathbf r_\text{cl}(t_0)$ is largely real and it lies just outside of the tunnelling barrier, so that it can be seen as the time when it pops up into the continuum. (Indeed, one can make very successful classical models by simply taking this as the ionization time, disregarding the imaginary part of the semiclassical position, and propagating classically from there.)
The imaginary part $\tau_T$, on the other hand, directly appears in the ionization amplitudes, and it is well identified as the 'time spent under the barrier', if such a thing makes sense. For example, the transverse momentum distribution after ionization is of the form $e^{-\tfrac12\tau_T p_\perp^2}$, which ties in very well with the fact that borrowing an extra energy $\tfrac12p_\perp^2$ for a time $\tau_T$ will make the process less likely by the product of the two. The two legs of the integration contour, from $t_s$ to $t_0$ and from there through the real axis, have very intuitive interpretations as 'under the barrier' and 'outside of the barrier'.
It's important to keep in mind, though, that once you go into the complex plane then time does become a much more complicated concept. The very same contour choice freedom that allows you to go for a complex saddle-point time also makes
any contour between $t_s$ and the final detection time at $t=\infty$ valid. This holds essentially any time you go into complex times, and it does make quantum orbits a bit of a handful to grasp.
I'll stop here, but I hope this is enough to show that, putting aside the questions about its physical reality, complex time is indeed an important and useful tool for dealing with tunnelling problems.
|
I am working on a proof that, given a continuous surjection f:X$\to$Y, if X is connected then Y is connected. I am doing so by contraposition, so I assumed Y was disconnected and am attempting to show X is disconnected.
By definition of disconnected, there exist open sets A' & B' in $\tau_Y$ such that A'$\cup$B'=Y and A'$\cap$B'=$\emptyset$. Since f is continuous, f$^{-1}$(A')=A & f$^{-1}$(B')=B are nonempty & in $\tau_X$. We wish to show A$\cup$B=X.
Since f(A$\cup$B)=f(A)$\cup$f(B) (by image of union) we have f(A$\cup$B)=A'$\cup$B' (by definition of A & B) which is equivalent to f(A$\cup$B)=Y and finally by surjection f(A$\cup$B)=f(X).
This is where I am stuck. How can I conclude that A$\cup$B=X?
|
I have been reading Srednicki from the beginning and doing all the exercises, and I hit a big roadblock at Q10.4, as I can't seem to figure out what Srednicki is doing in his solution. Luckily, I found an additional resource (http://hep.ucsb.edu/people/cag/qft/QFT-10.pdf), which details a formula for finding vertex factors for generic interaction terms in scalar field theory. What I am wondering is how I would go about deriving this formula? It's on page 18 of the source I linked, along with a brief (but not sufficient, at least for me,) explanation of why it is true. I'll repeat it here:
$$ \text{Vertex Factor} = i \prod_{i} \frac{\delta}{\delta\widetilde{\phi}(k_{i})} \mathcal{L}_{int} $$
Where the $\frac{\delta}{\delta\tilde{\phi}}$ is a functional derivative, and $\mathcal{L}_{int}$ is the interacting part of the Lagrangian, for example a term like $\frac{1}{3!}g\phi^{3}$. The vertex factor is the factor applied to a Feynman diagram in momentum space at a given vertex (ie. $iZ_{g}g$ for $\phi^{3}$ theory or $iZ_{\lambda}\lambda$ for $\phi^{4}$ theory).
In exercise 10.4 of Srednicki, the interaction term that was giving me trouble was $\frac{1}{2}g\phi\partial^{\mu}\phi\partial_{\mu}\phi$; but sure enough the above formula spat out the correct vertex factor after some algebra. My question then is the following:
How would I derive this formula? Does it have a more general form for multiple scalar fields/gauge fields/fermion fields? For example, the vertex factor for the theory for $\mathcal{L_{int}} = g\chi\phi^{\dagger}\phi$ given by $i(\frac{\delta}{\delta\tilde{\chi}(k_{3})}\frac{\delta}{\delta\tilde{\phi}^{\dagger}(k_{1})}\frac{\delta}{\delta\tilde{\phi}(k_{2})})\mathcal{L_{int}} = ig$, which is actually the correct vertex factor. So it seems quite general for scalar fields, but I don't know if it applies to fermion/gauge fields.
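One sanity check worth doing: for a non-derivative coupling, each functional derivative $\frac{\delta}{\delta\tilde\phi(k_i)}$ just strips off one field, so the combinatorial content of the formula reduces to ordinary differentiation. A hedged sympy illustration (this only captures the symmetry-factor bookkeeping; derivative couplings like $\frac12 g\phi\,\partial^\mu\phi\,\partial_\mu\phi$ additionally bring down momentum factors from the $\partial$'s):

```python
import sympy as sp

g, phi = sp.symbols('g phi')

# For L_int = (g/3!) phi**3, three derivatives remove the 1/3!
# symmetry factor and leave g; multiplying by i gives the phi^3
# vertex factor ig (Z-factors suppressed here).
L_int = g * phi**3 / sp.factorial(3)
vertex = sp.I * sp.diff(L_int, phi, 3)   # -> I*g
```

The same count works for $\mathcal L_{int}=g\chi\phi^\dagger\phi$: one derivative per distinct field, no symmetry factor, so the vertex is $ig$, as you found.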
|
I have the following code:
g[u_, p_] = Graphics[{ Circle[{2 Sqrt[u], u}, u], Circle[{-2 Sqrt[u], u}, u], Point[{2 Sqrt[u], u}], Point[{-2 Sqrt[u], u}], Inset[p, {0, 1}, {0, 0}, {15, 10}] }]
and I want to draw the parabola that goes through the centers of both of these circles, for every $u \neq 0$ with its value in $0$ equal to $1$. We can see that, if we let $y$ to be the needed curve, that $y(2\sqrt{u}) = u, \forall u \neq 0$ and $y(0) = 1$. By making $u:= u^2$, we see that $y(2u) = u^2$, so by making again $u := u/2$ we see that $y(u) = u^2/4,$ and we also need to subtract one because we map $(0,1)$ to $(0,0)$.
However, if I try to plot this by saying
f[u_] = g[u, Plot[u^2/4 - 1, {u, -100, 100}]]
I don't really get what I want. I suspect this is because of the resizing done by Graphics and Plot. I have tried using PlotRange and PlotRangeClipping, but nothing works.
Is there any way I can do this or do I need another method to draw this curve? As a matter of fact, can it be done without Inset?
|
Vector space basis
Set
context $V$…$\ \mathcal F$-vector space
definiendum $B\in \mathrm{basis}(V)$
$B'\subseteq B$ $B'$…finite range $n\equiv\left|B'\right|$
$v_1,\dots,v_n\in B'$ $c_1,\dots c_n\in \mathcal F$ $x\in V$
postulate $\sum_{k=1}^n c_k\cdot v_k=0\ \Rightarrow\ \forall j.\ c_j=0$
All finite subsets of the base are linearly independent. It's maybe more clear when written in the contrapositive: “$\exists j.\ c_j\ne 0\ \Rightarrow\ \sum_{k=1}^n c_k\cdot v_k\ne 0$.”
postulate $\exists c_1,\dots,c_n.\ (x=\sum_{k=1}^n c_k\cdot v_k)$
For each basis $B$, every vector $x\in V$ has a representation as a linear combination.
Discussion
We call the vector space finite-dimensional if it has a finite basis.
The difficulty in defining the basis of a general vector space above, and the reason why one must consider finite subsets $B'$ of the base $B$, is that an infinite sum would require more structure than just what a general vector space provides (e.g. a metric w.r.t. which the series converges).
The zero vector space has an empty base. Its vector space dimension is zero.
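In the finite-dimensional case over $\mathbb R$, both postulates (linear independence and spanning) collapse into a single rank condition, which can be checked numerically. A small numpy sketch (the example vectors are made up):

```python
import numpy as np

def is_basis(vectors):
    """Check whether a list of vectors forms a basis of R^n,
    where n is the ambient dimension. For n vectors in R^n,
    linear independence, spanning, and rank == n all coincide."""
    M = np.column_stack(vectors)        # vectors as columns of a matrix
    n = M.shape[0]
    return len(vectors) == n and np.linalg.matrix_rank(M) == n

assert is_basis([np.array([1., 0.]), np.array([1., 1.])])      # basis of R^2
assert not is_basis([np.array([1., 2.]), np.array([2., 4.])])  # dependent
```

For infinite-dimensional spaces no such finite check exists, which is exactly why the definition above quantifies over finite subsets $B'$.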
|
In the literature on social welfare functionals, the only example I've seen of a functional which meets all of Arrow's conditions–––or at least utility analogues of Arrow's conditions–––plus invariance regarding ordinal level comparability is Rawls' maximin. E.g. Sen in
On Weights and Measures (1977, p. 1544) cites maximin as his case of a functional meeting all of these conditions. Maximin orders the alternatives by the welfare of the individual who is worst off. I assume that the inverse of maximin–––i.e. the alternatives are ordered by the welfare of the individual who is best off–––would also meet these conditions.
Is there any work on other social welfare functionals which meet all these conditions? (I'm aware that if we tweak these conditions slightly we can derive other functionals, but I'm interested in the case in which we keep them unaltered.)
If not, is this evidence that maximin, and its inverse, are the only normatively sensible social welfare functionals that meets all these conditions? Or is it just evidence that people aren't so interested in this set of conditions? (If there is a clear reason why this set of conditions is uninteresting, I'd love to hear it).
Thanks for any help!
Utility analogues of Arrow’s conditions:
Utility analogues of Arrow’s conditions are Arrow’s conditions redefined for Sen’s welfare functional framework. Instead of taking a profile of orderings as input, Sen's functional takes a profile of utility functions as input: $U \ = \ <u_{i_1}(X), \ u_{i_2}(X), \ \dots \ , \ u_{i_n}(X)>$. $U$ is defined on $X \times N$; each individual, $i \in N $, is paired with each alternative, $x \in X$, and the result of each pairing is the utility derived by $i$ from $x$. $\mathcal{U} \ = \ \{U^1, \ U^2, \ \dots \ , \ U^n \}$ is the set of all possible utility profiles. $\mathcal{U^*}$ is the set of all utility profiles which meet a particular domain restriction. $\mathcal{R}$ is the set of all possible orderings of $X$. A social welfare functional can then be defined as: $f: \ \mathcal{U^*} \longrightarrow \mathcal{R}$. The final ordering given by profile $U^1$, $f(U^1)$, is denoted: $R_{U^1}$. We can then define utility analogues of Arrow's conditions:
- Unrestricted Domain$’$: The domain of $f$ is the set of all possible utility profiles: $\mathcal{U}^* \ = \ \mathcal{U}$.
- Weak Pareto$’$: $\forall x, y \in X$, $\forall i \in N$: $( \ u_i(x) \ > \ u_i(y) \ ) \ \Longrightarrow \ (xPy)$.
- Non-Dictatorship$’$: $f$ does not single out one individual $i \in N$ such that, $\forall U \in \mathcal{U^*}, \ \forall x, y \in X$: $( \ u_i(x) \ > \ u_i(y) \ ) \ \Longrightarrow \ (xPy)$.
- Independence of Irrelevant Utilities: $\forall U^1, U^2 \in \mathcal{U^*}, \ \forall x, y \in X$: $\big(\forall i \in N:\ ( \ u^1_i(x) = u^2_i(x) \ ) \land ( \ u^1_i(y) = u^2_i(y) \ )\big) \ \Longrightarrow \ \big(( \ x R_{U^1} y \ ) \ \Longleftrightarrow \ ( \ x R_{U^2} y \ )\big)$.
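For concreteness, the maximin functional itself is easy to write down: given a utility profile, order alternatives by the welfare of the worst-off individual. A minimal sketch (the profile data is made up, and ties between alternatives are not broken here):

```python
def maximin_ordering(profile):
    """profile: dict mapping each alternative to the list of individual
    utilities for it. Returns alternatives ordered from socially best
    to worst by the minimum (worst-off) utility."""
    return sorted(profile, key=lambda alt: min(profile[alt]), reverse=True)

# Three alternatives, three individuals; minima are x -> 3, y -> 5, z -> 1.
U = {'x': [3, 9, 4], 'y': [5, 5, 5], 'z': [1, 10, 10]}
order = maximin_ordering(U)
# order == ['y', 'x', 'z']
```

Note that the ordering depends only on which utility is lowest for each alternative, which is why maximin survives the ordinal-level-comparability invariance requirement.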
|
Simple interest formula: Simple Interest = $\dfrac{{PTR}}{{100}}$
Here,
P = Principal
T = Time
R = Rate of Interest
Simple interest for 1 year = Principal × (Interest %) = P × (R%)
For more than 1 year, the simple interest is $SI = P \times R\% \times T$.
Interest % = $\displaystyle\frac{\text{Interest}}{{{\rm{Principal}}}} \times 100 = \displaystyle\frac{\text{SI}}{{\text{P}}} \times 100$
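The formulas above translate directly into code (the principal, rate, and time below are made-up example values):

```python
def simple_interest(P, R, T):
    """SI = P*T*R/100, with R the annual rate in percent and T in years."""
    return P * T * R / 100

def interest_percent(SI, P):
    """Total interest as a percentage of principal: (SI/P)*100."""
    return SI / P * 100

# Rs. 1000 at 5% per annum for 2 years
SI = simple_interest(P=1000, R=5, T=2)   # 1000*2*5/100 = 100
pct = interest_percent(SI, 1000)         # 10% of principal over the 2 years
```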
Concept of Installments: Why would a seller offer installment payments to a buyer? It gives some flexibility to the buyer if he earns income on a monthly basis. Is there any advantage for the seller? Let us see how installments work!
Suppose there is a mobile phone for sale at Rs. 10,000. The seller has two options: accepting the full Rs. 10,000 in one go, or taking some down payment and receiving the remaining amount in "equated monthly/yearly installments".
If the seller receives the total amount at the time of purchase, he earns interest on the sale money from then on. If he sells it on monthly installments, he sets the installments such that he recovers that interest across the several installments.
So the logic works like this: if the product is sold on installments over t months, then (total amount + interest the seller would get for t months on the sale price) = (total of the installments + interest generated on these installments for their remaining periods).
For example, if the total loan runs for 5 months, the seller gets 4 months' interest on the 1st installment, 3 months' interest on the 2nd, 2 months' interest on the 3rd, 1 month's interest on the 4th, and no interest on the 5th installment.
$P + \dfrac{{P \times t \times R}}{{100 \times 12}}$ = $\left( {x + \dfrac{{x \times (t - 1) \times R}}{{12 \times 100}}} \right)$ + $\left( {x + \dfrac{{x \times (t - 2) \times R}}{{12 \times 100}}} \right)$ + . . . . . + $\left( {x + \dfrac{{x \times R \times 2}}{{12 \times 100}}} \right)$ + $\left( {x + \dfrac{{x \times R \times 1}}{{12 \times 100}}} \right)$ + $x$
Note: The above formula is for monthly installments. If installment is asked per year, then no need to divide those terms with 12
General formula for installment calculation: \(x = \dfrac{{P\left( {1 + \dfrac{{n \times r}}{{100}}} \right)}}{{n + \dfrac{{n(n - 1)}}{2} \times \dfrac{r}{{100}}}}\)
(Note: If you are calculating monthly installments, replace "n" by the number of months, and "r" by "r/12".)
After simplification of the above formula, you get \(x = \dfrac{{P\left( {100 + nr} \right)}}{{100n + \dfrac{{n(n - 1)r}}{2}}}\)
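The simplified formula can be checked against the long seller-side equation above. A small sketch using the Rs. 10,000 phone example (the 12% p.a. rate is made up for illustration):

```python
def installment(P, n, r):
    """Equated installment x for principal P over n periods at simple
    interest rate r percent per period (the simplified formula above)."""
    return P * (100 + n * r) / (100 * n + n * (n - 1) * r / 2)

# Rs. 10000 over 5 monthly installments at 12% p.a., i.e. 1% per month:
x = installment(P=10000, n=5, r=12 / 12)
# x = 10000*105 / (500 + 10) = 1050000/510 ≈ 2058.82

# Consistency check with the seller-side equation:
# P*(1 + n*r/100) should equal the installments plus their earned interest,
# which sums to x*(n + n*(n-1)/2 * r/100).
lhs = 10000 * (1 + 5 * 1 / 100)            # 10500
rhs = x * (5 + 5 * 4 / 2 * 1 / 100)        # x * 5.1
```

Both sides come out to Rs. 10,500, confirming the simplification.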
|
Compressive Sensing - Recovery of Sparse Signals (Part 1)
The amount of data that is generated has been increasing at a substantial rate since the beginning of the digital revolution. The constraints on the sampling and reconstruction of digital signals are derived from the well-known Nyquist-Shannon sampling theorem. To review, the theorem states that a band-limited signal, with highest frequency $f_{max}$, can be completely reconstructed from its samples if the sampling rate $f_{s}$ is at least twice the highest frequency (i.e. $f_s \geq 2f_{max}$). If the Nyquist-Shannon criterion is not satisfied then any frequency component greater than $f_{s}/2$ will become indistinguishable from a lower frequency component (i.e. aliasing).
Although the Nyquist-Shannon criterion gives a concrete formula for recovery of bandlimited signals, it is becoming increasingly impractical to implement due to the ever-increasing size of data being generated. For example, high-resolution images taken from modern image sensors place a lot of burden on communication channels, or in some cases, the communication channels are inadequate for efficient data transfer [1]. Due to this increasing impracticality, a lot of research in the past decade has been focused on the recovery of signals from far fewer samples than required by the Nyquist-Shannon sampling criterion.
Before going any further, it is important to formulate a mathematical definition of signal acquisition and recovery process. Signal acquisition can be stated as a classical linear algebra problem
$$y = Ax$$
where $y \in \mathbb{R}^{m}$ or $y \in \mathbb{C}^{m}$ is the measured signal vector, $x \in \mathbb{R}^{n}$ or $x \in \mathbb{C}^{n}$ is the unknown signal vector, and $A \in \mathbb{R}^{m \times n}$ or $A \in \mathbb{C}^{m \times n}$ is the measurement or "sensing" matrix. For example, $y$ could be a vector obtained through spectral (Fourier) measurement of an audio signal, in which case the vector would consist of Fourier coefficients of the signal $x$, and $A$ would be a Fourier transformation matrix. In the signal recovery process, the objective is to find the unknown signal vector $x$ given $y$ and $A$. So signal recovery requires solving the inverse problem
$$x = A^{-1}y$$
In traditional signal processing, $A$ is a full rank matrix, and $m$ is typically much larger than $n$ so that $x$ can be recovered with high fidelity (i.e. an over-determined system). However, the problem of interest is to recover $x$ from far fewer samples than required by the Nyquist-Shannon criterion, or when $m \ll n$ (i.e. an under-determined system). It turns out that if signals are structured in a certain way, then it is possible, at least in theory, to recover signals from fewer samples than suggested by the Nyquist-Shannon theorem. One set of signals that can theoretically be recovered from far fewer samples are $k$-sparse signals, i.e. those in the set
$$\Sigma_{k} = \{x: |\operatorname{supp}(x)| \leq k\}$$
where $\operatorname{supp}(x)$ is the support of the vector $x$, i.e. the set of indices where $x$ is non-zero, and $|\operatorname{supp}(x)|$ denotes its cardinality. An example of a $k$-sparse signal is the Fourier transform of the Shepp-Logan phantom image, which is a standard test image in medical signal processing. The Fourier transform of the test image is given below.
Figure 1: Shepp-Logan test image.
Figure 2: FFT of the Shepp-Logan test image. Due to the sparse nature of the Fourier transform, the Fourier coefficients are non-zero over a very small area.
If the signal $x$ is $k$-sparse then it only has $k$ degrees of freedom, and it could, in principle, be uniquely determined from $m \ll n$ measurements if certain requirements are met. The complete mathematical form of these requirements can be found in [4]. For brevity, I will just paraphrase the requirements as follows:
$x$ can be uniquely reconstructed from $y$ if the $m \times n$ matrix $A$ is such that every set of $2k$ columns of $A$ is linearly independent. The lower bound on the dimensionality of $y$ is $m \geq Ck\log(\frac{n}{k})$, where the constant $C \approx 0.28$.
$x$ can be uniquely reconstructed by solving the following well-known convex optimisation problem
$$\text{minimise} \; \|x\|_{1} \quad \text{subject to} \quad Ax = y$$
where $\|x\|_{1}$ is the $\ell_1$ norm. This problem is also known as basis pursuit.
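For illustration, basis pursuit can be cast as a linear program by splitting $x = u - v$ with $u, v \geq 0$ and minimising $\sum_i (u_i + v_i)$. A minimal sketch in Python (not from the original post; the dimensions and the Gaussian sensing matrix are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3  # signal length, measurements, sparsity (illustrative)

# k-sparse ground-truth signal
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Random Gaussian sensing matrix and measurements y = A x
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Basis pursuit as an LP: x = u - v with u, v >= 0,
# minimise sum(u + v) subject to [A, -A] @ [u; v] = y
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With $m = 20$ random measurements of a 3-sparse length-50 signal, $m$ comfortably exceeds the $Ck\log(n/k) \approx 2.4$ lower bound quoted above, and the LP typically returns the true signal up to solver tolerance.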
This is a very brief theoretical background of sparse signal recovery, which is also known as "compressed" or "compressive" sensing (CS). In my next blog post, I will demonstrate sparse recovery by providing a simulation. [Hopefully, that would be more interesting.]
[Side-note: I realise that it's not easy to wrap one's head around the formulation of signal acquisition and recovery as a classical linear algebra problem ($y=Ax$), since $x$ is typically an analog signal and I am representing it using a discrete vector. I will clarify this in a future post, but for now, the reader can get some background information from lectures on compressive sensing available at [2,3].]
[1] DARPA ARGUS super high-resolution camera (https://en.wikipedia.org/wiki/ARGUS-IS)
[2] Yonina Eldar's lecture on CS ()
[3] Richard Baraniuk's lecture on CS (http://youtube.com/watch?v=RvMgVv-xZhQ)
[4] Eldar, Yonina C., and Gitta Kutyniok, eds. Compressed sensing: theory and applications. Cambridge University Press, 2012.
|
I am trying to solve a system of two equations with two unknowns. In these equations I have, apart from constants:
Unknown nr 1: $$D_{\perp}$$
Unknown nr 2: $$\omega_C$$
Known function of $r$: $$\mu(r)$$
The full system looks like:
equation1: $$ D_{||}= \frac 1 {2 \omega_0}(-\alpha-1)\sqrt{(-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}+ \frac{\alpha}{2\omega_0}\sqrt{(-2\omega_0-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}+ \frac{1}{2\omega_0}\sqrt{(2\omega_0-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2} $$ and
equation2: $$ 1= \mu(r) \frac{\alpha}{2\omega_0} \ln{\left[\frac{-2\omega_0-\omega_C+\mu(r)D_{||}+\sqrt{(-2\omega_0-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}}{-\omega_C+\mu(r)D_{||}+\sqrt{(-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}}\right]}+ \frac{\mu(r)}{2\omega_0} \ln{\left[\frac{2\omega_0-\omega_C+\mu(r)D_{||}+\sqrt{(2\omega_0-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}} {-\omega_C+\mu(r)D_{||}+\sqrt{(-\omega_C+\mu(r)D_{||})^2+(\mu(r)D_{\perp})^2}}\right]} $$
So the solution of the system of equations will be $$D_{\perp}(r),\;\omega_C(r)$$.
What I've tried to do is simply
Solve[{equation1, equation2},{Dorthogonal, omegaC}]
but Mathematica keeps on running forever without any output. I have also tried:
DorthFun[r_] := Solve[{equation1, equation2}, {Dorthogonal, omegaC}][[1, 1]]
omegaCFun[r_] := Solve[{equation1, equation2}, {Dorthogonal, omegaC}][[1, 2]]
and it just keeps on running... It doesn't return any errors. Just...eternal running. Forrest Gump Syndrome...
I have also tried to solve the system putting $$\mu(r)=1$$ without any change.
I have given Mathematica about 20 minutes. Should I give it more time or does this mean that Mathematica cannot solve this? Or is there something I could do differently?
Thank you for your help!
My code looks like this:
NSolve[{
   Dparallel == (-alpha - 1)/(2 omega0) Sqrt[(-omegaC + Dparallel)^2 + Dorth^2] +
     alpha/(2 omega0) Sqrt[(-2 omega0 - omegaC + Dparallel)^2 + Dorth^2] +
     1/(2 omega0) Sqrt[(2 omega0 - omegaC + Dparallel)^2 + Dorth^2],
   1 == alpha/(2 omega0) Log[(-2 omega0 - omegaC + Dparallel +
          Sqrt[(-2 omega0 - omegaC + Dparallel)^2 + Dorth^2])/(-omegaC + Dparallel +
          Sqrt[(-omegaC + Dparallel)^2 + Dorth^2])] +
     1/(2 omega0) Log[(2 omega0 - omegaC + Dparallel +
          Sqrt[(2 omega0 - omegaC + Dparallel)^2 + Dorth^2])/(-omegaC + Dparallel +
          Sqrt[(-omegaC + Dparallel)^2 + Dorth^2])]
   }, {Dorth, omegaC}]
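Since Solve and NSolve search for exact solutions, a numeric root-finder is usually the practical route for a transcendental system like this (in Mathematica, FindRoot with a starting guess). As a language-neutral illustration, here is a hedged Python sketch of the same system with μ(r) = 1; the values of alpha, omega0 and Dparallel are placeholders, and whether a root exists depends on them:

```python
import numpy as np
from scipy.optimize import fsolve

# Placeholder values for the known constants (illustrative only)
alpha, omega0, Dpar, mu = 0.5, 1.0, 0.3, 1.0  # mu(r) fixed at 1

def residuals(z):
    Dorth, omegaC = z
    mag = lambda shift: np.sqrt((shift - omegaC + mu * Dpar) ** 2 + (mu * Dorth) ** 2)
    # equation1 rearranged to ... = 0
    eq1 = (-(alpha + 1) * mag(0.0)
           + alpha * mag(-2 * omega0)
           + mag(2 * omega0)) / (2 * omega0) - Dpar
    # numerators and denominators of the logarithms in equation2
    branch = lambda shift: shift - omegaC + mu * Dpar + mag(shift)
    eq2 = (mu * alpha / (2 * omega0) * np.log(branch(-2 * omega0) / branch(0.0))
           + mu / (2 * omega0) * np.log(branch(2 * omega0) / branch(0.0))) - 1.0
    return [eq1, eq2]

sol, info, ier, msg = fsolve(residuals, [0.5, 0.1], full_output=True)
print("converged:", ier == 1, "solution:", sol)
```

If the solver reports no convergence, the starting guess (or the constants) need adjusting; the same caveat applies to FindRoot.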
|
Help: How to edit a page
Wikipedia is a wiki, meaning that anyone can easily edit any unprotected page and save those changes immediately, making the alterations visible to every other reader. You do not even need to register to do this. After your first edit, you will be a Wikipedia editor!

Editing
Editing most Wikipedia pages is easy. Simply click on the "edit this page" tab at the top of a Wikipedia page (or on a section-edit link). This will bring you to a new page with a text box containing the editable text of the original page. If you add information to a page, please provide references, as unreferenced facts are subject to removal. When you are finished with an edit, you should write a short edit summary in the small field below the edit box. You may use shorthand to describe your changes, as described in the legend. To see how the page looks with your edits, press the "Show preview" button. To see the differences between the page with your edits and the previous version of the page, press the "Show changes" button. If you're satisfied with what you see, be bold and press the "Save page" button. Your changes will immediately be visible to all Wikipedia users. You can also click on the "Discussion" tab to see the corresponding talk page, which contains comments about the page from other Wikipedia users. Click on the "new section" tab to start a new section, or edit the page in the same way as an article page.
You should also remember to sign your messages on talk pages and some special-purpose project pages with four tildes (~~~~), but you should not sign edits you make to regular articles. In the page history, the MediaWiki software automatically keeps track of which user makes each change.

Minor edits
A check in the "minor edit" box signifies that only superficial differences exist between the version with your edit and the previous version: typo corrections, formatting and presentational changes, rearranging of text without modifying content, etc. A minor edit is a version that the editor believes requires no review and could never be the subject of a dispute. The "minor edit" option is one of several options available only to registered users.

Major edits
All editors are encouraged to be bold, but there are several things that a user can do to ensure that major edits are performed smoothly. Before engaging in a major edit, a user should consider discussing proposed changes on the article discussion/talk page. During the edit, if doing so over an extended period, the {{Inuse}} tag can reduce the likelihood of an edit conflict. Once the edit has been completed, the inclusion of an edit summary will assist in documenting the changes. These steps will all help to ensure that major edits are well received by the Wikipedia community.
A major edit should be reviewed to confirm that it is consensual to all concerned editors. Therefore, any change that affects the meaning of an article is major (not minor), even if the edit is a single word.
There are no necessary terms to which you have to agree when doing major edits, but the recommendations above have become best practice. If you do it your own way, the likelihood of your edits being reedited may be higher.
Occasionally your browser will crash. When doing a large edit, it is suggested that you periodically copy the text of the article you are working on and paste it into a text editor (preferably one without formatting, such as Notepad). This ensures that in the case of a browser crash you will not lose your work. It may also be a good idea to save the page after performing a substantial amount of work before adding additional content to the article.
Sections
Editors are recommended to divide any article, which is more than a few paragraphs long, into sections and subsections. In the wikimarkup the beginning of every section is marked by the section title bracketed by "==" symbols, whereas the subsections inside a section are introduced by the subsection title bracketed by "===" (or more) symbols. For instance,
the heading of a section titled "Structured document" looks like:

==Structured document==

The wikimarkup supports six levels of headings. "Structured document" is a level-two heading; level-one headings are normally not used. The details can be found in the table below.

Wiki markup

Links and URLs
The anchor element, <a>, is not allowed. The following are used instead: [[ ]], [ ], ~~~~, ~~~, http, ISBN, RFC & {{ }}. See the table below.
What it looks like / What you type

(The original two-column table of link examples did not survive extraction; the recoverable entries are summarised below.)

- Article link: London has public transport.
- Renamed link / blend link: San Francisco also has public transportation.
- Other page link: See the Wikipedia:Manual of Style.
- Section links: #Links and URLs is a link to another section on the current page; Italics is a piped link to a section within another page.
- Piped links can automatically hide text in parentheses (kingdom), hide the namespace (Village pump), or both (Manual of Style), but not: [[Wikipedia:Manual of Style#Links|]].
- Create page link: links to pages that don't exist yet look red, e.g. Wikipedia:Community portal/Opentask/Requested articles.
- Navigation link: Wikipedia:How to edit a page is a link to this page.
- Signing comments: adding three tildes (~~~) will add just your user name, and adding five tildes (~~~~~) gives the date/time alone.
- Interwiki links: to connect, via interwiki link, to a page on the same subject in another language, put a link of the form [[language code:Title]] near the bottom of the article. For example, in the article "Plankton", which is available on a lot of other wikis, the interwiki link to the German Wikipedia uses the language code "de".
- External links: there are three ways to link to external (non-wiki) sources.
- Wikimedia text links: linking to other wikis, or to another language's wiktionary.
- Book sources: link to a book using alternate text, such as its title.
- RFC numbers: text mentioning an RFC number anywhere, e.g. RFC 4321.
- "As of" tags like "As of April 2009" and "as of April 2009" categorize info that will need updating.
- Media links: some uploaded sounds are listed at Commons:Sound.
Images
What it looks like / What you type

- A picture: [[Image:Wikipedia-logo-v2-yo.svg]]
- With alternative text: [[Image:Wikipedia-logo-v2-yo.svg|Wikipedia, The Free Encyclopedia.]]
- Floating to the right side of the page using the frame attribute and a caption: [[Image:Wikipedia-logo-v2-yo.svg|frame|alt=Puzzle globe logo|Wikipedia Encyclopedia]]
- Floating to the right side of the page using the thumb attribute and a caption: [[Image:Wikipedia-logo-v2-yo.svg|thumb|alt=Puzzle globe logo|Wikipedia Encyclopedia]]
- Floating to the right side of the page without a caption: [[Image:Wikipedia-logo-v2-yo.svg|right|Wikipedia Encyclopedia]]
- A picture resized to 30 pixels: [[Image:Wikipedia-logo-v2-yo.svg|30 px|Wikipedia Encyclopedia]]
- Linking directly to the description page of an image: [[:Image:Wikipedia-logo-v2-yo.svg]]
- Linking directly to an image without displaying it: [[Media:Wikipedia-logo-v2-yo.svg|Image of the jigsaw globe logo]]
- Using the span and div tags to separate images from text (note that this may allow images to cover text): <div style="display:inline; width:220px; float:right;"> Place images here </div>
- Using wiki markup to make a table in which to place a vertical column of images (this helps edit links match headers, especially in Firefox browsers): {| align=right |- | Place images here |}
See the Wikipedia's image use policy as a guideline used on Wikipedia.
For further help on images, including some more versatile abilities, see the picture tutorial.
Headings
For a heading, put it on a separate line. A level-two heading, the highest level editors use in an article, for example:
== Introduction ==
Editing most Wikipedia pages is easy.
Subheadings use '===', '====', and so on, down to level six. Level-one headings are automatically generated for the article's title, which is not available in the edit box.
Character formatting

What it looks like / What you type

- Italics: ''Italicized text''
- Bold: '''Bold text'''
- Both: '''''Italicized & Bold text'''''
Syntax highlighting for source code. Computer code has a colored background and more stringent formatting. Suppose we want to define <code>int main()</code>:

<source lang=cpp>
#include <iostream>
int main ( int argc, char **argv ) {
    std::cout << "Hello World!";
    return 0;
}
</source>
You can use small text for captions:
You can use <small>small text</small> for captions.

Better stay away from big text, unless it's within small text:
Better stay away from <big>big text</big>, unless <small> it's <big>within</big> small</small> text.

You can strike out deleted material and underline new material; you can also mark deleted material and inserted material using logical markup:
You can <s>strike out deleted material</s> and <u>underline new material</u>. You can also mark <del>deleted material</del> and <ins>inserted material</ins> using logical markup. For backwards compatibility better combine this potentially ignored new <del>logical</del> with the old <s><del>physical</del></s> markup. <nowiki>Link → (''to'') the [[Wikipedia FAQ]]</nowiki> <!-- comment here -->
Mary had a little lamb.
Mary {{pad|4em}} had a little lamb. À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü ß à á â ã ä å æ ç è é ê ë ì í î ï ñ ò ó ô œ õ ö ø ù ú û ü ÿ ¿ ¡ § ¶ † ‡ • – — ‹ › « » ‘ ’ “ ” ™ © ® ¢ € ¥ £ ¤ x<sub>1</sub> x<sub>2</sub> x<sub>3</sub> or x₀ x₁ x₂ x₃ x₄ x₅ x₆ x₇ x₈ x₉ x<sup>1</sup> x<sup>2</sup> x<sup>3</sup> or x⁰ x¹ x² x³ x⁴ x⁵ x⁶ x⁷ x⁸ x⁹ ε<sub>0</sub> = 8.85 × 10<sup>−12</sup> C² / J m. 1 [[hectare]] = [[1 E4 m²]] α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ σ ς τ υ φ χ ψ ω Γ Δ Θ Λ Ξ Π Σ Φ Ψ Ω ∫ ∑ ∏ √ − ± ∞ ≈ ∝ ≡ ≠ ≤ ≥ × · ÷ ∂ ′ ″ ∇ ‰ ° ∴ ℵ ø ∈ ∉ ∩ ∪ ⊂ ⊃ ⊆ ⊇ ¬ ∧ ∨ ∃ ∀ ⇒ ⇐ ⇓ ⇑ ⇔ → ↓ ↑ ← ↔ <math>\,\! \sin x + \ln y</math><br> {{math|sin ''x'' + ln ''y''}} <math>\mathbf{x} = \mathbf{0}</math><br> {{math|<b>x</b> {{=}} <b>0</b>}} Obviously, {{math|''x<''<sup>2</sup> ≥ 0}} is true when {{math|<VAR >x</VAR >}} is a real number. : <math>\sum_{n=0}^\infty \frac{x^n}{n!}</math> (see also: Chess symbols in Unicode) No or limited formatting—showing exactly what is being typed[àtúnṣe | àtúnṣe àmìọ̀rọ̀]
A few different kinds of formatting will tell the wiki to display things as you typed them—what you see is what you get!
What it looks like / What you type

<nowiki> tag: the nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: →

<nowiki> The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: → </nowiki>

<pre> tag: the pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: →

<pre> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → </pre>

Leading space: leading spaces are another way to preserve formatting. Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Wiki]] ''markup'' and special characters: →

Invisible text (comments)
It's uncommon, but on occasion acceptable, to add a hidden comment within the text of an article. The format is this:
<!-- This is an example of text that won't normally be visible except in "edit" mode. -->

Table of contents
In the current wiki markup language, having at least four headings on a page triggers the table of contents (TOC) to appear in front of the first heading (or after introductory sections). Putting __TOC__ anywhere forces the TOC to appear at that point (instead of just before the first heading). Putting __NOTOC__ anywhere forces the TOC to disappear. See also Compact TOC for alphabet and year headings.
Tables
There are two ways to build tables:
- in special Wiki markup (see Table)
- with the usual HTML elements: <table>, <tr>, <td> or <th>.
For the latter, and a discussion on when tables are appropriate, see When to use tables.
Variables (see also Variable)
Code / Effect:
{{CURRENTWEEK}}: 38
{{CURRENTDOW}}: 0
{{CURRENTMONTH}}: 09
{{CURRENTMONTHNAME}}: Oṣù Kẹ̀sán
{{CURRENTMONTHNAMEGEN}}: Oṣù Kẹ̀sán
{{CURRENTDAY}}: 22
{{CURRENTDAYNAME}}: Ọjọ́àìkú
{{CURRENTYEAR}}: 2019
{{CURRENTTIME}}: 06:08
{{NUMBEROFARTICLES}}: 31,952
{{NUMBEROFUSERS}}: 19,215
{{PAGENAME}}: Ìrànwọ́:Báwo lẹṣe le ṣe àtúnṣe ojúewé
{{NAMESPACE}}:
{{REVISIONID}}: -
{{localurl:pagename}}: /wiki/Pagename
{{localurl:Wikipedia:Sandbox|action=edit}}: /w/index.php?title=Wikipedia:Sandbox&action=edit
{{fullurl:pagename}}: //yo.wikipedia.org/wiki/Pagename
{{fullurl:pagename|query_string}}: //yo.wikipedia.org/w/index.php?title=Pagename&query_string
{{SERVER}}: //yo.wikipedia.org
{{ns:1}}: Ọ̀rọ̀
{{ns:2}}: Oníṣe
{{ns:3}}: Ọ̀rọ̀ oníṣe
{{ns:4}}: Wikipedia
{{ns:5}}: Ọ̀rọ̀ Wikipedia
{{ns:6}}: Fáìlì
{{ns:7}}: Ọ̀rọ̀ fáìlì
{{ns:8}}: MediaWiki
{{ns:9}}: Ọ̀rọ̀ mediaWiki
{{ns:10}}: Àdàkọ
{{ns:11}}: Ọ̀rọ̀ àdàkọ
{{ns:12}}: Ìrànlọ́wọ́
{{ns:13}}: Ọ̀rọ̀ ìrànlọ́wọ́
{{ns:14}}: Ẹ̀ka
{{ns:15}}: Ọ̀rọ̀ ẹ̀ka
{{SITENAME}}: Wikipedia

NUMBEROFARTICLES is the number of pages in the main namespace which contain a link and are not a redirect; in other words, the number of articles, stubs containing a link, and disambiguation pages. CURRENTMONTHNAMEGEN is the genitive (possessive) grammatical form of the month name, as used in some languages; CURRENTMONTHNAME is the nominative (subject) form, as usually seen in English.
In languages where it makes a difference, you can use constructs like {{grammar:case|word}} to convert a word from the nominative case to some other case. For example, {{grammar:genitive|{{CURRENTMONTHNAME}}}} means the same as {{CURRENTMONTHNAMEGEN}}.
Templates
The MediaWiki software used by Wikipedia has support for templates. This means standardized text chunks (such as boilerplate text) can be inserted into articles. For example, typing {{stub}} will appear as "This article is a stub. You can help Wikipedia by expanding it." when the page is saved. See Template messages for the complete list. Other commonly used templates are {{disambig}} for disambiguation pages and {{sectstub}}, which is like an article stub but for a section. There are many subject-specific stubs, for example {{Geo-stub}}, {{Hist-stub}}, and {{Linux-stub}}. For a complete list of stubs see Stub types.

More information on editing wiki pages
You may also want to learn about:
- Getting started
- Helpful tips
- Naming and moving
- Style and layout
- Tools

See also: WikiProjects. If you are writing an article about something that belongs to a group of objects, check here first!
|
In the mean-field approximation we replace the interaction term of the Hamiltonian by a term which is quadratic in creation and annihilation operators. For example, in the case of BCS theory one makes the replacement
$$ \sum_{kk^{\prime}}V_{kk^{\prime}}c_{k\uparrow}^{\dagger}c_{-k\downarrow}^{\dagger}c_{-k^{\prime}\downarrow}c_{k^{\prime}\uparrow}\to\sum_{k}\Delta_{k}c_{k\uparrow}^{\dagger}c_{-k\downarrow}^{\dagger} + \Delta_{k}^{\star}c_{-k\downarrow}c_{k\uparrow}\text{,} $$
with $\Delta_{k}=\sum_{k^{\prime}}V_{kk^{\prime}}\langle c_{-k^{\prime}\downarrow}c_{k^{\prime}\uparrow}\rangle\in\mathbb{C}$. Then, in books like the one by Bruus & Flensberg, there is always a sentence like "the fluctuations around $\Delta_{k}$ are very small", such that the mean-field approximation is a good one. But we know that, for example, in the case of the 1D Ising model the mean-field approximation is very bad.
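To see the self-consistency behind $\Delta_{k}$ at work in the simplest setting (a constant coupling $\lambda$ below an energy cutoff $\omega_D$, at zero temperature), the mean-field gap equation collapses to $1 = \lambda\,\mathrm{arcsinh}(\omega_D/\Delta)$, with closed form $\Delta = \omega_D/\sinh(1/\lambda)$. A small numeric sketch (my addition, with illustrative values of $\lambda$ and $\omega_D$):

```python
import numpy as np
from scipy.optimize import brentq

lam, omega_D = 0.3, 1.0  # illustrative coupling strength and energy cutoff

# Zero-temperature gap equation for a constant coupling below the cutoff:
# 1 = lam * arcsinh(omega_D / Delta), solved self-consistently for Delta
gap_condition = lambda Delta: lam * np.arcsinh(omega_D / Delta) - 1.0
Delta = brentq(gap_condition, 1e-12, omega_D)

# Closed-form solution of the same equation, for comparison
Delta_analytic = omega_D / np.sinh(1.0 / lam)
print(Delta, Delta_analytic)
```

The numeric root and the closed form agree, and the familiar weak-coupling exponential smallness $\Delta \approx 2\omega_D e^{-1/\lambda}$ is visible for small $\lambda$.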
My question: Is there an inequality or some mathematical condition which says something about the validity of the mean-field approach? Further, is there a mathematically rigorous derivation of the mean-field approximation and of its validity?
|
Group Theory: Important Definitions and Results
These notes are made and shared by Mr. Akhtar Abbas. We are very thankful to him for providing these notes and appreciate his effort to publish them on MathCity.org. These notes contain important definitions with examples and related theorems, which might be helpful in preparing for interviews or other written tests after graduation, such as PPSC or FPSC.
Name: Group Theory: Important Definitions and Results
Author: Mr. Akhtar Abbas
Pages: 27
Format: PDF (see Software section for PDF Reader)
Size: 2.21 MB

Contents & Summary:
A non-empty set $G$ with binary operation $*$ is called a group if $*$ is associative and (1) there exists $e\in G$ such that $a*e = e*a = a$ for all $a\in G$; (2) for each $a\in G$ there exists $a^{-1}\in G$ such that $a*a^{-1}=a^{-1}*a=e$. In a group $G$, the identity element is unique. In a group $G$, the inverse of each element is unique. Every element of $A_n$ is a product of 3-cycles, for $n\geq 3$.
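The group axioms can be checked mechanically on a small example. A sketch (my addition, not from the notes) verifying associativity, identity, and inverses for the cyclic group $(\mathbb{Z}_5, +\bmod 5)$:

```python
from itertools import product

n = 5
G = range(n)
op = lambda a, b: (a + b) % n  # addition modulo n

# Associativity: (a*b)*c == a*(b*c) for all a, b, c
assoc = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(G, repeat=3))
# Identity element e = 0: a*e == e*a == a
identity = all(op(a, 0) == op(0, a) == a for a in G)
# Inverse of a is (n - a) % n: a * a^{-1} == e
inverses = all(op(a, (n - a) % n) == 0 for a in G)

print(assoc, identity, inverses)  # True True True
```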
|
Proof of Theorem 1 \((i)\)
Since \(||b_t||=1\) and introducing \(\alpha _t=w_t m({b}_t,{b}_T)\), we have:
$$\begin{aligned} \sum _{t=1}^T \alpha _t\cos ^2(b_t,v)&= \sum _{t=1}^T \alpha _t (b_t'v)^2\\&= \sum _{t=1}^T \alpha _t v'b_tb_t'v\\&= v'\left( \sum _{t=1}^T \alpha _t b_tb_t'\right) v\\&= v'M_Tv. \end{aligned}$$
Thus the maximization problem (4) can be rewritten as
$$\begin{aligned} \max _{v\in {\mathbb {R}}^p} \frac{v'M_Tv}{v'v}. \end{aligned}$$
(16)
The solution of (16) is clearly the normalized principal eigenvector of \(M_T\).
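The claim that the Rayleigh quotient \(v'M_Tv/v'v\) is maximised by the principal eigenvector can be sanity-checked numerically. A sketch (my addition; the dimensions, weights and unit-norm directions \(b_t\) are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, T = 4, 6

# Unit-norm directions b_t (columns) and positive weights alpha_t
B = rng.standard_normal((p, T))
B /= np.linalg.norm(B, axis=0)
alpha = rng.uniform(0.5, 1.5, size=T)

# M_T = sum_t alpha_t b_t b_t'  (symmetric positive semi-definite)
M = sum(a * np.outer(b, b) for a, b in zip(alpha, B.T))

# The Rayleigh quotient v'Mv / v'v is maximised by the principal eigenvector
vals, vecs = np.linalg.eigh(M)  # eigenvalues in ascending order
v_star = vecs[:, -1]

R = lambda v: (v @ M @ v) / (v @ v)
trials = rng.standard_normal((1000, p))
print(all(R(v) <= R(v_star) + 1e-9 for v in trials))  # True
```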
\((ii)\) Assuming the linearity condition \((LC)\) and model (1) for each block \(t\), all the vectors \(b_t\) are collinear with \(\beta \). The rank of the symmetric matrix \({M}_T\) is therefore one. The eigenvector \({v}_T\) associated with the non-null eigenvalue of \({M}_T\) is also collinear with \(\beta \): thus \({v}_T\) is a normalized EDR direction (\(||{v}_T||=1\)). \(\square \)
Proof of Theorem 2
\((i)\) For each block \(t\) and under the assumptions \((LC)\), (A1)-(A3), from the SIR theory of Li (1991) each estimated EDR direction \(\widehat{b}_t\) converges to \(b_t\) at root \(n_t\) rate: that is, for \(t=1,\ldots ,T\), \(\widehat{b}_t=b_t+O_p(n_t^{-1/2})\). It can be shown that \(\cos ^2(\widehat{b}_t,\widehat{b}_T)= \cos ^2(b_t,b_T)+O_p(\underline{n}^{-1/2})=1+O_p(\underline{n}^{-1/2})\), and thus \(\widehat{M}_T=M_T+O_p(\underline{n}^{-1/2})\). Therefore the principal eigenvector of \(\widehat{M}_T\) converges to that corresponding to \(M_T\) at the same rate: \(\widehat{v}_T=v_T+O_p(\underline{n}^{-1/2})\). Since \(v_T\) is collinear with \(\beta \), the estimated EDR direction \(\widehat{v}_T\) converges to an EDR direction at root \(\underline{n}\) rate.
\((ii)\) Let \(C_1 \otimes C_2\) denote the Kronecker product of the matrices \(C_1\) and \(C_2\) (see for instance Harville 1999, for some useful properties of the Kronecker product). Let \(C=[c_1,\ldots ,c_q]\) be a \((p \times q)\) matrix, where the \(c_k\)’s are \(p\)-dimensional column vectors. Let \(\text {vec}(C)\) denote the \(pq\)-dimensional column vector: \( \text {vec}(C)=\left( c_1',\ldots ,c_q'\right) '.\) We shall denote by \(N^+\) the Moore-Penrose generalized inverse of the square matrix \(N\). In the sequel, let \(B=[b_1,\ldots ,b_T]\) be the matrix which contains all the EDR directions obtained from all \(T\) blocks. Let us also define the matrix \(\widehat{B}=[\widehat{b}_1,\ldots ,\widehat{b}_T]\). The proof involves three steps.
Step 1: Asymptotic distribution of \(\text{ vec }(\widehat{B})\).
Under (A1)-(A3), the asymptotic theory of SIR gives the following result for each block \(t=1,\ldots ,T\): \(\sqrt{\overline{n}}(\widehat{b}_{t}-b_{t}) \longrightarrow _d U_{t} \sim \mathcal {N}(0,V_{t} )\), where the expression of \(V_{t}\) can be found in Saracco (1997), for instance. Then, it follows that:
$$\begin{aligned} \sqrt{\overline{n}} (\text {vec}(\widehat{B})-\text {vec}(B)) \longrightarrow _d \text {vec}\left( \begin{array}{c} U_{1} \\ \vdots \\ U_{T} \end{array}\right) \sim \mathcal {N} \left( 0,\Gamma _U \right) \text { where } \Gamma _U =\begin{pmatrix} V_{1} &{} &{} 0 \\ &{} \ddots &{} \\ 0 &{} &{} V_{T} \end{pmatrix} \end{aligned}$$
(17)
Step 2: Asymptotic distribution of \(\text{ vec }(\widehat{M}_T)\).
Standard properties of the “vec” operator yield:
$$\begin{aligned} \text{ vec }(\widehat{M}_T)= \sum _{t=1}^T{w}_t\text{ vec }(\widehat{b}_t\widehat{b}_t') (\widehat{b}_t'\widehat{b}_T)^2 =f(\text{ vec }(\widehat{B})), \end{aligned}$$
with \(||\widehat{b}_t||=1,\ \forall t=1,\ldots ,T\), and where the function \(f\) is defined as:
$$\begin{aligned} \begin{array}{llll} f: &{}\mathbb {R}^{p\times T} &{}\rightarrow &{} \mathbb {R}^{p^2} \\ &{}\text{ vec }(B) &{} \mapsto &{} \sum _{t=1}^T{w}_t\text{ vec }(b_tb_t')(b_t'b_T)^2. \end{array} \end{aligned}$$
Let \(K_{1,p}\) be the vec-permutation matrix given by \(K_{1,p}= \sum _{j=1}^{p} (E_{1j} \otimes E_{1j}')\), with \(E_{1j}=e_{j,p}'\), where \(e_{j,p}\) is the \(j\)th column of \(I_p\). The \((p^2\times pT)\) Jacobian matrix \(J=[J_1|\ldots |J_T]\) associated with \(f\) is defined by the concatenation of the \((p^2\times p)\) matrices \(J_t\), where, for \(t=1,\ldots ,T-1\),
$$\begin{aligned} J_t&=\frac{\partial f(\text{ vec }(B))}{\partial b_t'}=\frac{\partial {w}_t\text{ vec }(b_tb_t')(b_t'b_T)^2}{\partial b_t'} \\&={w}_t (K_{1,p} \otimes I_{p})[b_t \otimes I_{p} + I_{p} \otimes b_t](b_t'b_T)^2+{w}_t\text{ vec }(b_tb_t')2(b_t'b_T)b_T', \end{aligned}$$
and \(J_T\) is defined by:
$$\begin{aligned} J_T&=\frac{\partial f(\text{ vec }(B))}{\partial b_T'}=\frac{\partial \sum _{t=1}^T {w}_t\text{ vec }(b_tb_t')(b_t'b_T)^2}{\partial b_t'}\\&=\sum _{t=1}^{T-1}{w}_t \text{ vec }(b_tb_t')2(b_t'b_T)b_t'+ \frac{\partial {w}_T \text{ vec }(b_Tb_T')(b_T'b_T)^2}{\partial b_T'}\\&=\sum _{t=1}^{T-1}{w}_t \text{ vec }(b_tb_t')2(b_t'b_T)b_t'+{w}_T (K_{1,p} \otimes I_{p})[b_T \otimes I_{p} + I_{p} \otimes b_T](b_T'b_T)^2\\&\quad +{w}_T\text{ vec }(b_Tb_T')4(b_T'b_T)b_T'. \end{aligned}$$
Then, using (17) and applying the Delta-method yields
$$\begin{aligned} \sqrt{\overline{n}}(\text{ vec }(\widehat{M}_T)-\text{ vec }({M}_T)) \longrightarrow _d V \sim \mathcal {N}(0,\Gamma _V =J \Gamma _U J'). \end{aligned}$$
(18)
Step 3: Asymptotic distribution of \(\widehat{b}\).
The vector \(\widehat{v}_T\) (resp. \(v_T\)) is the eigenvector associated with the largest eigenvalue \(\widehat{\lambda }\) (resp. \(\lambda \)) of \(\widehat{M}_T\) (resp. \({M}_T\)). Since \(\widehat{M}_T={M}_T+O_p(1/\sqrt{\overline{n}})\) and using (18), Lemma 1 of Saracco (1997) yields:
$$\begin{aligned} \sqrt{\overline{n}}(\widehat{v}_T-v_T) \longrightarrow _d W=({M}_T-\lambda I_p)^{+}Vv_T \sim \mathcal {N}(0,\Gamma _W) \end{aligned}$$
with
$$\begin{aligned} \Gamma _W=[v_T' \otimes ({M}_T-\lambda I_p)^{+} ]\Gamma _V [v_T \otimes ({M}_T-\lambda I_p)^{+}]. \end{aligned}$$
(19)
\(\square \)
Proof of Theorem 3 \((i)\) Since the bases \({\mathbb {A}}\), \({\mathbb {B}}_1, \ldots , {\mathbb {B}}_T\) are assumed to be \(I_p\)-orthonormal and introducing \({\alpha }_t=w_t m({\mathbb {B}}_t,{\mathbb {B}}_T)\), we have:
$$\begin{aligned} Q({\mathbb {A}},{\mathbb {B}}_1,\ldots ,{\mathbb {B}}_T)&= \sum _{t=1}^T {\alpha }_t m({\mathbb {A}},{\mathbb {B}}_t)\\&= \sum _{t=1}^T {\alpha }_t\mathrm{Trace}({\mathbb {A}}{\mathbb {A}}'{\mathbb {B}}_t{\mathbb {B}}_t')/K\\&= \sum _{t=1}^T {\alpha }_t\mathrm{Trace}({\mathbb {A}}'{\mathbb {B}}_t{\mathbb {B}}_t'{\mathbb {A}})/K\\&= \mathrm{Trace}( {\mathbb {A}}' \{\sum _{t=1}^T {\alpha }_t{\mathbb {B}}_t{\mathbb {B}}_t'\} {\mathbb {A}})/K\\&= \mathrm{Trace}\left( {\mathbb {A}}' \left\{ \sum _{t=1}^T w_t \frac{{\mathbb {B}}_t{\mathbb {B}}_t'}{K}m({\mathbb {B}}_t,{\mathbb {B}}_T)\right\} {\mathbb {A}}\right) \\&= \mathrm{Trace}({\mathbb {A}}' {\mathbb {M}}_T {\mathbb {A}}). \end{aligned}$$
Let \({\mathbb {V}}_T=\arg \max _{\mathbb {A}}Q({\mathbb {A}},{\mathbb {B}}_1,\ldots ,{\mathbb {B}}_T)\). Since it is well known that \({\mathbb {V}}_T\) is given by the \(p\times K\) matrix formed by the \(K\) eigenvectors associated with the \(K\) largest eigenvalues of \({\mathbb {M}}_T\), the proof is complete.
\((ii)\) Since the column vectors of \({\mathbb {B}}_t\) form an \(I_p\)-orthonormal basis of \(E\), we have \(\text {Span}({\mathbb {B}}_t)=E\) for each block \(t\). Then the eigenvectors associated with the \(K\) largest eigenvalues of \({\mathbb {B}}_t {\mathbb {B}}_t'\) form an \(I_p\)-orthonormal basis of \(E\). The assumptions of the theorem imply that \(m({\mathbb {B}}_t,{\mathbb {B}}_T)=1\). Then it follows that the eigenvectors associated with the \(K\) largest eigenvalues of \({\mathbb {M}}_T\) form an \(I_p\)-orthonormal basis of the EDR subspace \(E\). \(\square \)
Proof of Theorem 4
From SIR theory, one can derive \(\widehat{{{\mathbb {B}}}}_t={{{\mathbb {B}}}}_t+O_p(n_t^{-1/2})\) for each block \(t\). Then the eigenvectors associated with the \(K\) largest eigenvalues of the matrix \(\widehat{{\mathbb {B}}}_t \widehat{{\mathbb {B}}}_t'\) converge at the same rate to the corresponding eigenvectors associated with the \(K\) non-null eigenvalues of \({\mathbb {B}}_t {\mathbb {B}}_t'\). Under the assumptions of the theorem, we have \(m(\widehat{{\mathbb {B}}}_t,\widehat{{\mathbb {B}}}_T)=1+O_p(\underline{n}^{-1/2})\). As a consequence \(\widehat{{\mathbb {M}}}_T={\mathbb {M}}_T+O_p(\underline{n}^{-1/2})\), and finally \(\widehat{v}_{k,T}=v_{k,T} +O_p(\underline{n}^{-1/2}),~k=1,\ldots ,K\), which completes the proof. \(\square \)
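The variational step in the proof of Theorem 3 (i), namely that \(Q\) is maximized by the \(K\) leading eigenvectors of \({\mathbb {M}}_T\), can be illustrated numerically. The following sketch uses arbitrary dimensions and weights and is not part of the proofs:

```python
import numpy as np

# Purely numerical illustration (not part of the proofs): the K leading
# eigenvectors of M_T maximize Q(A) = Trace(A' M_T A) over I_p-orthonormal A.
# Dimensions, bases and weights below are arbitrary.
rng = np.random.default_rng(6)
p, K, T = 6, 2, 4

def orthonormal(p, K):
    # random I_p-orthonormal basis via a QR decomposition
    q, _ = np.linalg.qr(rng.standard_normal((p, K)))
    return q

B = [orthonormal(p, K) for _ in range(T)]    # I_p-orthonormal bases B_1..B_T
w = np.full(T, 1.0 / T)                      # weights summing to one

def m(A, Bt):                                # proximity m(A,B) = Tr(A A' B B')/K
    return np.trace(A @ A.T @ Bt @ Bt.T) / K

M = sum(w[i] * (B[i] @ B[i].T / K) * m(B[i], B[-1]) for i in range(T))

eigval, eigvec = np.linalg.eigh(M)
V = eigvec[:, -K:]                           # eigh sorts ascending: take last K
best = np.trace(V.T @ M @ V)
worst_gap = min(best - np.trace(A.T @ M @ A)
                for A in (orthonormal(p, K) for _ in range(200)))
print(best, worst_gap)                       # worst_gap stays >= 0
```

Here `orthonormal` draws a random \(I_p\)-orthonormal competitor, and no random competitor beats the eigenvector basis.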
Electronic Journal of Probability, Volume 23 (2018), paper no. 60, 35 pp. The argmin process of random walks, Brownian motion and Lévy processes. Abstract:
In this paper we investigate the argmin process of Brownian motion $B$ defined by $\alpha _t:=\sup \left \{s \in [0,1]: B_{t+s}=\inf _{u \in [0,1]}B_{t+u} \right \}$ for $t \geq 0$. The argmin process $\alpha $ is stationary, with invariant measure which is arcsine distributed. We prove that $(\alpha _t; t \geq 0)$ is a Markov process with the Feller property, and provide its transition kernel $Q_t(x,\cdot )$ for $t>0$ and $x \in [0,1]$. Similar results for the argmin process of random walks and Lévy processes are derived. We also consider Brownian extrema of a given length. We prove that these extrema form a delayed renewal process with an explicit path construction. We also give a path decomposition for Brownian motion at these extrema.
Article information. Received: 22 August 2017; accepted: 6 June 2018; first available in Project Euclid: 20 June 2018. Permanent link: https://projecteuclid.org/euclid.ejp/1529460158. DOI: 10.1214/18-EJP185. MR3827967; Zbl 06924672. Keywords: arcsine law; argmin process; Brownian extrema; Feller semigroup; Brownian excursion theory; jump process; Lévy process; Lévy system; Markov property; space-time shift process; path decomposition; random walks; renewal property; sample path property; stable process; stationary process. Citation:
Pitman, Jim; Tang, Wenpin. The argmin process of random walks, Brownian motion and Lévy processes. Electron. J. Probab. 23 (2018), paper no. 60, 35 pp. doi:10.1214/18-EJP185. https://projecteuclid.org/euclid.ejp/1529460158
Consider the drift diffusion equation
$$\dfrac{\partial}{\partial t}\psi=\mu\dfrac{\partial}{\partial x}\psi+\kappa^2\dfrac{\partial^2}{\partial x^2}\psi.$$
Dimensional analysis tells us that $\mu$ is a characteristic length per time (drift velocity) while $\kappa$ is a characteristic length per
square root of time. This small factoid has curious consequences.
In statistical physics, $\kappa^2=2D$, where $D$ is the diffusion coefficient. What follows also applies to non-relativistic quantum mechanics, except that there the diffusion coefficient is imaginary, $\kappa^2=\frac{i\hbar}{2m}$.
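As an illustration of these two scales, here is a minimal explicit finite-difference sketch of the equation above (all parameter values are arbitrary). Note that with the sign convention used here, the packet's mean drifts toward $-x$ at speed $\mu$, while its width grows like $\sqrt{t}$:

```python
import numpy as np

# Explicit finite-difference integration of
#   dpsi/dt = mu * dpsi/dx + kappa**2 * d2psi/dx2
# Parameter values are purely illustrative.
mu, kappa = 1.0, 0.5
x = np.linspace(-20.0, 20.0, 801)
dx_grid = x[1] - x[0]
dt = 0.2 * dx_grid**2 / kappa**2      # explicit scheme needs dt <= dx^2/(2 kappa^2)

psi = np.exp(-x**2 / 0.1)             # narrow Gaussian, variance 0.05
psi /= psi.sum() * dx_grid            # normalize as a probability density

t_final = 2.0
steps = round(t_final / dt)
for _ in range(steps):
    first = (np.roll(psi, -1) - np.roll(psi, 1)) / (2 * dx_grid)
    second = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx_grid**2
    psi = psi + dt * (mu * first + kappa**2 * second)

t = steps * dt
mean = (x * psi).sum() * dx_grid
var = ((x - mean)**2 * psi).sum() * dx_grid
print(mean, var)   # mean ~ -mu*t (drift), var ~ 0.05 + 2*kappa**2*t (diffusion)
```

The mean moves linearly in $t$ (the $\mu$ scale), while the variance grows linearly in $t$, i.e. the width only grows like $\kappa\sqrt{t}$.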
Given the value $x(t)$ of a curve/stochastic process at time $t$, for any time interval $\Delta t > 0$ we can sample $x(t+\Delta t)$; the increment $\Delta x\equiv x(t+\Delta t)-x(t)$ is random and depends on $\Delta t$ (and possibly on $t$ or even on $x(t)$). For example, in the case of a Brownian motion each new $\Delta x$ takes values according to the distribution
$P(\Delta x)=\dfrac{1}{\kappa\sqrt{\Delta t}\sqrt{2\pi}}\exp \left( -\dfrac{1}{2}\dfrac{(\Delta x)^2}{\kappa^2\,\Delta t} \right)$.
(I set $\mu=0$; note that usually one writes $\sigma=\kappa\sqrt{\Delta t}$.)
The Gaussian distribution for $\Delta x$ says that even for very small $\Delta t$, there is a non-vanishing chance that $x(t+\Delta t)$ is far away from $x(t)$. For bigger $\Delta t$, the distribution flattens out and the chance of a bigger net deviation grows.
(Side note: this weight also arises in the quantization of $L(q,{\dot q})\propto {\dot q}^2$:
$\frac{(\Delta x)^2}{\Delta t}=\left(\frac{\Delta x}{\Delta t}\right)^2\Delta t\approx \int_0^{\Delta t} \left(\frac{{\mathrm d}x}{{\mathrm d}t}\right)^2{\mathrm d}t$.)
Now, for the above $P$, we have:
$\langle \Delta x\rangle=0$
$\langle \left|\Delta x\right| \rangle=\sqrt{\tfrac{2}{\pi}}\,\kappa\,\sqrt{\Delta t}$
$\langle (\Delta x)^2\rangle=\kappa^2\,\Delta t$
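These three moments are easy to confirm by direct sampling; the following sketch uses arbitrary values for $\kappa$ and $\Delta t$:

```python
import numpy as np

# Direct sampling check of the three moments above (kappa, dt illustrative).
rng = np.random.default_rng(0)
kappa, dt = 0.7, 0.01
dx = kappa * np.sqrt(dt) * rng.standard_normal(1_000_000)

print(dx.mean())            # ~ 0
print(np.abs(dx).mean())    # ~ sqrt(2/pi) * kappa * sqrt(dt) ~ 0.0559
print((dx**2).mean())       # ~ kappa**2 * dt = 0.0049
```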
This says that the movement has no preferred direction, but for a finite waiting time $\Delta t$, if $x(t)$ is some mean path, we expect a typical deviation $|x(t+\Delta t)-x(t)|\approx\kappa\sqrt{\Delta t}$, see picture. The intuition is that for a very small waiting time you could already have a big deviation, and the longer you wait the farther you get from the center; however, this growth is sub-linear, because with more time more and more cancellations occur as well. The non-differentiability of the curve in this model manifests itself here: while we know the overall deviation goes as $\sqrt{\Delta t}$, we can't make a good estimate of the instantaneous growth, because at $\Delta t=0$ the slope of the square-root function, $\frac{\partial}{\partial \Delta t}\sqrt{\Delta t}\propto\frac{1}{\sqrt{\Delta t}}$, isn't finite! There is no $x'(t)$!
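The absence of $x'(t)$ can be made concrete numerically: the naive velocity estimate $\langle|\Delta x|\rangle/\Delta t$ grows like $1/\sqrt{\Delta t}$ as the time step shrinks, instead of settling to a limit (illustrative sketch):

```python
import numpy as np

# The naive velocity estimate <|dx|>/dt diverges like 1/sqrt(dt): shrinking
# dt by a factor of 100 makes the estimate 10 times larger.
rng = np.random.default_rng(1)
kappa = 1.0
velocity = {}
for dt in (1e-2, 1e-4, 1e-6):
    dx = kappa * np.sqrt(dt) * rng.standard_normal(100_000)
    velocity[dt] = np.abs(dx).mean() / dt
print(velocity)   # grows roughly tenfold per step
```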
The accumulation of the values of a function $F$ along a smooth path $x(t)$ is
$\int_{t_0}^{t_1} F(x(s))\, {\mathrm d}x(s)$,
which is
$\int_{t_0}^{t_1} F(x(s))\, x'(s)\, {\mathrm d}s$,
where
$x'(t)=\lim_{\Delta t\to 0}\frac{x(t+\Delta t)-x(t)}{\Delta t}$.
Stochastic integrals are a means of computing the accumulation of a function along a path in cases where the above isn't defined. An Itō process is a stochastic process $X_t$ which is the sum of a Lebesgue integral and an Itō integral:
$X_t = X_0 + \int_0^t \mu(X_s, s)\,\mathrm ds + \int_0^t \sigma(X_s, s) \,\mathrm dW_s$
An Itō integral is roughly a Riemann integral of random variables. The limit of the partial sums is, however, not taken in the norm on $\mathbb R$: the result is defined as a random variable, and the partial sums converge to it in probability, i.e. the probability that they differ from the limit by more than any fixed amount goes to zero.
One writes
${\mathrm d}X_t = \mu(X_t, t) \, {\mathrm d}t + \sigma(X_t, t) \, {\mathrm d}W_t$
for the integral above. If $X_t$ isn't known, this is called a stochastic differential equation in $X_t$. Being an Itō process is the stochastic analog of being differentiable. If $\mu$ and $\sigma$ are time-independent, we speak of an Itō diffusion. A geometric Brownian motion is characterized by $\mu(X_s, s)=\mu\,X_s$ and $\sigma(X_s, s)=\sigma\,X_s$, i.e. both are "just" $\propto X_s$.
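A standard way to make such an SDE concrete is the Euler–Maruyama scheme, which replaces ${\mathrm d}t$ and ${\mathrm d}W_t$ by finite increments. A sketch for geometric Brownian motion (arbitrary parameters), using the exact mean $\mathbb E[X_T]=X_0 e^{\mu T}$ as a check:

```python
import numpy as np

# Euler-Maruyama simulation of geometric Brownian motion
#   dX_t = mu X_t dt + sigma X_t dW_t     (illustrative parameters)
# For GBM the exact mean is E[X_T] = X_0 exp(mu T), a handy check.
rng = np.random.default_rng(2)
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
n_steps, n_paths = 200, 50_000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    X = X + mu * X * dt + sigma * X * dW     # one Euler-Maruyama step

print(X.mean())   # ~ x0 * exp(mu * T)
```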
The famous Itō lemma is
${\mathrm d}f(t,X_t) = \left(\dfrac{\partial f}{\partial t} + \dfrac{\sigma_t^2}{2}\dfrac{\partial^2f}{\partial x^2}\right){\mathrm d}t + \dfrac{\partial f}{\partial x}\,{\mathrm d}X_t$
The second derivative term comes from stochastic diffusion, a non-local flavor if you will.
As this really is an integral relation, it corresponds to a version of the fundamental theorem of calculus. If we know how to integrate against $X_t$, we can compute $f(t,X_t)$ as such an integral (plus an ordinary integral).
Note that for $f(x,t)=\frac{1}{2}x^2$ and $\frac{\sigma_t^2}{2}=\kappa^2$ we get
${\mathrm d}\left(\frac{m}{2}X_t^2\right) = m\,\kappa^2{\mathrm d}t + X_t\,m\,{\mathrm d}X_t$
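This correction term is easy to see numerically. For $X_t=W_t$ (so $\sigma=1$, $m=1$), the left-endpoint (Itō) Riemann sum of $\int W\,{\mathrm d}W$ falls short of $W_T^2/2$ by roughly $T/2$, exactly the ${\mathrm d}t$ term of the lemma (sketch with arbitrary grid size):

```python
import numpy as np

# Check of d(X^2/2) = (sigma^2/2) dt + X dX for X = W (sigma = 1): the
# left-endpoint (Ito) Riemann sum of W dW misses W_T^2/2 by about T/2.
rng = np.random.default_rng(3)
T, n = 1.0, 100_000
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.concatenate(([0.0], np.cumsum(dW)))   # Brownian path on a grid

ito_integral = np.sum(W[:-1] * dW)           # left endpoints: Ito convention
correction = W[-1]**2 / 2 - ito_integral     # equals (1/2) * sum(dW**2)
print(correction)   # ~ T/2
```

For a smooth path the correction would vanish; here it is $\tfrac12\sum(\Delta W)^2\approx T/2$ because the squared increments don't go to zero fast enough.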
The next part is about the commutation relation $xp$ minus $px$: a version of the last equation in which the factor $x$ in the second term, $px$, detects a diffusion effect. That's basically part of what Maimon writes about on the Wikipedia page on the path integral formulation of (quantum) mechanics:
Let
$p_{\Delta t}(t)=m\frac{x(t+{\Delta t})-x(t)}{{\Delta t}}$
If the limit $\lim_{\Delta t\to 0}p_{\Delta t}(t)$ exists, then for an auxiliary $\delta$ we have $\lim_{\Delta t\to 0}x(t+\delta^2{\Delta t})=x(t)$.
Hence, for ever smaller time grid size ${\Delta t}$, an expression like
$x(t+\delta_1^2{\Delta t})\,x(t+\delta_2^2{\Delta t})\,x(t+\delta_3^2{\Delta t})$
converges to $x(t)^3$.
However, for
$x(t+\Delta t)\approx x(t)+\kappa{\sqrt{\Delta t}}$
with the square root, we find
$x(t+\delta^2{\Delta t})\,p_{\Delta t}(t)=\delta^2\,m\,\kappa^2+x(t)\,p_{\Delta t}(t)$.
The result says that two naively equivalent approximation schemes (e.g. $\delta=0$ vs. $\delta=1$) systematically differ by an additive diffusion term ($m\kappa^2$ here). In quantum mechanics, that's $m\kappa^2=m\frac{i\hbar}{2m}=\frac{i\hbar}{2}$.
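A small simulation (with $m=\kappa=1$, purely illustrative) makes the additive gap between the two orderings visible:

```python
import numpy as np

# With m = kappa = 1 (illustrative), compare the two orderings
#   x(t+dt) * p_dt   vs   x(t) * p_dt,   where  p_dt = (x(t+dt) - x(t)) / dt.
# Their averages differ by kappa**2, the diffusion term in the text.
rng = np.random.default_rng(4)
kappa, t, dt, n = 1.0, 1.0, 0.01, 200_000

x_t = kappa * np.sqrt(t) * rng.standard_normal(n)     # Brownian position at time t
dx = kappa * np.sqrt(dt) * rng.standard_normal(n)     # independent increment
p = dx / dt                                           # discretized momentum, m = 1

gap = ((x_t + dx) * p).mean() - (x_t * p).mean()
print(gap)   # ~ kappa**2 = 1
```

The per-sample gap is $(\Delta x)^2/\Delta t$, whose average is $\kappa^2$ no matter how small $\Delta t$ is.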
So we had
$\frac{\partial}{\partial t} \psi = \kappa^2 \frac{\partial^2}{\partial x^2} \psi$
(note the imbalance of dimensions, $t$ vs. $x^2$) and in turn
$P(\Delta x) \propto \exp\left(-\dfrac{(\Delta x)^2}{2\kappa^2\,\Delta t}\right)$
as next-step distribution, and then
$\langle |\Delta x|\rangle\propto (\Delta t)^{1/2}$
gives the non-smooth curve.
You may want to look at other next-step distributions, effectively giving theories with $\langle |\Delta x|\rangle\propto (\Delta t)^{1/\alpha}$. The proportionality coefficient (the analog of $\kappa$, or of the velocity) must then carry fractional dimensions, and in turn we expect a fractional differential operator $\frac{\partial^\alpha}{\partial x^\alpha}$ in the corresponding diffusion equation: you get a Laplacian raised to the power $\frac{\alpha}{2}$. The $\alpha\neq 2$-deformed theory with complex $\kappa$ is what's termed "fractional quantum mechanics", though I don't know of really notable results, besides a better understanding of the known cases.
To answer your question, associate the power of the operators with the above expectation value $\langle |\Delta x|\rangle$: the $\alpha$ characterizes how fast the system parameter in your model stochastically moves away from the center. So-called Lévy flights may propagate in a rough way, and the current/velocity becomes something more complicated (not proportional to the momentum, i.e. the thing that gets multiplied by $x$ in the plane-wave solution). The Brownian case with an ordinary derivative is the friendliest diffusion. You wouldn't gain that intuition from that other question on fractional derivatives, because there the asker cooks up an equation with a fractional derivative in time, and we're used to seeing all of space but only one time (the now).
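To see the $\Delta t^{1/\alpha}$ scaling concretely, one can compare Gaussian steps ($\alpha=2$) with Cauchy steps (the $\alpha=1$ stable law). Since $\langle|\Delta x|\rangle$ diverges for the Cauchy case, the sketch below (arbitrary values) compares medians of $|\Delta x|$ instead:

```python
import numpy as np

# Compare increment scaling for Gaussian (alpha = 2) and Cauchy (alpha = 1)
# steps. Since <|dx|> diverges for the Cauchy case, use the median of |dx|:
# it should grow like dt**(1/alpha).
rng = np.random.default_rng(5)
n = 400_001

def median_step(dt, alpha):
    if alpha == 2:
        return np.median(np.abs(np.sqrt(dt) * rng.standard_normal(n)))
    return np.median(np.abs(dt * rng.standard_cauchy(n)))  # alpha = 1 stable law

ratios = {a: median_step(0.4, a) / median_step(0.1, a) for a in (2, 1)}
print(ratios)   # over a 4x longer wait: ~2 for alpha = 2, ~4 for alpha = 1
```

Waiting four times longer multiplies the typical Gaussian step by $2=4^{1/2}$ but the typical Cauchy step by $4=4^{1/1}$: the heavy-tailed flight spreads much faster.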
Can Black-Scholes option values be derived via the Capital Asset Pricing Model,
without resort to the use of a risk-free portfolio being created from the option and a Delta determined quantity of the underlying instrument?
This derivation, originally due to Cox & Rubinstein (1985), starts from the Capital Asset Pricing Model in continuous time. In particular, it uses the result that there is a linear relationship between the expected return on a financial instrument and the covariance of the asset with the market. The latter term can be thought of as compensation for taking risk. But the asset and its option are perfectly correlated, so the compensation in excess of the risk-free rate for taking a unit amount of risk must be the same for each.

For the stock, the expected return (dividing by $dt$) is $\mu$ and the risk is $\sigma$. From Ito we have $$dV = \frac{\partial V}{\partial t}dt + \frac{1}{2}\sigma^2S^2\frac{\partial ^2V}{\partial S^2}dt + \frac{\partial V}{\partial S}dS.$$ Therefore the expected return on the option is $$\frac{1}{V}\left( \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2S^2\frac{\partial ^2V}{\partial S^2} + \mu S \frac{\partial V}{\partial S}\right)$$ and the risk is $$\frac{1}{V} \sigma S \frac{\partial V}{\partial S}.$$

Since both the underlying and the option must earn the same compensation, in excess of the risk-free rate, per unit of risk, $$\frac{\mu-r}{\sigma}= \frac{\frac{1}{V}\left( \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2S^2\frac{\partial ^2V}{\partial S^2} + \mu S \frac{\partial V}{\partial S}\right)-r}{\frac{1}{V} \sigma S \frac{\partial V}{\partial S}}.$$ Now rearrange this: the $\mu$ drops out and we are left with the Black–Scholes equation.
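The argument is easy to verify numerically: with the standard Black–Scholes call formula, the excess return per unit of risk of the option matches that of the stock, which is equivalent to the Black–Scholes PDE residual vanishing. A sketch with illustrative parameters:

```python
import math

# Sketch with illustrative parameters: the Black-Scholes call value equalizes
# the excess return per unit of risk of stock and option, equivalently it
# solves  V_t + (1/2) sigma^2 S^2 V_SS + r S V_S - r V = 0.

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, t, K=100.0, T=1.0, r=0.03, sigma=0.2):
    tau = T - t
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

S, t, r, sigma, mu, h = 105.0, 0.25, 0.03, 0.2, 0.12, 1e-3
V = bs_call(S, t)
V_t = (bs_call(S, t + h) - bs_call(S, t - h)) / (2 * h)      # finite differences
V_S = (bs_call(S + h, t) - bs_call(S - h, t)) / (2 * h)
V_SS = (bs_call(S + h, t) - 2 * V + bs_call(S - h, t)) / h**2

stock_sharpe = (mu - r) / sigma
option_return = (V_t + 0.5 * sigma**2 * S**2 * V_SS + mu * S * V_S) / V
option_risk = sigma * S * V_S / V
option_sharpe = (option_return - r) / option_risk

pde_residual = V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V
print(stock_sharpe, option_sharpe, pde_residual)
```

Any value of $\mu$ gives the same agreement, which is the point: the drift drops out of the pricing equation.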
Yes, see page 16 of the paper below: