Does the electric field caused by a time-varying magnetic field form closed loops, or does it start from a positive charge and end at a negative charge? And is it a conservative or non-conservative field? The field lines certainly can form closed loops, for example in a transformer. But they don't have to: a plane EM wave is an example where they don't. For a specific example, consider a long solenoid of radius $a$ and turn density $n$ carrying a time-varying current $I(t)$. The solenoid is oriented along the x-axis, and the current runs in the direction from $-x$ to $+x$. This current induces a magnetic field inside the solenoid: $$\mathbf{B}=\mu_0 nI(t)\, \mathbf{i}$$ Now, by Faraday's Law: $$\oint \mathbf{E} \cdot \mathrm{d} \mathbf{s}=-\frac{\partial}{\partial t} \int \mathbf{B} \cdot \mathrm{d} \mathbf{A}$$ which gives us a relationship between the time derivative of the magnetic flux through a loop and the electric field along that loop. Taking a circular loop of radius $r \geq a$ concentric with the solenoid gives: $$2\pi r E=-\frac{\partial}{\partial t} \int \mu_0 n I(t) \,\mathrm{d}A=-\mu_0 n I'(t) \pi a^2$$ $$\Rightarrow E=-\frac{1}{2}\mu_0 n a^2~ \frac{I'(t)}{r}$$ where $I'(t)=\partial I / \partial t$, and $E$ is directed along the closed loop. This induced field is non-conservative: its line integral around the closed loop is nonzero, so it cannot be written as the gradient of a potential.
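The relationship above is easy to evaluate numerically. Below is a minimal sketch (the solenoid parameters are made-up illustrative values, not from the text) that computes the induced field both inside and outside the winding; inside, only the flux through $\pi r^2$ is enclosed, so $E \propto r$, while outside $E \propto 1/r$ as derived above.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability in T*m/A

def induced_E(r, a, n, dI_dt):
    """Magnitude of the azimuthal electric field at distance r from the
    axis of a long solenoid (radius a, turn density n, current slope dI/dt),
    from Faraday's law: 2*pi*r*E = d(flux)/dt."""
    if r <= a:
        # inside the winding: enclosed flux is mu0 * n * I * pi * r^2
        return MU0 * n * abs(dI_dt) * r / 2.0
    # outside: the enclosed flux saturates at mu0 * n * I * pi * a^2
    return MU0 * n * abs(dI_dt) * a**2 / (2.0 * r)

# hypothetical numbers: a = 2 cm, n = 1000 turns/m, dI/dt = 100 A/s
E_surface = induced_E(0.02, 0.02, 1000, 100)
E_outside = induced_E(0.04, 0.02, 1000, 100)  # twice the radius, half the field
```

Note that $E$ is nonzero even at $r > a$, where $\mathbf{B}=0$: the field there is sourced by the changing enclosed flux, not by local charges, which is exactly why it forms closed loops.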
2018-09-11 04:29 Properties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ n$_{eq}$/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-μm-thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr β-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text 2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. 
Res., A 533 (2004) 442-453 2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron-doped silicon diodes irradiated with alpha particles. It has been shown that self-interstitial-related defects which are immobile even at room temperature can be activated by very low forward currents at liquid-nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576 2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in the form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363 2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) 
; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity (Super-LHC and Very-LHC, respectively), as well as the requirements for detectors under possible radiation-environment scenarios, are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities, where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material: it increases the leakage current of the detector, degrades the Signal/Noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys. 57 (2005) , no. 3, pp. 342-348 External link: RORPE 2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models are presented, for p-type and n-type silicon detectors respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976 2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) 
; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized to withstand a gamma irradiation dose of up to 50 Mrad (Si), which represents the ionizing radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365 2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very-high-luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast-hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors' radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
Why can we not have non-integer powers of fields in a QFT Lagrangian, e.g. $\phi^{5/2}$? Or if we wanted a charged $\phi^3$ theory, could we not have a $(\phi^*\phi)^{3/2}$ term? This is a property of renormalization group fixed points---in usual circumstances, with local Lagrangians, noninteger powers get smoothed out to integer powers. To understand qualitatively why, consider the simplest example of renormalization---the central limit theorem. One-dimensional fields: the central limit theorem. Suppose I have a quantity $X(t)$ which is determined by the sum of many random quantities over time. The quantities have some mean, which leads $X$ to drift up or down, but subtract out a linear quantity $X'(t) = X(t) - mt$ and fix $m$ by requiring that $X(t)$ have the same mean now as at a later time (in finance/probability language, make $X$ a martingale). Then all that is left are the fluctuations. It is well known that these fluctuations average out by the central limit theorem, so that $X$ at a later time $t+\epsilon$ is a Gaussian centered around the value of $X(t)$, as long as $\epsilon$ is large enough to include many independent quantities, which add up to a Gaussian. So the probability density of $X(t+\epsilon)$ is: $$ \rho(X(t+\epsilon)|X(t)) = e^{-a(X(t+\epsilon)-X(t))^2}$$ Further, the increments are independent at different times, so that the total probability of a path is the product of these independent increments: $$ P(X(t)) = e^{-\int \dot{X}^2}$$ where you need to choose the scaling of the fluctuations of $X$ appropriately with $\epsilon$ to get to the continuum limit (a trivial case of renormalization), and this $P(X)$ is the weight function in a path integral. Notice that you get an exactly quadratic Lagrangian without any fine-tuning. This is the generic situation. Further, the analytic continuation of this is the particle action for nonrelativistic quantum mechanics, and if you reinterpret time as proper time, this quadratic action gives the relativistic propagator as well. 
If you make a Lagrangian which has the wrong power, not two, then it will renormalize to the quadratic Lagrangian under very general conditions. For instance, suppose $$ P(X(t+\epsilon)|X(t) ) = e^{ -|X(t+\epsilon)-X(t)|^{16.7} }$$ This leads to a boxy short-scale motion, but if you average over a few $\epsilon$'s, you recover a quadratic Lagrangian again. To see why, there is a useful heuristic. Suppose I add a term to the action of the form $$ \dot X^2 + |\dot X|^n $$ If I define the scale of fluctuations in $X$ by setting the coefficient of the $\dot X^2$ term to 1 (meaning that I normalize the scale of $X$ by the random-walk fluctuations left over when the mean drift is zero), then $X$ has scale dimension $-1/2$. The new term, integrated in time, has scale dimension which is determined by dimensional analysis to be $-n/2 + n - 1 = n/2 - 1$, so its coefficient has scale dimension $1 - n/2$, which is negative for $n>2$. This is obvious, because the $n=2$ term sets the scaling, so any higher power must be more negative. The scale dimension tells you how important the perturbation is at long scales, because it determines how fast the correlations in the quantity fall off. So for integer $n>2$, all the correlations disappear at long distances, and you are free to conclude that the theory is perfectly Brownian at long distances. This is true for integer $n$. For noninteger $n$, there are subtleties. If you have exponentials of powers, the resulting distribution for $X(t+\epsilon)$ always has a finite variance, and you always recover the central limit theorem at long distances. But if you make the distribution Levy, you can get new central-limit fixed points. These are not exponentials of fractional powers, but the distributions themselves have fractional power tails (the Lagrangian has logarithms of fractional powers). Since there is no rigorous mathematical theory of quantum fields, these heuristics have to be the guide. 
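The averaging-out of the "wrong power" is easy to see numerically. The sketch below (illustrative, not from the text) draws increments from the boxy density $\propto e^{-|x|^{16.7}}$ by rejection sampling, sums a hundred of them per coarse-grained step, and checks that the sum's skewness and excess kurtosis are near zero, i.e. that the coarse-grained walk is Gaussian to good accuracy, as the central limit theorem demands.

```python
import math
import random

random.seed(0)

def boxy_step():
    """Draw one increment from the density proportional to exp(-|x|**16.7)
    (nearly uniform on [-1, 1] with very sharp edges) by rejection sampling:
    propose uniformly on [-1.5, 1.5], accept with probability exp(-|x|**16.7)."""
    while True:
        x = random.uniform(-1.5, 1.5)
        if random.random() < math.exp(-abs(x) ** 16.7):
            return x

def coarse_step(n=100):
    """One coarse-grained increment: the sum of n microscopic boxy steps."""
    return sum(boxy_step() for _ in range(n))

samples = [coarse_step() for _ in range(2000)]
m = sum(samples) / len(samples)
var = sum((s - m) ** 2 for s in samples) / len(samples)
skew = sum((s - m) ** 3 for s in samples) / len(samples) / var ** 1.5
ex_kurt = sum((s - m) ** 4 for s in samples) / len(samples) / var ** 2 - 3.0
# both statistics come out near 0, the Gaussian values
```

The residual excess kurtosis of the coarse-grained step is suppressed by a factor of $n$ relative to the microscopic step, which is the renormalization-toward-quadratic statement in its most elementary form.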
You can see them work with lattice models, so they are certainly correct, but proving them is just out of reach of current mathematical methods. Free-field Lagrangians are (free) central-limit fixed points. The central limit points are given by free field Lagrangians. These are quadratic, so they are central-limit stable. If you average a free field at short distances to get a longer-distance description, you get the same free field, except that you can recover certain symmetries, like rotational or Lorentz invariance, which might not be there originally. But ignore this. The fundamental renormalization criterion is that you start from a renormalization fixed point, and if you want interactions, you perturb away from this free fixed point by adding nonlinearities. These fixed points are not all stable to nonlinear interactions: they get perturbed by generic polynomial interactions. If you start with a scalar field $$\int \left[(\partial\phi)^2 + P(\phi)\right] d^dx$$ the scale dimension of $\phi$ is $(d-2)/2$ (zero in 2d, $1/2$ in 3d, $1$ in 4d), which is found by dimensional analysis. This dimensional analysis assumes that the coefficient of the gradient term is 1, for exactly the same reason as in 1d: the coefficient of the gradient term is analogous to the coefficient of the time-derivative term in Brownian motion---setting it to one normalizes the fluctuations of the field to be those of the free field at long distances. You should be confident that any free field action which is reasonably close to this one will converge to this when you look at it at long distances. This perhaps requires a few discrete symmetries to ensure that you recover rotational invariance, but the intuition is that it is a central limit theorem again. 
With this scaling, higher-derivative terms have negative scale dimension, so they should disappear when you look at long distances---their contribution only alters the long-distance central-limit fluctuations, and this just changes the normalization of the field. Each polynomial interaction also has a scale dimension, which gives you a first-order view of how important it is. You can check that in 4d, the naive scale dimension of the coupling of a term of the form $\phi^n$ is $4-n$: positive only for $n=1,2,3$, zero for $n=4$, and negative for higher $n$. This means that there are four coefficients which alter the fluctuations at long distances: the linear, quadratic, cubic and quartic terms. The linear term is analogous to the drift in the Brownian motion, and it can be absorbed into a field redefinition. The quadratic term is the mass, and must be tuned close to zero to approach a continuum limit. The cubic term is a relevant interaction, and the quartic term is a marginal interaction. The space of scalar theories is defined by the quadratic/cubic/quartic couplings. This is not a theorem, but it should be, and it will be one day. For now, it suffices to say that if you consider such theories, and you look at them at longer distance scales, these coefficients are the only ones that blow up or stay stable. For any other polynomial, the coefficients fall away to zero (I am neglecting polynomial terms of the form $\phi|\nabla\phi|$, or terms which break rotational invariance but have positive scale dimension). If you impose $\phi \to -\phi$ symmetry, cubic terms are forbidden, and you only get quadratic/quartic terms. I will assume your model treats positive and negative values symmetrically from now on, although the general case is just slightly more complicated. The expectation is that adding a term of the form $|\phi|^{3/2}$ should renormalize at long distances to a quadratic/quartic coupling of some sort. 
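The power counting here is one line of arithmetic: $\phi$ has scale dimension $(d-2)/2$, so the coupling in front of a $\phi^n$ term has scale dimension $d - n(d-2)/2$. A minimal sketch:

```python
def phi_scale_dim(d):
    """Scale dimension of a free scalar field in d dimensions."""
    return (d - 2) / 2

def coupling_dim(n, d):
    """Scale dimension of the coupling of a phi^n interaction:
    d from the measure, minus n times the field dimension."""
    return d - n * phi_scale_dim(d)

# In 4d: relevant (positive) for n = 1, 2, 3; marginal for n = 4;
# irrelevant (negative) for n >= 5.
dims_4d = {n: coupling_dim(n, 4) for n in range(1, 7)}
```

As a consistency check, `coupling_dim(n, 2)` is 2 for every $n$, reflecting the fact that the field is dimensionless in two dimensions, where the argument breaks down.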
The reason is that the short-distance fluctuations of the field are wild, so the $|\phi|^{3/2}$ will get averaged over many neighbors, and if you just look at the correlations of fields at relatively long distances, you will see an RG fixed point; we know the polynomial ones, and we conjecture that they exhaust the space. If central-limit analogies are not persuasive, one can justify this conjecture using the particle-path formulation of field theory. Each polynomial term is a particle point-interaction of $k$ particles with each other, and we assume that at long distances, the particles are far apart and free. Whatever the short-distance interaction, even if it is a complicated thing, at long distances it should look like the scattering of asymptotic particles: $2\to2$, $2\to3$, $1\to2$, and these are all polynomial terms. If there is a complicated superposition of many-particle states which looks like a $2\to3.5$ interaction, at long distances you expect the particles to separate, and you recover a polynomial interaction. None of this is rigorous, and it is very likely that there are counterexamples. In two dimensions, this is completely false, because the scale dimension of $\phi$ is zero, so every function of $\phi$ is as good as any other. The condition of being an RG fixed point can be fruitfully analyzed by assuming conformal invariance, and then there is a whole world of examples which differ from the higher-dimensional intuition. But in 2d (one space, one time dimension) particles don't separate with time, they move together forever, so the idea that you should recover a decomposed scattering picture of interactions at long distances fails. Fine tunings. The scalar example above required fine-tuning of the quadratic term to a special value to reach a nontrivially fluctuating limit, because there were terms with positive mass dimension. This is a problem for physics, because tuning things precisely is not physically plausible. 
The systems which require no fine-tuning, and no space-time symmetry beyond Lorentz invariance, exclude scalars: they contain only chiral fermions and spin-1 gauge fields, for which the fixed points are self-tuning---there are no terms with positive scale dimension for these guys. The standard model is built entirely out of these, with one scalar Higgs field which must be fine-tuned. If you allow supersymmetry, you get a whole world of fluctuating scalar limits which are protected. These theories are studied widely. Fractional powers. It is possible to make Levy fields in higher dimensions by having a fractional power propagator. Such Levy fields have a Lagrangian with fractional powers of momentum, but not fractional powers of the fields. As far as I know, these nonlocal fields are not much studied in the literature, but they appear in a formal way (no physics) in the program of "analytic regularization" of the 1960s, which became dimensional regularization in the 1970s, and in Banks-Zaks fixed points or "unparticle" theories. There is a recent paper of Haba which analyzes these theories. There is no full classification of renormalization group fixed points, and it is likely that there are many more that we have missed. But these are not likely to be Lorentz-invariant theories. Once you leave the world of unitary quantum theories, you can get statistical field theories with all sorts of potentials. In particular, the Lifshitz statistical field theory $$ \int \left[|\nabla^2\phi|^2 + Z|\nabla\phi|^2 + V(\phi)\right] d^4x$$ gives a dimensionless $\phi$, and should be as rich a statistical theory in 4d, for choices of $V$, as conformal theories are in 2d. But this is not a unitary theory, and it has been studied only in limited ways. Near eight dimensions it has an $\epsilon$ expansion, which was worked out by Mukamel in the late 1970s and which is very close to the normal scalar $\epsilon$ expansion. But in 4d, where it is most interesting, nothing is known. 
Even studying this numerically is a challenge, because tuning $Z$ to zero requires a lot of work. These types of Lifshitz points are in vogue right now, because Horava has speculatively connected theories of this sort to quantum gravity. But they are interesting in their own right, even in the nonunitary, purely statistical case, because every symmetric renormalization group fixed point is mathematically special and should be understood exhaustively.
Would a Dyson sphere make a red dwarf appear to be a brown dwarf? Would it disguise a star enough for it to be misidentified as a different size of object? I'm just wondering if it could be possible that some Dyson spheres are out there drifting around, camouflaged as rather innocuous and innocent stars. No, it would not. Temperature probably isn't an issue, but a Dyson sphere wouldn't show the proper spectral lines. The best-case scenario. This site gives the formula for the temperature of a Dyson sphere as $$T=\left( \frac{E}{4 \pi r^2 \eta\sigma} \right)^{\frac{1}{4}}$$ where $E$ is the star's energy output, $r$ is the Dyson sphere's radius, $\eta$ is the emissivity and $\sigma$ is the Stefan-Boltzmann constant. The Stefan-Boltzmann law says that the energy output (luminosity, $L$) of a star of radius $R$ and surface temperature $T_*$ is $$L=4 \pi \sigma R^2 T_*^4$$ The energy output is $E=L$, so substituting this into the first expression gives $$T=\left( \frac{4 \pi \sigma R^2 T_*^4}{4 \pi r^2 \eta\sigma} \right)^{\frac{1}{4}}$$ which simplifies to $$T=\left( \frac{R^2 T_*^4}{r^2\eta} \right)^{\frac{1}{4}}=T_*\left( \frac{R^2}{r^2\eta} \right)^{\frac{1}{4}}$$ Wikipedia gives the emissivity of concrete - my Dyson-sphere-building material of choice - as $0.91$. Let's say the Dyson sphere has a radius of $1.5$ times that of the star. That gives me $$T=T_*(1.5^2 \times 0.91)^{-\frac{1}{4}} \approx 0.836 T_*$$ Wikipedia says that the temperature of a red dwarf could be as low as 2300 K, and that a brown dwarf could have a temperature of about 1900 K - $0.826$ times the temperature of a red dwarf, and so in the acceptable range for our Dyson sphere. A more realistic radius. A more commonly used radius is $r\approx1\text{ AU}$ - the distance from Earth to the Sun. For a Sun-like star this gives $r=215R$ and, when substituted in, $T=0.07T_*$, a much lower value. Interestingly, this fits with previous results. Slysh (1985) looked at things from the perspective of thermodynamic efficiency. 
The efficiency, $\eta_T$, is given by $$\eta_T=1-\frac{T}{T_*}$$ It should be expected, at best, that $\eta_T\approx0.95$, so we get $T=0.05T_*$ - pretty close to our result above. As Serban Tanasa rightfully pointed out, there are some problems with concrete. Steel or iron would be a better choice. Their emissivities are given here: $$ \begin{array}{|c|c|}\hline \text{Material} & \text{Emissivity}\\\hline \text{Concrete} & 0.81\\\hline \text{Cement} & 0.54\\\hline \text{Galvanized steel} & 0.88\\\hline \text{Iron} & 0.87\text{-}0.95\\\hline \end{array} $$ Note the lower value for concrete than the one I used above. The difference in values turns out to have little effect. At any rate, if we use iron and choose the lower limit for $\eta$, we get $T=0.071T_*$ - essentially the same as above. Let's do some recalculations, using both the derivation from scratch and Slysh's results. We'll use a number of stars: $$ \begin{array}{|c|c|c|c|c|}\hline \text{Star} & \text{Spectral type} & T_*\text{ (K)} & T\text{ (K)}\text{ (via emissivity)} & T\text{ (K)}\text{ (Slysh)}\\\hline \text{Zeta Puppis} & \text{O4} & 40000 & 2840 & 2000\\\hline \text{Eta Aurigae} & \text{B3} & 17200 & 1220 & 860\\\hline \text{Fomalhaut} & \text{A3} & 8590 & 610 & 430\\\hline \text{Tau Boötis} & \text{F6} & 6360 & 450 & 320\\\hline \text{Sun} & \text{G2} & 5770 & 410 & 290\\\hline \text{Alpha Centauri B} & \text{K1} & 5260 & 370 & 260\\\hline \text{Gliese 581} & \text{M3} & 3480 & 250 & 170\\\hline \end{array} $$ Here, I assume $r=1\text{ AU}$ and $\eta=0.87$. These temperatures are reasonable values. If we accept a lower temperature limit of $300\text{-}400\text{ K}$ for a brown dwarf, Slysh's rule lets us choose stars roughly as hot as the Sun, or hotter. The emissivity calculations let us choose, in general, any star hotter than a red dwarf. From a temperature perspective alone, there shouldn't be serious issues. 
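The numbers in the table follow directly from $T = T_*(R^2/(r^2\eta))^{1/4}$ and from Slysh's $T = (1-\eta_T)T_*$. A quick sketch reproducing the Sun's row (the table rounds to the nearest 10 K):

```python
def shell_temperature(T_star, r_over_R, emissivity):
    """Dyson-shell temperature from T = T_* * (R^2 / (r^2 * eta))**0.25,
    with the shell radius given as a multiple of the stellar radius."""
    return T_star * (r_over_R ** 2 * emissivity) ** -0.25

def slysh_temperature(T_star, eta_T=0.95):
    """Slysh's thermodynamic-efficiency estimate: T = (1 - eta_T) * T_*."""
    return (1.0 - eta_T) * T_star

# Sun: T_* = 5770 K, shell at 1 AU (r = 215 R), iron with eta = 0.87
T_sun_shell = shell_temperature(5770, 215, 0.87)   # about 410 K
T_sun_slysh = slysh_temperature(5770)              # about 290 K
```

The same two functions reproduce the other rows of the table when fed the listed surface temperatures.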
The spectral-line problem. There have been questions as to whether or not the emission spectrum of a Dyson sphere would match that of a brown dwarf. It is certainly the case that the peak wavelengths would match those of a brown dwarf, with most light radiated in the infrared. In other words, if you looked at a Dyson sphere and a brown dwarf with an infrared telescope, you would see two similar sources. If you measured the emission lines, though, you would definitely see different materials in the two objects - there's no way around that. And yes, the radius of the Dyson sphere would be much larger than that of a red dwarf - so certainly larger than that of a brown dwarf, as pointed out by JDlugosz. There are a number of lines you'd expect to see in a brown dwarf's spectrum. Not all of them are necessarily going to be present, but the absence of all of them in the spectrum of a Dyson sphere is going to raise some red flags. That's your main problem. Thanks to all those who commented and pointed out inaccuracies and errors; the answer is the better for that. No. A Dyson sphere would emit something closely matching black-body radiation. A star, while also emitting something close to black-body radiation, has tell-tale spikes in its spectrum, visible when the Sun's spectrum is compared to its ideal black-body spectrum. Brown and red dwarfs have their own "fingerprint" signatures, which differ from those of both the Sun and an ideal black body. This fingerprint is the first thing astronomers look at, so I would not expect them to be fooled for long. I don't think so, because a Dyson sphere would not have the same emission spectrum as a star. Consider two cases: we can have a translucent Dyson sphere that lets out some of the light of the star, or we can have a Dyson sphere that is opaque and emits light as black-body radiation due to being heated by the star. 
The light initially emitted by the star will have an emission spectrum that depends on its temperature. Elements which are either fully ionized in a star, or which are too cold to absorb energy, will not absorb light. Based on this, we can identify the temperature of a star not by its luminosity, but by its emission spectrum. Now consider the light emitted by the Dyson sphere. If it emits as a black body, it will not have the same emission spectrum as the star unless the Dyson sphere has the same elemental makeup as the star in question. Since stars are made up mostly of gases, this would be difficult to achieve. If it emits through transparency, it will still have the same emission spectrum as a red dwarf, but appear less luminous. Most materials also have a transparency that varies with wavelength, so we'll see the spectrum of the red dwarf reduced by different amounts at different frequencies, depending on the material of the sphere. A search for Dyson spheres has actually been carried out under a similar theoretical framework. Under the assumption that most terrestrial objects radiate mostly in the infrared, astronomers searched for stars whose spectra were shifted further towards that part of the spectrum than the stars would otherwise be expected to emit. The search did not find anything that looked like even a partial Dyson sphere. Even if we manage to get our Dyson sphere to emit the same spectrum as a brown dwarf, though, it will still appear too big and too massive to be a brown dwarf. Brown dwarfs are smaller than red dwarfs, and a Dyson sphere would need to be significantly bigger than a red dwarf. Not an expert on black-body radiation, so you might get a more competent answer on (Astro)Physics, but conceptually, if you have a matrioshka system, where each layer captures and uses the radiation of the more inward layer, you can bring the black-body radiation level down to an arbitrarily low level $\geq$ CMB. 
The only "disguise" issue would be that brown dwarfs are generally capped around $80 M_J$ (Jupiter masses), while red dwarfs are generally at $[0.1-0.5] M_\odot = [100-500] M_J$. There might be a small overlap at the lowest of the low red dwarfs. But generally, your apparent stellar radii might mismatch, and the orbits of any remaining planets would be anomalous upon closer inspection. First - brown dwarfs are small. A Dyson sphere for our Solar System would have to be about 2 AU in diameter to give Earth-like incoming solar radiation. A Dyson sphere would be BIG. The mass of the system would be large - the sun itself plus the mass of the sphere - so in a double star it would look very different from a light brown dwarf. The Dyson sphere, in the end, would have to re-radiate all the incoming solar energy out through the shell as heat, so the total luminosity would be equivalent to that of the star inside, not the small amount reported. If you trapped the heat in, you'd bake your sphere until it glowed on its own. EDIT: I just thought of another test. A Dyson sphere, due to its large size, would rotate very slowly if at all. A planet or sun rotates about its axis in hours or days. This speed difference between the limbs of the object splits the spectral lines, and the splitting can be seen. So a visible Dyson sphere would look anomalous due to its slow rotation. I'd think that to get a sphere to revolve about its axis in months or less would require unreasonably strong materials. Look on YouTube for the weekly SETI seminars. Not too long ago they discussed exactly that, and what instruments are needed to be able to tell if that were the case! I seem to recall that it's not brown dwarfs that it looks like (a brown dwarf is only just larger than Jupiter) but some types of dusty systems or star formation. 
Models of spectra show that a particular spectral band, not distinguished in current readings, would show the difference between Dyson spheres and imposters. Most of the answers above assume a shell of material located at a radius around a star which would support Earth-like habitats. However, colonization by robotic life would produce a significantly different architecture. The other respondents also assume that a Dyson sphere (or Dyson swarm, rather) would be orbiting a luminous star - if, instead, a swarm were harvesting material from a Jupiter-like planet, it might be dim enough to go unnoticed. If I were a space-faring artificial life form, I would prefer to orbit and mine a Jupiter: lower gravity would make siphoning gas much simpler, and there is no risk of a nova (with a star, loss of gases would lead to cessation of fusion at its core, followed by collapse, then explosion as fusion re-ignites...). We do not have the sophistication to detect cool, wayward Jupiters, so this possibility may fit your goal of a 'hidden Dyson'. For example, see the recently discovered super-magnetic Jupiter: https://phys.org/news/2018-08-vla-extrasolar-planetary-mass-magnetic-powerhouse.html
Basic Operations on Euclidean n-Space Recall from the Euclidean n-Space page that for each positive integer $n$, the Euclidean $n$-space $\mathbb{R}^n$ is the set of all points $\mathbf{x} = (x_1, x_2, ..., x_n)$. In a moment we will look at some operations defined on Euclidean $n$-space that the reader should already be familiar with. Before we do though, the reader should note that all of the operations defined below are built componentwise from the operations $+$ of addition and $\cdot$ of multiplication of the real numbers, and so inherit their properties from the field axioms of the reals. Definition: If $\mathbf{x} = (x_1, x_2, ..., x_n), \mathbf{y} = (y_1, y_2, ..., y_n) \in \mathbb{R}^n$ then we define Equality by $\mathbf{x} = \mathbf{y}$ if and only if $x_k = y_k$ for all $k \in \{ 1, 2, ..., n \}$. For example, if $\mathbf{x} = (1, 4, 7)$ and $\mathbf{y} = (1, 3, 7)$ then $\mathbf{x} \neq \mathbf{y}$ since $4 \neq 3$. Definition: If $\mathbf{x} = (x_1, x_2, ..., x_n), \mathbf{y} = (y_1, y_2, ..., y_n) \in \mathbb{R}^n$ then Addition is defined to be $\mathbf{x} + \mathbf{y} = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n)$ and Subtraction is defined to be $\mathbf{x} - \mathbf{y} = (x_1 - y_1, x_2 - y_2, ..., x_n - y_n)$. For example, consider the points $\mathbf{x} = (1, 4, 2, 6), \mathbf{y} = (3, -2, 0.5, \pi) \in \mathbb{R}^4$. Then $\mathbf{x} + \mathbf{y} = (4, 2, 2.5, 6 + \pi)$, and furthermore we have that $\mathbf{x} - \mathbf{y} = (-2, 6, 1.5, 6 - \pi)$. Note that $\mathbf{x} + \mathbf{y} = \mathbf{y} + \mathbf{x}$ in general, since addition of real numbers is commutative, as we are already familiar with in the case when $n = 1$. Definition: If $\mathbf{x} = (x_1, x_2, ..., x_n) \in \mathbb{R}^n$ then Scalar Multiplication by the scalar $a \in \mathbb{R}$ is defined to be $a \mathbf{x} = a(x_1, x_2, ..., x_n) = (ax_1, ax_2, ..., ax_n)$. For example, consider the point $\mathbf{x} = (1, 2, 3, 4, 5) \in \mathbb{R}^5$ and $a = 2 \in \mathbb{R}$. Then $a\mathbf{x} = 2(1, 2, 3, 4, 5) = (2, 4, 6, 8, 10)$.
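The componentwise definitions above translate directly into code. A minimal sketch, with tuples standing in for points of $\mathbb{R}^n$:

```python
def vec_add(x, y):
    """Componentwise addition in R^n."""
    assert len(x) == len(y)
    return tuple(a + b for a, b in zip(x, y))

def vec_sub(x, y):
    """Componentwise subtraction in R^n."""
    assert len(x) == len(y)
    return tuple(a - b for a, b in zip(x, y))

def scalar_mul(a, x):
    """Scalar multiplication: multiply every component by a."""
    return tuple(a * c for c in x)

x = (1, 4, 2, 6)
y = (3, -2, 0.5, 2)
# addition in R^n is commutative because addition of reals is:
assert vec_add(x, y) == vec_add(y, x)
```

Equality of points is just tuple equality, which compares component by component, exactly as in the definition.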
Geometric Series Definition: A Geometric Series is a series of the form $\sum_{n=1}^{\infty} ar^{n-1} = a + ar + ar^2 + ar^3 + ...$ whose $n^{\mathrm{th}}$ term can be obtained by the formula $a_n = ar^{n-1}$. The value $r$ is called the Common Ratio of the geometric series since $\frac{a_{n+1}}{a_{n}} = \frac{ar^n}{ar^{n-1}} = r$ for any $n \in \mathbb{N}$. The $n^{\mathrm{th}}$ partial sum of a geometric series is $s_n = a + ar + ar^2 + ... + ar^{n-1} = \frac{a(1 - r^n)}{1 - r}$. One such example of a geometric series is $\sum_{n=1}^{\infty} 2\left (\frac{1}{3} \right )^{n-1} = 2 + \frac{2}{3} + \frac{2}{9} + \frac{2}{27} + ...$. Before we begin examining geometric series, let's first confirm that the $n^{\mathrm{th}}$ partial sum of a geometric series can be given by the formula $s_n = \frac{a(1 - r^n)}{1 - r}$, which we asserted in the definition above. Lemma 1: The $n^{\mathrm{th}}$ partial sum of a geometric series with common ratio $r \neq 1$ is given by $s_n = \frac{a(1 - r^n)}{1 - r}$. We will show two proofs of Lemma 1. The first proof is a simple direct proof, while the second proof uses the principle of mathematical induction. Proof 1 of Lemma 1: Let $s_n = a + ar + ar^2 + ... + ar^{n-1}$ be the $n^{\mathrm{th}}$ partial sum of the geometric series $\sum_{n=1}^{\infty} ar^{n-1}$. Now multiply this equation by the common ratio to get $rs_n = ar + ar^2 + ... + ar^{n-1} + ar^n$. Subtracting these two equations we get that: \begin{align} \quad s_n - rs_n = (a + ar + ar^2 + ... + ar^{n-1}) - (ar + ar^2 + ... + ar^{n-1} + ar^n) \\ \quad (1-r)s_n = a - ar^n \\ \quad s_n = \frac{a(1 - r^n)}{1 - r} \blacksquare \end{align} Proof 2 of Lemma 1: We note that the $n^{\mathrm{th}}$ partial sum of a geometric series can also be obtained by the formula $s_n = a + ar + ar^2 + ... + ar^{n-1}$. Therefore we want to show that $a + ar + ar^2 + ... + ar^{n-1} = \frac{a(1 - r^n)}{1 - r}$.
We will first factor out an $a$ so that it suffices to prove that $1 + r + r^2 + ... + r^{n-1} = \frac{1 - r^n}{1 - r}$. We will show this by induction. Let $P(n)$ be the statement that $1 + r + r^2 + ... + r^{n-1} = \frac{1 - r^n}{1 - r}$. First consider the base step when $n = 1$. Then $1 = \frac{1 - r}{1 - r} = 1$ and so $P(1)$ is true. Now suppose that for some $k \in \mathbb{N}$ the statement $P(k) : 1 + r + r^2 + ... + r^{k-1} = \frac{1 - r^k}{1 - r}$ is true. We want to show that the truth of $P(k)$ implies the truth of the statement $P(k+1) : 1 + r + r^2 + ... + r^{k-1} + r^k = \frac{1 - r^{k+1}}{1 - r}$. \begin{align} \underbrace{1 + r + r^2 + ... + r^{k-1}} + r^k \\ \overset{IH} = \frac{1 - r^k}{1 - r} + r^k \\ = \frac{1 - r^k + (1 - r)r^k}{1 - r} \\ = \frac{1 - r^k + r^k - r^{k+1}}{1 - r} \\ = \frac{1 - r^{k+1}}{1 - r} \end{align} Therefore $P(k+1)$ is true, so for all $n \in \mathbb{N}$, $P(n)$ is true. Now if we multiply all terms by $a$ we get that $a + ar + ar^2 + ... + ar^{n-1} = \frac{a(1 - r^n)}{1 - r}$, and our proof is complete $\blacksquare$ Convergence and Divergence of Geometric Series We will now look at some very important properties of geometric series regarding whether they converge or diverge, which will allow us to compute the sums of geometric series. Lemma 2: The geometric series $\sum_{n=1}^{\infty} ar^{n-1}$ converges to $0$ if $a = 0$. Proof: Suppose that $a = 0$ and let $r$ be any common ratio. Then the sequence $\{ ar^{n-1} \} = \{ 0, 0, 0, ... \}$, and it is obvious that $\sum_{n=1}^{\infty} ar^{n-1}$ converges to $0$ since this is simply the zero sequence. $\blacksquare$ Lemma 3: The geometric series $\sum_{n=1}^{\infty} ar^{n-1}$ converges to $\frac{a}{1 - r}$ if $\mid r \mid < 1$. Proof: Suppose that $\mid r \mid < 1$. Recall that a series is said to be convergent if the sequence of partial sums $\{ s_n \}$ converges.
We note that the sequence of partial sums is $\left \{ \frac{a(1 - r^n)}{1 - r} \right \}$ by Lemma 1, and we know that this sequence converges since $\mid r \mid < 1$: as $n \to \infty$, the numerator of the general term for the $n^{\mathrm{th}}$ partial sum $a(1 - r^n) \to a$ while the denominator stays fixed at $1 - r$. We now evaluate $\lim_{n \to \infty} \frac{a(1 - r^n)}{1 - r}$. \begin{align} \quad \lim_{n \to \infty} \frac{a(1 - r^n)}{1 - r} = \lim_{n \to \infty} \frac{a}{1 - r} - \lim_{n \to \infty} \frac{ar^n}{1 - r} = \frac{a}{1 - r} - \lim_{n \to \infty} \frac{ar^n}{1 - r} \end{align} Now since $\mid r \mid < 1$, we note that as $n \to \infty$, $ar^n \to 0$ and so $\lim_{n \to \infty} \frac{ar^n}{1 - r} = 0$, therefore: \begin{align} \lim_{n \to \infty} \frac{a(1 - r^n)}{1 - r} = \frac{a}{1 - r} \quad \blacksquare \end{align} Lemma 4: The geometric series $\sum_{n=1}^{\infty} ar^{n-1}$ diverges to positive infinity if $r ≥ 1$ and $a > 0$. Proof: Suppose that $r ≥ 1$ and $a > 0$. If $r = 1$ then $s_n = na \to \infty$. If $r > 1$ then both $1 - r^n < 0$ and $1 - r < 0$, so $\frac{1 - r^n}{1 - r} > 0$ and every term in the sequence of partial sums $\{ s_n \}$ is positive. Moreover, as $n \to \infty$, $r^n \to \infty$ and so $\frac{a(1 - r^n)}{1 - r} \to \infty$. Therefore $\lim_{n \to \infty} s_n = \infty$. $\blacksquare$ Lemma 5: The geometric series $\sum_{n=1}^{\infty} ar^{n-1}$ diverges to negative infinity if $r ≥ 1$ and $a < 0$. Proof: Suppose that $r ≥ 1$ and $a < 0$. If $r = 1$ then $s_n = na \to -\infty$. If $r > 1$ then once again $\frac{1 - r^n}{1 - r} > 0$, so since $a < 0$ every term in the sequence of partial sums $\{ s_n \}$ is negative, and as $n \to \infty$, $\frac{a(1 - r^n)}{1 - r} \to -\infty$. Therefore $\lim_{n \to \infty} s_n = -\infty$. $\blacksquare$ Lemma 6: The geometric series $\sum_{n=1}^{\infty} ar^{n-1}$ diverges if $r ≤ -1$ and $a \neq 0$. Proof: Suppose that $r ≤ -1$ and $a \neq 0$.
We note that for $r < -1$ the terms $\frac{1 - r^n}{1 - r}$ alternate in sign as $n$ increases, since the denominator $1 - r$ is always positive while the numerator $1 - r^n$ is positive when $n$ is odd and negative when $n$ is even. Furthermore $\mid 1 - r^n \mid$ is increasing as $n \to \infty$, so $\lim_{n \to \infty} s_n$ does not exist. If $r = -1$ then $s_n$ alternates between $a$ and $0$, so the limit again does not exist. $\blacksquare$
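The behaviour described in these lemmas is easy to check numerically. A short Python sketch (with illustrative values $a = 2$, $r = 1/3$) compares the directly computed partial sums with the closed form from Lemma 1 and with the limit $\frac{a}{1-r}$ from Lemma 3:

```python
def partial_sum(a, r, n):
    # n-th partial sum s_n = a + ar + ... + ar^(n-1), computed term by term
    return sum(a * r**k for k in range(n))

def partial_sum_formula(a, r, n):
    # closed form s_n = a(1 - r^n)/(1 - r), valid for r != 1
    return a * (1 - r**n) / (1 - r)

a, r = 2, 1/3
print(partial_sum(a, r, 50))           # close to a/(1 - r) = 3
print(partial_sum_formula(a, r, 50))   # agrees with the direct sum
```

For $|r| \geq 1$ the same `partial_sum` function shows the divergent behaviour of Lemmas 4 through 6 (growth without bound, or sign-alternating partial sums).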
Riemann Integrable Functions as Upper Functions We will now identify a large class of very important upper functions. In the following theorem we will see that Riemann integrable functions are also upper functions. Theorem 1: Let $f$ be a function defined on the closed and bounded interval $I = [a, b]$. Then if $f$ is bounded and if $f$ is continuous almost everywhere on $I$ then $f$ is an upper function on $I$ and furthermore $\displaystyle{\int_I f(x) \: dx = \int_a^b f(x) \: dx}$. Proof: For each $n \in \mathbb{N}$ denote by $P_n = \{ a = x_0, x_1, ..., x_{2^n} = b \} \in \mathscr{P}[a, b]$ the partition that subdivides $[a, b]$ into $2^n$ subintervals of equal length, that is, for every $n$, $x_0 = a$ and for every $k \in \{ 1, 2, ..., 2^n \}$: \begin{align} \quad x_k = a + k \left ( \frac{b - a}{2^n} \right ) \end{align} Then for every $n \in \mathbb{N}$, the next partition $P_{n+1}$ of this form can be obtained by equally subdividing the $2^n$ subintervals created by $P_n$ to obtain the $2^{n+1}$ subintervals created by $P_{n+1}$. Now since $f$ is bounded on the interval $[a, b]$ we have that $f$ is also bounded on any subinterval of $[a, b]$. For each fixed $n$ and each $k \in \{ 1, 2, ..., 2^n \}$ let: \begin{align} \quad m_k = \inf \{ f(x) : x \in [x_{k-1}, x_k] \} \end{align} Define the step function $f_n(x)$ as follows: \begin{align} \quad f_n(x) = \left\{\begin{matrix} f(a) & \mathrm{if} \: x = a \\ m_k & \mathrm{if} \: x \in (x_{k-1}, x_k], k \in \{ 1, 2, ..., 2^n\} \end{matrix}\right. \end{align} Then $(f_n(x))_{n=1}^{\infty}$ is a sequence of step functions. Every step function satisfies $f_n(x) \leq f(x)$, and since each $P_{n+1}$ refines $P_n$, the sequence $(f_n(x))_{n=1}^{\infty}$ is an increasing sequence of step functions. We need to show that $(f_n(x))_{n=1}^{\infty}$ converges to $f(x)$ almost everywhere on $I$. Let $x_0$ be any point of continuity of $f$.
Since $f$ is continuous at $x_0$ we have that for every $\epsilon > 0$ there exists a $\delta > 0$ such that if $\mid x - x_0 \mid < \delta$ then: \begin{align} \quad \mid f(x) - f(x_0) \mid < \epsilon \end{align} Now choose $N$ sufficiently large such that $\left ( \frac{b - a}{2^N} \right ) < \delta$. Then for $n \geq N$ we see that: \begin{align} \quad \left ( \frac{b - a}{2^n} \right ) \leq \left ( \frac{b - a}{2^N} \right ) < \delta \end{align} So for $n \geq N$ we have for the partitions $P_n$ that if $x \in (x_{k-1}, x_k]$ for some $k \in \{ 1, 2, ..., 2^n \}$ then $\mid x - x_0 \mid \leq x_k - x_{k-1} < \delta$ and so: \begin{align} \quad \mid f(x) - f(x_0) \mid < \epsilon \end{align} Hence $\mid m_k - f(x_0) \mid \leq \epsilon$. But we defined $f_n(x) = m_k$ for all $x \in (x_{k-1}, x_k]$, and so for all $n \geq N$ we see that $\mid f_n(x_0) - f(x_0) \mid \leq \epsilon$. Thus $\lim_{n \to \infty} f_n(x_0) = f(x_0)$ at every point of continuity $x_0 \in I$ of $f$. In other words, the sequence $(f_n(x))_{n=1}^{\infty}$ converges to $f$ at every point of continuity of $f$. But $f$ is continuous almost everywhere, which implies that $(f_n(x))_{n=1}^{\infty}$ converges to $f$ almost everywhere on $I$. We now show that $\displaystyle{\lim_{n \to \infty} \int_I f_n(x) \: dx}$ is finite. Note that for all $n \in \mathbb{N}$ and for $M$ any upper bound of $f$ on $I$: \begin{align} \quad \int_I f_n(x) \: dx = \sum_{k=1}^{2^n} m_k(x_k - x_{k-1}) \leq \sum_{k=1}^{2^n} M(x_k - x_{k-1}) = M \sum_{k=1}^{2^n} (x_k - x_{k-1}) = M(b - a) \end{align} Therefore the increasing sequence $\displaystyle{\left ( \int_I f_n(x) \: dx \right )_{n=1}^{\infty}}$ is bounded above and converges to a finite number.
Furthermore, if $L(P_n, f, x)$ denotes the lower Riemann-Stieltjes sum of $f$ associated with the partition $P_n$ then: \begin{align} \quad \int_I f(x) \: dx = \lim_{n \to \infty} \int_I f_n(x) \: dx = \lim_{n \to \infty} \sum_{k=1}^{2^n} m_k(x_k - x_{k-1}) = \lim_{n \to \infty} L(P_n, f, x) \end{align} We know that $f$ is Riemann integrable on $[a, b]$ since $f$ is continuous almost everywhere on $I$ and so the set of discontinuities of $f$ on $I$ has measure $0$. So by Riemann's condition $\displaystyle{ \underline{\int_a^b} f(x) \: dx = \int_a^b f(x) \: dx}$. But $\displaystyle{\lim_{n \to \infty} L(P_n, f, x) = \underline{\int_a^b} f(x) \: dx}$ since as $n \to \infty$, $P_n$ gets finer and $\| P_n \| \to 0$. This shows that: \begin{align} \quad \int_I f(x) \: dx = \int_a^b f(x) \: dx \quad \blacksquare \end{align}
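The dyadic construction in the proof can also be seen numerically. The sketch below is our illustration; the choice $f(x) = x^2$ on $[0, 1]$ is an assumption for the demo, picked increasing so the infimum on each subinterval sits at the left endpoint. It shows the lower sums over the partitions $P_n$ increasing toward $\int_0^1 x^2 \, dx = \frac{1}{3}$:

```python
def lower_sum(f, a, b, n):
    # Lower Riemann sum of f over [a, b] using the dyadic partition P_n
    # with 2^n equal subintervals. We assume f is increasing, so the
    # infimum on each subinterval is attained at its left endpoint.
    N = 2**n
    h = (b - a) / N
    return sum(f(a + k * h) * h for k in range(N))

f = lambda x: x * x
sums = [lower_sum(f, 0.0, 1.0, n) for n in range(1, 12)]
print(sums)  # an increasing sequence approaching 1/3
```

Each refinement only raises the lower sum, mirroring the increasing sequence of step-function integrals in the proof.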
$ \newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}}$ Suppose we have a $CW$-complex $X$. All the self-homotopy equivalences of $X$ form a monoid; denote it by $G$. Question: is there any good way to construct another space $\widetilde X$, such that $\widetilde X$ is homotopy equivalent to $X$ and there exists a homomorphism from $G$ to the group $\widetilde G$ of all self-homeomorphisms of $\widetilde X$? A good way means that (besides the functoriality) for any homotopy between elements of $G$ we should have an isotopy between the corresponding elements of $\widetilde G$. Also, for every $f\in G$ the following diagram should be homotopy-commutative: $$ \begin{array}{c} X & \ra{f} & X \\ \da{e} & & \da{e} \\ \widetilde X & \ra{\widetilde f} & \widetilde X\end{array} $$ Here $e$ is a fixed homotopy equivalence between $X$ and $\widetilde X$. (I have one idea of how to change all the homotopy equivalences to homeomorphisms using the mapping telescope, but I don't know what to do with homotopies and isotopies.)
I have been trying to understand a more or less geometric derivation of the Lorentz transformation, and I'm getting stuck at one spot. The Wikipedia article for the Lorentz transformation for frames in standard configuration lists the following equations: $$x^{\prime} = \frac{x-vt}{\sqrt{1-\frac{v^2}{c^2}}}$$ $$y^{\prime} = y$$ $$z^{\prime} = z$$ $$t^{\prime} = \frac{t-(v/c^2)x}{\sqrt{1-\frac{v^2}{c^2}}}$$ I've been able to work everything out except for the $-(v/c^2)x$ term in the $t^{\prime}$ equation. I haven't seen any explanation for it, which makes me feel like I'm missing something simple. Where does this part of the equation come from? Shouldn't $t^{\prime} = \gamma \cdot t$? EDIT: OK, so I reviewed the idea I was using to derive the Lorentz factor and thus the transformation for $t^{\prime}$. Suppose you have the two frames I've described, and you have a light wave moving perpendicular to the x axis in the second ($\prime$) frame. Using basic trig with the diagram, you can derive: $$t^{\prime}=t\cdot\sqrt{1 - \frac{v^2}{c^2}}$$ Obviously this would contradict the transformation provided by Wikipedia. What step am I missing here? I don't really want a proof that I'm wrong or that the equation I've derived is incorrect - I'm already pretty convinced of that. What I would really like is an intuitive explanation as to why mine is invalid and how I would go about deriving the correct equation through similar means.
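One quick way to see that $t^{\prime} = \gamma t$ alone cannot be right is that the full transformation, with the $-(v/c^2)x$ term, is exactly what preserves the spacetime interval $x^2 - c^2 t^2$. A small numeric check (our illustration, in units with $c = 1$ and arbitrary sample values):

```python
import math

def lorentz(x, t, v, c=1.0):
    # standard-configuration boost quoted from the question
    g = 1.0 / math.sqrt(1.0 - v * v / (c * c))
    return g * (x - v * t), g * (t - (v / c**2) * x)

def interval(x, t, c=1.0):
    # the invariant quantity x^2 - c^2 t^2
    return x * x - c * c * t * t

x, t, v = 2.0, 1.5, 0.6
xp, tp = lorentz(x, t, v)
g = 1.0 / math.sqrt(1.0 - v * v)

print(interval(xp, tp), interval(x, t))   # equal: interval is preserved
print(interval(xp, g * t))                # t' = gamma*t breaks invariance
```

The mismatch in the last line is the symptom of dropping the $-(v/c^2)x$ term: time dilation $t' = \gamma t$ only holds for events at the clock's own position ($x = vt$), not in general.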
The scattering amplitude can be obtained in QFT from the $n$-point Green's function by the LSZ reduction formula. The $n$-point Green's functions are correlation functions of a string of fields: $$G(q_1,..,q_n)=\int dx_1...dx_n\,e^{-iq_1x_1}...e^{-iq_nx_n}\langle 0|\mathcal{T}\{A_1(x_1)...A_n(x_n)\}|0\rangle$$ Now, we can factorize the time-ordering using the theta function, giving terms like: $$\int dx_1...dx_n\,e^{-iq_1x_1}...e^{-iq_nx_n}\langle 0|\mathcal{T}\{A_1(x_1)...A_r(x_r)\}\mathcal{T}\{A_{r+1}(x_{r+1})...A_n(x_n)\}|0\rangle \times$$$$\times\theta(\min[x_1^0...x_r^0]-\max[x_{r+1}^0...x_n^0])$$ The idea is the very old one from quantum mechanics of inserting the identity operator $I$ in a given representation: $$I=\sum_{p,\sigma}|p,\,\sigma\rangle\langle p,\,\sigma|+...$$ where the first term is the one-particle state projector and $...$ are the multi-particle state projectors. Keeping just the one-particle terms, we have: $$\int dx_1...dx_n\,e^{-iq_1x_1}...e^{-iq_nx_n}\int d^3p\sum_\sigma\langle 0|\mathcal{T}\{A_1(x_1)...A_r(x_r)\}|p,\,\sigma\rangle\times$$$$\times\langle p,\,\sigma|\mathcal{T}\{A_{r+1}(x_{r+1})...A_n(x_n)\}|0\rangle \,\theta(\min[x_1^0...x_r^0]-\max[x_{r+1}^0...x_n^0])$$ We get a factorization of the correlation function into two pieces, connected just by the theta function.
To get rid of this we can use translation symmetry to write: $$\langle 0|\mathcal{T}\{A_1(x_1)...A_r(x_r)\}|p,\,\sigma\rangle=e^{ip.x_1}\langle 0|\mathcal{T}\{A_1(0)...A_r(y_r)\}|p,\,\sigma\rangle$$$$\langle p,\,\sigma|\mathcal{T}\{A_{r+1}(x_{r+1})...A_n(x_n)\}|0\rangle=e^{-ip.x_{r+1}}\langle p,\,\sigma|\mathcal{T}\{A_{r+1}(0)...A_n(y_{n})\}|0\rangle$$ and in these new variables the theta function becomes: $$\theta(x_1^0-x_{r+1}^0+\min[0...y_r^0]-\max[0...y_n^0])$$ Using the Fourier representation: $$\theta (\tau)=-\frac{1}{2\pi i}\int_{-\infty}^{+\infty}\frac{d\omega\, e^{-i\omega \tau}}{\omega + i\varepsilon}$$ we can now perform the integration over $x_1$, $x_{r+1}$ and $p$. Some Dirac deltas will show up enforcing conservation of momentum between the two blobs, and an extra delta enforcing $\omega$ to be equal to the energy transferred between the blobs minus the energy $E_p$ of the one-particle state. Then the pole $(\omega +i\varepsilon)^{-1}$ that comes from the theta function gives rise to a pole $(q^{0}-E_p+i\varepsilon)^{-1}$, where $q^0$ is the energy transferred between the blobs. Around the pole, we can make the replacement $(q^{0}-E_p+i\varepsilon)^{-1}\rightarrow 2E_p (q^2+m^2-i\varepsilon)^{-1}$, with $\vec{q}=\vec{p}$. The factor $2E_p$ is absorbed by the integrals to form a relativistic invariant measure. This is how the pole $(q^2+m^2-i\varepsilon)^{-1}$ shows up. Now let us look at the residue. After the LSZ reduction formula, the residue will be precisely the product of two new amplitudes: $$\lim_{q^2\rightarrow-m^2}(q^2+m^2-i\varepsilon)A(q_1,...,q_n)=A(q,q_2,...,q_r)\times A(q,q_{r+2},...,q_n)$$ where $q=q_1+...+q_r=-(q_{r+1}+...+q_{n})$. This means that we have a pole whenever the amplitudes $A(q,q_2,...,q_r)$ and $A(q,q_{r+2},...,q_n)$ are non-zero. For a more detailed explanation and calculation see Weinberg, QFT, volume 1, chapter 10.
In HA, remark 7.4.1.12, Lurie writes that the cotangent complex of $A\to B$ is the fiber of the multiplication map $B\otimes_A B\to B$. The more classical definition of TAQ is given by the indecomposables of the augmentation ideal of $B\otimes_A B$. Lurie's definition seems to bypass the step of taking indecomposables. What am I missing? Here's another confusion. HA 7.5.4.5 says that an étale map of connective $E_\infty$-algebras is TAQ-étale. On the other hand, 7.5.4.6 says that étale maps are THH-étale, i.e. the unit of THH is an equivalence (no connectivity hypothesis). But THH-étale implies TAQ-étale, so this would say that an étale map is TAQ-étale, with no connectivity hypothesis... which is not true. What is true is that THH-étale is a stronger condition than TAQ-étale. Let me retract what I wrote before and say that every THH-étale morphism is TAQ-étale (this is proved by Rognes in that paper), and for connective rings the two things coincide and coincide with the "strong" notion of étale. But in general neither THH-étale nor TAQ-étale morphisms need to be flat. I was recently going through these notes. However, I didn't understand the proof of Proposition 1.11 (page 8). Specifically the following: "But then using a compatibility of $e_1$ and $\epsilon_1$ we get that the composite natural transformation is $e_2$ as required." I don't understand how we can get $e_2$. Since I haven't understood this part I didn't go further. Can anyone help me here? @user170039 They're just using that the composition $\epsilon_1F\circ Fe_1$ is the identity, by the definition of adjoint equivalence. If you apply $G_2$ to that you see that the composition of the last two arrows is the identity, and so the whole composition coincides with the first arrow (i.e. $e_2$).
I've got the opportunity to play with a machine with some premium fonts installed. I ran some experiments with XeTeX and e.g. Palatino Linotype, which worked, but for some reason Cyrillic breaks with Garamond Premier Pro. The result may be downloaded here. UPDATE: It turns out the machine had two versions of Garamond Premier Pro installed, one of which apparently has faulty Cyrillic support even though it advertises Cyrillic in otfinfo -s. I disabled this version and everything now works fine.

\documentclass[a4paper,12pt]{article}
\usepackage[MnSymbol]{mathspec}
\usepackage{xunicode,xltxtra}
\setmainfont[Mapping=tex-text,Numbers={OldStyle},Ligatures={Common},Contextuals=Alternate]{Garamond Premier Pro}
\begin{document}
Queue stop acte effet shelfful shelf{\null}ful παιδεία Ὣ $\pi\alpha\phi\Omega$ Черникова ---
\end{document}
This is kind of an odd thought I had while reviewing some old statistics, and for some reason I can't seem to think of the answer. A continuous PDF tells us the density of observing values in any given range. Namely, if $X \sim N(\mu,\sigma^2)$, for example, then the probability that a realization falls between $a$ and $b$ is simply $\int_a^{b}\phi(x)dx$, where $\phi$ is the density of the $N(\mu,\sigma^2)$ distribution. When we think about doing an MLE estimate of a parameter, say of $\mu$, we write the joint density of, say, $N$ random variables $X_1 .. X_N$, differentiate the log-likelihood with respect to $\mu$, set it equal to 0 and solve for $\mu$. The interpretation often given is "given the data, which parameter makes this density function most plausible". The part that is bugging me is this: we have a density of $N$ r.v., and the probability that we get a particular realization, say our sample, is exactly 0. Why does it even make sense to maximize the joint density given our data (since again the probability of observing our actual sample is exactly 0)? The only rationalization I could come up with is that we want to make the PDF as peaked as possible around our observed sample, so that the integral in the region (and therefore the probability of observing stuff in this region) is highest.
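The mechanics can be made concrete with a small numeric sketch (illustrative values; treating $\sigma = 1$ as known is an assumption of the demo): maximizing the log of the joint density over $\mu$ picks out the sample mean for the normal model, even though the density assigns probability zero to the exact sample.

```python
import math

def normal_log_likelihood(mu, xs, sigma=1.0):
    # log of the joint density of an i.i.d. N(mu, sigma^2) sample at xs
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in xs)

xs = [1.2, 0.7, 1.9, 1.4, 0.8]   # a made-up sample

# crude grid search over candidate values of mu
grid = [i / 1000 for i in range(-2000, 4001)]
mu_hat = max(grid, key=lambda m: normal_log_likelihood(m, xs))

print(mu_hat, sum(xs) / len(xs))  # MLE matches the sample mean (up to grid step)
```

The height of the density at the sample is what the "as peaked as possible" intuition is tracking: densities, unlike probabilities of exact points, do discriminate between parameter values.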
I'm building a SHA-1 rainbow table to crack basic passwords (i.e. up to 10 characters, 0-9,a-z), but I can't seem to calculate that golden ratio that would give me the most coverage. What ratio of chain length to number of chains should be chosen? That choice is not really about coverage, but (mainly) about how much space you are willing to use. You should choose the number of chains based on how much space you can use, since each chain uses a constant amount independent of its length. Then you can choose the chain length to get the coverage you need, or based on the amount of computational effort you can spend on creating the table. Also, is there any way to estimate the coverage of the table? Yes. You will find a formula for the probability of success in the seminal paper by Philippe Oechslin (pdf): The probability of success within a single table of size $m × t$ is given by: $$P_{table} = 1 - \prod_{i=1}^{t} \left(1 - \frac{m_i}{N}\right)$$ where $m_1 = m$ and $m_{n+1} = N (1 - e^{-\frac{m_n}{N}})$. Note that in your case you will likely not be able to generate a rainbow table with good coverage unless you invest quite a bit of effort. Your search space is larger than $36^{10} \approx 2^{51.7}$, so to get even 50% coverage you would need to compute more than $2^{50}$ hashes, taking several years of CPU time with a typical desktop CPU. As long as your coverage is low you can get a decent approximation by assuming you have no collisions, i.e. just divide the product of chain count and chain length by the size of the search space.
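Oechslin's recurrence is simple to evaluate directly. A Python sketch (the table parameters below are illustrative, not a recommendation):

```python
import math

def table_success_probability(m, t, N):
    """Success probability of a single m x t rainbow table over a search
    space of size N, via Oechslin's recurrence:
        m_1 = m,  m_{n+1} = N * (1 - exp(-m_n / N)),
        P = 1 - prod_{i=1..t} (1 - m_i / N).
    """
    m_i = m
    prob_miss = 1.0
    for _ in range(t):
        prob_miss *= 1.0 - m_i / N
        m_i = N * (1.0 - math.exp(-m_i / N))
    return 1.0 - prob_miss

# e.g. 2^20 chains of length 1000 over a 2^40 search space
p = table_success_probability(2**20, 1000, 2**40)
print(p)  # low coverage, close to the no-collision estimate m*t/N
```

At low coverage the result stays close to the simple `m*t/N` estimate mentioned above; the recurrence only starts to matter once chains begin to merge.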
A Subgroup of Index a Prime $p$ of a Group of Order $p^n$ is Normal Problem 470 Let $G$ be a finite group of order $p^n$, where $p$ is a prime number and $n$ is a positive integer. Suppose that $H$ is a subgroup of $G$ with index $[G:H]=p$. Then prove that $H$ is a normal subgroup of $G$. (Michigan State University, Abstract Algebra Qualifying Exam) Proof. Let $G/H$ be the set of left cosets of $H$. Then the group $G$ acts on $G/H$ by left multiplication. This action induces the permutation representation homomorphism \[\phi: G\to S_{G/H},\] where $S_{G/H}$ is the symmetric group on $G/H$. For each $g\in G$, the map $\phi(g):G/H \to G/H$ is given by $x\mapsto gx$. By the first isomorphism theorem, we have \begin{align*} G/\ker(\phi) \cong \im(\phi) < S_{G/H}. \end{align*} This implies that the order $|G/\ker(\phi)|$ divides the order $|S_{G/H}|=p!$. Since \[|G/\ker(\phi)|=\frac{|G|}{|\ker(\phi)|}=\frac{p^n}{|\ker(\phi)|}\] and $p!$ contains only one factor of $p$, we must have either $|\ker(\phi)|=p^n$ or $|\ker(\phi)|=p^{n-1}$. Note that if $g\in \ker(\phi)$, then $\phi(g)=\id:G/H \to G/H$. This yields that $gH=H$, and hence $g\in H$. As a result, we have $\ker(\phi) \subset H$. Since the index of $H$ is $p$, the order of $H$ is $p^{n-1}$. Thus we conclude that $|\ker(\phi)|=p^{n-1}$ and \[\ker(\phi)=H.\] Since every kernel of a group homomorphism is a normal subgroup, the subgroup $H=\ker(\phi)$ is a normal subgroup of $G$.
A condition for the Hölder regularity of strong local minimizers of a nonlinear elastic energy in two dimensions Bevan, JJ (2017) A condition for the Hölder regularity of strong local minimizers of a nonlinear elastic energy in two dimensions. Archive for Rational Mechanics and Analysis. Abstract We prove the local Hölder continuity of strong local minimizers of the stored energy functional \[E(u)=\int_\Omega \lambda|\nabla u|^{2}+h(\det \nabla u) \,dx\] subject to a condition of `positive twist'. The latter turns out to be equivalent to requiring that $u$ maps circles to suitably star-shaped sets. The convex function $h(s)$ grows logarithmically as $s\to 0+$, linearly as $s \to +\infty$, and satisfies $h(s)=+\infty$ if $s \leq 0$. These properties encode a constitutive condition which ensures that material does not interpenetrate during a deformation and is one of the principal obstacles to proving the regularity of local or global minimizers. The main innovation is to prove that if a strong local minimizer has positive twist a.e. on a ball then a variational inequality holds and a Caccioppoli inequality can be derived from it. The claimed Hölder continuity then follows by adapting some well-known elliptic regularity theory.
Item Type: Article. Subjects: Mathematics. Divisions: Faculty of Engineering and Physical Sciences > Mathematics. Authors: Bevan, JJ. Date: 15 May 2017. DOI: 10.1007/s00205-017-1104-5. Copyright Disclaimer: The final publication will be available at Springer via http://link.springer.com/journal/205. Depositing User: Symplectic Elements. Date Deposited: 30 Oct 2015 09:25. Last Modified: 05 Jul 2019 16:15. URI: http://epubs.surrey.ac.uk/id/eprint/808817
Bancacy is an innovative and decentralized digital asset class that is establishing a new form of money. The ecosystem utilizes the BNY/XBNY cryptocurrencies for Asset Solidification, Investments and Passive Income, in aspiration to deliver fully independent and immutable digital money powered by the blockchain. Bancacy derives its nature from Hooke's law of physics: $$F = Kx.$$ Here \(F\) is the force required to extend or compress a spring by some distance \(x\), and it scales linearly with that distance; \(K\) is a constant factor characteristic of the spring, and \(x\) is the deformation of the spring. Unlike other cryptocurrencies, Bancacy's supply can "extend" or "compress" just like a spring: $$F = cd$$ where \(F\) is the force to "extend" or "compress" the supply (positive for extending the supply, negative for compressing it), \(c\) is the value of the price-moving capital (negative if the capital will be used for selling BNY, positive for buying), and \(d\) is the demand for the token, which can be read off the buy and sell walls on the market. $$ \Delta d , \pm c , \pm F $$ The above is a general explanation for understanding the Bancacy protocol. The equation below will be used to calculate the supply at any given point in time: Let \(A\) be a price-ordered set whose objects are ordered pairs of BNY tokens in Solidification and their entry price, e.g.: $$ A=\{(500 , 1.9\$) , (100 , 2\$) , (900 , 3\$) , ... , (800 , 15\$)\} $$ \(P\) will be the current market price of the BNY token, e.g.: $$ P = 2.5\$ $$ \(f\) is a function on the objects of \(A\) that returns the product of each ordered pair. \(f\) will run twice: first on all the ordered pairs in which the second object is smaller than \(P\), then on the ordered pairs in which the second object is bigger than \(P\). The products will be inserted into two new sets \(B \subset \mathbb {R^+}\) and \(C \subset \mathbb {R^+}\) respectively.
e.g.: $$ B = \{950 , 200 , ... \}, \quad C = \{ 2700 , ... , 12000 \} $$ where \(|B|\) and \(|C|\) denote the number of objects in \(B\) and \(C\) respectively. Each object in \(B\) and \(C\) represents an amount of XBNY tokens. So if all of these XBNY tokens get converted back to BNY again, the total supply at that moment will be: $$ S = \left(\sum_{i=1}^{|B|} \frac {x_i}{P}\right) + \left(\sum_{i=1}^{|C|} \frac {x_i}{P}\right) + T $$ where \(T\) is the current total BNY supply and \(S\) is the BNY supply at any given point in time. Now we would like to express the relation between the price fluctuation and the supply. The equation representing this relation needs to take the sum of BNY in Solidification, \(s_1\) (first object of the ordered pairs in set \(A\)), and the sum of XBNY in Solidification, \(s_2\) (sets \(B\) and \(C\)): $$ s_1 = \sum_{i=1}^{|A|} x_i \qquad s_2 = \sum_{i=1}^{|B|} x_i +\sum_{i=1}^{|C|} x_i $$ The quotient \(\frac {s_2}{s_1} = P_w\) is the weighted average "entry" price in $ for all the BNY in set \(A\). Now we can get the supply when the price \(P_v\) is variable: \(\frac{s_2}{P_v}\). This development leads us to the equation for a % price movement and its effect on the supply: $$T = \left(1-\frac{P}{P_w} \right) \cdot \frac{s_2}{P}$$ where \(P\) is the future price of BNY, and \(T\) is the increase or decrease of BNY tokens (positive or negative values). Bancacy is solving two primary problems in our current cryptocurrency ecosystem: Asset Solidification: eliminating the fear of missing opportunities due to market instability; the Asset Solidification smart contract will be deployed after the BNY market and price reach maturity. SELF-GOVERNANCE: exhaustive decentralization and independence from central authority after Asset Solidification smart contract deployment. Incentive Based Ecosystem: enabling maximum appeal directed toward investors and durable passive-income aspirants.
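A literal reading of the supply equation can be sketched in Python. This is our interpretation of the whitepaper's notation (the objects of \(B\) and \(C\) are XBNY amounts, each converted back to BNY at the current price \(P\)); the numbers are the example values above plus a hypothetical current supply:

```python
def total_supply(b_amounts, c_amounts, price, current_supply):
    # S = sum(x_i / P for x_i in B) + sum(x_i / P for x_i in C) + T,
    # i.e. all XBNY in Solidification converted back to BNY at price P,
    # added to the current total supply T.
    converted = (sum(x / price for x in b_amounts)
                 + sum(x / price for x in c_amounts))
    return converted + current_supply

# example sets from the text; current_supply is a hypothetical figure
S = total_supply([950, 200], [2700, 12000], price=2.5,
                 current_supply=1_000_000)
print(S)
```

This is a sketch of the stated formula only; the whitepaper does not specify the actual on-chain accounting.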
Asset Solidification: the fundamental aspect of Bancacy that will allow investors to solidify and bond their BNY value in USD utilizing BNY, for the sake of eliminating any fiat-backed cryptocurrency or reliance on third-party integrity. Three smart contracts will interact with each other in order to act as a substitute for a fiat-backed cryptocurrency, accomplishing self-determined Asset Solidification and adopting blockchain technology to replace existing centralized platforms. The BNY cryptocurrency will implement the economic model of interest rate upon deposit in two ways: Investments: a user-friendly feature that allows investors to invest their tokens and receive their initial investment plus a dynamic interest rate once the investment term has passed. Passive Income: complementary to the Investment function, Passive Income will entitle the user to a daily stream of income spread over 365 days. The Dynamic Interest Rate will ensure that the inflation rate of BNY stays flat and gradually equitable toward all users, via a mechanism that compensates early investors with an improved interest rate relative to later investors. Investments are divided into 3 categories sorted by investment length, each with its own interest rate, permitting a larger rate for a greater duration, starting at 0.16% and going up to 12.8% per year. To eliminate minting of BNY, these functionalities will be restricted so that if, theoretically, all the BNY supply were invested in the platform, the interest rate would be 0. This is in order to establish a balanced ecosystem that guarantees profitability for all investors. Bancacy will be offered to the public via an autonomous descending price sale. The authors will receive a one-time retention of 17% (168.3 million) of the initial BNY supply, which will be used for long-term project development. The public ascending price sale will take place at Bancacy launch, with 54% of the supply offered to the public.
6% of the total supply will be utilized in a private sale for Bancacy partners. 23% of the BNY supply will be provided for an Initial Exchange Offering. A project as advanced and ambitious as Bancacy requires known, proven talent and leadership in cryptocurrencies: people who uniquely understand the engineering and marketplace challenges.
Newform invariants Coefficients of the \(q\)-expansion are expressed in terms of \(\beta = \frac{1}{2}(1 + \sqrt{13})\). We also show the integral \(q\)-expansion of the trace form. For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. This newform does not admit any (nontrivial) inner twists. Signs: \(p = 2\): \(-1\); \(p = 7\): \(-1\); \(p = 41\): \(-1\). This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(8036))\): \(T_{3}^{2} - 3 T_{3} - 1\), \(T_{5}^{2} - 3 T_{5} - 1\), \(T_{11} - 1\).
The Adobe Photoshop image manipulation program has a feature called “snap to,” which snaps edges of a selection into alignment with grid lines superimposed over the image as users move the selection around. This feature is an invaluable tool for producing perfectly aligned, professional-quality graphics. An analogous operation in mathematics is what Alan Edelman calls “snap to structure.” Here, a mathematical object that is required to have a particular property, but fails to do so, is perturbed so that it has the property. An orthogonal projection onto the set of interest typically defines the perturbation. A ubiquitous example of snap to structure occurs in floating-point arithmetic. When we compute the sum or product of two double-precision floating-point numbers, the exact result may have more significant digits than we started with. The IEEE standard requires the exact product to be snapped back to double precision according to precisely defined rounding rules, with round to nearest the default. In recent work, Jörg Liesen and Robert Luce [1] consider the question of whether a given \(m \times n\) matrix is a Cauchy matrix — whether its \((i,j)\) element has the form \(1/(s_i-t_j)\) for all \(i\) and \(j\) and some \(m\)-vector \(s\) and \(n\)-vector \(t\). Various computations can be done asymptotically fast with such matrices (of which the famous Hilbert matrix is a special case). The authors also treat the problem of approximating a matrix by a Cauchy matrix, that is, snapping to Cauchy structure. Snapping to a matrix structure is commonly described as solving a matrix nearness problem. Here, I take distance to be measured in the Frobenius norm. One of the most familiar examples is forming the nearest rank-\(k\) matrix. The Eckart-Young theorem states that to solve this problem one simply computes a singular value decomposition (SVD) of the given matrix and sets all singular values beyond the \(k\)th largest to zero. 
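The Eckart-Young construction just described is easy to demonstrate. Here is a minimal NumPy sketch of it (my own illustration in Python rather than the article's MATLAB, not code from the article) that snaps a matrix to the nearest rank-\(k\) matrix in the Frobenius norm:

```python
import numpy as np

def nearest_rank_k(A, k):
    # Eckart-Young: compute an SVD and zero out all singular values
    # beyond the k-th largest.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[k:] = 0.0
    return U @ np.diag(s) @ Vt

A = np.array([[3.0, 1.0], [1.0, 3.0], [0.5, 0.5]])
A1 = nearest_rank_k(A, 1)
print(np.linalg.matrix_rank(A1))  # 1
```

Truncating at \(k = 2\) returns the matrix unchanged here, since \(A\) already has rank at most 2.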
In the context of solving a linear least squares problem, snapping the coefficient matrix to rank \(k\) corresponds to forming a truncated SVD solution. Loss of definiteness of a symmetric matrix is a common problem in many applications. One can find the nearest positive semidefinite matrix to a given matrix by computing a spectral decomposition and setting any negative eigenvalues to zero. (Cartoon created by mathematician John de Pillis.) Another example, which I first encountered in aerospace computations, concerns orthogonality. A \(3 \times 3\) direction cosine matrix drifts from orthogonality because of rounding errors and thus must be orthogonalized, with practitioners favoring the nearest orthogonal matrix over other orthogonalization techniques. For a complex scalar \(z=re^{i\theta}\), the nearest point on the unit circle is \(e^{i\theta}\). The matrix case generalizes this observation: if \(A=UH\) is a polar decomposition of a real, nonsingular matrix \(A\) (\(U\) orthogonal, \(H\) symmetric positive definite), then \(U\) is the nearest orthogonal matrix to \(A\). Snap to structure is also natural when a real solution is expected but the relevant algorithm has made an excursion into the complex plane, potentially leaving an imaginary part consisting entirely of rounding errors. In this case, snapping means setting the imaginary part to zero. This is done in the MATLAB function funm (for computing a function of a matrix) when the matrix is real and the result's imaginary part is of the order of the unit roundoff. Of course, much mathematical research is concerned with preserving and exploiting known structure in a problem, making snapping to structure unnecessary. For example, geometric numerical integration is about structure-preserving algorithms for the numerical solution of differential equations.
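The two nearest-matrix constructions above (clipping negative eigenvalues for the nearest positive semidefinite matrix, and taking the orthogonal polar factor for the nearest orthogonal matrix) can likewise be sketched in a few lines of NumPy; this is illustrative code of my own, not from the article:

```python
import numpy as np

def nearest_psd(A):
    # Nearest (Frobenius-norm) positive semidefinite matrix to a
    # symmetric A: spectral decomposition, then clip negative
    # eigenvalues to zero.
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.maximum(w, 0.0)) @ Q.T

def nearest_orthogonal(A):
    # Polar decomposition A = UH via the SVD A = W S V^T:
    # U = W V^T is the nearest orthogonal matrix to a nonsingular A.
    W, _, Vt = np.linalg.svd(A)
    return W @ Vt

A = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues 3 and -1: indefinite
P = nearest_psd(A)
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)

U = nearest_orthogonal(A + 0.1 * np.eye(2))
assert np.allclose(U.T @ U, np.eye(2))
```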
For a simple illustration, suppose we decide to plot circles by numerically solving the differential equations \(u'(t) = v(t)\) and \(v'(t) = -u(t)\) with initial values \(u(0) = 1\) and \(v(0) = 0\), for which \(u(t) = \cos t\) and \(v(t) = -\sin t\). The forward Euler method produces solutions spiralling away from the circle, while the backward Euler method produces solutions spiralling into the origin. We could apply one of these methods and project the solution back onto the circle at each step. However, a better strategy would be one that produces approximations guaranteed to lie on the circle; the trapezium method has this property. For this problem, \(u(t)^2 + v(t)^2\) is an invariant of the differential equations, which the trapezium method automatically preserves. Despite the large body of theory on structure-preserving algorithms, we live in a world of imperfect data, use finite-precision arithmetic, and have a limited choice of available software, all of which can result in errors that destroy structure. So enforcing structure at some point during a computation may seem sensible. But is it always a good thing to do? Consider the problem of computing the determinant of a matrix of integers in floating-point arithmetic. Using the definition of determinant by expansion by minors will give an integer result, excluding the possibility of overflow. But this approach is too costly except for very small dimensions. The determinant is normally computed via a pivoted LU factorization: \(PA = LU\) implies \(\det(A) = \pm \det(U)\). The computed determinant is likely to have a nonzero fractional part because of rounding errors. Rounding the result to the nearest integer might seem natural, but consider this MATLAB example involving a matrix whose elements are from Pascal's triangle:

>> n = 18; A = pascal(n); det(A)
ans =
   1.8502e+00

If we round to the nearest integer, we declare the determinant to be \(2\).
But the determinant is \(1\) (as for all Pascal matrices) and \(A\) is formed exactly here, so the only errors are within the determinant evaluation. Early versions of MATLAB used to snap to structure by returning an integer result after evaluating the determinant of a matrix of integers. This behavior was changed because, as we have just seen, it can give a misleading result. Interesting rounding error effects can also occur in the evaluation of the determinant of a matrix with entries \(\pm 1\). Photoshop sensibly allows users to disable its snap-to feature, as one does not always want elements to line up with a grid. Likewise, snapping to a mathematical structure is not always the correct way to handle a loss of that structure. Understanding why is one of the things that makes life interesting for an applied mathematician. Acknowledgments: I am grateful to Alan Edelman for stimulating discussions on the topic of this article. References [1] Liesen, J., & Luce, R. (2016). Fast Recovery and Approximation of Hidden Cauchy Structure. Linear Algebra Appl., 493, 261-280. Nicholas Higham is the Richardson Professor of Applied Mathematics at The University of Manchester. He is the current president of SIAM.
I am retreating from this statement after some explorations and calculation. I bow to Willie and others who were skeptical of this. The main difficulty can be seen in this reference. But I must mention that my quest for jump discontinuities has not hit a dead end; rather, it has found new light after this failure. I am very much interested in this class of functions and this kind of jump, and I have found a new way to deal with them in 2 dimensions. My new pursuit: a direction-dependent jump can be converted into the jump of a 1-d function via the Radon transform, which reflects jumps in different directions at a point into separate 1-d jumps. Via the Fourier slice theorem, such 1-d jumps can then be handled on different slices of the FT of the image using a 1-d technique. Can't wait to lay my hands on image signal processing. The initial question was in $N$ dimensions and follows after the heading "Initial question". In view of Willie's comments, I realized it has flaws, but I am still optimistic and hope they are not fatal. So I'd like to formulate and pose the question for the simple case of $N=2$, with which I am comfortable given my limitations with higher math, and then hope it can be generalized to $N>2$ by mathematicians, whose help I seek in this regard. Case of $N=2$: The main motivation is a monochromatic image signal, or any 2-d signal. Mathematically, we represent it as a function $f:\mathbb{R}^2 \to \mathbb{R}$. I know an image signal has compact support, but for mathematical convenience we can always place zeros wherever it is not defined and extend its domain to $\mathbb{R}^2$. (At least I do this blindly, just for mathematical beauty.) Definition-1: A function $f:\mathbb{R}^2 \to \mathbb{R}$ is said to be of bounded variation if, for every rectifiable curve in $\mathbb{R}^2$, the 1-d function obtained by restricting the domain of $f$ to the curve is a function of bounded variation in the 1-d sense.
(Motivation for this definition: for the following definition to make sense, and for the limits defined in it to always exist.) Definition-2: Directional limits. Given a point $\hat{x}_0 = (x,y) \in \mathbb{R}^2$, for every $\theta \in \mathbb{R}$ we define a directional limit as $$u_{\theta}(\hat{x}_0) = \lim_{r\to 0+} f(x+r\cos\theta,\,y+r\sin\theta)$$ where $r \in \mathbb{R}$. Definition-3: Limit function. We define a limit function $J_{\hat{x}_0} : \mathbb{R} \to \mathbb{R}$ for every point $\hat{x}_0 \in \mathbb{R}^2$, given as $$J_{\hat{x}_0}(\theta) = u_{\theta}(\hat{x}_0)$$ Definition-4: The class of functions $\mathcal{V}$ consists of all functions $f:\mathbb{R}^2\to\mathbb{R}$ which satisfy the following two conditions. $f$ should be a function of bounded variation as per Definition-1. For every $\hat{x}_0 \in \mathbb{R}^2$, the associated limit function $J_{\hat{x}_0}(\theta)$ (as in Definition-3) should be a function of bounded variation when its domain is restricted to $[0,2\pi)$. Problem: Let $f \in \mathcal{V}$ be a square-integrable function and let its Fourier transform be $\hat{f}$. Given any $\theta \in \mathbb{R}$ and $r\in\mathbb{R}$, we define a directional partial sum $S^{\theta}_r : \mathbb{R}^2 \to \mathbb{R}$ as $$S^{\theta}_r(\hat{x}) = \int\limits_{-r\sin\theta}^{r\sin\theta} \int\limits_{-r\cos\theta}^{r\cos\theta} \hat{f}(k_x,k_y)\, e^{i(xk_x+yk_y)}\, \mathrm{d}{k_x}\, \mathrm{d}{k_y} $$ where $\hat{x} = (x,y) \in \mathbb{R}^2$. Is the following statement true? Given any $\hat{x}_0 \in \mathbb{R}^2$ and any $\theta \in [0,\pi)$, $$\lim_{r\to\infty} S^{\theta}_r(\hat{x}_0) = \frac{1}{2}\bigl(u_{\theta}(\hat{x}_0) + u_{\theta+\pi}(\hat{x}_0)\bigr)$$ Higher dimensional case revisited. Definition 1: Let $\Sigma\subset \mathbb{R}^N$ be a smooth submanifold.
A function $f:\Sigma\to \mathbb{R}$ is said to have bounded variation if for every rectifiable curve $\gamma\subset \Sigma$, the composition $f\circ\gamma$ has bounded variation in the usual one-dimensional sense. Now, let $\Sigma \subset\mathbb{R}^N$ be a smooth submanifold and let $x\in \Sigma$. We write $\exp_x: T_x\Sigma \to \Sigma$ to denote the exponential map. Definition 2 Let $\Sigma$ be a $d+1$ dimensional submanifold of $\mathbb{R}^N$. Let $f:\Sigma\to \mathbb{R}$. Fix $x\in \Sigma$ and $\omega\in T_x\Sigma$ a unit vector. We write the directional limit$$ f_\omega(x) := \lim_{t \to 0+} f\circ \exp_x(t\omega) $$whenever the limit on the right hand side exists. If for every unit vector $\omega\in \mathbb{S}^d \subset T_x\Sigma$ the directional limit $f_\omega(x)$ exists, we write $J_x: \omega \mapsto f_\omega(x)$. Note that $J_x:\mathbb{S}^d \to \mathbb{R}$. Definition 3 Let $\Omega$ be a smooth submanifold of $\mathbb{R}^N$. We say that a function $f$ belongs to the class $\mathcal{V}(\Omega)$ iff $f$ has bounded variation and that at every point $x\in \Omega$ its blow-up $J_x$ has bounded variation, both in the sense of Definition 1. Definition 4 Given $\omega\in \mathbb{S}^{N-1} \subset \mathbb{R}^N$, we write $\omega_i$ to be the $i$th coordinate value of $\omega$ relative to the standard rectangular coordinate system. We can define the rectangle $R_r^\omega$ for $r > 0$ to be$$ R_r^\omega = (-r |\omega_1|, r|\omega_1|) \times (-r |\omega_2|,r|\omega_2|) \times \cdots \times (-r|\omega_N|, r|\omega_N|)~,$$in other words the rectangle with sides parallel to the standard hyperplanes and with diagonal $r\omega$. We also define $\mathrm{sgn}(\omega) = \prod_{i = 1}^N \mathrm{sgn}(\omega_i)$. Now let $f\in \mathcal{V}(\mathbb{R}^N)\cap L^1(\mathbb{R}^N)$. Denote by $\hat{f}$ its Fourier transform. Fix $\omega\in \mathbb{S}^{N-1}$. Write $$ S_r^\omega f(x) = \int_{R_r^\omega} \hat{f}(\xi) \exp(i x\cdot \xi) \mathrm{d}\xi~. 
$$ Question: Is it true that $$ \lim_{r \to\infty} S^\omega_r f(x) = \frac12 [J_x(\omega) + J_x(-\omega)] $$ for every $x\in \mathbb{R}^N$ and $\omega\in \mathbb{S}^{N-1}$? Initial question This is a question about pointwise convergence of the Fourier transform of functions of the form $f: \mathbb{R}^N \to \mathbb{R}$, which is potentially an $N$-dimensional generalization of pointwise convergence for the $1$-dimensional Fourier transform. This question arose when I was trying to generalize this statement to $N$ dimensions. Definition 1: Functions of bounded variation in $\mathbb{R}^N$. Given any rectifiable curve in $\mathbb{R}^N$, if the function $f:\mathbb{R}^N \to \mathbb{R}$ evaluated on this curve is a function of bounded variation in the 1-d sense, then we say $f$ is a function of bounded variation. Definition: Directional Limits. For a function $f:\mathbb{R}^N \to \mathbb{R}$, given a unit vector $\bf{\hat{a}}$, we define the directional limit of $f$ at a point $\bf{x_0} \in \mathbb{R}^N$ along $\bf{\hat{a}}$ as $u_{\bf{\hat{a}}}(\bf{x_0}) = \lim_{\alpha \to 0^+}f(\bf{x_0 + \alpha \hat{a}})$. The limit function at a point $x_0$, denoted $J_{\bf{x_0}}:S^{N-1} \to\mathbb{R}$, is defined as $$J_{\bf{x_0}}(\bf{\hat{a}}) = u_{\bf{\hat{a}}}(\bf{x_0})$$ We denote this jump function in $\theta$-coordinates as $J^{\theta}_{\bf{x_0}}:[0,2\pi)^{N-1} \to \mathbb{R}$. Definition of a class of functions (this definition is recursive): Given $\Omega$ an open subset of $\mathbb{R}^N$, we define a set of functions $\mathcal{V}(\Omega)$ with the following properties. If $f \in \mathcal{V}(\Omega)$ then $f:\Omega \to\mathbb{R}$ is a square-integrable function of bounded variation as per Definition 1, with the additional constraint that the limit function in $\theta$-coordinates at any point $P \in \Omega$, denoted $J^\theta_P: [0,2\pi)^{N-1}\to \mathbb{R}$, also belongs to the class of functions $\mathcal{V}([0,2\pi)^{N-1})$.
Fourier partial sum: Consider a function $f \in \mathcal{V}(\mathbb{R}^N)$, and let its Fourier transform be $\hat{f}$. Given any unit vector $\bf{\hat{a}} \in \mathbb{R}^N$ and a positive real number $R$, we define the Fourier partial sum $S^{\bf{\hat{a}}}_R : \mathbb{R}^N \to \mathbb{R}$ as $$S^{\bf{\hat{a}}}_R(\bf{x}) = \int_{-R\cos(\theta_1)}^{R\cos(\theta_1)} \int_{-R\cos(\theta_2)}^{R\cos(\theta_2)} ...\int_{-R\cos(\theta_{N-1})}^{R\cos(\theta_{N-1})} \int_{-R\cos(\phi)}^{R\cos(\phi)} \hat{f}(k_1,k_2,k_3,...k_N)\, e^{i(k_1x_1+k_2x_2+...+k_Nx_N)}\, \mathrm{d}{k_1}\mathrm{d}{k_2}...\mathrm{d}{k_N}$$ where $[\theta_1,\theta_2,...\theta_{N-1}]$ is $\bf{\hat{a}}$ expressed in $\theta$-coordinates, and $\phi = \frac{\Phi_{N-1}}{2^N} - \sum\limits_{j = 1}^{N-1} \theta_j$, where $\Phi_{N-1}$ is the total solid angle subtended by the full surface of a unit $(N-1)$-sphere, given as $$\Phi_{N-1} = \frac{2\pi^{\frac{N-1}{2}}}{\Gamma(\frac{N-1}{2})}$$ Here $\bf{k} = [k_1,k_2,...k_N] \in \mathbb{R}^N$ and $\bf{x} = [x_1,x_2,...x_N] \in \mathbb{R}^N$. Statement: Is the following statement true? Given any point $\bf{x} \in \mathbb{R}^N$ and any unit vector $\bf{\hat{a}} \in \mathbb{R}^N$, $$\lim_{R\to \infty} S^{\bf{\hat{a}}}_R(\bf{x}) = \frac{u_{\bf{\hat{a}}}(\bf{x}) + u_{\bf{-\hat{a}}}(\bf{x})}{2}$$
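The 1-d fact motivating this question, namely that the symmetric partial Fourier integral of a bounded-variation function converges at a jump to the average of the one-sided limits, can be checked numerically. The sketch below is my own construction, not part of the question: it takes $f(x) = \mathrm{sign}(x)$ on $[-1,1]$ and zero outside, whose transform (convention $\hat f(k) = \int f(x) e^{-ikx}\,dx$) is $\hat f(k) = -2i(1-\cos k)/k$, and evaluates the truncated inversion integral at the jump point and at an interior point.

```python
import numpy as np

R = 200.0
k = np.linspace(-R, R, 200001)
# fhat(k) = -2i (1 - cos k)/k, extended continuously by fhat(0) = 0
with np.errstate(divide="ignore", invalid="ignore"):
    frac = np.where(np.abs(k) < 1e-12, 0.0, (1.0 - np.cos(k)) / k)
fhat = -2j * frac

def partial_sum(x):
    # S_R(x) = (1/2pi) * integral_{-R}^{R} fhat(k) e^{ikx} dk,
    # approximated by the trapezoid rule on the grid above
    g = (fhat * np.exp(1j * k * x)).real
    dk = k[1] - k[0]
    return dk * (g.sum() - 0.5 * (g[0] + g[-1])) / (2.0 * np.pi)

print(partial_sum(0.0))   # midpoint of the jump: should be ~0
print(partial_sum(0.5))   # interior point: should be ~1
```

At the jump $x=0$ the integrand is purely imaginary and odd, so the partial sums are exactly the midpoint $0 = \frac{1}{2}(1 + (-1))$; at $x=0.5$ the truncated integral approaches $1$ with an $O(1/R)$ oscillating error.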
Continuous Functions on Compact Sets of Metric Spaces We have looked quite a bit at continuous functions so far, and now we will turn our attention to functions that are continuous on compact sets $X$. The following theorem tells us that if $(S, d_S)$ and $(T, d_T)$ are metric spaces and $f : S \to T$ is a continuous function, then for every subset $X$ which is compact in $S$ we have that $f(X)$ is compact in $T$; that is, compact sets are mapped to compact sets under continuous $f$. Theorem 1: Let $(S, d_S)$ and $(T, d_T)$ be metric spaces and let $f : S \to T$ be continuous. If $X$ is a compact subset of $S$ then the image $f(X)$ is a compact subset of $T$. Proof: Let $X \subseteq S$ be a compact subset of $S$ and let $f : S \to T$ be continuous. Consider any open covering of $f(X)$, $\mathcal F = \{ A_{\alpha} : \alpha \in \Gamma \}$, for $\Gamma$ some indexing set. Then since $\mathcal F$ is an open covering of $f(X)$ we have that: $$f(X) \subseteq \bigcup_{\alpha \in \Gamma} A_{\alpha}$$ Since $\mathcal F$ is an open covering, we have by definition that $A_{\alpha}$ is open in $T$ for all $\alpha \in \Gamma$. Since $f$ is continuous we have that $f^{-1}(A_{\alpha})$ is open in $S$ for all $\alpha \in \Gamma$, and furthermore: $$X \subseteq \bigcup_{\alpha \in \Gamma} f^{-1}(A_{\alpha})$$ Therefore $\{ f^{-1}(A_{\alpha}) : \alpha \in \Gamma \}$ is an open covering of $X$. Since $X$ is compact in $S$ this open covering has a finite open subcovering, say $\{ f^{-1}(A_{\alpha_1}), f^{-1}(A_{\alpha_2}), ..., f^{-1}(A_{\alpha_n}) \}$ where $\alpha_1, \alpha_2, ..., \alpha_n \in \Gamma$, and also: $$X \subseteq \bigcup_{i=1}^{n} f^{-1}(A_{\alpha_i})$$ But then: $$f(X) \subseteq \bigcup_{i=1}^{n} A_{\alpha_i}$$ Therefore $\{ A_{\alpha_1}, A_{\alpha_2}, ..., A_{\alpha_n} \}$ is a finite open subcovering of $f(X)$. Therefore every open covering $\{ A_{\alpha} : \alpha \in \Gamma \}$ of $f(X)$ has a finite open subcovering, so $f(X)$ is compact in $T$. $\blacksquare$ Corollary 1: Let $(S, d_S)$ and $(T, d_T)$ be metric spaces and let $f : S \to T$ be continuous.
If $X$ is a compact subset of $S$ then the image $f(X)$ is closed, bounded, and every infinite subset of $f(X)$ has an accumulation point. Proof: By Theorem 1, $f(X)$ is a compact subset of $T$, and as we saw on the Boundedness of Compact Sets in a Metric Space, Closedness of Compact Sets in a Metric Space, and Every Infinite Subset of a Compact Set in a Metric Space Contains an Accumulation Point pages, Corollary 1 follows immediately. $\blacksquare$ The following diagram illustrates the results of Theorem 1 and Corollary 1.
Normal Subgroups Recall from the Left and Right Cosets of Subgroups page that if $(G, \cdot)$ is a group, $(H, \cdot)$ is a subgroup, and $g \in G$, then the left coset of $H$ with representative $g$ is defined to be the set: $$gH = \{ g \cdot h : h \in H \}$$ Similarly, the right coset of $H$ with representative $g$ is defined to be the set: $$Hg = \{ h \cdot g : h \in H \}$$ In general $gH$ need not equal $Hg$. That said, we can characterize special subgroups $(H, \cdot)$ of $(G, \cdot)$ for which the left and right cosets of $H$ with representative $g$ are equal for all $g \in G$. Definition: Let $(G, \cdot)$ be a group and $(H, \cdot)$ a subgroup. $(H, \cdot)$ is called a Normal Subgroup, written $H \trianglelefteq G$, if $gH = Hg$ for all $g \in G$, that is, the left and right cosets of $H$ with representative $g$ are equal for all $g \in G$. Immediately we can characterize a wide class of normal subgroups with the following theorem. Theorem 1: Let $(G, \cdot)$ be a group and $(H, \cdot)$ a subgroup. If $(G, \cdot)$ is abelian then $(H, \cdot)$ is a normal subgroup of $(G, \cdot)$. Proof: Let $(G, \cdot)$ be an abelian group. Then for all $a, b \in G$ we have that $a \cdot b = b \cdot a$. Let $g \in G$. Then $gH = \{ g \cdot h : h \in H \} = \{ h \cdot g : h \in H \} = Hg$ for all $g \in G$. So $(H, \cdot)$ is a normal subgroup of $(G, \cdot)$. $\blacksquare$ As a consequence of Theorem 1, the groups $(\mathbb{R}, +)$, $(\mathbb{Z}, +)$, $(\mathbb{Z}_n, +)$, $(\mathbb{Z}_p, \cdot)$, $(n\mathbb{Z}, +)$, etc. are all abelian groups, so any subgroup of these groups is a normal subgroup. Of course there exist nonabelian groups that have normal subgroups. For example, consider the symmetric group $(S_3, \circ)$ with $S_3 = \{ \epsilon, (12), (13), (23), (123), (132) \}$. This group is nonabelian. Let $H = \{ \epsilon, (123), (132) \}$. We claim that $(H, \circ)$ is a normal subgroup of $(S_3, \circ)$. Note that $\displaystyle{[S_3 : H] = \frac{\mid S_3 \mid}{\mid H \mid} = \frac{6}{3} = 2}$.
The cosets of $H$ in $G$ are $\{ \epsilon, (123), (132) \}$ and $\{(12), (13), (23) \}$ and it's not hard to verify that $gH = Hg$ for all $g \in S_3$.
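This claim is also easy to verify computationally. Below is an illustrative Python check of my own (permutations written 0-indexed as tuples, so $(123)$ becomes the map $0\to1\to2\to0$): it builds $S_3$, forms $H = \{\epsilon, (123), (132)\}$, and confirms $gH = Hg$ for every $g \in S_3$.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations act on the indices 0, 1, 2
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))   # all 6 permutations of {0, 1, 2}
e = (0, 1, 2)                       # the identity
c3 = (1, 2, 0)                      # the 3-cycle (123), 0-indexed
H = {e, c3, compose(c3, c3)}        # {e, (123), (132)}

# gH = Hg for every g, so H is normal in S3
assert all({compose(g, h) for h in H} ==
           {compose(h, g) for h in H} for g in S3)
```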
Python is available on any Linux/Unix machine, including department machines and Macs. You can download and install Python on your own computer by following the instructions at http://www.python.org. You can use the Python interpreter interactively by typing python at a terminal window. IPython is a nicer front end to python that is invoked with ipython. To quit, type control-d. To run python code in a file code.py, either type run code.py in ipython, or type python code.py at the unix command line. When in ipython, you may type python statements or expressions that are evaluated, or ipython commands. See the video tutorial on using ipython, in five parts by Jeff Rush, for help getting started with ipython. Documentation is immediately available for many things. For example:

asa:~$ ipython
Python 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43)
Type "copyright", "credits" or "license" for more information.

IPython 0.13.2 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: list?
Type:        type
Base Class:  <type 'type'>
String Form: <type 'list'>
Namespace:   Python builtin
Docstring:
list() -> new list
list(sequence) -> new list initialized from sequence's items

In [2]: help(list)
Help on class list in module __builtin__:

class list(object)
 |  list() -> new list
 |  list(sequence) -> new list initialized from sequence's items
 |
 |  Methods defined here:
 . . .
 |  append(...)
 |      L.append(object) -- append object to end
 |  . . .
 |
 |  sort(...)
 |      L.sort(cmp=None, key=None, reverse=False) -- stable sort *IN PLACE*;
 |      cmp(x, y) -> -1, 0, 1

What is the value of $(100\cdot 2 - 12^2) / 7 \cdot 5 + 2\;\;\;$?
In [301]: (100*2 - 12**2) / 7*5 + 2
Out[301]: 42

In order to compute something like $\sin(\pi/2)$ we first need to import the math module:

In [303]: import math
In [304]: math.sin(math.pi/2)
Out[304]: 1.0

How do I find out what other mathematical functions are available? help("math") Let's get on to that all-important step of visualizing data. We will be using the matplotlib Python package for that. Let's start by plotting the function $f(x) = x^2$. First, let's generate the numbers. Well, there are tons of ways to do so. First, using a for loop.

In [3]: f = []
In [4]: for i in range(10) :
   ...:     f.append(i**2)
   ...:
In [5]: f
Out[5]: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

To plot the data, first import the pyplot module.

In [6]: import matplotlib.pyplot as plt
In [7]: plt.plot(range(10), f)
Out[7]: [<matplotlib.lines.Line2D at 0x10549b590>]

In order to actually see the plot you need to do:

In [8]: plt.show()

As an alternative, you can put matplotlib in interactive mode before plotting using the command plt.ion(). Python has some nifty syntax for generating lists. Watch this! A list comprehension!!

In [9]: f = [i**2 for i in range(10)]
In [10]: f
Out[10]: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

There's an alternative way of doing this using numpy:

In [11]: import numpy as np
In [12]: f = np.arange(10)**2
In [13]: f
Out[13]: array([ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81])

Note that the plotting functions accept either lists or numpy arrays, so a fast way of doing our plot is

In [14]: plt.plot(np.arange(10), np.arange(10)**2)

For a smoother plot:

In [14]: x = np.arange(0, 10, 0.1)
In [15]: plt.plot(x, x**2, 'ob')
Out[15]: [<matplotlib.lines.Line2D at 0x1054162d0>]

We can add a second plot to the same axes by calling plot again:

In [16]: plt.plot(x, x, 'dr')
Out[16]: [<matplotlib.lines.Line2D object at 0x3608990>]

Can I work with vectors and matrices in python? Of course! No data analysis tool is worth the bytes it burns if it doesn't.
The numpy package provides the required magic. Let's create an array that represents the following matrix: \[\left ( \begin{array}{cc} 1 & 2\\ 3 & 4\\ 5 & 6 \end{array} \right ) \] by doing

In [17]: import numpy as np
In [18]: m = np.array([[1,2], [3,4], [5,6]])
In [19]: m
Out[19]:
array([[1, 2],
       [3, 4],
       [5, 6]])

Let's construct the matrices \[a = \left ( \begin{array}{ccc} 2 & 2 & 2\\ 2 & 2 & 2\\ 2 & 2 & 2 \end{array} \right ) \] and \[b = \left ( \begin{array}{ccc} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{array} \right ) \]

In [16]: a = np.ones((3,3)) * 2
In [17]: a
Out[17]:
array([[ 2.,  2.,  2.],
       [ 2.,  2.,  2.],
       [ 2.,  2.,  2.]])
In [18]: b = np.resize(np.arange(9)+1,(3,3))
In [19]: b
Out[19]:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])

What is the value of $a * b$?

In [20]: a * b
Out[21]:
array([[ 2,  4,  6],
       [ 8, 10, 12],
       [14, 16, 18]])

The * operator does a component-wise multiplication. Use the numpy function dot to do matrix multiplication.

In [22]: np.dot(a,b)
Out[22]:
array([[24, 30, 36],
       [24, 30, 36],
       [24, 30, 36]])

An array is transposed by

In [23]: b.transpose()
Out[23]:
array([[1, 4, 7],
       [2, 5, 8],
       [3, 6, 9]])
In [24]: b.T
Out[24]:
array([[1, 4, 7],
       [2, 5, 8],
       [3, 6, 9]])

Elements and sub-matrices are easily extracted:

In [25]: b
Out[25]:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
In [26]: b[0,0]
Out[26]: 1
In [27]: b[0,1]
Out[27]: 2
In [28]: b[0:2, 1:3]
Out[28]:
array([[2, 3],
       [5, 6]])

Let's multiply the first row of $a$ by the second column of $b$.

In [29]: np.dot(a[0], b[:,1])
Out[29]: 30.0
In [30]: np.dot(a[0],b.T[1])
Out[30]: 30.0

How do I find the inverse of a matrix?

In [2]: z = np.array([[2,1,1],[1,2,2],[2,3,4]])
In [3]: z
Out[3]:
array([[2, 1, 1],
       [1, 2, 2],
       [2, 3, 4]])
In [4]: np.linalg.inv(z)
Out[4]:
array([[ 0.66666667, -0.33333333,  0.        ],
       [ 0.        ,  2.        , -1.        ],
       [-0.33333333, -1.33333333,  1.        ]])
In [5]: np.dot(z, np.linalg.inv(z))
Out[5]:
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
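One caveat worth adding here (my own note, not part of the original tutorial): explicitly inverting a matrix to solve a linear system is usually both slower and less accurate than calling a dedicated solver. A minimal sketch, reusing the same matrix z as above with an arbitrary right-hand side:

```python
import numpy as np

z = np.array([[2, 1, 1],
              [1, 2, 2],
              [2, 3, 4]])
rhs = np.array([1.0, 2.0, 3.0])   # an arbitrary right-hand side

# Preferred: solve z @ x = rhs directly (LU factorization under the hood)
x = np.linalg.solve(z, rhs)

# Equivalent in exact arithmetic, but generally slower and less accurate:
x_inv = np.dot(np.linalg.inv(z), rhs)

assert np.allclose(x, x_inv)
assert np.allclose(np.dot(z, x), rhs)
```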
Problem 434 Let $R$ be a ring with $1$. A nonzero $R$-module $M$ is called irreducible if $0$ and $M$ are the only submodules of $M$. (It is also called a simple module.) (a) Prove that a nonzero $R$-module $M$ is irreducible if and only if $M$ is a cyclic module with any nonzero element as its generator. (b) Determine all the irreducible $\Z$-modules. Problem 432 (a) Let $R$ be an integral domain and let $M$ be a finitely generated torsion $R$-module. Prove that the module $M$ has a nonzero annihilator. In other words, show that there is a nonzero element $r\in R$ such that $rm=0$ for all $m\in M$. Here $r$ does not depend on $m$. (b) Find an example of an integral domain $R$ and a torsion $R$-module $M$ whose annihilator is the zero ideal. Problem 431 Let $R$ be a commutative ring and let $I$ be a nilpotent ideal of $R$. Let $M$ and $N$ be $R$-modules and let $\phi:M\to N$ be an $R$-module homomorphism. Prove that if the induced homomorphism $\bar{\phi}: M/IM \to N/IN$ is surjective, then $\phi$ is surjective. Problem 417 Let $R$ be a ring with $1$ and let $M$ be an $R$-module. Let $I$ be an ideal of $R$. Let $M'$ be the subset of elements $a$ of $M$ that are annihilated by some power $I^k$ of the ideal $I$, where the power $k$ may depend on $a$. Prove that $M'$ is a submodule of $M$. Problem 415 (a) Let $R$ be a commutative ring. If we regard $R$ as a left $R$-module, then prove that any two distinct elements of the module $R$ are linearly dependent. (b) Let $f: M\to M'$ be a left $R$-module homomorphism. Let $\{x_1, \dots, x_n\}$ be a subset of $M$. Prove that if the set $\{f(x_1), \dots, f(x_n)\}$ is linearly independent, then the set $\{x_1, \dots, x_n\}$ is also linearly independent. Problem 410 Let $R$ be a ring with $1$ and let $M$ be a left $R$-module. Let $S$ be a subset of $M$.
The annihilator of $S$ in $R$ is the subset of the ring $R$ defined to be \[\Ann_R(S)=\{ r\in R\mid rx=0 \text{ for all } x\in S\}.\] (If $rx=0$ for $r\in R$, $x\in S$, then we say $r$ annihilates $x$.) Suppose that $N$ is a submodule of $M$. Then prove that the annihilator \[\Ann_R(N)=\{ r\in R\mid rn=0 \text{ for all } n\in N\}\] of $N$ in $R$ is a $2$-sided ideal of $R$. Problem 409 Let $R$ be a ring with $1$. An element of the $R$-module $M$ is called a torsion element if $rm=0$ for some nonzero element $r\in R$. The set of torsion elements is denoted \[\Tor(M)=\{m \in M \mid rm=0 \text{ for some nonzero } r\in R\}.\] (a) Prove that if $R$ is an integral domain, then $\Tor(M)$ is a submodule of $M$. (Remark: an integral domain is a commutative ring by definition.) In this case the submodule $\Tor(M)$ is called the torsion submodule of $M$. (b) Find an example of a ring $R$ and an $R$-module $M$ such that $\Tor(M)$ is not a submodule. (c) If $R$ has nonzero zero divisors, then show that every nonzero $R$-module has a nonzero torsion element. Problem 408 Let $R$ be a ring with $1$ and $M$ be a left $R$-module. (a) Prove that $0_Rm=0_M$ for all $m \in M$. Here $0_R$ is the zero element in the ring $R$ and $0_M$ is the zero element in the module $M$, that is, the identity element of the additive group $M$. To simplify the notations, we ignore the subscripts and simply write \[0m=0.\] You must judge from the context which zero element is meant. (b) Prove that $r0=0$ for all $r\in R$. Here both zeros are $0_M$. (c) Prove that $(-1)m=-m$ for all $m \in M$. (d) Assume that $rm=0$ for some $r\in R$ and some nonzero element $m\in M$. Prove that $r$ does not have a left inverse.
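For part (b) of Problem 409, a standard example is $R = M = \Z/6\Z$: here $2$ and $3$ are torsion elements, but their sum $5$ is not, so $\Tor(M)$ fails to be closed under addition. A small illustrative Python check (my own sketch, not part of the problem set):

```python
# Torsion elements of M = Z/6Z over R = Z/6Z: m is torsion iff
# r*m = 0 (mod 6) for some nonzero r in Z/6Z.
def is_torsion(m, n=6):
    return any((r * m) % n == 0 for r in range(1, n))

tor = {m for m in range(6) if is_torsion(m)}
print(sorted(tor))          # [0, 2, 3, 4] -- not closed under addition
assert 2 in tor and 3 in tor and (2 + 3) % 6 == 5
assert not is_torsion(5)    # so Tor(M) is not a submodule here
```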
As I prefer outer parentheses to grow larger in nested expressions, I happen to insert \bigl and \bigr and their larger cousins a lot. I have always wondered whether there is a way to do this automagically. Inserting \left and \right prophylactically before all opening and closing parentheses and hoping that it sorts things out doesn't do the trick in all cases, as seen below. The left column is without any size-changing commands. The middle column has a \left \right pair at every parenthesis/brace/bracket, and the rightmost column shows my preferred version. As seen in the middle column of the second and third lines, the \left and \right pair sometimes makes the outer parentheses larger, at the cost of an extra horizontal space between \Pr and (. The extra space is fine in case of really big opening parentheses, but here I'd prefer the regular spacing.

\documentclass{scrartcl}
\usepackage{amsmath}
\usepackage{amsfonts}
\begin{document}
\begin{align*}
&1-(1-F(x))^n&&1-\left(1-F(x)\right)^n&&1-\bigl(1-F(x)\bigr)^n\\
&\Pr(X_{(1)}\le x)&&\Pr\left(X_{(1)}\le x\right)&&\Pr\bigl(X_{(1)}\le x\bigr)\\
&\mathbb{E}[\min\{X_1,X_2\}]&&\mathbb{E}\left[\min\left\{X_1,X_2\right\}\right]&&\mathbb{E}\bigl[\min\left\{X_1,X_2\right\}\bigr]\\
&\left(\pi-\arccos(\frac{y}{r})\right)&&(\pi-\arccos\left(\frac{y}{r}\right))&&\left(\pi-\arccos \left(\frac{y}{r}\right)\right)
\end{align*}
\end{document}

As mathematical expressions can be arbitrarily complex, there is probably no general way to do this. But I'm not asking for a solution of the general case. A partial solution that works in the simple cases shown above would be a big help. The simple rule could be that parentheses never shrink below the size of an inner pair. Of course you can immediately think of extensions, like a \mid in a conditional expectation that grows with the size of the expectation's brackets. What is your preference regarding the size of nested parentheses?
Is there a better way to achieve your preferred style other than inserting \bigl \bigr manually? EDIT: A colleague pointed me to section 8.3 of Herbert Voß's mathmode document, where size problems with parentheses are solved by playing with two TeX parameters within a group around the expression in question. This led naturally to Appendix G of the TeXbook, where the mechanics of \delimitershortfall and \delimiterfactor are explained. \delimitershortfall specifies the maximum space not covered by a delimiter (default 5pt), and \delimiterfactor is the ratio for variable delimiters, times 1000 (default 901). I used them to implement the never-shrink-below-a-subformula-size idea from above by setting the shortfall to 0pt and the ratio to 1.0. While it works nicely in lines one and three, in lines two and four the parentheses now grow too much, and still there is the extra horizontal space introduced by \left.
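For concreteness, the parameter experiment described in the edit can be reproduced with a few lines; this is a minimal sketch of the idea, not a full solution to the question:

```latex
\documentclass{scrartcl}
\usepackage{amsmath}
\begin{document}
% Defaults are \delimiterfactor=901 and \delimitershortfall=5pt.
% Setting the factor to 1000 and the shortfall to 0pt forces every
% \left/\right delimiter to cover its subformula completely, i.e. the
% "never shrink below the size of an inner pair" rule.
\begingroup
\delimiterfactor=1000
\delimitershortfall=0pt
\begin{align*}
&1-\left(1-F(x)\right)^n\\
&\Pr\left(X_{(1)}\le x\right)
\end{align*}
\endgroup
\end{document}
```

Keeping the assignments inside \begingroup ... \endgroup restricts the changed behavior to this one display, so the rest of the document keeps the default delimiter sizing.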
De Bruijn-Newman constant

Latest revision as of 17:37, 30 April 2019

For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula

[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]

where [math]\Phi[/math] is the super-exponentially decaying function

[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]

It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes.
One can also express [math]H_t[/math] in a number of different forms, such as

[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]

or

[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]

In the notation of [KKL2009], one has

[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]

De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]). The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:

Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].

Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-vanishing whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].

Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].

[math]t=0[/math]

When [math]t=0[/math], one has

[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]

where

[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]

is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function.
Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives

[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]

for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and [math]T[/math]. The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.

[math]t\gt0[/math]

For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2]. It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math]. Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis. Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have

[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]

for any [math]t[/math].
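As a numerical sanity check of the formulas above (a sketch assuming Python with SciPy; not part of the Polymath15 codebase): since z is a zero of H_0 exactly when 1/2 + iz/2 is a non-trivial zeta zero, and the first such zero is 1/2 + 14.1347i, the integral defining H_0 should change sign near z = 2 × 14.1347 ≈ 28.27.

```python
import math
from scipy.integrate import quad

def Phi(u, nmax=10):
    """The super-exponentially decaying sum Phi(u) from the definition above."""
    return sum((2 * math.pi**2 * n**4 * math.exp(9 * u)
                - 3 * math.pi * n**2 * math.exp(5 * u))
               * math.exp(-math.pi * n**2 * math.exp(4 * u))
               for n in range(1, nmax + 1))

def H(t, z):
    """H_t(z) = int_0^inf e^{t u^2} Phi(u) cos(z u) du, for real t and z."""
    # Phi(u) decays like exp(-pi e^{4u}), so truncating the integral at u = 2
    # is already far beyond double precision.
    val, _err = quad(lambda u: math.exp(t * u * u) * Phi(u) * math.cos(z * u),
                     0.0, 2.0, limit=200)
    return val

# H_0 is positive at z = 28 and negative at z = 29: the sign change brackets
# the zero at z = 2 * 14.1347... = 28.2695...
assert H(0.0, 28.0) * H(0.0, 29.0) < 0
```

The same routine with t > 0 gives a quick way to experiment with the heat-flow deformation of the zeroes.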
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODEs

[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]

where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as

[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]

[math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]

where the dependence on [math]t[/math] has been omitted for brevity. In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and [math]T[/math] obeys the asymptotic

[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]

as [math]T \to \infty[/math] (caution: the error term here is not uniform in [math]t[/math]). Also, the zeroes behave like an arithmetic progression in the sense that

[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]

as [math]k \to +\infty[/math].

Threads

Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
Polymath15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018.
Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018.
Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018.
Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018.
Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018.
Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018.
Polymath15, eleventh thread: Writing up the results, and exploring negative t, Terence Tao, Dec 28, 2018.
Effective approximation of heat flow evolution of the Riemann xi function, and a new upper bound for the de Bruijn-Newman constant, Terence Tao, Apr 30, 2019.

Other blog posts and online discussion

Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.

Code and data

Writeup

Here are the Polymath15 grant acknowledgments.

Test problem

Zero-free regions

See Zero-free regions.

Wikipedia and other references

Bibliography

[A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995–1009.
[B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994), pp. 609–630.
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height, 2004.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306.
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
Speaker:
Affiliation: Indian Institute of Technology, Department of Computer Science and Engineering, Hauz Khas, New Delhi 110016
Time:
Venue: A-201 (STCS Seminar Room)
Organisers:

We develop new techniques for rounding packing integer programs using iterative randomized rounding. It is based on a novel application of multidimensional Brownian motion in $\mathbb{R}^n$. Let $\overset{\sim}{x} \in {[0,1]}^n$ be a fractional feasible solution of a packing constraint $A x \leq 1$, $A \in {\{0,1 \}}^{m\times n}$, that maximizes a linear objective function. Our algorithm iteratively transforms $\overset{\sim}{x}$ to $\hat{x} \in {\{ 0,1\}}^{n}$ using a random walk, such that the expected values of the $\hat{x}_i$'s are consistent with the Raghavan-Thompson rounding. In addition, it gives us intermediate values $x'$ which can then be used to bias the rounding towards a superior solution. Our algorithm gradually sparsifies $A$ to $A' \in {\{0,1 \}}^{m\times n}$, where each row in $A'$ has at most $\log n$ non-zero coefficients, with $A'\cdot x' \leq O(1)$. The reduced dependencies between the constraints of the sparser system can be exploited using the {\it Lovász Local Lemma}. Using the Moser-Tardos constructive version, $x'$ converges in polynomial time to $\hat{x}$, distributed over the unit hypercube ${\cal H}_n = {\{0,1 \}}^n$ such that the expected value of any linear objective function over ${\cal H}_n$ equals its value at $\overset{\sim}{x}$. We discuss applications of these techniques when $A$ is a random matrix, and also the more general situation of a $k$-column-sparse matrix (joint work with Dhiraj Madan).
Under the auspices of the Computational Complexity Foundation (CCF)

We present error-correcting codes that achieve the information-theoretically best possible trade-off between the rate and error-correction radius. Specifically, for every $0 < R < 1$ and $\varepsilon > 0$, we present an explicit construction of error-correcting codes of rate $R$ that can be list decoded in polynomial time up to a fraction $(1-R-\varepsilon)$ of {\em worst-case} errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory.

Our codes are simple to describe: they are {\em folded Reed-Solomon codes}, which are in fact {\em exactly} Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in {\em phased bursts}.

The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on $\varepsilon$) using ideas concerning ``list recovery'' and expander-based codes. Concatenating the folded RS codes with suitable inner codes also gives us polynomial time constructible binary codes that can be efficiently list decoded up to the Zyablov bound, i.e., up to twice the radius achieved by the standard GMD decoding of concatenated codes.

For every $0 < R < 1$ and $\varepsilon > 0$, we present an explicit construction of error-correcting codes of rate $R$ that can be list decoded in polynomial time up to a fraction $(1-R-\varepsilon)$ of errors. These codes achieve the ``capacity'' for decoding from {\em adversarial} errors, i.e., achieve the {\em optimal} trade-off between rate and error-correction radius. At least theoretically, this meets one of the central challenges in coding theory. Prior to this work, explicit codes achieving capacity were not known for {\em any} rate $R$.
In fact, our codes are the first to beat the error-correction radius of $1-\sqrt{R}$, that was achieved for Reed-Solomon codes in \cite{GS}, for all rates $R$. (For rates $R < 1/16$, a recent breakthrough by Parvaresh and Vardy improved upon the $1-\sqrt{R}$ bound; for $R \to 0$, their algorithm can decode a fraction $1-O(R \log(1/R))$ of errors.)

Our codes are simple to describe --- they are certain {\em folded Reed-Solomon codes}, which are in fact {\em exactly} Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, since the codes we propose are not too far from the ones in actual use.

The main insight in our work is that some carefully chosen folded RS codes are ``compressed'' versions of a related family of Parvaresh-Vardy codes. Further, the decoding of the folded RS codes can be reduced to list decoding the related Parvaresh-Vardy codes. The alphabet size of these folded RS codes is polynomial in the block length. This can be reduced to a (large) constant using ideas concerning ``list recovering'' and expander-based codes. Concatenating the folded RS codes with suitable inner codes also gives us polytime constructible binary codes that can be efficiently list decoded up to the Zyablov bound.
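To make the folding operation concrete, here is a toy sketch (field size and parameters are illustrative, not from the paper): a Reed-Solomon codeword of length $n$ over GF($p$) is viewed as a word of length $n/m$ over the larger alphabet GF($p$)$^m$ by bundling $m$ consecutive symbols.

```python
# Toy folded Reed-Solomon encoder over GF(p), p prime (illustrative parameters).
p = 17   # field size
g = 3    # a generator of GF(17)^* (3 has multiplicative order 16 mod 17)
n = 16   # RS block length, p - 1
m = 4    # folding parameter

def rs_encode(msg):
    """Classic RS encoding: evaluate the message polynomial at 1, g, ..., g^{n-1}."""
    return [sum(c * pow(g, i * j, p) for j, c in enumerate(msg)) % p
            for i in range(n)]

def fold(codeword, m):
    """Bundle m consecutive symbols into one symbol of the larger alphabet."""
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

cw = rs_encode([1, 2, 3])      # a rate-3/16 toy code
folded = fold(cw, m)
assert len(folded) == n // m                   # block length shrinks by m...
assert all(len(sym) == m for sym in folded)    # ...alphabet grows to GF(p)^m
```

The code is unchanged as a set of symbols; only the grouping differs, which is why the folded code inherits the distance properties of the underlying RS code.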
Let $\mathcal{C}$ be a small category equipped with a terminal object $1$ and a Grothendieck topology. (Assume $\mathcal{C}$ also has pullbacks, if it is more convenient.) The following is a simplicial version of Verdier's hypercovering theorem:

Let $X$ be a locally fibrant simplicial presheaf on $\mathcal{C}$, let $\hat{X}$ be an associated hypersheaf (i.e. a fibrant replacement in the local Jardine model structure), let $\mathbf{Hc}$ be the category of hypercovers (of $1$), and let $\operatorname{Ho} \mathbf{Hc}$ be the same category modulo simplicial homotopy. Then $\operatorname{Ho} \mathbf{Hc}^\mathrm{op}$ is a filtered category and admits a small cofinal subcategory, and there is a canonical bijection $$\mathop{\varinjlim_{\operatorname{Ho} \mathbf{Hc}^\mathrm{op}}} \pi_0 \underline{\mathrm{Hom}} (U, X) \cong \pi_0 \Gamma (\hat{X})$$ where $U$ is the functor sending a hypercover to the corresponding simplicial presheaf and $\underline{\mathrm{Hom}} (U, X)$ is the simplicial set of morphisms $U \to X$. (An analogous statement holds for higher homotopy groups, where the colimit is indexed over a more complicated category if the basepoint is not a global section of $X$.)

Here is another version: Let $X$ be a presheaf of Kan complexes on $\mathcal{C}$ and let $\hat{X}$ be an associated hypersheaf. Then we have a natural bijection $$\mathop{\varinjlim_{\operatorname{Ho} \mathbf{Hc}^\mathrm{op}}} \pi_0 \underline{\mathrm{Hom}} (Z \odot U, X) \cong \operatorname{Ho} \mathbf{sSet} (Z, \Gamma (\hat{X}))$$ for all simplicial sets $Z$. Thus, we can compute the (weak) homotopy type of $\Gamma (\hat{X})$ in terms of $X$ and hypercovers.

Question. Is there a formula of the same kind that gives an actual "model" for $\Gamma (\hat{X})$? For instance, it would be nice if we had $$\mathop{\varinjlim_{\mathbf{Hc}^\mathrm{op}}} \underline{\mathrm{Hom}} (U, X) \simeq \Gamma (\hat{X})$$ but I do not see how to prove this. (Is it even true?
It is easy enough to show that we get a bijection in $\pi_0$, and I think we also get an equivalence of fundamental groupoids.)
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Pseudorapidity dependence of the anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-11)
We present measurements of the elliptic ($\mathrm{v}_2$), triangular ($\mathrm{v}_3$) and quadrangular ($\mathrm{v}_4$) anisotropic azimuthal flow over a wide range of pseudorapidities ($-3.5< \eta < 5$). The measurements ...

Correlated event-by-event fluctuations of flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2016-10)
We report the measurements of correlations between event-by-event fluctuations of amplitudes of anisotropic flow harmonics in nucleus–nucleus collisions, obtained for the first time using a new analysis method based on ...

Centrality dependence of $\mathbf{\psi}$(2S) suppression in p-Pb collisions at $\mathbf{\sqrt{{\textit s}_{\rm NN}}}$ = 5.02 TeV (Springer, 2016-06)
The inclusive production of the $\psi$(2S) charmonium state was studied as a function of centrality in p-Pb collisions at the nucleon-nucleon center of mass energy $\sqrt{s_{\rm NN}}$ = 5.02 TeV at the CERN LHC. The ...

Transverse momentum dependence of D-meson production in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-03)
The production of prompt charmed mesons D$^0$, D$^+$ and D$^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb–Pb collisions at the centre-of-mass energy per nucleon pair, $\sqrt{s_{\rm NN}}$ of ...

Multiplicity and transverse momentum evolution of charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions at the LHC (Springer, 2016)
We report on two-particle charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions as a function of the pseudorapidity and azimuthal angle difference, $\mathrm{\Delta}\eta$ and $\mathrm{\Delta}\varphi$ respectively. ...
Charge-dependent flow and the search for the chiral magnetic wave in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2016-04)
We report on measurements of a charge-dependent flow using a novel three-particle correlator with ALICE in Pb–Pb collisions at the LHC, and discuss the implications for observation of local parity violation and the Chiral ...

Pseudorapidity and transverse-momentum distributions of charged particles in proton-proton collisions at $\mathbf{\sqrt{\textit s}}$ = 13 TeV (Elsevier, 2016-02)
The pseudorapidity ($\eta$) and transverse-momentum ($p_{\rm T}$) distributions of charged particles produced in proton-proton collisions are measured at the centre-of-mass energy $\sqrt{s}$ = 13 TeV. The pseudorapidity ...

Differential studies of inclusive J/$\psi$ and $\psi$(2S) production at forward rapidity in Pb-Pb collisions at $\mathbf{\sqrt{{\textit s}_{_{NN}}}}$ = 2.76 TeV (Springer, 2016-05)
The production of J/$\psi$ and $\psi(2S)$ was measured with the ALICE detector in Pb-Pb collisions at the LHC. The measurement was performed at forward rapidity ($2.5 < y < 4 $) down to zero transverse momentum ($p_{\rm ...

Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...

Anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2016-04)
We report the first results of elliptic ($v_2$), triangular ($v_3$) and quadrangular flow ($v_4$) of charged particles in Pb--Pb collisions at $\sqrt{s_{_{\rm NN}}}=$ 5.02 TeV with the ALICE detector at the CERN Large ...
Properties of Adjoints of Linear Maps

Recall from The Adjoint of a Linear Map page that if $V$ and $W$ are finite-dimensional nonzero inner product spaces and $T \in \mathcal L (V, W)$, then the adjoint of $T$ is the linear map $T^* \in \mathcal L (W, V)$ defined as follows: for a fixed $w \in W$, consider the linear functional $\varphi : V \to \mathbb{F}$ defined by $\varphi(v) = \langle T(v), w \rangle$, and define $T^*(w)$ to be the unique vector such that $\langle T(v), w \rangle = \langle v, T^*(w) \rangle$. We will now look at some properties of adjoints in the following propositions.

Proposition 1: Let $V$ and $W$ be finite-dimensional nonzero inner product spaces. Let $S, T \in \mathcal L(V, W)$. Then $(S + T)^* = S^* + T^*$.

Proof: We note that $\langle (S + T)(v), w \rangle = \langle v, (S + T)^*(w) \rangle$ by the definition of an adjoint. Furthermore, we have that:

$\langle (S + T)(v), w \rangle = \langle S(v), w \rangle + \langle T(v), w \rangle = \langle v, S^*(w) \rangle + \langle v, T^*(w) \rangle = \langle v, (S^* + T^*)(w) \rangle$

Thus we have that $(S + T)^* = S^* + T^*$. $\blacksquare$

Proposition 2: Let $V$ and $W$ be finite-dimensional nonzero inner product spaces. Let $T \in \mathcal L(V, W)$ and let $a \in \mathbb{F}$. Then $(aT)^* = \overline{a}T^*$.

Proof: We note that $\langle (aT)(v), w \rangle = \langle v, (aT)^*(w) \rangle$. Furthermore, we have that:

$\langle (aT)(v), w \rangle = a \langle T(v), w \rangle = a \langle v, T^*(w) \rangle = \langle v, \overline{a}T^*(w) \rangle$

Thus we have that $(aT)^*(w) = \overline{a}T^*(w)$. $\blacksquare$

Proposition 3: Let $V$ and $W$ be finite-dimensional nonzero inner product spaces. Let $T \in \mathcal L(V, W)$. Then $(T^*)^* = T$.

Proof: We note that $\langle T^*(w), v \rangle = \langle w, (T^*)^*(v) \rangle$ by the definition of the adjoint of $T^*$. Furthermore, we have that:

$\langle w, (T^*)^*(v) \rangle = \langle T^*(w), v \rangle = \overline{\langle v, T^*(w) \rangle} = \overline{\langle T(v), w \rangle} = \langle w, T(v) \rangle$

Thus we have that $(T^*)^* = T$. $\blacksquare$

Proposition 4: If $I$ is the identity operator on a finite-dimensional nonzero inner product space $V$ then $I^* = I$.

Proof: We note that $\langle I(v), w \rangle = \langle v, I(w) \rangle = \langle v, w \rangle$. So $I^*(w) = w$ and so $I^* = I$. $\blacksquare$

Proposition 5: Let $U$, $V$ and $W$ be finite-dimensional nonzero inner product spaces. Let $S \in \mathcal L(W, U)$ and let $T \in \mathcal L(V, W)$. Then $(ST)^* = T^*S^*$.

Proof: We note that $\langle (ST)(v), u \rangle = \langle v, (ST)^*(u) \rangle$. Furthermore we have that:

$\langle (ST)(v), u \rangle = \langle S(T(v)), u \rangle = \langle T(v), S^*(u) \rangle = \langle v, T^*(S^*(u)) \rangle$

Thus we have that $(ST)^* = T^*S^*$.
$\blacksquare$

Example 1

Let $T$ be a linear operator on the inner product space $V$. Prove that $T = T^2$ if and only if $T^* = (T^*)^2$.

$\Rightarrow$ Suppose that $T = T^2$. Then if we take the adjoint of both sides of this equation we get that:

$T^* = (T^2)^* = (TT)^* = T^*T^* = (T^*)^2$

Here we applied Proposition 5. Therefore $T^* = (T^*)^2$.

$\Leftarrow$ Suppose that $T^* = (T^*)^2$. Then if we take the adjoint of both sides of this equation we get that:

$T = (T^*)^* = ((T^*)^2)^* = ((T^*)(T^*))^* = (T^*)^*(T^*)^* = T^2$

Here we applied Proposition 3 and Proposition 5. Therefore $T = T^2$.
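In coordinates these propositions are easy to check numerically (a sketch using NumPy; with orthonormal bases and the standard inner product, the adjoint of a matrix map is its conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(0)

def adjoint(A):
    """With the standard inner product <x, y> = y^H x, the adjoint of the
    map x -> A x is x -> A^H x (conjugate transpose)."""
    return A.conj().T

# S, T : V -> W as 3x4 complex matrices; R : W -> U as a 2x3 matrix.
S = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
T = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
R = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
a = 2.0 - 1.0j

assert np.allclose(adjoint(S + T), adjoint(S) + adjoint(T))    # Proposition 1
assert np.allclose(adjoint(a * T), np.conj(a) * adjoint(T))    # Proposition 2
assert np.allclose(adjoint(adjoint(T)), T)                     # Proposition 3
assert np.allclose(adjoint(np.eye(3)), np.eye(3))              # Proposition 4
assert np.allclose(adjoint(R @ T), adjoint(T) @ adjoint(R))    # Proposition 5
```

Note how Proposition 5 shows up as the familiar reversal rule for conjugate transposes of matrix products.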
The Zitterbewegung is more of a relic of the early Dirac equation days. It does not exist in the standard position, velocity and acceleration operators of the single particle field, only in alternatively derived versions. These alternative versions were developed because people thought the standard operators were wrong. In fact they didn't understand the standard operators. The standard method uses the Heisenberg equation of motion:

$\frac{d\tilde{\cal O}}{dt} ~=~ \frac{i}{\hbar}\!\!\left[~\tilde{H},\tilde{\cal O}~\right]$

Where the misunderstanding comes from is easy to see in the modern chiral representation. We will show that the standard operators are correct. If we define a position, a velocity and an acceleration operator for the Dirac field, then the (averaged) position, velocity and acceleration are given by:

Position, velocity and acceleration operators applied to the Dirac field:

$\vec{x}_{avg} ~=~ \frac{1}{2mc}\int dx^3 ~~ \psi^* \vec{X}~\psi ~~~~~~~~~ (\vec{X}: \mbox{position operator}) $

$\vec{v}_{avg} ~=~ \frac{1}{2mc}\int dx^3 ~~ \psi^* \vec{V}~\psi \,~~~~~~~~~ (\vec{V}: \mbox{velocity operator}) $

$\vec{a}_{avg} ~=~ \frac{1}{2mc}\int dx^3 ~~ \psi^* \vec{A}~\psi \,~~~~~~~~~ (\vec{A}: \mbox{acceleration operator}) $

Velocity operator

Now $\vec{X}$ is simply the position $\vec{x}$ of each point of the wavefunction. The velocity operator can be derived by commuting with the Hamiltonian:

$\tilde{V}^i\psi\ =\ \frac{i}{\hbar}\left[~\tilde{H},\tilde{X}^i~\right]\psi\ =\ c \left( \begin{array}{cc} -\sigma^i & 0 \\ 0 & \sigma^i \end{array} \right)\psi$

This velocity operator is in fact totally correct, but it was thought to be erroneous in the early days because people misunderstood it to mean that the electron can only move with $\pm\,c$, and therefore it must be wrong, they thought. What they were actually expecting was something like the $\vec{v}=\vec{p}/m$ they got in non-relativistic theories, but they found something which only contained $\pm\,c$.
However, if we evaluate the expression for $\vec{v}_{avg}$ then we get:

$\vec{v}_{avg} ~=~ \frac{c}{2mc}\int dx^3 ~~ \psi^* \left( \begin{array}{cc} -\sigma^i & 0 \\ 0 & \sigma^i \end{array} \right)\psi ~~=~~ \frac{c}{2mc}\int dx^3 ~~ \bar{\psi} \gamma^i \psi ~~=~~ \frac{c}{2mc}\int dx^3 ~ j^i$

This is an integral over the current density, or the momentum with the appropriate units. Now the momentum $\vec{p}$ is a factor $\gamma$ larger than the velocity $\vec{v}$, but the integral over the Lorentz contracted field compensates this, so we end up with the velocity of the particle as required! The velocity operator is perfectly fine.

The other big misunderstanding was that the x, y and z-components of the velocity operator do not commute, while they do so in the non-relativistic theory, and therefore the operator must be wrong, they thought. You can still find this quoted in many textbooks. But as you see, the expression derives the velocity from the momentum, and as we know the momentum components (the boost components) should not commute. So in fact the components of the velocity operator should not commute either. Again the operator behaves exactly in the right way, and it doesn't show a zitterbewegung at all.

Acceleration operator

We'll briefly handle the standard acceleration operator as well and show that there is no zitterbewegung and that the result transforms in the right way under Lorentz transform. It can actually be shown that it transforms like the Lorentz force:

$\psi^*\tilde{A}^i\psi ~~=~~ \frac{i}{m}\frac{d\vec{p}}{dt} ~~\mbox{transforms like:}~~ \frac{iq}{m}\left(\vec{v}\times\vec{B} ~+~ \vec{E}\right)$

Because $\psi^*\tilde{A}^i\psi$ gives rise to two terms which transform like the electron's magnetization and polarization, the construction which transforms like the Lorentz force is thus actually:
$\psi^*\tilde{A}^i\psi ~~\mbox{transforms like:}~~ \frac{iq}{m}\left(-\vec{v}\times\mu_o\vec{M} ~+~ \frac{1}{\epsilon_o}\vec{P}\right)$

If you note that $\vec{v}\times\vec{M}~\propto~\vec{p}\times\vec{j}_A$, then you can recognize the two terms in the standard acceleration operator, which is:

$\psi^*\tilde{A}^i\psi ~~=~~ c~\bar{\psi}\left[\,\gamma^i\gamma^5\times(\partial_i-i\frac{e}{\hbar}\!A_i) ~\right]\psi~+~ \frac{imc^3}{\hbar}~\bar{\psi}\gamma^0\gamma^i\psi$

The acceleration is zero in a plane wave in the absence of a B or E field. In this case the electron field has its own inherent M and P values and the two terms cancel each other. If the inherent M and P values change because of external B and E fields (by addition), then the electron accelerates.

Chiral representation

Now what about the $c$ in the velocity operator? This behavior is easy to understand in the modern chiral representation and the propagator of the field. In principle all fields are massless and propagate with $c$. Due to coupling, however, propagators can have any speed between $+c$ and $-c$. The electron has two such massless components:

$\psi~~=~~\left(\begin{array}{c}\psi_L\\ \psi_R \end{array}\right)$

So, these two components do move at the speed of light. In the rest frame they move exactly opposite to each other and the combined speed is zero. The big difference with the zitterbewegung is that they both happen at the same time: there is no overall alternating net velocity. Now the time evolution in the rest frame is (in natural units):

$e^{-i\tilde{H}t}\left(\begin{array}{c}\psi_L\\ \psi_R \end{array}\right) ~~=~~ \left(\begin{array}{c}\psi_L\cos(mt)-i\psi_R\sin(mt)\\ \psi_R\cos(mt)-i\psi_L\sin(mt) \end{array}\right)$

So, you see $\psi_L$ and $\psi_R$ alternating, but is there a zitterbewegung of $\psi$ or of the individual components $\psi_L$ and $\psi_R$? The answer is: NO for electrons and NO for positrons.
This is because these are exactly the only two solutions of the Dirac equation which do not show a zitterbewegung. The reason for this is:

electron at rest: $\psi_L=+\psi_R$

positron at rest: $\psi_L=-\psi_R$

The other "exotic" states where $\psi_L\neq\pm\psi_R$ at rest do show a zitterbewegung, for instance $\psi_L=i\psi_R$ or $\psi_L=\sigma_z\psi_R$. This is actually the reason why these states are not allowed: they would radiate away electromagnetic energy with the frequency corresponding to their mass.

Hans.
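A quick numerical illustration of the claim above (a toy sketch with 2-component stand-ins for $\psi_L$ and $\psi_R$, the mass term as the off-diagonal coupling, natural units; an illustration, not the full 4-component Dirac field):

```python
import numpy as np

m = 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)  # couples psi_L, psi_R

def evolve(psi0, t):
    """exp(-i H t) with H = m * sigma_x, i.e. cos(mt) I - i sin(mt) sigma_x."""
    U = np.cos(m * t) * np.eye(2) - 1j * np.sin(m * t) * sx
    return U @ psi0

electron = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # psi_L = +psi_R
exotic = np.array([1.0, 0.0], dtype=complex)                 # psi_L only

for t in (0.3, 0.7, 1.9):
    # The psi_L = +psi_R state only picks up an overall phase e^{-imt}:
    # the magnitudes of both components are constant (no zitterbewegung).
    assert np.allclose(np.abs(evolve(electron, t)), np.abs(electron))
    # The exotic state sloshes between psi_L and psi_R at frequency m.
    assert not np.allclose(np.abs(evolve(exotic, t)), np.abs(exotic))
```

The $\psi_L=\pm\psi_R$ states are eigenstates of the coupling, which is exactly why only an overall phase appears.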
With the growth in Internet of Things (IoT) products, the number of applications requiring an estimate of the range between two wireless nodes in indoor channels is growing very quickly as well. Therefore, localization is becoming a red-hot market today and will remain so in the coming years. One perplexing question is how many companies nowadays are offering cm-level accurate solutions using RF signals. Conventional wireless nodes usually implement synchronization techniques that provide around $\mu s$-level accuracy, and if they try to find the range through timestamps, the estimate would be off by $$1 \mu s \times 3 \times 10^8 m/s = 300 m$$ where $3\times 10^8 m/s$ is the approximate speed of an electromagnetic wave. So how are cm-level accurate solutions being claimed and actually delivered? This is a classic example of the simplest of signals solving the most complex of problems. In this article, my target is to explain the fundamentals behind this high-resolution ranging in the easiest manner possible. Needless to say, while each product has its own unique signal processing algorithms, the fundamentals remain the same. The Big Picture For the sake of providing the big picture, remember that there are other methods available, the best of which are based on optical interferometry. There are ultrasound, optical and hybrid options available as well. RF is the cheapest solution though, and there is nothing better than getting accurate measurements using RF waves. The following techniques are most widely used in the RF domain. Received Signal Strength Indicator (RSSI) Time of Arrival (ToA) Phase of Arrival (PoA) – a special case of ToA Time Difference of Arrival (TDoA) Angle of Arrival (AoA) While I do not explain each of the above in detail (Google is your friend), I summarize their pros and cons below (anchors are wireless nodes with known positions).
RSSI: Pros: simple hardware, no synchronization required, info provided by most PHY chips. Cons: highly inaccurate and environment specific.
ToA: Pros: highly accurate. Cons: time synchronization required among anchors and target node.
PoA: Pros: extremely accurate, low cost. Cons: sensitive to phase noise and impairments.
TDoA: Pros: great accuracy, no target node synchronization. Cons: tight synchronization among all anchors.
AoA: Pros: extra dimension relaxes timing and phase constraints. Cons: expensive hardware and less accurate.
As a final comment, all range estimation methods need a reference point. Anchors provide this reference when an accurate measurement of position is needed. If it is just the range from another node that is of interest, any node can use its own reference. This is the situation we assume in this article. What is a Timestamp? A typical embedded device comes with a counter and a register. The value of the counter increments/decrements as driven by an oscillator. When an increment counter reaches its maximum value (0xF…FF), or a decrement counter reaches its minimum value (0x0…00), it overflows and starts counting again. If a desirable event occurs, say a message arrival driven by an Rx start interrupt, the value of the counter can be captured and stored in a register that can later be accessed to find the time of that event – according to the node’s own reference clock. As an example, consider the following Figure, where the timestamp value is captured in Register, the Counter is an incremental counter, Tx Start is an event that resets the counter, and Rx Start is an event that captures the Counter value to Register. Figure 1: The counter, register and Tx and Rx start events If you don’t know much about electronics, it is enough to know that event times can be recorded at a node and accessed for processing later. Setup The ranging setup in this discussion consists of two nodes that can exchange timestamps with each other through the wireless medium, as shown in the Figure below.
Figure 2: Two nodes exchanging timestamps with each other The distance between the two nodes is $R$ while the time of flight from one node to another is $\tau$. Consequently, $$R = \tau \cdot c$$ We denote the real time by $t$, Node A’s time by $T_A$ and Node B’s time by $T_B$. Since each node starts at a random time, there is a clock offset between its time and the real time. $$T_A = t + \phi_A$$ $$T_B = t + \phi_B$$ Refer to the next Figure to observe how the chain of events unfolds. Figure 3: The chain of events with their corresponding timestamps exchanged between Node A and Node B Any node can start its counter at any given time. So, setting a reference point at an arbitrary real time 0, the time offset of Node A is $\phi_A$ while that of Node B is $\phi_B$. 1. Node A sends its local timestamp $T_1$ to Node B at real time $t_1$, where $$T_1 = t_1 + \phi_A$$ 2. Node B receives this packet at real time $t_2$ and records its local time $T_2$, where $$T_2 =t_2 + \phi_B$$ Clearly, $$t_2 = t_1 + \tau,\qquad \text{or} \qquad \tau = t_2-t_1$$ Therefore, we can write $$T_2-T_1 = t_2 +\phi_B- t_1-\phi_A $$ Defining $\Delta T_{A->B}=T_2-T_1$ and $\Delta \phi = \phi_B - \phi_A$ (the clock offset between the two nodes), $$\Delta T_{A->B} = \tau + \Delta \phi \quad —— \quad \text{Eq (1)}$$ It is important to write the equation in the above form because all we know is the observation $\Delta T_{A->B}$. We do not know $t_1$, $t_2$, $\tau$, $\phi_A$ or $\phi_B$. 3. After a processing delay, Node B sends its local timestamp $T_3$ at real time $t_3$ to Node A. 4. Node A records its local time $T_4$ at real time $t_4$. Since $t_4 = t_3+\tau$, $$T_4 -T_3 = t_4+\phi_A - t_3 - \phi_B$$ which can be written in terms of $\Delta T_{B->A}=T_4-T_3$ as $$\Delta T_{B->A} = \tau - \Delta \phi \quad —— \quad \text{Eq (2)}$$ Adding Eq (1) and Eq (2) yields the estimate of the delay.
$$\hat \tau = \frac{1}{2}\Big(\Delta T_{A->B} + \Delta T_{B->A}\Big)$$ Now it is clear that the time base of Node A serves as the reference for estimating this delay. Research literature refers to this approach as a ‘two-way message exchange’. To pay tribute to Tolkien, I call it ‘There and Back Again’. Performance I performed some ranging experiments with a wireless device with a clock rate of 8 MHz. That implies that one tick takes $1/(8 \times 10^6)$ $=$ $125 ns$. In terms of distance, this is $125 ns \times 3\times 10^8$ $=$ $37.5$ m. Gradually increasing the distance, halving the round-trip measurements and rounding off generated the following results. Figure 4: Results for a ranging experiment with an 8 MHz clock Assume that a 100x better accuracy, say $37.5$ cm, is needed. Then, we need a clock generating timestamps at a rate of 800 MHz. That kind of expense and power, however, is more suited to computing applications than to an embedded device. In conclusion, we cannot afford a high-rate clock but still desire a high resolution. The Arrival of the Phase of Arrival In the spirit of time of arrival, this method is known as phase of arrival. First, observe that we already have access to something similar to a high-resolution clock – a continuous wave (CW). Consider a simple sinusoid at a GHz frequency and just plot its sign. It looks very much like a very high-rate clock. Figure 5: Sign of a simple continuous wave is similar to a high rate clock Now again consider two wireless nodes that exchange continuous waves instead of timestamps in the following manner. 1. Node A sends a continuous wave $\cos (2\pi F_1 t)$ of frequency $F_1$ at its time $T_1$ (real time $t_1$) to Node B. Using $T_1 = t_1 + \phi_A$, its phase is given by $$2\pi F_1 T_1 = 2\pi F_1 t_1 + 2\pi F_1 \phi_A$$ where $2\pi F_1 \phi_A$ is just a constant and could easily be expressed as a single term $\phi'_A$.
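As a sanity check, the whole two-way exchange can be simulated in a few lines of Python (the clock offsets, processing delay, and the 100 ns flight time below are made-up illustrative values): the estimator recovers $\tau$ because the unknown clock offsets cancel.

```python
# Two-way message exchange ("There and Back Again") ranging sketch.
# All constants are illustrative; tau is the true one-way time of flight.

tau = 100e-9          # one-way flight time: 100 ns (~30 m)
phi_A = 0.012345      # Node A clock offset w.r.t. real time (unknown in practice)
phi_B = -0.054321     # Node B clock offset (unknown in practice)
t_proc = 250e-6       # processing delay at Node B before replying

t1 = 1.0              # real time when Node A transmits
T1 = t1 + phi_A       # timestamp recorded by Node A

t2 = t1 + tau         # real arrival time at Node B
T2 = t2 + phi_B       # timestamp recorded by Node B

t3 = t2 + t_proc      # real time when Node B replies
T3 = t3 + phi_B

t4 = t3 + tau         # real arrival time back at Node A
T4 = t4 + phi_A

# Eq (1): T2 - T1 = tau + (phi_B - phi_A)
# Eq (2): T4 - T3 = tau - (phi_B - phi_A)
# Adding the two cancels the unknown clock offset:
tau_hat = 0.5 * ((T2 - T1) + (T4 - T3))
R_hat = tau_hat * 3e8   # range estimate in metres

print(tau_hat, R_hat)
```

Note that the processing delay at Node B drops out as well, since it is common to both $T_3$ and $T_4$.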
As opposed to the timestamp case, it is neither required nor easy to measure the phase $2\pi F_1 T_1$ explicitly. 2. Node B receives this continuous wave at real time $t_2$, when the phase of its own local reference at frequency $F_1$ at its local time $T_2$, where $T_2 =t_2 + \phi_B$, is $$2\pi F_1 T_2 = 2\pi F_1 t_2 + 2\pi F_1 \phi_B$$ Using $t_2 = t_1 + \tau$, Node B employs a signal processing algorithm to measure the phase difference between the two continuous waves as $$\Delta \theta_{A->B} = 2\pi F_1(T_2-T_1) = 2\pi F_1 \tau + 2\pi F_1\Delta \phi\quad —— \quad \text{Eq (3)}$$ It is important to write the equation in the above form because all we know is the phase difference $\Delta \theta _{A->B}$. We do not know anything else. 3. After a processing delay, Node B sends a continuous wave in the reverse direction. 4. Node A measures the phase difference $$\Delta \theta_{B->A} = 2\pi F_1 \tau - 2\pi F_1 \Delta \phi \quad —— \quad \text{Eq (4)}$$ Adding Eq (3) and Eq (4) yields the estimate of the delay. $$\hat \tau = \frac{1}{2\cdot 2\pi F_1}\Big(\Delta \theta_{A->B} + \Delta \theta_{B->A}\Big)\quad —— \quad \text{Eq (5)}$$ That was so easy, so fast and so accurate. But the world is not that simple. The Rollover Problem The solution to the accuracy problem creates a problem of its own. Remember we said that when an increment counter reaches its maximum value (0xF…FF), or a decrement counter reaches its minimum value (0x0…00), it overflows and starts counting again. So if a clock is very fast, it overflows more quickly and resets again. It might even do so before the signal on the reverse path has returned! The same is the case with sinusoids. For example, a continuous wave at 2.4 GHz rolls over every $1/(2.4 \times 10^9) \times 3 \times 10^8$ $=$ $12.5$ cm. Any distance greater than 12.5 cm would be impossible to measure unambiguously.
Introducing More Carriers To solve this rollover problem, define $\Delta \theta = \Delta \theta_{A->B} + \Delta \theta_{B->A}$ and start by plugging Eq (5) into the range expression. $$ R = c\cdot \hat \tau = c \cdot \frac{1}{2\cdot 2\pi F_1} \Delta \theta$$ This can be simplified using $c=F_1 \lambda_1$ as $$R = \frac{\lambda_1}{2} \frac{\Delta \theta}{2\pi} $$ Now we can break the phase $\Delta \theta$ into an integer part and a fractional part, $\Delta \theta = 2\pi n + \Delta \theta_{\text{frac},F_1}$, where $n$ is the number of integer wavelengths spanning the distance $R$ while $\Delta \theta_{\text{frac},F_1}$ is the phase corresponding to the remaining fractional distance. Thus, the above equation can be written as $$R = \frac{\lambda_1}{2}\left(n + \frac{\Delta \theta_{\text{frac},F_1}}{2\pi}\right)$$ Writing the fractional phase as a function of range, $$\Delta \theta_{\text{frac},F_1} = 2\pi\left(2R\frac{F_1}{c} - n\right)\quad —— \quad \text{Eq (6)}$$ The rollover unwrapping problem is now reduced to cancelling $n$ from the above equation. This can be easily accomplished by sending another tone at frequency $F_2$ that generates the result $$\Delta \theta_{\text{frac},F_2} = 2\pi\left(2R\frac{F_2}{c} - n\right)$$ The above two equations can now be solved to cancel $n$ and create an effect equivalent to sending a single tone with a very large wavelength, i.e., the very low frequency $F_2-F_1$. $$\Delta \theta_{\text{frac},F_2} - \Delta \theta_{\text{frac},F_1} = 2\pi\left(2R\frac{F_2-F_1}{c} \right)$$ The range is now found as $$R = \frac{c}{4\pi}\cdot \frac{\Delta \theta_{\text{frac},F_2} - \Delta \theta_{\text{frac},F_1}}{F_2-F_1}$$ Having eliminated the phase rollover, we are interested in the maximum range that can be unambiguously estimated through the above equation. Clearly, this depends on the frequency difference between the two continuous waves.
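A quick numerical check of the two-tone trick, sketched in Python (the tone frequencies and the true range are made-up illustrative values, and $c$ is taken as $3\times10^8$ m/s as in the text): although each tone's phase has rolled over many times, their difference recovers the range.

```python
import math

c = 3e8            # propagation speed used in the text
R_true = 30.0      # true range in metres (illustrative)
F1, F2 = 2.402e9, 2.404e9   # two tones, 2 MHz apart (illustrative)

def round_trip_phase(F, R):
    """Fractional (wrapped) two-way phase accumulated over distance R."""
    return (4 * math.pi * R * F / c) % (2 * math.pi)

# Difference of the two wrapped phases, itself wrapped to [0, 2*pi)
d_theta = (round_trip_phase(F2, R_true)
           - round_trip_phase(F1, R_true)) % (2 * math.pi)

# R = c/(4*pi) * (phase difference)/(F2 - F1)
R_hat = c / (4 * math.pi) * d_theta / (F2 - F1)
print(R_hat)   # ≈ 30.0, well within the 75 m unambiguous limit for 2 MHz spacing
```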
Also, remember that $\Delta \theta_{\text{frac},F_2} - \Delta \theta_{\text{frac},F_1}$ can attain a maximum value of $2\pi$. Then, for example, for a 2 MHz difference, i.e., $F_2-F_1=2\times 10^6$, the unambiguous range is $$R = \frac{3\times 10^8}{4\pi}\cdot \frac{2\pi}{2\times 10^6}=75 m$$ The Phase Slope Method To combat interference and multipath in indoor channels, a number of different continuous waves can be used and their results stitched together to form a precise range estimate. This is plotted in the Figure below. Figure 6: Phase vs frequency plot After taking a number of measurements, a plot of phases versus frequencies is drawn. Similar to Eq (6), we can write $$\Delta \theta_{\text{frac},F_k} = 2\pi\left(2R\frac{F_k}{c} + \text{constant}\right)$$ where a constant term arises instead of $n$, as $n$ might not be the same for all frequencies. However, the slope of the curve is still given by $$\text{slope} = \frac{4\pi}{c}\cdot R$$ from which the range can be found as $$R = \frac{c}{4\pi} \cdot \text{slope}$$ This is why it is known as the Phase Slope method. It is relatively costly to implement due to the number of back-and-forth transmissions (equal to the number of CWs employed), but it pays off because indoor channels are frequently afflicted by interference and multipath. A wider range of frequencies ensures resilience against interference through the added redundancy. More importantly, a wider bandwidth combats the multipath problem through a higher resolution of arriving echoes in the time domain after taking the transform of this phase data.
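The Phase Slope method boils down to a straight-line fit of phase against frequency. Below is a minimal Python sketch with synthetic, noise-free phases (the range, tone frequencies, and constant offset are made-up, and the phases are assumed to be already unwrapped across tones):

```python
import math

c = 3e8
R_true = 12.3          # illustrative range in metres
const = 1.234          # arbitrary per-sweep constant (plays the role of -2*pi*n)

# Ten tones at 1 MHz spacing; phases assumed already unwrapped across tones.
freqs = [2.400e9 + k * 1e6 for k in range(10)]
phases = [4 * math.pi * R_true * f / c + const for f in freqs]

# Least-squares slope of phase vs frequency (a line fit needs no libraries).
n = len(freqs)
mean_f = sum(freqs) / n
mean_p = sum(phases) / n
slope = sum((f - mean_f) * (p - mean_p) for f, p in zip(freqs, phases)) \
        / sum((f - mean_f) ** 2 for f in freqs)

# slope = 4*pi*R/c  =>  R = c*slope/(4*pi); the constant offset does not matter
R_hat = c * slope / (4 * math.pi)
print(R_hat)
```

The constant offset shifts the intercept but not the slope, which is exactly why the method is robust to a per-frequency integer ambiguity.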
Bipolar disorder (BD) is among the most chronic and severe types of mental illness. The disease is a complex neuropsychiatric condition characterized by infrequent but extreme episodes of low (depressed) and elevated (manic) moods. When compared to other mental illnesses, the global health burden of BD is massive: 1-4% of all adults live with this condition, equating to over 17.5 million years lived with disability (YLDs). Data also indicate that among all mental illness patients, those with BD have the highest rate of suicide. Treatment for BD has focused principally on the use of lithium, which has been shown to reduce hyperactivity and the efficacy of neuronal action potential firing, and to rescue mitochondrial dysfunction. Nevertheless, a more thorough examination of the dynamics of BD is necessary if researchers hope to gain a better understanding of the nuances of this disease. A rich array of mathematical approaches models the spread of communicable diseases such as influenza, malaria, and dengue fever. However, there is little comprehension of the dynamics of mental illnesses such as bipolar disorder, or of whether understanding those dynamics might help develop targets for treatment. Given the present interest in dynamics, there is clearly a role for a deeper mathematical comprehension of the dynamics of BD. Our group at the University of Oxford has been using applied maths to develop this idea. Teaming up with psychiatrists and clinical psychologists, our aim has been to link the observed dynamics of BD, based on noisy patient mood profile data, with mathematics from statistics and dynamical systems theory—colloquially known as ‘mood maths’—to achieve clinically-relevant predictions and a more comprehensive understanding of the disease. Patients are able to self-report their moods through well-proven and standardized psychological scoring systems.
These scales include the Beck Depression Inventory, the Altman Self-Rating Mania Scale, and the Quick Inventory of Depressive Symptomatology (QIDS); each measures different aspects of a patient’s risk of BD. Using data from the QIDS scale, which patients regularly report via smartphone or internet-based technologies, we developed autoregressive time series approaches as novel descriptors of patient mood. Autoregressive time series models use mood scores at previous points in time as predictors to explain current mood. Mathematically, these time series models necessitate the construction of likelihoods, statistical descriptors of the hypothesis that model \(\textrm{M}\) generated our observed data \(\textrm{D}\). Developing these likelihoods required approaches to manage missing values and errors that are not normally distributed. Our likelihoods take the following form: \[ L(\mathbf D | \mathbf M)=\frac{Y_{1}^{r-1}\left(\frac{r}{\mu_1}\right)^r \exp\left(-\frac{r}{\mu_1}Y_{1}\right)}{\Gamma(r)} \prod^N_{i=2}\frac{Y_{i}^{r-1}\left(\frac{r}{\mu_i}\right)^r \exp\left(-\frac{r}{\mu_i}Y_{i}\right)}{\Gamma(r)} \] where \(r\) is a parameter associated with the underlying probability distribution of the errors between model predictions \((\mu)\) and mood observations \((\textrm{Y})\), and \(\Gamma(r)\) is the gamma function (for integer \(r\), \(\Gamma(r)=(r-1)!\)). Time series likelihoods are based on conditional probabilities: the value now \((V_t)\) depends on a value at some previous time point \((V_{t-\tau})\). This type of dependency introduces time lag correlations and also requires a way to handle the first observation in a time series. Historically, this observation has been excluded from likelihood calculations, but here we condition it on the mean of the model predictions. Based on clinical assessments, patients were grouped into high and low risk (of extreme mood episodes).
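Each factor in the likelihood above is a gamma density with shape \(r\) and mean \(\mu_i\). As a minimal illustration (not the authors' code; the mood scores and model predictions below are invented), the log-likelihood of a short series can be evaluated as:

```python
import math

def gamma_loglik(y, mu, r):
    """Log of the gamma density with shape r and mean mu, i.e. of
    f(y) = y^(r-1) (r/mu)^r exp(-(r/mu) y) / Gamma(r)."""
    return ((r - 1) * math.log(y) + r * math.log(r / mu)
            - (r / mu) * y - math.lgamma(r))

def series_loglik(Y, mu, r):
    """Sum of log gamma terms over the series (log of the product of densities)."""
    return sum(gamma_loglik(y, m, r) for y, m in zip(Y, mu))

# Illustrative data: observed QIDS-like scores and model predictions.
Y  = [8.0, 10.0, 9.0, 12.0]
mu = [9.0, 9.5, 9.5, 11.0]
print(series_loglik(Y, mu, r=5.0))
```

For \(r=1\) the density reduces to an exponential distribution, which provides an easy sanity check of the implementation.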
Across these groups, our time series models highlight different time lag correlation structures in the patients’ time series. More correlation lags above and below a threshold were needed to predict and describe the observed dynamics in the ‘high-risk’ group [2]. But this is only a descriptive approach to BD dynamics, and we really want a more mechanistic link between neurophysiological processes and mood dynamics. Thus, we recently used a ‘relaxation oscillator’ approach to open up this problem (see Figure 1) [1]. As a set of ODEs, relaxation oscillators take the form: \[ \frac{dx}{dt}=y(t)-f(x) \] \[ \frac{dy}{dt}=\frac{-x-a}{b} \] where \(a\) and \(b\) are parameters and \(f(x)=\frac{x^3}{3}-x\). This system of equations is based on the van der Pol oscillator, a well-known descriptor of stable oscillating dynamics in electrical engineering. A relaxation oscillator is perfect for linking BD to underlying fluctuating processes (like neuron firing patterns). It has well-known properties, including dynamics characterized by periods of low and high states with rapid ‘relaxation’ between these states. The dynamics can also be stable under certain conditions. Figure 1. Illustration of predicted relaxation oscillator dynamics for bipolar disorder. (a) Time series of observed mood scores (based on QIDS scores), (b) predicted independent relaxation oscillator dynamics based on parameters derived from the model fit to the time series, (c) predicted dynamics based on the total derivative (including baseline mood and the contribution of the oscillator). Adapted from [1]. A single relaxation oscillator might be thought to describe the up-and-down dynamics of the low and elevated moods associated with BD. However, our approach has been more nuanced than this; we ask how oscillators (which capture high or low moods) ‘relax’ from a state of high (or low) mood to a state of average or baseline mood, and which configuration best captures BD dynamics.
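A minimal numerical sketch of this oscillator, assuming the standard Liénard form \(f(x)=x^3/3-x\); the parameter values, initial condition, and step size below are illustrative choices, not values fitted to patient data:

```python
# RK4 integration of the relaxation (van der Pol / Lienard) oscillator:
#   dx/dt = y - f(x),  dy/dt = -(x + a)/b,  with f(x) = x^3/3 - x.
# Parameters a, b and the initial condition are illustrative only.

def f(x):
    return x**3 / 3 - x

def derivs(x, y, a, b):
    return y - f(x), -(x + a) / b

def integrate(a=0.3, b=2.0, x=0.5, y=0.0, dt=0.01, steps=20000):
    xs = []
    for _ in range(steps):
        k1x, k1y = derivs(x, y, a, b)
        k2x, k2y = derivs(x + 0.5*dt*k1x, y + 0.5*dt*k1y, a, b)
        k3x, k3y = derivs(x + 0.5*dt*k2x, y + 0.5*dt*k2y, a, b)
        k4x, k4y = derivs(x + dt*k3x, y + dt*k3y, a, b)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        y += dt/6 * (k1y + 2*k2y + 2*k3y + k4y)
        xs.append(x)
    return xs

xs = integrate()
# For |a| < 1 the fixed point sits on the unstable middle branch of f, so the
# trajectory settles onto a stable limit cycle: slow drifts along the outer
# branches with fast 'relaxation' jumps between them.
print(min(xs), max(xs))
```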
Another aspect of dynamics that is important when modeling BD is the possibility of so-called ‘noise-induced instabilities’. Essentially, these are oscillations that occur in a relaxation oscillator and are driven by noise, rather than deterministic changes. This distinction is important because mood patterns in BD patients might simply fluctuate around a baseline (the ‘steady-state’), making the relaxation oscillator dynamical repertoire ideally suited for investigating BD dynamics. We use a total derivative of mood changes through time when linking these relaxation oscillators to mood dynamics. This derivative partitions baseline mood and contributions to mood dynamics from oscillators: \[\frac{dM}{dt}=\alpha+\beta\frac{d\mathbf{X}}{dt}\] where \(\alpha\) is baseline mood and \(\beta\) is a scalar relating the relaxation oscillator \((d\mathbf{X} / dt) \) to mood. Using time series methods, we scrutinize the credibility of this model to predict mood dynamics. Specifically, we test many different models where oscillators are coupled, independent, noisy (or not) and conclude that independent oscillators best predict BD mood dynamics across different patients. The departures from the fit of the model to the mood observations are also quite informative. Equally important to our understanding of the dynamics are the levels of baseline mood \((\alpha)\) and endogenous noise, and their contributions to changes in mood dynamics. Endogenous noise is identified as stochasticity associated with parameters in our mood model, and the uncertainty between model fit and the data; each individual patient has patterns of this variability. Categorizing QIDS-based mood scores into three groups (low, medium, and high) for each individual patient allows us to obtain probability transitions (from one day to the next) with simple Markov chains. 
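The three-state mood chain can be sketched in a few lines of Python (the transition probabilities below are invented for illustration); the long-run probabilities are simply the stationary distribution obtained by iterating the chain:

```python
# Day-to-day transitions between mood states (low, medium, high).
# The matrix entries are illustrative, not estimated from patient data.
P = [
    [0.80, 0.15, 0.05],   # from 'low'
    [0.20, 0.60, 0.20],   # from 'medium'
    [0.10, 0.30, 0.60],   # from 'high'
]

def step(pi, P):
    """One application of the chain: pi' = pi P."""
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Power iteration from a uniform start converges to the stationary distribution.
pi = [1/3, 1/3, 1/3]
for _ in range(500):
    pi = step(pi, P)

print(pi)   # long-run fraction of days spent in each mood state
```

Comparing this distribution before and after treatment gives the simple clinical metric described next.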
Using these stochastic process models, we can then solve for the long-term probabilities (for each of the three groupings) before and after treatment. This simple metric based on the probability of certain ‘mood states’ (low, medium, or high) has potential for clinical application, particularly if it helps patients understand their mood trajectories more clearly. With greater advances in molecular biology, those involved in 21st-century bioscience aspire towards the development of individualized treatments and medicines. Mathematics has a crucial role to play in this; perhaps developing more mathematical approaches to better understand the dynamics of diseases such as BD can set us on a pathway towards this goal. References: [1] Bonsall, M.B., Geddes, J., Goodwin, G.M., & Holmes, E.A. (2015). Bipolar disorder dynamics: effective instabilities, relaxation oscillations and noise. Journal of the Royal Society Interface, 12(112), 20150670. [2] Bonsall, M.B., Wallace-Hadrill, S.M.A., Geddes, J.R., Goodwin, G.M., & Holmes, E.A. (2012). Nonlinear time-series approaches in characterizing mood stability and mood instability in bipolar disorder. Proceedings of the Royal Society B, 279, 916-924. [3] Holmes, E.A., Bonsall, M.B., Hales, S.A., Mitchell, H., Renner, F., Blackwell, S.E.,…DiSimplicio, M. (2016). Applications of time-series analysis to mood fluctuations in bipolar disorder to promote treatment innovation: a case series. Translational Psychiatry, 6, e720.
The objective of this project is to validate a coupled heat transfer case including the new ‘Radiation’ feature in the Convection Heat Transfer analysis of SimScale. A case including conduction, convection and radiation has been solved analytically and using the platform. The geometry consists of two horizontal concentric cylinders with air in between. The inner cylinder (diameter \(D_i\)) is at a higher temperature (\(T_i\)) than the outer cylinder (diameter \(D_o\)), which is at temperature (\(T_o\)). Both have the same axial length \(L\) and an emissivity of \(\varepsilon=0.85\). The side rings are considered adiabatic and black surfaces (\(\varepsilon=1.0\)). \(D_o=1\ \mathrm{m}\) \(D_i=0.9\ \mathrm{m}\) \(L=1\ \mathrm{m}\) \(T_o=20\ \mathrm{°C}=293.15\ \mathrm{K}\) \(T_i=50\ \mathrm{°C}=323.15\ \mathrm{K}\) Conduction is heat transfer driven by a temperature gradient through microscopic particle vibrations. In most Newtonian fluids it is usually much smaller than convection (the relative strength of buoyant versus diffusive transport is characterized by the Rayleigh number). For steady state, conduction is quantified by Fourier’s law.
Considering a constant conductivity \(k\), the conductive heat \(q_{cond}\) that goes through a surface of area \(A\) is: \(\frac{q_{cond}}{A}=-k\nabla T\) Taking into account the geometry, the above equation can be expressed in cylindrical coordinates (\(r,\theta,z\)) as: \(\frac{q_{cond}}{A}=-k\left(\frac{\partial T}{\partial r}\hat{r} + \frac{1}{r}\frac{\partial T}{\partial \theta}\hat{\theta} + \frac{\partial T}{\partial z}\hat{z}\right)\) Considering the axisymmetry of our problem and neglecting temperature gradients in the \(z\) direction, this reduces to: \(\frac{q_{cond}}{A}=-k \frac{dT}{dr}\) Integrating between both cylinders (\(D_o=2r_o\) and \(D_i=2r_i\)), with \(A=2\pi r L\) and \(L=1\ \mathrm{m}\): \(\int_{r_i}^{r_o} \frac{dr}{r}= -\frac{2\pi k (T_o-T_i)}{q_{cond}}\) \(q_{cond}=-\frac{2\pi k (T_o-T_i)}{\ln \frac{r_o}{r_i}}\) Substituting the values from Appendix A, we have: \(q_{cond}=46.963\ \mathrm{W}\) Convection is the heat transfer caused by bulk flows when a temperature gradient is applied to a material. Analytical results for convection are cumbersome to carry out due to its transient nature. The method used here can be found in various references and is based on the conduction equations. The thermal conductivity \(k\) is substituted by an “effective” thermal conductivity \(k_{eff}\): the conductivity of a stationary fluid that would transfer the same amount of heat as the moving fluid: \(q_{conv}=-\frac{2\pi k_{eff} (T_o-T_i)}{\ln \frac{r_o}{r_i}}\) The ratio between both conductivities is given by \(\frac{k_{eff}}{k}=0.386 \left(\frac{Pr}{0.861+Pr} \right)^{1/4}Ra_c^{1/4}\) where \(Pr\) is the Prandtl number of the fluid (\(Pr=\mu c_p / k\)) and \(Ra_c\) is a modified version of the Rayleigh number (\(Ra=\frac{g\beta (T_i-T_o)L_c^3}{\nu \alpha}\)) using a characteristic length given by: \(L_c=\frac{2\left[\ln (r_o/r_i)\right]^{4/3}}{\left(r_i^{-3/5}+r_o^{-3/5}\right)^{5/3}}\) This method is valid if and only if \(0.7\leq Pr \leq 6000\) and \(Ra_c \leq 10^7\).
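Both hand calculations are easy to script. The Python sketch below evaluates \(q_{cond}\) and the effective-conductivity correlation for \(q_{conv}\), using the film-temperature air properties from Appendix A (listed later in this article); the small deviations from the reported 46.963 W and 142.586 W come from rounding of intermediate values.

```python
import math

# Geometry and temperatures (axial length L = 1 m)
r_i, r_o, L = 0.45, 0.50, 1.0
T_i, T_o = 323.15, 293.15        # K
dT = T_i - T_o

# Air properties at the film temperature (Appendix A)
k     = 2.622e-2                 # W/(m K)
nu    = 1.655e-5                 # m^2/s
alpha = 2.277e-5                 # m^2/s
beta  = 0.003245                 # 1/K
Pr    = 0.7268
g     = 9.81                     # m/s^2

# Conduction between concentric cylinders
q_cond = 2 * math.pi * k * L * dT / math.log(r_o / r_i)

# Modified Rayleigh number with the characteristic length L_c
L_c = 2 * math.log(r_o / r_i)**(4/3) / (r_i**(-3/5) + r_o**(-3/5))**(5/3)
Ra_c = g * beta * dT * L_c**3 / (nu * alpha)

# Effective-conductivity correlation (valid for 0.7 <= Pr <= 6000, Ra_c <= 1e7)
k_ratio = 0.386 * (Pr / (0.861 + Pr))**0.25 * Ra_c**0.25
q_conv = k_ratio * q_cond

print(q_cond, q_conv)   # ~46.9 W and ~142.2 W
```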
In addition, the minimum heat transfer rate between the cylinders cannot fall below the conduction limit (\(k_{eff}/k\geq 1\)). Using the air properties specified in Appendix A, we have: \(q_{conv}=142.586\ \mathrm{W}\) Radiation is the heat transfer mode caused by the emission of electromagnetic waves. The method used by SimScale is a diffuse model based on view factors. The best way to calculate this heat exchange between surfaces analytically is to build the radiation network of the system. Here the subscript \(i\) represents the inner cylinder, \(o\) refers to the outer cylinder, and \(R\) is the sum of both side rings. Thus, \(q_{i \rightarrow R}\) is the amount of radiative heat that goes from \(i\) to \(R\). Also, it is assumed that \(T_R=(T_o+T_i)/2\), which can be checked with an Area Average result control item in SimScale. We can calculate the radiosity \(J_i\) by writing the power balance at the central node: \(q_{rad}=q_{i \rightarrow R}+q_{i \rightarrow o}\) \(\frac{E_{bi}-J_i}{\frac{1-\varepsilon_i}{\varepsilon_i A_i}}=\frac{J_i-E_{bR}}{\frac{1}{A_i F_{i,R}}}+\frac{J_i-E_{bo}}{\frac{1}{A_i F_{i,o}}+\frac{1-\varepsilon_o}{\varepsilon_o A_o}}\) Complexity arises when we calculate the view factors. Analytical solutions for these values are easily obtained for infinite cylinders. However, in our case the cylinders are finite in length. As a result, plots from the literature were used to estimate the view factors. For our case we have \(r_i/r_o=0.9\) and \(L/r_o=2\).
Using figure 3, the view factors for the outer cylinder are \(F_{o,i}=0.88\) and \(F_{o,o}=0.08\). As a result: \(F_{i,o}=\frac{A_o}{A_i}F_{o,i}=0.9777\) \(F_{i,R}=1-F_{i,o}=0.0223\) With the above values in hand, we calculate the radiosity \(J_i\): \(J_i=592.079\ \mathrm{W/m^2}\) Now we can calculate the net radiative heat entering the domain through the inner cylinder: \(q_{rad}=\frac{E_{bi}-J_i}{\frac{1-\varepsilon_i}{\varepsilon_i A_i}}=420.111\ \mathrm{W}\) The analytical total heat flux \(q_{net}\) entering the domain through the inner cylinder is then: \(q_{net}=q_{cond}+q_{conv}+q_{rad}=609.66\ \mathrm{W}\) A CAD model of the fluid contained between the cylinders was created. In order to validate the SimScale results against the analytical ones, the following boundary conditions were applied: This set of boundary conditions is the closest one to the analytical problem described previously. The objective is to obtain \(T_i=323.15\ \mathrm{K}\) on the surface of the inner cylinder (average value). Results are presented in the table below for different models and setups: For conduction and convection, the properties used for the air must be the ones at the film temperature \(T_f=(T_o+T_i)/2=308.15\ \mathrm{K}=35\ \mathrm{°C}\). Besides, the value of \(Pr\) has been taken as constant for all \(Re\) regimes (for validation purposes only): \(\rho=1.145\ \mathrm{kg/m^3}\) \(\mu=1.891\times 10^{-5}\ \mathrm{kg/(m\,s)}\) \(\nu=1.655\times 10^{-5}\ \mathrm{m^2/s}\) \(k=2.622\times 10^{-2}\ \mathrm{W/(m\,K)}\) \(c_p=1007\ \mathrm{J/(kg\,K)}\) \(\alpha=2.277\times 10^{-5}\ \mathrm{m^2/s}\) \(\beta=0.003245\ \mathrm{1/K}\) \(Pr=0.7268\) There is another variable that can be used to validate the SimScale results: a surface integral of \(Q_r\) can be computed in SimScale along the inner cylinder, which leads to \(q_{rad}=\iint_i Q_r\,dS\). The results are presented in the table below. Due to the inevitable agglomeration process, the variation of \(T_i\) along the inner cylinder surface is much smoother than that of \(Q_r\).
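The radiosity balance is a single linear equation in \(J_i\) and is easy to verify numerically. A Python sketch using the emissivities, view factors, and temperatures from the text (with \(\sigma = 5.67\times10^{-8}\ \mathrm{W/m^2K^4}\)):

```python
import math

sigma = 5.67e-8                  # Stefan-Boltzmann constant, W/(m^2 K^4)
eps_i = eps_o = 0.85
r_i, r_o, L = 0.45, 0.50, 1.0
T_i, T_o = 323.15, 293.15
T_R = 0.5 * (T_i + T_o)          # assumed ring temperature

A_i = 2 * math.pi * r_i * L      # lateral area of inner cylinder
A_o = 2 * math.pi * r_o * L
F_io, F_iR = 0.9777, 0.0223      # view factors from the text

# Blackbody emissive powers
E_bi, E_bo, E_bR = (sigma * Tk**4 for Tk in (T_i, T_o, T_R))

# Network resistances
R1 = (1 - eps_i) / (eps_i * A_i)                       # surface, inner cylinder
R2 = 1 / (A_i * F_iR)                                  # space, to black rings
R3 = 1 / (A_i * F_io) + (1 - eps_o) / (eps_o * A_o)    # path to outer cylinder

# Balance at the J_i node: (E_bi - J_i)/R1 = (J_i - E_bR)/R2 + (J_i - E_bo)/R3
J_i = (E_bi / R1 + E_bR / R2 + E_bo / R3) / (1 / R1 + 1 / R2 + 1 / R3)
q_rad = (E_bi - J_i) / R1

print(J_i, q_rad)   # ~592.1 W/m^2 and ~420 W
```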
Presented below are the post-processing results from the SimScale post-processor. By solving the conduction differential equation for a generic radius \(r_i<r<r_o\), we can get the temperature profile \(T(r)\): \(\frac{q_{cond}}{A}=-k \frac{dT}{dr}\) \(\int_{r_i}^{r} \frac{dr}{r}= -\frac{2\pi k (T-T_i)}{q_{cond}}\) \(T(r)=T_i-\frac{q_{cond}}{2\pi k}\ln\left(\frac{r}{r_i}\right)\) This is the temperature profile for a case with only conduction. We can set this project up in SimScale by disabling radiation and assigning a zero value to all gravity components. Using \(q_{cond}\) and the \(k\) specified in Appendix A, the results obtained can be seen in the figure below, with a root mean square error of \(\mathrm{RMSE}=0.145\ \mathrm{K}\). [1] Bergman, T.L., Incropera, F.P., & Lavine, A.S. (2011). Fundamentals of Heat and Mass Transfer. Hoboken, NJ: John Wiley & Sons. [2] Cengel, Y.A., Cimbala, J.M., & Turner, R.H. (2016). Fundamentals of Thermal-Fluid Sciences (SI Units). Asia Higher Education Engineering/Computer Science Mechanical Engineering. [3] Holman, J.P. (2008). Heat Transfer (SI Units) SIE. New York, NY: Tata McGraw-Hill Education.
Suppose first that $c_0\neq 0$. Then we have\begin{align*}A^2+c_1A=-c_0I\\\Leftrightarrow A(A+c_1I)=-c_0 I\\\Leftrightarrow A\left(\frac{-1}{c_0}(A+c_1I) \right)=I.\end{align*}It is in the last step that we needed to assume $c_0\neq0$. Thus, if we put\[B=\frac{-1}{c_0}(A+c_1I),\]then we have proved that\[AB=I.\] Similarly, one can check that $BA=I$. Hence $B$ is the inverse matrix of $A$. Namely,\[A^{-1}=\frac{-1}{c_0}(A+c_1I).\]This proves that when $c_0\neq 0$ the matrix $A$ is invertible. Is it true that if $c_0=0$, then the matrix $A$ is not invertible? Next, let us consider the case $c_0=0$. We claim that the matrix $A$ can be invertible even when $c_0=0$. For example, if $A=I$, then $A$ satisfies\[A^2-A=O.\](Thus, $c_1=-1$ and $c_0=0$.) Since the identity matrix is invertible, the condition $c_0=0$ does not force the matrix $A$ to be non-invertible.
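The formula \[A^{-1}=\frac{-1}{c_0}(A+c_1I)\] is easy to verify numerically. By the Cayley–Hamilton theorem, any $2\times 2$ matrix satisfies such a relation with $c_1=-\operatorname{tr}(A)$ and $c_0=\det(A)$; the particular matrix below is an arbitrary example.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Cayley-Hamilton: A^2 - tr(A) A + det(A) I = 0, so c1 = -tr(A), c0 = det(A)
c1 = -np.trace(A)            # -5
c0 = np.linalg.det(A)        # 5 (nonzero, so A is invertible)

# Check the relation A^2 + c1 A + c0 I = 0
assert np.allclose(A @ A + c1 * A + c0 * np.eye(2), 0)

# Inverse from the relation: A^{-1} = -(A + c1 I)/c0
B = -(A + c1 * np.eye(2)) / c0
print(B)
```

Both $AB$ and $BA$ come out as the identity matrix, confirming the derivation.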
How long can an average aerobatic plane sustain 0g safely?

The longest possible 0g experience is on a parabolic flight: http://www.asc-csa.gc.ca/eng/sciences/parabolic.asp

As Devil07's great answer explains, the time a plane can fly "at 0g" depends on its vertical speed at the start of the 0g phase. Let's take a typical aerobatic plane. Its maximum speed is about 220 knots ≈ 400 km/h. The plane loses some speed before entering the flight pattern (until it reaches 45° pitch) and gains some speed back before reaching 0° pitch after leaving the pattern, but let's ignore that for ease of calculation.

At the moment the plane enters the parabolic flight pattern at 45° pitch, its vertical speed is IAS/√2 = 400 km/h / 1.414 ≈ 283 km/h ≈ 79 m/s. Choosing a higher pitch would give a higher entry speed on paper, but the entry speed would also be reduced by the longer "rotation time" and the structural limits of the plane.

Now it follows the parabolic flight pattern until it reaches the culmination point (0 km/h vertical speed): time = speed/acceleration = 79 m/s / 9.81 m/s² ≈ 8 s. The plane needs exactly the same time to reach the "exit speed" of 400 km/h. Therefore the total time an Extra EA-300 can fly at 0g is approximately 16 seconds.

An aerobatic aircraft (like any aircraft) can sustain 0g until the pilot pulls up, the aircraft hits the ground, or it reaches terminal velocity. In order to sustain 0g the aircraft must be accelerating towards earth at $9.8 \, m/s^2$ (meters per second squared). That means there isn't much time before the aircraft reaches its maximum speed. Once the aircraft can no longer accelerate at the same rate as gravity, the occupants will no longer feel 0g. In skydiving (and physics) this is called terminal velocity, which is the velocity at which air friction prevents the object from falling any faster.
Acceleration is a change in velocity, so you don't have to be moving straight down to be accelerating towards the earth (we're talking about vertical speed). If you are going up at 200 kts and begin to decelerate at exactly $9.8 \, m/s^2$ you will "feel" like you are in 0g (free fall) even though you continue to go up for a few seconds. But this will only last for a few seconds before the aircraft momentarily reaches its apex and you and the aircraft begin moving towards the earth. If the aircraft can match exactly the acceleration of gravity, then you will continue to feel weightless (i.e. 0g). But you can't continue to accelerate indefinitely towards earth without either reaching terminal velocity or exceeding the aircraft's maximum speed. So the limit on sustained 0g flight is the pilot pulling back on the yoke and climbing to prevent the destruction of the aircraft and the death of all on board.

However, even if the wings broke off of the airplane and you were plummeting towards the ground in the fuselage, you would only remain weightless so long as the fuselage continued to accelerate towards the earth at $9.8 \, m/s^2$. At some point, given enough altitude, the drag (wind friction) on the fuselage would prevent it from continually accelerating towards earth, and you would reach an equilibrium speed where the force of gravity equals the force of drag; you would no longer feel 0g.

That depends on a whole heap of things. Zero g is experienced during a ballistic trajectory. Wikipedia gives the time spent in ballistic flight as: $$ t = \frac {2 \cdot V \cdot \sin\theta}{g} \tag{1}$$ with V = starting velocity in [m/s] and $\theta$ the starting angle. The equation is valid in vacuum, which would be equal to our situation where thrust continuously equals aerodynamic drag.
For any velocity, if we want to maximise time spent we need to start at 90 degrees, as this plot shows (plot: time in seconds as a function of starting angle, at a starting speed of 100 m/s). But that creates a bit of a practical problem at the apogee: the aircraft will fall back tail first, do a hammerhead, and we're not in zero g anymore. So let's take an angle fairly close to the maximum, where the pilot could maintain parabolic flight using his incredible skill: 60 degrees. We're in an aerobatic aircraft, which covers a very wide range actually: jet fighters are fully aerobatic as well. If we take a typical Red Bull Air Racer as one delimiter and an F-16 as the other, we have a speed range of between 100 and 600 m/s. For a ballistic flight path starting at 60 degrees, we get the flight time as a function of starting speed: between roughly 18 and 106 seconds.
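The arithmetic behind equation (1) can be packaged as a small helper (a sketch; `zero_g_time` is my own name, and drag is ignored as in the equation):

```python
import math

def zero_g_time(v, theta_deg, g=9.81):
    """Time spent on a ballistic (0 g) arc entered at speed v (m/s)
    and climb angle theta_deg, ignoring drag: t = 2 v sin(theta) / g."""
    return 2 * v * math.sin(math.radians(theta_deg)) / g

# Entry at 60 degrees across the aerobatic speed range discussed above:
print(round(zero_g_time(100, 60), 1))  # ≈ 17.7 s (Red Bull Air Racer)
print(round(zero_g_time(600, 60), 1))  # ≈ 105.9 s (F-16)
```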
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there's things like New Journal of Physics and Physical Review X which are the open-access branch of existing academic-society publishers As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." 
— tparker3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service” for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2. 
Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Well, the provided answers are certainly not correct. I see that for both of them you added some parentheses to help disambiguate, which is good. Some books use a 'priority' system to say that something like $P \land Q \to R$ will always mean $(P \land Q) \to R$ (so we can't really 'fault' the book on that one), but I think it is a good idea to use explicit parentheses the way you do. Good! Also, for both answers, your intuition that on several occasions we need to use a $\to$ rather than a $\land$ is correct .... for example, the first provided answer effectively says that everyone is male! Again: good for you to realize the book's mistake! But: as Mauro points out, we really need parentheses to indicate the scope of the quantifier, otherwise the $x$ ends up being free. So, at the very least you want: $\forall{x}\text{ }\big( male(x) \to \exists y [(female(y) \land married(x, y))\to loves(x, y)]\big)$ OK, but that still isn't right! Indeed, there is a major problem with your translation as well! To see this, suppose that for $y$ we take something from the domain that is not female (presumably $x$ itself!). Then: it is false that $female(y)$, and hence false that $female(y) \land married(x, y)$, and hence true that $(female(y) \land married(x, y))\to loves(x, y)$. Hence, of course the whole $\exists y [(female(y) \land married(x, y))\to loves(x, y)]$ is true: just pick something that is not female! ... but that of course says nothing of interest about men loving the woman they are married to! Thus we say that $\exists y [(female(y) \land married(x, y))\to loves(x, y)]$ is 'vacuously true'. ... and it's clearly not what we want. OK, so how about: $\forall{x}\text{ }\big( male(x) \to \forall y [(female(y) \land married(x, y))\to loves(x, y)]\big)$ Well, now at least all men will love whichever woman they are married to ... so that's a whole lot closer to what you want ... and probably acceptable as an answer. ... However ...
the statement does not rule out that the man is married to more than one woman. So, to capture that the man loves the woman they are married to ( if they are married to a woman at all), we could do this: $\forall{x}\text{ }\big( male(x) \to \forall y \forall z[(female(y) \land female(z) \land married(x, y) \land married(x, z))\to (y=z \land loves(x, y))]\big)$ For the second one, again we need parentheses for the scope of $x$: $\forall{x}\text{ }\big( female(x) \to \forall y\forall z [(female(z) \to\lnot{respect(y, z)})\to \neg loves(x, y)]\big)$ But we have a more serious problem ... where is the reference to $y$ being male?! OK, so at the very least we need something like: $\forall{x}\text{ }\big ( female(x) \to \forall y\forall z [(male(y) \land (female(z) \to\lnot{respect(y, z))})\to \neg loves(x, y)]\big)$ But this isn't right either! This statement claims something about any $y$ and $z$. OK, so pick for $z$ something that is not female. Then as with the previous sentence, $female(z) \to\lnot respect(y, z)$ becomes vacuously true, and therefore, it would follow that $\neg loves(x, y)$ ... even if the man $y$ does respect all women! OK, so we need something more like: $\forall{x}\text{ }\big ( female(x) \to \forall y [(male(y) \land \forall z (female(z) \to\lnot{respect(y, z))})\to \neg loves(x, y)]\big)$ But there is another problem yet! This statement is saying that $y$ is a man who does not respect any women at all .... and I interpret the English sentence as saying that it is about men who don't respect all women (that is, as soon as there is at least one woman that this man does not respect, then $x$ will not love that man). 
So, we need to change this to: $\forall{x}\text{ }\big ( female(x) \to \forall y [(male(y) \land \neg \forall z(female(z) \to {respect(y, z))})\to \neg loves(x, y)]\big)$ Another option is what you do in your second attempt: $\forall x female (x)\to\forall y \forall z[male(y)\to(female(z)\to(\lnot respect(y,z)\to \lnot love(x,z)))]$ Now again, we need to add parentheses for the scope of $x$: $\forall x \big ( female (x)\to\forall y \forall z[male(y)\to(female(z)\to(\lnot respect(y,z)\to \lnot love(x,z)))]\big)$ And another thing we can do is to make this into the equivalent: $\forall x \big ( female (x)\to\forall y \forall z[(male(y)\land female(z))\to(\lnot respect(y,z)\to \lnot love(x,z)))]\big)$ .. which looks very much what you had originally, but the change in parentheses makes all the difference! Finally, to show that my answer and this second answer are equivalent: $\forall{x}\text{ }\big ( female(x) \to \forall y [(male(y) \land \neg \forall z(female(z) \to {respect(y, z))})\to \neg loves(x, y)]\big) \Leftrightarrow$ $\forall{x}\text{ }\big ( female(x) \to \forall y [male(y) \to (\neg \forall z(female(z) \to {respect(y, z))}\to \neg loves(x, y))]\big) \Leftrightarrow$ $\forall{x}\text{ }\big ( female(x) \to \forall y [male(y) \to (loves(x, y) \to \forall z(female(z) \to {respect(y, z))})]\big) \Leftrightarrow$ $\forall{x}\text{ }\big ( female(x) \to \forall y [male(y) \to \forall z (loves(x, y) \to (female(z) \to {respect(y, z))})]\big) \Leftrightarrow$ $\forall{x}\text{ }\big ( female(x) \to \forall y \forall z[male(y) \to (loves(x, y) \to (female(z) \to {respect(y, z))})]\big) \Leftrightarrow$ $\forall{x}\text{ }\big ( female(x) \to \forall y \forall z[male(y) \to ((loves(x, y) \land female(z)) \to {respect(y, z)})]\big) \Leftrightarrow$ $\forall{x}\text{ }\big ( female(x) \to \forall y \forall z[male(y) \to (( female(z) \land loves(x, y)) \to {respect(y, z)})]\big) \Leftrightarrow$ $\forall{x}\text{ }\big ( female(x) \to \forall y \forall z[male(y) \to ( 
female(z) \to (loves(x, y) \to {respect(y, z)}))]\big) \Leftrightarrow$ $\forall{x}\text{ }\big ( female(x) \to \forall y \forall z[male(y) \to ( female(z) \to (\neg {respect(y, z)} \to \neg loves(x, y) ))]\big) $
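To see the vacuous-truth point concretely, here is a brute-force model check over a tiny hand-made domain (the individuals and relations below are invented for illustration): the $\exists y$ version comes out true even though the man loves nobody, while the $\forall y$ version correctly comes out false.

```python
# Tiny hypothetical model, just for illustration.
domain = ["alice", "bob", "carol"]
female = {"alice", "carol"}
male = {"bob"}
married = {("bob", "alice")}
loves = set()          # bob loves nobody -- the intended reading should be False

def implies(p, q):
    return (not p) or q

# forall x (male(x) -> EXISTS y [(female(y) & married(x,y)) -> loves(x,y)])
exists_version = all(
    any(implies(y in female and (x, y) in married, (x, y) in loves) for y in domain)
    for x in male
)

# forall x (male(x) -> FORALL y [(female(y) & married(x,y)) -> loves(x,y)])
forall_version = all(
    all(implies(y in female and (x, y) in married, (x, y) in loves) for y in domain)
    for x in male
)

print(exists_version)  # True  (vacuously: pick y = bob, who is not female)
print(forall_version)  # False (bob is married to alice but does not love her)
```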
Orthogonal Projection Operators Recall from the Orthogonal Complements page that if $U$ is a subset of an inner product space $V$, then the orthogonal complement of $U$, denoted $U^{\perp}$, is the set of vectors $v \in V$ such that $v$ is orthogonal to every vector $u \in U$, that is $U^{\perp} = \{ v \in V : \langle v, u \rangle = 0 \: \forall u \in U \}$. Also recall that if $U$ is more than just a subset of $V$, that is, if $U$ is a finite-dimensional subspace of $V$, then we have that $V = U \oplus U^{\perp}$. In such cases, for all vectors $v \in V$ we can write $v$ uniquely as the sum of a vector $u \in U$ and a vector $w \in U^{\perp}$:(1) $v = u + w$. Now consider the linear operator $P_U \in \mathcal L(V)$ defined such that $P_U(v) = u$ for all $v \in V$. Then $P_U$ is a Projection Operator which we could alternatively denote as $P_U = P_{U, U^{\perp}}$. More specifically, $P_U$ is an orthogonal projection operator. Definition: Let $V$ be an inner product space and let $U$ be a subspace of $V$ such that $V = U \oplus U^{\perp}$. Then for all $v \in V$ we have that $v = u + w$ where $u \in U$ and $w \in U^{\perp}$. Then the Orthogonal Projection Operator of $V$ onto $U$ is the linear operator $P_U \in \mathcal L (V)$ defined such that $P_U(v) = u$ for all $v \in V$. The following proposition outlines some of the important properties of orthogonal projection operators. Proposition 1: Let $V$ be an inner product space and let $U$ be a subspace of $V$ such that $V = U \oplus U^{\perp}$. Then for the projection operator $P_U \in \mathcal L(V)$ we have that: a) $\mathrm{range} (P_U) = U$. b) $\mathrm{null} (P_U) = U^{\perp}$. c) $(v - P_U(v)) \in U^{\perp}$ for all $v \in V$. d) $P_U^2 = P_U$. e) $\| P_U(v) \| \leq \| v \|$ for all $v \in V$. Proof of a) We have that $P_U = P_{U, U^{\perp}}$ and it follows immediately from the proof on the Projection Operators page that $\mathrm{range} (P_U) = U$.
Proof of b) We have that $P_U = P_{U, U^{\perp}}$ and it follows immediately from the proof on the page mentioned above that $\mathrm{null} (P_U) = U^{\perp}$. Proof of c) Let $v \in V$ be written as $v = u + w$ where $u \in U$ and $w \in U^{\perp}$. Then $w = v - u = v - P_U(v)$, so clearly $(v - P_U(v)) \in U^{\perp}$. Proof of d) Let $v \in V$ be written as $v = u + w$. Then $P_U(v) = u$. Note that $u \in V$ and so $u = u + 0$ where $u \in U$ and $0 \in U^{\perp}$. This is the only way to write $u$ as the sum of vectors from $U$ and $U^{\perp}$ since $V = U \oplus U^{\perp}$. So applying the operator again, we have that $P_U^2 (v) = P_U(u) = u = P_U(v)$. Thus $P_U^2 = P_U$. Proof of e) Let $v \in V$ be written as $v = u + w$. Noting that $u$ is orthogonal to $w$, we can apply the Pythagorean Theorem by taking the norm squared of both sides, and we have that: $\| v \|^2 = \| u \|^2 + \| w \|^2 = \| P_U(v) \|^2 + \| w \|^2 \geq \| P_U(v) \|^2$. If we take the square root of both sides we get that $\| P_U(v) \| \leq \| v \|$ as desired.
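A quick numerical illustration of Proposition 1 with NumPy (the subspace below is an arbitrary example, not from the page):

```python
import numpy as np

# Orthogonal projection onto U = column space of Q (orthonormal columns).
# Example subspace of R^3 spanned by two vectors (illustrative choice).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
Q, _ = np.linalg.qr(A)        # orthonormal basis for U
P = Q @ Q.T                   # matrix of the projection operator P_U

v = np.array([3.0, -1.0, 2.0])
u = P @ v                     # component of v in U
w = v - u                     # component of v in U-perp

assert np.allclose(P @ P, P)                          # (d) P_U^2 = P_U
assert np.allclose(Q.T @ w, 0)                        # (c) v - P_U(v) in U-perp
assert np.linalg.norm(u) <= np.linalg.norm(v) + 1e-12 # (e) ||P_U(v)|| <= ||v||
```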
We know already that a capacitor is used to store energy. In this module we shall discuss how much energy can be stored in a capacitor, the parameters that the stored energy depends upon, and their relations. How to Calculate the Energy Stored in a Capacitor? Work has to be done to transfer charges onto a conductor, against the force of repulsion from the already existing charges on it. The work done to transfer charges from one plate to the other is stored as potential energy of the electric field of the conductor. Suppose charge is being transferred from plate B to A. At a given moment, the charge on the plates is Q’ and –Q’. Then, to transfer a charge of dQ’ from B to A, the work done by an external force will be \(dW=VdQ’=\frac{Q’}{C}dQ'\) Total work done = \(\int\limits_{0}^{Q}{\frac{1}{C}Q’dQ’=\frac{Q^{2}}{2C}}\) ∴ Energy stored in a capacitor = \(\frac{Q^{2}}{2C}=\frac{1}{2}CV^{2}=\frac{1}{2}QV\) Energy Density in an Electric Field Energy stored per unit volume is called energy density. It is given by U = \(\frac{1}{2}{{\varepsilon }_{0}}{{E}^{2}}\), i.e. U = \(\frac{1}{2}k{{\varepsilon }_{0}}{{E}^{2}}\) with a dielectric of dielectric constant k introduced, where E is the net electric field in the dielectric medium. Problems on Energy Stored in a Capacitor Problem 1: A battery of 20 V is connected to 3 capacitors in series as shown in the figure. Two capacitors are of 20 μF each and one is of 10 μF. Calculate the energy stored in the capacitors in the steady state. Sol: \(\frac{1}{{{C}_{eff}}}=\frac{1}{20}+\frac{1}{20}+\frac{1}{10}=\frac{4}{20}=\frac{1}{5}\) C eff = 5 μF The energy stored = \(\frac{1}{2}C{{V}^{2}}=\frac{1}{2}\times 5\times {{10}^{-6}}\times {{20}^{2}}={{10}^{-3}}J\) Problem 2: A parallel plate capacitor has plates of area 4 m 2 separated by a distance of 0.5 mm. The capacitor is connected across a cell of emf 100 volts.
Find the capacitance, charge and energy stored in the capacitor if a dielectric slab of dielectric constant k = 3 and thickness 0.5 mm is inserted inside this capacitor after it has been disconnected from the cell. Sol: When the capacitor is without the dielectric, \({{C}_{0}}=\frac{{{\varepsilon }_{0}}A}{d}=\frac{8.85\times {{10}^{-12}}\times 4}{0.5\times {{10}^{-3}}}\) \({{C}_{0}}=7.08\times {{10}^{-2}}\mu F\) \({{Q}_{0}}={{C}_{0}}{{V}_{0}}=(7.08\times {{10}^{-2}}\times 100)\,\mu C=7.08\,\mu C\) \({{U}_{0}}=\frac{1}{2}{{C}_{0}}V_{0}^{2}=354\times {{10}^{-6}}J\) As the cell has been disconnected, the charge on the capacitor remains constant. C = \(\frac{k{{\varepsilon }_{0}}A}{d}=k{{C}_{0}}=0.2124\,\mu F\) V = \(\frac{Q}{C}=\frac{{{Q}_{0}}}{k{{C}_{0}}}=\frac{{{V}_{0}}}{k}=\frac{100}{3}\,volts\) U = \(\frac{Q_{0}^{2}}{2C}=\frac{Q_{0}^{2}}{2k{{C}_{0}}}=\frac{{{U}_{0}}}{k}=118\times {{10}^{-6}}J\)
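The numbers in Problem 2 can be checked in a few lines of Python (a quick sketch in SI units; the variable names are mine):

```python
# Energy stored in a capacitor: U = Q^2/(2C) = (1/2) C V^2 = (1/2) Q V.
# Reproducing Problem 2 above.
eps0 = 8.85e-12          # permittivity of free space, F/m
A, d, V0, k = 4.0, 0.5e-3, 100.0, 3.0

C0 = eps0 * A / d        # capacitance without dielectric
Q0 = C0 * V0             # charge (constant after disconnecting the cell)
U0 = 0.5 * C0 * V0**2    # initial stored energy

C = k * C0               # capacitance with dielectric inserted
U = Q0**2 / (2 * C)      # energy at constant charge: equals U0 / k

print(round(U0 * 1e6))   # 354 (microjoules)
print(round(U * 1e6))    # 118 (microjoules)
```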
You are currently browsing the tag archive for the ‘pdf’ tag. This happens a lot: I am reading a paper, as usual going directly to the results and skipping the introduction, related literature, discussion, preliminaries, formal model etc. And then there is some notation which I have no idea what it stands for. I would like to search for `\alpha’ in the pdf document, but if there is a way to do it then I have never heard about it. So, imagine my delight when I heard of Springer’s LaTeX Search tool, which does something that I never even dared to wish for — search in their database for an equation that contains a given LaTeX code. Pretty awesome, isn’t it? I tried some arbitrary code i\hbar\frac{\partial}{\partial t}\Psi=\hat H\Psi (which translates to ) but apparently nobody has used this equation before. So I tried something else: E=mc^2. Again no exact matches, but this time there are a couple of similar results. Well, as Jeffrey Shallit said, it is, at least, a start.
The standard error of the sample proportion $\widehat{p}$ is
$$SE(\widehat{p}) = \sqrt{\frac{p(1-p)}{n}},$$
where $p$ is the population proportion and $n$ is the sample size. In practice $p$ is unknown, so we do not compute the standard deviation of the sampling distribution directly; instead we substitute $\widehat{p}$ for $p$. The word "estimated" is then dropped, and the estimated SE is simply called the SE of the proportion. The mean of the distribution of sample proportions equals the population proportion $p$. Note how the sample size affects the SE: $n$ appears (i) in the denominator, and (ii) inside a square root, so larger samples give smaller standard errors.

Worked example: consider estimating the proportion $p$ of the current graduating class who plan to go to graduate school. Suppose 40% of a sample of $n = 1600$ students favor graduate school. Then we have $0.40 \times 1600 = 640$ successes and $0.60 \times 1600 = 960$ failures, comfortably above the usual minimum of 10 successes and 10 failures (some textbooks use 15 instead of 10), so the sampling distribution is approximately normal. The four-step approach to constructing a confidence interval:

1. Compute the sample proportion: $\widehat{p} = 0.40$.
2. Compute the standard error: $SE = \sqrt{0.40 \times 0.60 / 1600} \approx 0.012$.
3. Specify the confidence level and find the critical value: for 99%, $z = 2.58$ (from the normal calculator; for 95% it would be 1.96).
4. Margin of error $= 2.58 \times 0.012 \approx 0.03$, so the 99% confidence interval is $0.40 \pm 0.03$, i.e. 0.37 to 0.43.
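The four steps can be reproduced in a few lines (a minimal sketch; the helper name is mine, not from any particular library):

```python
import math

def proportion_ci(p_hat, n, z=2.58):
    """Normal-approximation confidence interval for a proportion.

    z = 2.58 corresponds to a 99% confidence level, z = 1.96 to 95%.
    Returns (standard error, (lower bound, upper bound)).
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    margin = z * se
    return se, (p_hat - margin, p_hat + margin)

# The worked example: p-hat = 0.40, n = 1600, 99% confidence.
se, (lo, hi) = proportion_ci(0.40, 1600)
print(round(se, 3))                 # 0.012
print(round(lo, 2), round(hi, 2))   # 0.37 0.43
```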
The matrix norm for an n-by-n matrix A is defined as |A|=max(|Ax|) where x ranges over all vectors with |x|=1, and the norm on the vectors in R^n is the usual Euclidean one. This is also called the induced (matrix) norm, the operator norm, or the spectral norm. The unit ball of matrices under this norm can be considered as a subset of R^(n^2). What is the Euclidean volume of this set? I'd be interested in the answer even in just the 2-by-2 case. Building on the nice answer of Guillaume: The integral $$ \int_{[-1,1]^n} \prod_{i < j} |x_i^2 - x_j^2 | \, dx_1\dots dx_n $$ has the closed-form evaluation $$ 4^n \prod_{k \leq n} \binom{2k}{k}^{-1}.$$ This basically follows from the evaluation of the Selberg beta integral $S_n(1/2,1,1/2)$. Combined with modding out by a typo, we now arrive at the following product formula for the volume of the unit ball of n×n matrices in the matrix norm: $$ n! \prod_{k\leq n} \frac{ \pi^k }{ (k/2)! \binom{2k}{k}} .$$ In particular, we have: $\frac{2}{3}\pi^2$ for n=2, $\frac{8}{45}\pi^4$ for n=3, $\frac{4}{1575}\pi^8$ for n=4. The volume of the unit ball for the spectral norm in n×n real matrices is given by the formula $$ c_n \int\limits_{[-1,1]^n} \prod_{i < j} |x_i^2-x_j^2| \, dx_1\dots dx_n $$ where $c_n = n! \, 4^{-n} \prod_{k=1}^n v_k^2$ and $v_k=\pi^{k/2}/\Gamma(1+k/2)$ is the volume of the unit ball in R^k. A much more general formula for calculating all kinds of similar quantities appears e.g. here (Lemma 1). The proof is by applying the SVD decomposition as a change of variables. The first values are $\frac{2}{3}\pi^2$ for 2×2 matrices, $\frac{8}{45}\pi^4$ for 3×3 matrices, $\frac{4}{1575}\pi^8$ for 4×4 matrices ... There might be a closed formula for the integral above. Edit: such a formula appears in Armin's post below! Concerning the 2x2 case: As Mike points out, you can write down an explicit formula for the norm of the matrix {{a,b},{c,d}}. It takes a good while but Mathematica can then compute the volume you're asking for.
Integrate[If[a^2 + b^2 + c^2 + d^2 + Sqrt[((b+c)^2 + (a-d)^2) ((b-c)^2 + (a+d)^2)] <= 2, 1, 0],{a, -1, 1}, {b, -1, 1}, {c, -1, 1}, {d, -1, 1}] Its answer is: $2\pi^2/3$. For comparison: the volume of the Euclidean ball in R^4 is $\pi^2/2$ (which contradicts Mike's final statement that the matrix norm ball sits inside the Euclidean one). Not that this is too helpful, but in the case of a 2 x 2 matrix A (with diagonal entries a and d and off-diagonal entries b and c, all real) the square of the norm of the matrix is given by the formula $\frac{1}{2}(a^{2} + b^{2} + c^{2} + d^{2} + \sqrt{(a^{2} + b^{2} + c^{2} + d^{2})^{2} - 4D})$ where $D = \det(A^{*}A)$. It is a pretty ugly region but at least it can be computed in terms of a, b, c, and d, and this unit ball will sit inside the Euclidean ball in R^4. Yes, O(n) is the n(n-1)/2 dimensional space of orthogonal n by n matrices. Vol(O(n)) is its volume. The integrand in the answer is simply the Jacobian of the singular value decomposition, $\{s_i\}$ is just the ordered set of singular values, and the integration is performed on the subset bounded by 1. I may just have missed a factor of $1/2^n$ because of the sign ambiguity in the SVD singular values. I had a go at this question, but the method I tried here doesn't quite work out. It does reduce it to upper triangular matrices, although that doesn't seem to be a lot of help for general n. Let your volume be V. By scaling, the volume of the set {|A|≤K} is $VK^{n^2}$. Now let M be a matrix whose entries are independent normal random variables with mean 0, variance 1. From the density function of the normal distribution, this gives $P(|M|\leq K)\sim(2\pi)^{-n^2/2}VK^{n^2}$ in the limit of small K. I'll now calculate this expression in an alternative way. Use the M=QR decomposition, where Q is orthogonal and R is upper triangular, with diagonal elements $\lambda_n, \lambda_{n-1},\dots,\lambda_1$, which are the eigenvalues of R.
This can be done in such a way that $\lambda_k^2$ has the $\chi^2_k$ distribution (a quick google search gives this but there's probably better references). The upper triangular parts of R have the standard normal density. We need to calculate |R|. I was originally thinking that this is the max eigenvalue, but it's not quite that simple. By means of singular value decomposition, I think that the general answer for a real n by n matrix should be: Required volume = $$ {\rm vol}(O(n))^2 \int\limits_{0\leq s_n \leq s_{n-1}\leq \dots \leq s_1\leq 1}\prod_{i < j} (s_i^2-s_j^2) \, ds_1\dots ds_n.$$ O(n) is the orthogonal group of n×n matrices. I worked out the answer for the 2 by 2 case as well. First, when dealing with 2 by 2 matrices in general, a convenient variable change is: $$a\rightarrow\frac{w+x}{\sqrt{2}},d\rightarrow\frac{w-x}{\sqrt{2}},c\rightarrow\frac{y-z}{\sqrt{2}},b\rightarrow\frac{y+z}{\sqrt{2}}.$$ Then $a^2+b^2+c^2+d^2 = w^2+x^2+y^2+z^2$. And the determinant $(ad-bc) = \frac{1}{2}(x^2+y^2-w^2-z^2)$. (Aside: this set of coordinates lets you see for instance that the set of rank 1 matrices in the space of 2D matrices realized as $\mathbb{R}^4$ is a cone over the Clifford torus, since $x^2+y^2 = w^2+z^2$ on a sphere $x^2+y^2+w^2+z^2=r^2$ implies $x^2+y^2 = r^2/2$ and $w^2+z^2 = r^2/2$, which are scaled equations for a flat torus) Let $r_1^2 = x^2+y^2, r_2^2 = w^2+z^2$. (These are radial coordinates of a coordinate system consisting of two orthogonal 2D cylindrical coordinate systems). Then the norm squared is: $$\frac{1}{2}\left(r_1^2+r_2^2 + \sqrt{ (r_1^2+r_2^2)^2 - (r_1^2-r_2^2)^2 }\right)$$ When this is less than one, this corresponds to the region plotted below: Note that each point in the $r_1,r_2$ picture corresponds to a different "torus", $x^2+y^2=r_1^2, w^2+z^2=r_2^2$. We can now integrate over the shaded-in region, $\int_{region} dw dx dy dz$.
This 4-D integral can be reduced to 2D using $r_1$ and $r_2$, since $dx\, dy = 2\pi r_1\, dr_1$ and $dw\, dz = 2\pi r_2\, dr_2$: $$(4\pi^2) \int_{region} dr_1\, dr_2\, r_1 r_2. $$ Now, note that we can rewrite $r_2$ in terms of $r_1$. In particular, after some manipulation of our norm, the shaded-in region is defined by $r_2^2 \leq 2-2\sqrt{2}r_1+r_1^2=(\sqrt{2}-r_1)^2$. Hence $r_2\leq \sqrt{2}-r_1$, and we can evaluate the $r_2$ integral: $$4\pi^2 \int_{r_1=0}^{\sqrt{2}} dr_1\, r_1 \int_{r_2=0}^{\sqrt{2}-r_1} r_2\, dr_2 \\ = 4\pi^2 \int_{r_1=0}^{\sqrt{2}} dr_1\, r_1 (\sqrt{2}-r_1)^2/2 \\ = (4\pi^2) (1/6).$$ This yields $2\pi^2/3$, as Armin found.
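As a numerical sanity check (not part of the original thread), the volume $2\pi^2/3 \approx 6.58$ can be estimated by Monte Carlo: sample the entries uniformly from $[-1,1]^4$ (a cube that contains the operator-norm ball, since each entry of $A$ is bounded by $\|A\|$) and count how often the largest singular value is at most 1. Variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400_000

# Sample entries (a, b, c, d) uniformly from the cube [-1, 1]^4,
# viewed as N matrices of shape 2x2.
mats = rng.uniform(-1.0, 1.0, size=(N, 2, 2))

# Operator (spectral) norm = largest singular value; batched SVD.
top_sv = np.linalg.svd(mats, compute_uv=False)[:, 0]

# Fraction of the cube inside the unit operator-norm ball,
# scaled by the cube's volume 2^4 = 16.
vol_estimate = 16 * np.mean(top_sv <= 1.0)

print(vol_estimate)   # should be close to 2*pi^2/3 ≈ 6.58
```

With 400,000 samples the standard error is roughly 0.01, so the estimate pins down the first two digits of the exact answer.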
In addition to what’s in Anaconda, this lecture will need the following libraries:

!pip install --upgrade quantecon
!pip install interpolation

Overview¶

Next, we study an optimal savings problem for an infinitely lived consumer—the “common ancestor” described in [LS18], section 1.3. This is an essential sub-problem for many representative macroeconomic models. It is related to the decision problem in the stochastic optimal growth model and yet differs in important ways. For example, the choice problem for the agent includes an additive income term that leads to an occasionally binding constraint. Our presentation of the model will be relatively brief. For further details on economic intuition, implication and models, see [LS18]. Proofs of all mathematical results stated below can be found in this paper. To solve the model we will use Euler equation based time iteration, similar to this lecture. This method turns out to be globally convergent under mild assumptions, even when utility is unbounded (both above and below). We’ll need the following imports:

import numpy as np
from quantecon.optimize import brent_max, brentq
from interpolation import interp
from numba import njit
import matplotlib.pyplot as plt
%matplotlib inline
from quantecon import MarkovChain

Set-Up¶

Consider a household that chooses a state-contingent consumption plan $ \{c_t\}_{t \geq 0} $ to maximize$$ \mathbb{E} \, \sum_{t=0}^{\infty} \beta^t u(c_t) $$ subject to the feasibility constraints in (1). Here

- $ \beta \in (0,1) $ is the discount factor
- $ a_t $ is asset holdings at time $ t $, with ad-hoc borrowing constraint $ a_t \geq -b $
- $ c_t $ is consumption
- $ z_t $ is non-capital income (wages, unemployment compensation, etc.)
- $ R := 1 + r $, where $ r > 0 $ is the interest rate on savings

Non-capital income $ \{z_t\} $ is assumed to be a Markov process taking values in $ Z\subset (0,\infty) $ with stochastic kernel $ \Pi $. This means that $ \Pi(z, B) $ is the probability that $ z_{t+1} \in B $ given $ z_t = z $.
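The feasibility constraints labeled (1) are not shown in this extract. Reconstructed from the asset update $a_{t+1} = Ra_t + z_t - c_t$ used in the simulation code later in the lecture, they presumably read

$$ c_t + a_{t+1} \leq R a_t + z_t, \qquad c_t \geq 0, \qquad a_t \geq -b \qquad (t = 0, 1, \ldots) $$

which combines the period budget constraint, nonnegativity of consumption, and the borrowing limit; together these imply the upper bound $c_t \leq R a_t + z_t + b$ that appears repeatedly below.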
The expectation of $ f(z_{t+1}) $ given $ z_t = z $ is written as$$ \int f( \acute z) \, \Pi(z, d \acute z) $$ We further assume that

- $ r > 0 $ and $ \beta R < 1 $
- $ u $ is smooth, strictly increasing and strictly concave with $ \lim_{c \to 0} u'(c) = \infty $ and $ \lim_{c \to \infty} u'(c) = 0 $

The asset space is $ [-b, \infty) $ and the state is the pair $ (a,z) \in S := [-b,\infty) \times Z $. A feasible consumption path from $ (a,z) \in S $ is a consumption sequence $ \{c_t\} $ such that $ \{c_t\} $ and its induced asset path $ \{a_t\} $ satisfy

- $ (a_0, z_0) = (a, z) $
- the feasibility constraints in (1), and
- measurability of $ c_t $ w.r.t. the filtration generated by $ \{z_1, \ldots, z_t\} $

The meaning of the third point is just that consumption at time $ t $ can only be a function of outcomes that have already been observed.

Value Function and Euler Equation¶

The value function $ V \colon S \to \mathbb{R} $ is defined by (2), where the supremum is over all feasible consumption paths from $ (a,z) $. An optimal consumption path from $ (a,z) $ is a feasible consumption path from $ (a,z) $ that attains the supremum in (2). To pin down such paths we can use a version of the Euler equation, which in the present setting is given by (5). In essence, this says that the natural “arbitrage” relation $ u' (c_t) = \beta R \, \mathbb{E}_t [ u'(c_{t+1}) ] $ holds when the choice of current consumption is interior. Interiority means that $ c_t $ is strictly less than its upper bound $ Ra_t + z_t + b $. (The lower boundary case $ c_t = 0 $ never arises at the optimum because $ u'(0) = \infty $.) When $ c_t $ does hit the upper bound $ Ra_t + z_t + b $, the strict inequality $ u' (c_t) > \beta R \, \mathbb{E}_t [ u'(c_{t+1}) ] $ can occur because $ c_t $ cannot increase sufficiently to attain equality.
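Combining the interior and boundary cases just described, the Euler equation referred to as (5) presumably takes the complementarity form

$$ u'(c_t) = \max\left\{ \beta R \, \mathbb{E}_t [u'(c_{t+1})], \; u'(R a_t + z_t + b) \right\} $$

that is, $u'(c_t) \geq \beta R \, \mathbb{E}_t[u'(c_{t+1})]$ always holds, with equality whenever $c_t < R a_t + z_t + b$. This is consistent with the `euler_diff` function in the code below, whose right-hand side is exactly this max.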
Optimality Results¶

Given our assumptions, it is known that

- For each $ (a,z) \in S $, a unique optimal consumption path from $ (a,z) $ exists
- This path is the unique feasible path from $ (a,z) $ satisfying the Euler equality (5) and the transversality condition

Moreover, there exists an optimal consumption function $ \sigma^* \colon S \to [0, \infty) $ such that the path from $ (a,z) $ generated by taking $ c_t = \sigma^*(a_t, z_t) $ is optimal. In summary, to solve the optimization problem, we need to compute $ \sigma^* $.

Computation¶

There are two standard approaches:

- Time iteration (TI) using the Euler equality
- Value function iteration (VFI)

Let’s look at these in turn.

Time Iteration¶

We can rewrite (5) to make it a statement about functions rather than random variables. In particular, consider the functional equation (7), where $ \gamma := \beta R $ and $ u' \circ \sigma(s) := u'(\sigma(s)) $. Equation (7) is a functional equation in $ \sigma $. In order to identify a solution, let $ \mathscr{C} $ be the set of candidate consumption functions $ \sigma \colon S \to \mathbb R $ such that

- each $ \sigma \in \mathscr{C} $ is continuous and (weakly) increasing
- $ \min Z \leq \sigma(a,z) \leq Ra + z + b $ for all $ (a,z) \in S $

In addition, let $ K \colon \mathscr{C} \to \mathscr{C} $ be defined as follows. For given $ \sigma \in \mathscr{C} $, the value $ K \sigma (a,z) $ is the unique $ t \in J(a,z) $ that solves (8). We refer to $ K $ as Coleman’s policy function operator [Col90]. It is known that

- $ K $ is a contraction mapping on $ \mathscr{C} $ under the metric $ \rho $
- The metric $ \rho $ is complete on $ \mathscr{C} $
- Convergence in $ \rho $ implies uniform convergence on compacts

In consequence, $ K $ has a unique fixed point $ \sigma^* \in \mathscr{C} $ and $ K^n \sigma \to \sigma^* $ as $ n \to \infty $ for any $ \sigma \in \mathscr{C} $. By the definition of $ K $, the fixed points of $ K $ in $ \mathscr{C} $ coincide with the solutions to (7) in $ \mathscr{C} $.
In particular, it can be shown that the path $ \{c_t\} $ generated from $ (a_0,z_0) \in S $ using policy function $ \sigma^* $ is the unique optimal path from $ (a_0,z_0) \in S $.

TL;DR The unique optimal policy can be computed by picking any $ \sigma \in \mathscr{C} $ and iterating with the operator $ K $ defined in (8).

Value Function Iteration¶

The Bellman operator for this problem is given by$$ Tv(a, z) = \max_{0 \leq c \leq Ra + z + b} \left\{ u(c) + \beta \int v(Ra + z - c, \acute z) \, \Pi(z, d \acute z) \right\} $$ We have to be careful with VFI (i.e., iterating with $ T $) in this setting because $ u $ is not assumed to be bounded. In fact it is typically unbounded both above and below, e.g., $ u(c) = \log c $. In this case the standard dynamic programming theory does not apply, and $ T^n v $ is not guaranteed to converge to the value function for arbitrary continuous bounded $ v $. Nonetheless, we can always try the popular strategy “iterate and hope”. We can then check the outcome by comparing with the policy produced by TI. The latter is known to converge, as described above.

class ConsumerProblem:
    """
    A class that stores primitives for the income fluctuation problem.

    The income process is assumed to be a finite state Markov chain.
    """
    def __init__(self,
                 r=0.01,                         # Interest rate
                 β=0.96,                         # Discount factor
                 Π=((0.6, 0.4), (0.05, 0.95)),   # Markov matrix for z_t
                 z_vals=(0.5, 1.0),              # State space of z_t
                 b=0,                            # Borrowing constraint
                 grid_max=16,
                 grid_size=50,
                 u=np.log,                       # Utility function
                 du=njit(lambda x: 1/x)):        # Derivative of utility

        self.u, self.du = u, du
        self.r, self.R = r, 1 + r
        self.β, self.b = β, b
        self.Π, self.z_vals = np.array(Π), tuple(z_vals)
        self.asset_grid = np.linspace(-b, grid_max, grid_size)

The function operator_factory returns the operator K as specified above

def operator_factory(cp):
    """
    A function factory for building operator K.

    Here cp is an instance of ConsumerProblem.
""" # Simplify names, set up arrays R, Π, β, u, b, du = cp.R, cp.Π, cp.β, cp.u, cp.b, cp.du asset_grid, z_vals = cp.asset_grid, cp.z_vals γ = R * β @njit def euler_diff(c, a, z, i_z, σ): """ The difference of the left-hand side and the right-hand side of the Euler Equation. """ lhs = du(c) expectation = 0 for i in range(len(z_vals)): expectation += du(interp(asset_grid, σ[:, i], R * a + z - c)) \ * Π[i_z, i] rhs = max(γ * expectation, du(R * a + z + b)) return lhs - rhs @njit def K(σ): """ The operator K. Iteration with this operator corresponds to time iteration on the Euler equation. Computes and returns the updated consumption policy σ. The array σ is replaced with a function cf that implements univariate linear interpolation over the asset grid for each possible value of z. """ σ_new = np.empty_like(σ) for i_a in range(len(asset_grid)): a = asset_grid[i_a] for i_z in range(len(z_vals)): z = z_vals[i_z] c_star = brentq(euler_diff, 1e-8, R * a + z + b, \ args=(a, z, i_z, σ)).root σ_new[i_a, i_z] = c_star return σ_new return K K uses linear interpolation along the asset grid to approximate the value and consumption functions. To solve for the optimal policy function, we will write a function solve_modelto iterate and find the optimal $ \sigma $. 
def solve_model(cp, tol=1e-4, max_iter=1000, verbose=True, print_skip=25):
    """
    Solves for the optimal policy using time iteration.

    * cp is an instance of ConsumerProblem
    """
    u, β, b, R = cp.u, cp.β, cp.b, cp.R
    asset_grid, z_vals = cp.asset_grid, cp.z_vals

    # Initial guess of σ
    σ = np.empty((len(asset_grid), len(z_vals)))
    for i_a, a in enumerate(asset_grid):
        for i_z, z in enumerate(z_vals):
            c_max = R * a + z + b
            σ[i_a, i_z] = c_max

    K = operator_factory(cp)

    i = 0
    error = tol + 1
    while i < max_iter and error > tol:
        σ_new = K(σ)
        error = np.max(np.abs(σ - σ_new))
        i += 1
        if verbose and i % print_skip == 0:
            print(f"Error at iteration {i} is {error}.")
        σ = σ_new

    if i == max_iter:
        print("Failed to converge!")
    if verbose and i < max_iter:
        print(f"\nConverged in {i} iterations.")

    return σ_new

Plotting the result using the default parameters of the ConsumerProblem class

cp = ConsumerProblem()
σ_star = solve_model(cp)

fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(cp.asset_grid, σ_star[:, 0], label='$\sigma^*$')
ax.set(xlabel='asset level', ylabel='optimal consumption')
ax.legend()
plt.show()

Error at iteration 25 is 0.007773142982545167.

Converged in 41 iterations.

The following exercises walk you through several applications where policy functions are computed.

Exercise 1¶

Next, let’s consider how the interest rate affects consumption. Reproduce the following figure, which shows (approximately) optimal consumption policies for different interest rates:

- Other than r, all parameters are at their default values.
- r steps through np.linspace(0, 0.04, 4).
- Consumption is plotted against assets for income shock fixed at the smallest value.

The figure shows that higher interest rates boost savings and hence suppress consumption.

Exercise 2¶

Now let’s consider the long run asset levels held by households. We’ll take r = 0.03 and otherwise use default parameters.
The following figure is a 45 degree diagram showing the law of motion for assets when consumption is optimal

m = ConsumerProblem(r=0.03, grid_max=4)
K = operator_factory(m)
σ_star = solve_model(m, verbose=False)
a = m.asset_grid
R, z_vals = m.R, m.z_vals

fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(a, R * a + z_vals[0] - σ_star[:, 0], label='Low income')
ax.plot(a, R * a + z_vals[1] - σ_star[:, 1], label='High income')
ax.plot(a, a, 'k--')
ax.set(xlabel='Current assets', ylabel='Next period assets', xlim=(0, 4), ylim=(0, 4))
ax.legend()
plt.show()

The blue line and orange line represent the function$$ a' = h(a, z) := R a + z - \sigma^*(a, z) $$ when income $ z $ takes its low and high values respectively. The dashed line is the 45 degree line. We can see from the figure that the dynamics will be stable — assets do not diverge. In fact there is a unique stationary distribution of assets that we can calculate by simulation:

- Uniqueness can be proved via theorem 2 of [HP92].
- The stationary distribution represents the long run dispersion of assets across households when households have idiosyncratic shocks.
- Ergodicity is valid here, so stationary probabilities can be calculated by averaging over a single long time series.

Hence to approximate the stationary distribution we can simulate a long time series for assets and histogram, as in the following figure. Your task is to replicate the figure:

- Parameters are as discussed above.
- The histogram in the figure used a single time series $ \{a_t\} $ of length 500,000.
- Given the length of this time series, the initial condition $ (a_0, z_0) $ will not matter.

You might find it helpful to use the MarkovChain class from quantecon.

Exercise 3¶

Following on from exercises 1 and 2, let’s look at how savings and aggregate asset holdings vary with the interest rate. Note: [LS18] section 18.6 can be consulted for more background on the topic treated in this exercise.
For a given parameterization of the model, the mean of the stationary distribution can be interpreted as aggregate capital in an economy with a unit mass of ex-ante identical households facing idiosyncratic shocks. Let’s look at how this measure of aggregate capital varies with the interest rate and borrowing constraint. The next figure plots aggregate capital against the interest rate for b in (1, 3). As is traditional, the price (interest rate) is on the vertical axis. The horizontal axis is aggregate capital computed as the mean of the stationary distribution. Exercise 3 is to replicate the figure, making use of code from previous exercises. Try to explain why the measure of aggregate capital is equal to $ -b $ when $ r=0 $ for both cases shown here.

r_vals = np.linspace(0, 0.04, 4)

fig, ax = plt.subplots(figsize=(10, 8))
for r_val in r_vals:
    cp = ConsumerProblem(r=r_val)
    σ_star = solve_model(cp, verbose=False)
    ax.plot(cp.asset_grid, σ_star[:, 0], label=f'$r = {r_val:.3f}$')

ax.set(xlabel='asset level', ylabel='consumption (low income)')
ax.legend()
plt.show()

def compute_asset_series(cp, T=500000, verbose=False):
    """
    Simulates a time series of length T for assets, given optimal
    savings behavior.
    cp is an instance of ConsumerProblem
    """
    Π, z_vals, R = cp.Π, cp.z_vals, cp.R  # Simplify names
    mc = MarkovChain(Π)
    σ_star = solve_model(cp, verbose=False)
    cf = lambda a, i_z: interp(cp.asset_grid, σ_star[:, i_z], a)

    a = np.zeros(T+1)
    z_seq = mc.simulate(T)
    for t in range(T):
        i_z = z_seq[t]
        a[t+1] = R * a[t] + z_vals[i_z] - cf(a[t], i_z)
    return a

cp = ConsumerProblem(r=0.03, grid_max=4)
a = compute_asset_series(cp)

fig, ax = plt.subplots(figsize=(10, 8))
ax.hist(a, bins=20, alpha=0.5, density=True)
ax.set(xlabel='assets', xlim=(-0.05, 0.75))
plt.show()

M = 25
r_vals = np.linspace(0, 0.04, M)

fig, ax = plt.subplots(figsize=(10, 8))
for b in (1, 3):
    asset_mean = []
    for r_val in r_vals:
        cp = ConsumerProblem(r=r_val, b=b)
        mean = np.mean(compute_asset_series(cp, T=250000))
        asset_mean.append(mean)
    ax.plot(asset_mean, r_vals, label=f'$b = {b:d}$')
    print(f"Finished iteration b = {b:d}")

ax.set(xlabel='capital', ylabel='interest rate')
ax.grid()
ax.legend()
plt.show()

Finished iteration b = 1
Finished iteration b = 3
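The ergodicity claim behind Exercise 2 can be sanity-checked on the income chain itself. The sketch below (plain NumPy rather than the MarkovChain class, so it is self-contained; variable names are ours) computes the stationary distribution of the default transition matrix Π by solving $\pi \Pi = \pi$ together with $\sum_i \pi_i = 1$:

```python
import numpy as np

Π = np.array([[0.6, 0.4],
              [0.05, 0.95]])   # default Markov matrix for z_t

# Stationary distribution: π solves π Π = π, i.e. (Π' - I) π = 0,
# combined with the normalization π · 1 = 1, via least squares.
A = np.vstack([Π.T - np.eye(2), np.ones(2)])
rhs = np.array([0.0, 0.0, 1.0])
π, *_ = np.linalg.lstsq(A, rhs, rcond=None)

print(π)   # → approximately [1/9, 8/9] ≈ [0.1111, 0.8889]
```

Long-run time averages of the simulated `z_seq` converge to these weights, which is what makes the single-time-series histogram a valid approximation of the stationary distribution.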
By O.L.V. Costa and M.D. Fragoso

When associated with unexpected events that cause losses, abrupt changes are extremely undesirable. Such changes can be due, for instance, to environmental disturbances, component failures or repairs, changes in subsystem interconnections, or changes in the operating point of a nonlinear plant. These situations can arise in economic systems, aircraft control systems, solar thermal plants with central receivers, robotic manipulator systems, communication networks, and large flexible structures for space stations. It is important to have efficient tools to deal with the effects of abrupt changes. To that end, we must be able to model the changes adequately. In a control-oriented perspective, attempts to carve out an appropriate mathematical framework for the study of dynamical systems subject to abrupt changes in structure (switching structure) date back at least to the 1960s. In this scenario, a critical design issue for modern control systems is that they should be capable of maintaining acceptable behavior and meeting certain performance requirements even in the presence of abrupt changes in the system dynamics. Within this context lies a particularly interesting class of models: discrete-time Markov jump linear systems (MJLSs). Since its inception the models in this class have been closely connected with systems that are vulnerable to abrupt changes in their structure, and the associated literature surrounding this subject is fairly extensive (see, for example, [2, 3, 13], and references therein). To introduce the main ideas, we consider the simplest homogeneous MJLS, defined as: \[x(k + 1) = A_{\theta(k)} x(k), \qquad x(0) = x_0, \quad \theta(0) = \theta_0, \: \: \: \: \: \: \: \: \: (1)\] where \(A_i \in R^{n × n}\) and \({\theta (k)}\) represents a Markov chain taking values in \({1, . . .
, N},\) with transition probability matrix \(P = [p_{ij}]\). Here, \({\theta (k)}\) accounts for the random mechanism that models the abrupt changes (this is sometimes called the “operation mode”). Although an MJLS seems, prima facie, to be a simple extension of a linear equation, it carries a great deal of subtleties that distinguish it from the simple linear case, and it provides us with very rich structure. A first analytical difficulty is that \({x(k)}\) is not a Markov process, although the joint process \({x(k), \theta (k)}\) is. Because stability is an important bedrock of control theory, a key issue was to work out an adequate stability theory for MJLSs. In earlier work stability was sometimes considered for each mode of the system, but it soon became clear that this approach could not adequately deal with the many nuances of MJLSs. This issue was adequately settled only after the introduction of the concept of mean-square stability for this class of systems. To illustrate how MJLSs can surprise us and run counter to our intuition, we present three examples that unveil some of these subtleties in the context of stability. Of the several different concepts of stochastic stability, we simplify the presentation here by considering only the following: the homogeneous MJLS is mean-square stable (MSS) if for any initial condition \((x_0, \theta_0)\), \(E(\parallel x(k) \parallel^2) \to 0\) as \(k \to \infty\). It is shown in [2] that mean-square stability is equivalent to the spectral radius of an augmented matrix \(A\) being less than one, or to the existence of a unique solution to a set of coupled Lyapunov equations, which can be written in four equivalent forms. This augmented matrix \(A\) is defined as \(A = CN\), where \(C = P' \otimes I\) and \(N = \mathrm{diag}[A_i \otimes A_i]\) (with \(\otimes\) representing the Kronecker product). Our three examples illustrate only the equivalence between mean-square stability and the spectral radius of \(A\).
Example 1 Consider the following system with two operation modes, defined by matrices \(A_1 = 4/3\), \(A_2 = 1/3\) (mode 1 is unstable, mode 2 stable). The transitions between these modes are given by the transition probability matrix \[P=\begin{bmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix}. \] It is easy to verify that for this transition probability matrix we have \[ A=\frac{1}{2} \begin{bmatrix} \frac{16}{9} & \frac{1}{9} \\ \frac{16}{9} & \frac{1}{9} \end{bmatrix} \] and \(r_\sigma (A) = 17/18 (< 1)\), and so the system is MSS. Suppose now that we have a different transition probability matrix, say \[\bar{P} = \begin{bmatrix} 0.9 & 0.1 \\ 0.9 & 0.1 \end{bmatrix};\] the system will most likely stay longer in mode 1, which is unstable. Then \[A=\begin{bmatrix} \frac{144}{90} & \frac{1}{10} \\ \frac{16}{90} & \frac{1}{90} \end{bmatrix},\] \(r_\sigma (A) = 1.61 (>1)\) and the system is no longer MSS. This evinces a connection between mean-square stability and the probability of visits to the unstable modes, which is translated in the expression for \(A\). Our next two examples, borrowed from [9], illustrate how the switching between operation modes can play tricks with our intuition. As shown in these striking examples, an MJLS composed only of unstable modes can be MSS, and, alternatively, an MJLS composed only of stable modes can be unstable in the mean-square sense.

Example 2 Here we consider a non-MSS system with stable modes. The two operation modes are defined by matrices \[A_1= \begin{bmatrix} 0 & 2 \\ 0 & 0.5 \end{bmatrix} \text {and} \enspace A_2= \begin{bmatrix} 0.5 & 0 \\ 2 & 0 \end{bmatrix}\] and the transition probability matrix \[P=\begin{bmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix}. \] Both modes are stable. Curiously, \(r_\sigma (A) = 2.125 > 1\), which means that the system is not MSS. A brief analysis of the trajectories for each mode helps to clarify the matter. We begin by considering only trajectories for mode 1.
For initial conditions given by \[x(0)= \begin{bmatrix} x_{10} \\ x_{20} \end{bmatrix}\] the trajectories are given by \[x(k)= \begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} = \begin{bmatrix} 2(0.5)^{k-1}x_{20} \\ 0.5(0.5)^{k-1}x_{20} \end{bmatrix} \quad \text{for} \enspace k=1,2,\ldots\] With the exception of the point \(x(0)\), the whole trajectory thus lies along the line \(x_1(k) = 4x_2(k)\) for any initial condition. This means that if, in a given time, the state is not on this line, mode 1 dynamics will transfer it to the line in one time step and it will remain there thereafter. For mode 2, it is easy to show that the trajectories are given by \[x(k)= \begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} = \begin{bmatrix} 0.5(0.5)^{k-1}x_{10} \\ 2(0.5)^{k-1}x_{10} \end{bmatrix} \quad \text{for} \enspace k=1,2,\ldots\] Much as in the case for mode 1, if the state is not on the line \(x_1(k) = x_2(k)/4\), mode 2 dynamics will transfer it to the line in one time step. The equations for the trajectories also show that the transitions make the state switch between these two lines. Notice that transitions from mode 1 to mode 2 cause the state to move away from the origin in the direction of component \(x_2\), while transitions from mode 2 to mode 1 do the same with respect to component \(x_1\). Figure 1 (left) shows the trajectory of the system with mode 1 dynamics only, for a given initial condition; Figure 1 (right) does the same for mode 2. Figure 2 shows the trajectory for a possible sequence of switches between the two modes, an indication of the instability of the system.

Example 3 For our final example, we consider an MSS system with unstable modes in which \[A_1= \begin{bmatrix} 2 & -1 \\ 0 & 0 \end{bmatrix} \text{and} \enspace A_2= \begin{bmatrix} 0 & 1 \\ 0 & 2 \end{bmatrix} \] and the transition probability matrix \[P= \begin{bmatrix} 0.1 & 0.9 \\ 0.9 & 0.1 \end{bmatrix}.\] Although both modes are unstable, \(r_\sigma (A) = 0.4 (< 1)\).
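All three examples can be checked numerically. The sketch below is not from the article (helper names are ours); it builds the augmented matrix \(A = (P' \otimes I)\,\mathrm{diag}[A_i \otimes A_i]\) described above and computes its spectral radius:

```python
import numpy as np

def augmented_matrix(P, modes):
    """A = (P' ⊗ I) · diag[A_i ⊗ A_i] for a discrete-time MJLS."""
    n = modes[0].shape[0]
    C = np.kron(P.T, np.eye(n * n))
    N = np.zeros((len(modes) * n * n,) * 2)
    for i, Ai in enumerate(modes):
        # Place A_i ⊗ A_i on the i-th diagonal block of N
        N[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = np.kron(Ai, Ai)
    return C @ N

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

P_half = np.array([[0.5, 0.5], [0.5, 0.5]])

# Example 1: scalar modes 4/3 (unstable) and 1/3 (stable)
ex1 = [np.array([[4/3]]), np.array([[1/3]])]
print(spectral_radius(augmented_matrix(P_half, ex1)))  # 17/18 ≈ 0.944 → MSS

# Example 2: two stable modes, yet not MSS
ex2 = [np.array([[0, 2], [0, 0.5]]), np.array([[0.5, 0], [2, 0]])]
print(spectral_radius(augmented_matrix(P_half, ex2)))  # 2.125 → not MSS

# Example 3: two unstable modes, yet MSS
P3 = np.array([[0.1, 0.9], [0.9, 0.1]])
ex3 = [np.array([[2, -1], [0, 0]]), np.array([[0, 1], [0, 2]])]
print(spectral_radius(augmented_matrix(P3, ex3)))      # 0.4 → MSS
```

The spectral radii 17/18, 2.125, and 0.4 agree with the values quoted in the text.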
■ ■ ■

The general conclusion we extract from these examples is that the stability of each operation mode is neither a necessary nor a sufficient condition for the mean-square stability of the system. Mean-square stability depends on a balance between the transition probability of the Markov chain and the operation modes. These and many other examples in the context of stability illustrate peculiar properties of these systems, which can be included in the class of complex systems (roughly defined as systems composed of interconnected parts that as a whole exhibit one or more properties not obvious from the properties of the individual parts). Other features that set MJLSs outside classical linear theory include the following: (i) The filtering problem is associated with more than one scenario. In the harder case of partial observations of \((x(k), \theta(k))\), the filter is infinite-dimensional; a separation principle for this setting is an open problem. (ii) In view of a set of coupled Riccati equations, which appears in some filtering and control problems, a fresh look at such concepts as stabilizability and detectability was necessary, giving rise to a mean-square theory for these concepts. (iii) With the various possible settings of the state-space of the Markov chain (e.g., finite, infinite countable, Borel space), the analytical complexity of the problem can change. In a nutshell, we can say that an MJLS differs from the linear case in many fundamental issues. Other interesting instances and a compilation of ideas about MJLSs can be found in [1, 2, 3, 5, 13]. Due, in part, to an adequate set of concepts and mathematical techniques developed over the last decades, MJLSs have a well-established theory that provides systematic tools for the analysis of many dynamical systems subjected to abrupt changes, yielding a great variety of applications.
Since the specialized literature on applications of the theory of MJLS is very large and rapidly expanding, we provide here only some representative references, including [16], on applications in robotics; [6] and [18], on problems of image enhancement (e.g., tracking and estimation); [4] and [19], on mathematical finance; [8, 14, 15] and [20], on communication networks (packet loss, fading channels, chaotic communication); [10], on wireless issues; [7], on flight systems (including electromagnetic disturbances and reliability; see also [17], for control of wing deployment in aircraft); [11, 12], on issues related to electrical machines. Additional references are given in [2] and [3]. Last but not least, we round out this note by mentioning that some MJLS-control problems belong to a select group of solvable stochastic control problems and are therefore of great interest in any course on stochastic control. In addition, despite the notable abundance of relevant reference materials on the subject, MJLSs stand firmly as a topic of intense research.

References

[1] E.K. Boukas, Stochastic Switching Systems: Analysis and Design, Birkhäuser, Boston, 2006.
[2] O.L.V. Costa, M.D. Fragoso, and R.P. Marques, Discrete-Time Markov Jump Linear Systems, Springer, New York, 2005.
[3] O.L.V. Costa, M.D. Fragoso, and M.G. Todorov, Continuous-Time Markov Jump Linear Systems, Springer, New York, 2013.
[4] J.B.R. do Val and T. Başar, Receding horizon control of jump linear systems and a macroeconomic policy problem, J. Econ. Dynam. Control, 23 (1999), 1099-1131.
[5] V. Dragan, T. Morozan, and A.M. Stoica, Mathematical Methods in Robust Control of Linear Stochastic Systems (Mathematical Concepts and Methods in Science and Engineering), Springer, New York, 2010.
[6] J.S. Evans and R.J. Evans, Image-enhanced multiple model tracking, Automatica J. IFAC, 35 (1999), 1769-1786.
[7] W.S. Gray, O.R. González, and M.
Doğan, Stability analysis of digital linear flight controllers subject to electromagnetic disturbances, IEEE Trans. Aerospace and Electronic Systems, 36 (2000), 1204-1218.
[8] S. Hu and W.-Y. Yan, Stability robustness of networked control systems with respect to packet loss, Automatica J. IFAC, 43 (2007), 1243-1248.
[9] Y. Ji and H.J. Chizeck, Jump linear quadratic Gaussian control: Steady state solution and testable conditions, Contr. Theor. Adv. Tech., 6 (1990), 289-319.
[10] P.A. Kawka and A.G. Alleyne, Robust wireless servo control using a discrete-time uncertain Markovian jump linear model, IEEE Trans. Control Syst. Tech., 17 (2009), 733-742.
[11] K.A. Loparo and G.L. Blankenship, A probabilistic mechanism for small disturbance instabilities in electric power systems, IEEE Trans. Circuits Syst., 32 (1985), 177-184.
[12] R. Malhamé, A jump-driven Markovian electric load model, Adv. Appl. Prob., 22 (1990), 564-586.
[13] M. Mariton, Jump Linear Systems in Automatic Control, Marcel Dekker, New York, 1990.
[14] S. Roy and A. Saberi, Static decentralized control of a single-integrator network with Markovian sensing topology, Automatica J. IFAC, 41 (2005), 1867-1877.
[15] T. Sathyan and T. Kirubarajan, Markov-jump-system-based secure chaotic communication, IEEE Trans. Circuits Syst., 53 (2006), 1597-1609.
[16] A.A.G. Siqueira and M.H. Terra, A fault-tolerant manipulator robot based on H2, H∞, and mixed H2/H∞ Markovian controls, IEEE-ASME Trans. Mechatronics, 14 (2009), 257-263.
[17] A. Stoica and I. Yaesh, Jump-Markovian based control of wing deployment for an uncrewed air vehicle, J. Guid. Control Dynam., 25 (2002), 407-411.
[18] D.D. Sworder, P.F. Singer, R.G. Doria, and R.G. Hutchins, Image-enhanced estimation methods, Proc. IEEE, 81 (1993), 797-812.
[19] F. Zampolli, Optimal monetary policy in a regime-switching economy: The response to abrupt shifts in exchange rate dynamics, J. Econ. Dyn. Control, 30 (2006), 1527-1567.
[20] Q. Zhang and S.A.
Kassam, Finite-state Markov model for Rayleigh fading channels, IEEE Trans. Commun., 47 (1999), 1688-1692.
2019-10-11 14:20 TURBO stream animation / LHCb Collaboration
An animation illustrating the TURBO stream is provided. It shows events discarded by the trigger in quick sequence, followed by an event that is kept but stripped of all data except four tracks [...]
LHCB-FIGURE-2019-010.- Geneva : CERN, 2019 - 3.

2019-09-12 16:43 Pending / LHCb Collaboration
LHCB-FIGURE-2019-008.- Geneva : CERN.

2019-09-10 11:06 Smog2 Velo tracking efficiency / LHCb Collaboration
The LHCb fixed-target programme is facing a major upgrade (Smog2) for Run 3 data taking, consisting of the installation of a confinement cell for the gas covering $z \in [-500, -300] \, mm$. Such a displacement of the $p$-gas collisions with respect to the nominal $pp$ interaction point requires a detailed study of the reconstruction performance. [...]
LHCB-FIGURE-2019-007.- Geneva : CERN, 10 - 4.

2019-09-09 14:37 Background rejection study in the search for $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ / LHCb Collaboration
A background rejection study has been made using LHCb simulation in order to investigate the capacity of the experiment to distinguish between $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ and its main background $\Lambda^0 \rightarrow p^+ \pi^-$. Two variables were explored, and their rejection power was estimated by applying selection criteria. [...]
LHCB-FIGURE-2019-006.- Geneva : CERN, 09 - 4.

2019-09-06 14:56 Tracking efficiencies prior to alignment corrections from 1st data challenges / LHCb Collaboration
These plots show the first results on tracking efficiencies, before application of alignment corrections, as obtained from the 1st data challenge tests.
In this challenge, several tracking detectors (the VELO, SciFi and Muon) have been misaligned and the effects on the tracking efficiencies are studied. [...]
LHCB-FIGURE-2019-005.- Geneva : CERN, 2019 - 5.

2019-09-02 15:30 First study of the VELO pixel 2-half alignment / LHCb Collaboration
A first look into the two-half alignment for the Run 3 Vertex Locator (VELO) has been made. The alignment procedure has been run on a minimum bias Monte Carlo Run 3 sample in order to investigate its functionality. [...]
LHCB-FIGURE-2019-003.- Geneva : CERN, 02 - 4.

2019-07-09 09:53 Variation of VELO Alignment Constants with Temperature / LHCb Collaboration
A study of the variation of the alignment constants has been made in order to investigate the variations of the LHCb Vertex Locator (VELO) position under different set temperatures between $-30^\circ$ and $-20^\circ$. Alignment for both the translations and rotations of the two halves and of the modules, with certain constraints on the module positions, was performed for each run, corresponding to a different temperature. [...]
LHCB-FIGURE-2019-001.- Geneva : CERN, 04 - 4.
The Squares of Riemann-Stieltjes Integrable Functions with Increasing Integrators Recall from The Absolute Value of Riemann-Stieltjes Integrals with Increasing Integrators page that if $f$ is a function defined on $[a, b]$ and $\alpha$ is an increasing function on $[a, b]$, then if $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$, so is $\mid f \mid$, and furthermore:

\begin{align} \quad \biggl \lvert \int_a^b f(x) \: d \alpha (x) \biggr \rvert \leq \int_a^b \mid f(x) \mid \: d \alpha (x) \end{align}

Now suppose once again that $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ (where $\alpha$ is an increasing function). It would be nice to know whether or not the function $f^2$ is also Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$. Fortunately it is, and we can prove it by using the theorem above and Riemann's condition. Theorem 1: Let $f$ be a function defined on $[a, b]$ and $\alpha$ be an increasing function on $[a, b]$. If $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ then $f^2$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$. Proof: Let $f$ be Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ where $\alpha$ is an increasing function, and let $\epsilon > 0$ be given. Let $M > 0$ be any upper bound of $\mid f \mid$ on $[a, b]$.
By Riemann's condition, for $\epsilon_1 = \frac{\epsilon}{2M} > 0$ there exists a partition $P_{\epsilon_1} \in \mathscr{P}[a, b]$ such that if $P = \{ a = x_0, x_1, ..., x_n = b \}$ is finer than $P_{\epsilon_1}$ then:

\begin{align} \quad U(P, f, \alpha) - L(P, f, \alpha) < \epsilon_1 = \frac{\epsilon}{2M} \quad (*) \end{align}

We note that for all $x, y \in [x_{k-1}, x_k]$:

\begin{align} \quad \mid f^2(x) - f^2(y) \mid = \mid f(x) + f(y) \mid \mid f(x) - f(y) \mid \leq 2M \mid f(x) - f(y) \mid \end{align}

Hence, if $P_{\epsilon} = P_{\epsilon_1}$, then for $P$ finer than $P_{\epsilon}$ we have that $(*)$ holds and so:

\begin{align} \quad M_k(f^2) - m_k(f^2) \leq 2M [M_k(f) - m_k(f)] \end{align}

Multiplying by $\Delta \alpha_k \geq 0$ and taking the sum from $k = 1$ to $k = n$ gives us that:

\begin{align} \quad U(P, f^2, \alpha) - L(P, f^2, \alpha) \leq 2M [U(P, f, \alpha) - L(P, f, \alpha)] \end{align}

Combining this with $(*)$ we see that then:

\begin{align} \quad U(P, f^2, \alpha) - L(P, f^2, \alpha) < 2M \cdot \frac{\epsilon}{2M} = \epsilon \end{align}

So for all $\epsilon > 0$ there exists a partition $P_{\epsilon}$ such that if $P$ is finer than $P_{\epsilon}$ then $U(P, f^2, \alpha) - L(P, f^2, \alpha) < \epsilon$, so Riemann's condition is satisfied and $f^2$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$. $\blacksquare$ It is very important to note that the converse of Theorem 1 is not true in general. For example, consider the function $f$ defined for all $x \in [0, 1]$ by $f(x) = \left\{\begin{matrix} 1 & \mathrm{if} \: x \: \mathrm{is \: irrational}\\ -1 & \mathrm{if} \: x \: \mathrm{is \: rational} \end{matrix}\right.$ and let $\alpha (x) = x$. Then $f^2(x) = 1$ on all of $[0, 1]$ which we already know is Riemann-Stieltjes integrable from the Riemann-Stieltjes Integrals with Constant Integrands page. However, $f$ itself is not Riemann-Stieltjes integrable. If $P = \{ 0 = x_0, x_1, ..., x_n = 1 \} \in \mathscr{P}[0, 1]$ is any partition, then for all $k \in \{ 1, 2, ..., n \}$ we have that $M_k(f) = \sup \{ f(x) : x \in [x_{k-1}, x_k] \} = 1$ and $m_k (f) = \inf \{ f(x) : x \in [x_{k-1}, x_k] \} = -1$ since every subinterval $[x_{k-1}, x_k]$ contains both rational and irrational numbers.
Therefore:

\begin{align} \quad U(P, f, x) - L(P, f, x) = \sum_{k=1}^{n} [M_k(f) - m_k(f)] \Delta x_k = \sum_{k=1}^{n} 2 \Delta x_k = 2 \end{align}

So $U(P, f, x) - L(P, f, x) = 2$ for all partitions $P$. Hence for $\epsilon_1 = 1 > 0$ there is no partition $P_{\epsilon_1}$ such that all finer partitions $P \in \mathscr{P}[0, 1]$ satisfy $U(P, f, x) - L(P, f, x) < \epsilon_1$, so $f$ does not satisfy Riemann's condition and is hence not Riemann-Stieltjes integrable with respect to $\alpha$ on $[0, 1]$.
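The inequality driving Theorem 1, $U(P, f^2, \alpha) - L(P, f^2, \alpha) \leq 2M[U(P, f, \alpha) - L(P, f, \alpha)]$, can also be checked numerically. The Python sketch below is not part of the original page; the integrand, integrator, partition, and fine-sampling scheme for approximating suprema and infima are all illustrative choices.

```python
def rs_upper_lower(f, alpha, partition, samples=200):
    """Approximate U(P, f, alpha) and L(P, f, alpha) by sampling each subinterval."""
    U = L = 0.0
    for a, b in zip(partition, partition[1:]):
        xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
        vals = [f(x) for x in xs]
        dalpha = alpha(b) - alpha(a)   # >= 0 since alpha is increasing
        U += max(vals) * dalpha
        L += min(vals) * dalpha
    return U, L

f = lambda x: x * x - x          # a bounded integrand on [0, 2]
alpha = lambda x: x ** 3         # an increasing integrator
P = [i / 10 for i in range(21)]  # partition of [0, 2] with mesh 0.1

M = max(abs(f(i / 1000)) for i in range(2001))   # numeric bound on |f|
Uf, Lf = rs_upper_lower(f, alpha, P)
Uf2, Lf2 = rs_upper_lower(lambda x: f(x) ** 2, alpha, P)
assert Uf2 - Lf2 <= 2 * M * (Uf - Lf) + 1e-9
```

Since the oscillation bound $\mid f^2(x) - f^2(y) \mid \leq 2M \mid f(x) - f(y) \mid$ holds pointwise, it also holds for the sampled sums, up to floating-point error.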
Criteria for a Subgroup to be Normal Recall from the Normal Subgroups page that if $(G, \cdot)$ is a group and $(H, \cdot)$ is a subgroup then $(H, \cdot)$ is said to be a normal subgroup of $G$ if $gH = Hg$ for all $g \in G$, that is, the left and right cosets of $H$ with representative $g$ are equal for all $g \in G$. We will now look at some criteria for when a subgroup of a group will be normal. Theorem 1: Let $G$ be a group and let $H$ be a subgroup of $G$. The following statements are equivalent: a) $H$ is a normal subgroup of $G$. b) For all $g \in G$ we have that $gHg^{-1} \subseteq H$. c) For all $g \in G$ we have that $gHg^{-1} = H$, i.e., $N_G(H) = \{ g \in G : gHg^{-1} = H \} = G$. Proof of $a) \Rightarrow c)$: Suppose that $H$ is a normal subgroup of $G$. Then $gH = Hg$ for all $g \in G$. Fix $g \in G$. Let $h' \in H$. Then $h'g \in Hg$. Since $gH = Hg$ we have that $h'g \in gH$. So there exists an $h'' \in H$ such that $h'g = gh''$. So $h' = gh''g^{-1} \in gHg^{-1}$, which shows that $H \subseteq gHg^{-1}$. Now let $h' \in gHg^{-1}$. Then there exists an $h'' \in H$ such that $h' = gh''g^{-1}$. So $h'g = gh'' \in gH$. Since $gH = Hg$ we have that $h'g \in Hg$, so $h'g = h'''g$ for some $h''' \in H$, and cancelling $g$ gives $h' = h''' \in H$. This shows that $gHg^{-1} \subseteq H$. Thus, for all $g \in G$ we have that $gHg^{-1} = H$. Proof of $c) \Rightarrow b)$: Trivial. Proof of $b) \Rightarrow a)$: Suppose that for all $g \in G$ we have that $gHg^{-1} \subseteq H$. Fix $g \in G$. Multiplying $gHg^{-1} \subseteq H$ on the right by $g$ gives $gH \subseteq Hg$. Now since $g \in G$ we have that $g^{-1} \in G$ and by hypothesis $g^{-1}Hg \subseteq H$; multiplying on the left by $g$ gives $Hg \subseteq gH$. So $gH = Hg$ for all $g \in G$, i.e., $H$ is a normal subgroup of $G$. $\blacksquare$
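Criterion (c) is easy to test computationally for small groups. Here is an illustrative Python sketch (the permutation representation and helper names are our own, not from the page) checking $gHg^{-1} = H$ for two subgroups of $S_3$:

```python
# Permutations of {0, 1, 2} as tuples: p[i] is the image of i.
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_normal(H, G):
    """Criterion (c): g H g^{-1} = H for every g in G."""
    Hset = set(H)
    return all({compose(compose(g, h), inverse(g)) for h in H} == Hset
               for g in G)

G = list(permutations(range(3)))          # S_3
A3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]    # alternating subgroup (index 2)
H2 = [(0, 1, 2), (1, 0, 2)]               # generated by one transposition

assert is_normal(A3, G)       # index-2 subgroups are always normal
assert not is_normal(H2, G)   # conjugation moves the transposition out of H2
```

The second assertion reflects the familiar fact that conjugating the transposition $(0\,1)$ by another transposition yields a different transposition, which is not in $H_2$.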
Directional Derivatives of Functions from Rn to Rm and Continuity Recall that if $f : S \to \mathbb{R}$ where $S \subseteq \mathbb{R}$ is open, then $f$ being differentiable at a point $x_0 \in S$ implies that $f$ is continuous at $x_0$ (the converse being false). We would like to develop a derivative definition for functions $\mathbf{f} : S \to \mathbb{R}^m$ where $S \subseteq \mathbb{R}^n$ is open that preserves this nice property. Recall from the Directional Derivatives of Functions from Rn to Rm page that the directional derivative of $\mathbf{f}$ at $\mathbf{c}$ in the direction of the vector $\mathbf{u} \in \mathbb{R}^n$ is defined as the following limit provided that it exists:

\begin{align} \quad \mathbf{f}'(\mathbf{c}, \mathbf{u}) = \lim_{h \to 0} \frac{\mathbf{f}(\mathbf{c} + h \mathbf{u}) - \mathbf{f}(\mathbf{c})}{h} \end{align}

One might ask, does the existence of all directional derivatives of a function $\mathbf{f}$ at $\mathbf{c}$ imply that $\mathbf{f}$ is continuous at $\mathbf{c}$? The answer is sadly NO. For example, consider the following function $f : \mathbb{R}^2 \to \mathbb{R}$ defined by:

\begin{align} \quad f(x, y) = \left\{\begin{matrix} \frac{x^2 y}{x^4 + y^2} & \mathrm{if} \: (x, y) \neq (0, 0) \\ 0 & \mathrm{if} \: (x, y) = (0, 0) \end{matrix}\right. \end{align}

Let's compute the directional derivatives of this function at the origin $\mathbf{c} = (0, 0)$. Let $\mathbf{u} = (u_1, u_2) \in \mathbb{R}^2$ with $u_2 \neq 0$. Then:

\begin{align} \quad f'(\mathbf{c}, \mathbf{u}) = \lim_{h \to 0} \frac{f(hu_1, hu_2) - f(0, 0)}{h} = \lim_{h \to 0} \frac{1}{h} \cdot \frac{h^3 u_1^2 u_2}{h^4 u_1^4 + h^2 u_2^2} = \lim_{h \to 0} \frac{u_1^2 u_2}{h^2 u_1^4 + u_2^2} = \frac{u_1^2}{u_2} \end{align}

Notice that $f'(\mathbf{c}, \mathbf{u})$ exists whenever $u_2 \neq 0$. Suppose now that $u_2 = 0$. I.e., let's compute the directional derivative of $f$ at $\mathbf{c}$ in the direction of $\mathbf{u} = (1, 0)$ (the partial derivative of $f$ with respect to the first variable). Then:

\begin{align} \quad f'(\mathbf{c}, (1, 0)) = \lim_{h \to 0} \frac{f(h, 0) - f(0, 0)}{h} = \lim_{h \to 0} \frac{0}{h} = 0 \end{align}

So indeed all directional derivatives of $f$ at $\mathbf{c}$ exist. However, we claim that $f$ is discontinuous at $\mathbf{c}$. A graph of $f$ near the origin helps visualize the discontinuity at $\mathbf{c} = (0, 0)$. Along the curve $y = x^2$ we have that:

\begin{align} \quad f(x, x^2) = \frac{x^2 \cdot x^2}{x^4 + x^4} = \frac{1}{2} \end{align}

But $\displaystyle{f(0, 0) = 0 \neq \frac{1}{2}}$ so $f$ is discontinuous at $\mathbf{c} = (0, 0)$ despite having all of its directional derivatives existing at $\mathbf{c}$.
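A quick numerical check of this counterexample. The sketch below assumes the standard example function $f(x, y) = x^2 y / (x^4 + y^2)$ with $f(0, 0) = 0$ (a choice consistent with the values quoted above, in particular the value $\tfrac{1}{2}$ along $y = x^2$); the difference quotients are ours:

```python
def f(x, y):
    """Assumed example: f(x, y) = x^2 y / (x^4 + y^2), with f(0, 0) = 0."""
    return 0.0 if (x, y) == (0.0, 0.0) else x * x * y / (x ** 4 + y * y)

def dir_deriv(u1, u2, h=1e-6):
    """Difference quotient for the directional derivative of f at the origin."""
    return (f(h * u1, h * u2) - f(0.0, 0.0)) / h

# All directional derivatives at the origin exist:
assert abs(dir_deriv(1.0, 2.0) - 0.5) < 1e-3   # matches u1^2 / u2 = 1/2
assert abs(dir_deriv(1.0, 0.0)) < 1e-9          # partial derivative in x is 0

# ...yet f is discontinuous there: along y = x^2, f is identically 1/2.
assert abs(f(1e-4, 1e-8) - 0.5) < 1e-9
```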
Figure 2 of this paper http://arxiv.org/pdf/hep-ph/0611148v1.pdf doesn't show a factor-of-ten difference at all! Extract the ratio properly on the log scale and you will see it is less than four, just slightly greater than 1/2 of the height corresponding to the decade. Note that 1/2 of the height corresponds to a factor of $\sqrt{10}\sim 3.16$. The ratio of the cross sections is close to four rather than two mostly because of the Weinberg angle $\theta_W$. The production of a neutral $W^0 W^0$ boson pair would have a cross section close to one-half of the inclusive charged $W^+W^-$ cross section. However, a $W^0$ only contains the $Z^0$ with amplitude $\cos\theta_W$, and this factor must be used for both copies of the $W^0$. So some of the processes produce $\gamma Z$ or $\gamma\gamma$. Only the fraction $\cos^2\theta_W$ produces $Z^0 Z^0$, and $\cos^2\theta_W\sim 1-0.23\sim 0.77$ is a new factor in the amplitude. Note that we are neglecting the production of $B^0 B^0$ – which also splits into the four contributions $ZZ, Z\gamma, \gamma\gamma$ – because it is suppressed by $g_Y^2$, which is much smaller than the $SU(2)$ coupling. So the estimated ratio of the $W/Z$ inclusive cross sections is $2/0.77^2\sim 3.37$, very close to what the graph shows. I had to square the amplitude to get the probability, which is why the factor $1/0.77$ appeared twice. Other potential asymmetries that raise $W^+W^-$ relative to $Z^0Z^0$ probably include the fact that the different $u\bar d/\bar u d$ quark pairs are more likely to be found inside the hadrons than $u\bar u$ and $d\bar d$. Also, if a $Z$ appears in an $s$-channel propagator, the diagram has a greater suppression $m_Z^2$ in the denominator than $m_W^2$.
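For what it's worth, the back-of-envelope arithmetic in this answer can be reproduced in a few lines of Python ($\sin^2\theta_W \approx 0.23$ is the assumed input; everything else follows from it):

```python
import math

sin2_w = 0.23                      # assumed value of sin^2(theta_W)
cos2_w = 1 - sin2_w                # ~0.77, the Z0Z0 fraction quoted above
ratio = 2 / cos2_w ** 2            # estimated W+W- / Z0Z0 cross-section ratio

assert abs(cos2_w - 0.77) < 1e-9
assert 3.3 < ratio < 3.45          # ~3.37, as stated in the answer
assert 3.1 < math.sqrt(10) < 3.2   # half a decade on a log axis
```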
I consider a Riemannian manifold $(M,g)$ with a fiber bundle $E$ equipped with a one-parameter family of connections $\nabla^t$. The curvature of $\nabla^t$ is $R(\nabla^t)$ and the codifferential is $\delta_{\nabla^t} = * \circ d_{\nabla^t} \circ *$, with $*$ the Hodge operator and $d_{\nabla^t}$ the differential of the connection on the forms $\Lambda^*(M) \otimes End(E)$. The Yang-Mills flow is then defined by the formula: $$ \frac {\partial }{\partial t} (\nabla^t ) = \delta_{\nabla^t} (R (\nabla^t))$$ Each side of the equality is a 1-form with values in the endomorphisms of $E$. The fixed points of the flow are the connections with harmonic curvature (the curvature is automatically closed, by the Bianchi identity). For $SU(2)$-connections over the space-time manifold, these are the instantons of physics. Is such a flow well-defined for a short time interval near the initial connection? When does the flow converge?
The Integral Remainder We recently saw from the Taylor's Theorem and The Lagrange Remainder page that if $f$ is $n + 1$ times differentiable on some interval containing the center of convergence $c$ and $x$, and if $P_n(x)$ is the $n^{\mathrm{th}}$ order Taylor polynomial of $f$ centered at $c$, then writing $f(x) = P_n(x) + E_n(x)$, the error term $E_n(x)$ can be computed for some $\xi$ between $c$ and $x$ by the following formula:

\begin{align} \quad E_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - c)^{n+1} \end{align}

This formula is known as the Lagrange Remainder formula for the error $E_n(x)$. We will now look at another form of the remainder error known as the Integral Remainder formula. Theorem 1: Suppose that $f$ is $n + 1$ times differentiable on some interval containing the center of convergence $c$ and $x$, and let $P_n(x) = f(c) + \frac{f^{(1)}(c)}{1!}(x - c) + \frac{f^{(2)}(c)}{2!}(x - c)^2 + ... + \frac{f^{(n)}(c)}{n!}(x - c)^n$ be the $n^{\mathrm{th}}$ order Taylor polynomial of $f$ at $x = c$. Then $f(x) = P_n(x) + E_n(x)$ where $E_n(x)$ is the error term of $P_n(x)$ from $f(x)$, and the error $E_n(x)$ can be computed with the formula $E_n(x) = \frac{1}{n!} \int_c^x (x - t)^n f^{(n+1)}(t) \: dt$. Proof: We will use mathematical induction to prove Theorem 1. First consider the case when $n = 1$. Then $P_1(x) = f(c) + f'(c)(x - c)$. We thus get that:

\begin{align} \quad E_1(x) = f(x) - P_1(x) = f(x) - f(c) - f'(c)(x - c) \end{align}

Now the corresponding integral for $n = 1$ is $\frac{1}{1!} \int_c^x (x - t)^1 f''(t) \: dt$. We will evaluate this integral using the technique of integration by parts. Let $u = x - t$ and let $dv = f''(t) \: dt$. Then we have that $du = - \: dt$ and $v = f'(t)$ and so we get that (using $\int u \: dv = uv - \int v \: du$):

\begin{align} \quad \int_c^x (x - t) f''(t) \: dt = \Big [ (x - t)f'(t) \Big ]_{t=c}^{t=x} + \int_c^x f'(t) \: dt = -(x - c)f'(c) + f(x) - f(c) = E_1(x) \end{align}

Thus we have that Theorem 1 holds when $n = 1$. Suppose now that Theorem 1 holds for $n = k \geq 1$, that is, the error between the $k^{\mathrm{th}}$ order Taylor polynomial centered at $c$, $P_k$, and $f$ is given by $E_k(x) = \frac{1}{k!} \int_c^x (x - t)^k f^{(k+1)}(t) \: dt$.
We want to then show that the error between the $(k + 1)^{\mathrm{st}}$ order Taylor polynomial centered at $c$, $P_{k+1}$, and $f$ is given by:

\begin{align} \quad E_{k+1}(x) = \frac{1}{(k+1)!} \int_c^x (x - t)^{k+1} f^{(k+2)}(t) \: dt \end{align}

We will use integration by parts once again to prove this. Let $u = (x - t)^{k+1}$ and let $dv = f^{(k+2)}(t) \: dt$. Then we have that $du = -(k+1)(x - t)^k \: dt$ and $v = f^{(k+1)}(t)$, and so we have that:

\begin{align} \quad \frac{1}{(k+1)!} \int_c^x (x - t)^{k+1} f^{(k+2)}(t) \: dt &= \frac{1}{(k+1)!} \Big [ (x - t)^{k+1} f^{(k+1)}(t) \Big ]_{t=c}^{t=x} + \frac{1}{k!} \int_c^x (x - t)^{k} f^{(k+1)}(t) \: dt \\ &= -\frac{f^{(k+1)}(c)}{(k+1)!} (x - c)^{k+1} + E_k(x) \end{align}

But since $P_{k+1}(x) = P_k(x) + \frac{f^{(k+1)}(c)}{(k+1)!}(x - c)^{k+1}$, we have that $E_{k+1}(x) = E_k(x) - \frac{f^{(k+1)}(c)}{(k+1)!}(x - c)^{k+1}$, which is exactly the expression obtained above. Thus we have shown by induction that Theorem 1 is true. $\blacksquare$
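Theorem 1 is easy to sanity-check numerically. The following Python sketch (not part of the original page) compares the true error of a Taylor polynomial of $e^x$ at $c = 0$ with a midpoint-rule evaluation of the integral remainder; the test point, order, and step count are arbitrary choices:

```python
import math

def taylor_exp(x, n, c=0.0):
    """n-th order Taylor polynomial of exp centered at c, evaluated at x."""
    return sum(math.exp(c) * (x - c) ** k / math.factorial(k)
               for k in range(n + 1))

def integral_remainder_exp(x, n, c=0.0, steps=20000):
    """Midpoint rule for (1/n!) * integral_c^x (x-t)^n e^t dt (f = exp,
    so every derivative f^{(n+1)} is again exp)."""
    h = (x - c) / steps
    total = sum((x - (c + (i + 0.5) * h)) ** n * math.exp(c + (i + 0.5) * h)
                for i in range(steps))
    return total * h / math.factorial(n)

x, n = 1.5, 4
E = math.exp(x) - taylor_exp(x, n)   # true error of P_n at x
assert abs(E - integral_remainder_exp(x, n)) < 1e-6
```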
Rec. 709 ITU-R Recommendation BT.709, more commonly known by the abbreviations Rec. 709 or BT.709, standardizes the format of high-definition television, having a 16:9 (widescreen) aspect ratio. The first edition of the standard was approved in 1990. Technical details Pixel count Rec. 709 refers to HDTV systems having roughly two million luma samples per frame. Rec. 709 has two parts: Part 2 codifies current and prospective 1080i and 1080p systems with square sampling. In an attempt to unify 1080-line HDTV standards, part 2 defines a common image format (CIF) with picture parameters independent of the picture rate. Part 1 codifies what are now referred to as 1035i30 and 1152i25 HDTV systems. The 1035i30 system is now obsolete, having been superseded by 1080i and 1080p square-sampled ("square-pixel") systems. The 1152i25 system was used for experimental equipment in Europe and was never commercially deployed. Frame rate Rec. 709 specifies the following picture rates: 60 Hz, 50 Hz, 30 Hz, 25 Hz and 24 Hz. "Fractional" rates having the above values divided by 1.001 are also permitted. Initial acquisition is possible in either progressive or interlaced form. Video captured as progressive can be transported with either progressive transport or progressive segmented frame (PsF) transport. Video captured as interlaced can be transported with interlace transport. In cases where a progressively captured image is transported as a segmented frame, the segment/field frequency must be twice the frame rate. In practice, the above requirements result in the following frame rates ("fractional" rates are specified in commonly used "decimal" form): 25i, 25PsF, 25p, 50p for 50 Hz systems; 23.976p, 23.976PsF, 24p, 24PsF, 29.97i, 29.97p, 29.97PsF, 30PsF, 30p, 59.94p, 60p for 60 Hz systems. Digital representation Rec. 709 coding uses SMPTE reference levels (a.k.a. "studio-swing", legal-range, narrow-range) levels where reference black is defined as 8-bit interface code 16 and reference white is defined as 8-bit interface code 235. Interface codes 0 and 255 are used for synchronization, and are prohibited from video data. Eight-bit codes between 1 and 15 provide footroom, and can be used to accommodate transient signal content such as filter undershoots. Eight-bit interface codes 236 through 254 provide headroom, and can be used to accommodate transient signal content such as filter overshoots. In some camera systems, headroom in the signal is used to contain specular highlights; however, these "extended-range" signals are not allowed in the broadcast system and are clamped during final mastering. Bit depths deeper than 8 bits are obtained by appending least-significant bits. Ten-bit systems are commonplace in studios. (Desktop computer graphics systems ordinarily use full bit-depth encoding that places reference black at code 0 and reference white at code 255, and provide no footroom or headroom.) The 16..235 limits (for luma; 16..240 for chroma) originated with ITU Rec. 601. [1] Primary chromaticities The chromaticity coordinates for ITU-R BT.709 are: white point x_W = 0.3127, y_W = 0.3290; red primary x_R = 0.64, y_R = 0.33; green primary x_G = 0.30, y_G = 0.60; blue primary x_B = 0.15, y_B = 0.06. Note that red and blue are the same as the EBU Tech 3213 primaries while green is halfway between EBU Tech 3213 and SMPTE C (two types of Rec. 601). In coverage of the CIE 1931 color space, the Rec. 709 color space is almost identical to Rec. 601 and covers 35.9%. [3] Standards Conversion When converting between the various HD and SD formats, it would be correct to compensate for the differences in the primaries (e.g. between the Rec. 709, EBU Tech 3213, and SMPTE C primaries). In practice, this conversion is rarely performed, as such a conversion would create a liability for post-production facilities: they would need to ensure that the color bars on all the new masters are redone.
Correcting for differences in the primaries would cause the resulting color bars on the converted tape to be inaccurate, and incorrect color bars will cause a (sub)master to be rejected by quality-control checks. [4] Luma coefficients HDTV according to Rec. 709 forms luma ($Y'$) using R'G'B' coefficients 0.2126, 0.7152, and 0.0722. This means that unlike in Rec. 601, the coefficients match the primaries and white point, so luma corresponds more closely to luminance. Some experts feel that the advantages of correct matrix coefficients do not justify the change from the Rec. 601 coefficients. [5] Although worldwide agreement on a single R'G'B' system was achieved upon the adoption of Rec. 709, the adoption of different luma coefficients created a second flavour of $Y'C_BC_R$. Transfer characteristics Rec. 709 is written as if it specifies the capture and transfer characteristics of HDTV encoding - that is, as if it were scene-referred. However, in practice it is output (display) referred, with the convention of a 2.4-power-function display [2.35 power function in EBU recommendations]. (Rec. 709 and sRGB share the same primary chromaticities and white point chromaticity; however, sRGB is explicitly output (display) referred with an average gamma of 2.2.) [6] The Rec. 709 transfer function from the linear signal (luminance) $L$ to the nonlinear signal (voltage) $V$ is, similar to sRGB's transfer function, linear in the bottom part and then transitions to a power function for the rest of the <math>[0..1]</math> range: [7] <math>V=\begin{cases} 4.500L & L < 0.018\\ 1.099 L^{0.45} - 0.099 & L \ge 0.018 \end{cases} </math> The conversion back to linear is as follows. <math>L=\begin{cases} \dfrac{V}{4.5} & V < 0.081\\ \left ( \dfrac{ V+0.099 }{ 1.099} \right ) ^{\frac{1}{0.45} } & V \ge 0.081 \end{cases} </math> See also Rec. 601, a comparable standard for standard-definition television (SDTV) Rec.
2020, ITU-R Recommendation for ultra high definition television (UHDTV) sRGB, a standard color space for web/computer graphics, based on the Rec. 709 primaries and white point. References ITU-R BT.709-5: Parameter values for the HDTV standards for production and international programme exchange. April 2002. Note that -5 is the current version, as of May 2008; previous versions were -1 through -4. [3]: Poynton, Charles, Perceptual uniformity, picture rendering, image state, and Rec. 709. May 2008. sRGB: IEC 61966-2-1:1999. ITU-R Rec. BT.601-5, 1995. Section 3.5.3. ITU-R Rec. BT.709-5, page 18, items 1.3 and 1.4. ""Super Hi-Vision" as Next-Generation Television and Its Video Parameters". Information Display. Retrieved 2013-01-01. [1]: Chan, Glenn, "HD versus SD Color Space". [2]: Poynton, Charles, "Luminance, luma, and the migration to DTV" (Feb. 6, 1998). Poynton, Charles (2012). Digital Video and HD: Algorithms and Interfaces. Burlington, Mass.: Elsevier/Morgan Kaufmann. p. 321. ISBN 978-0-12-391926-7. ITU-R Rec. BT.709-5, page 2, item 1.2.
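The transfer characteristics quoted in the article translate directly into code. A minimal Python sketch (function names are ours; the constants are the ones given in the text, ignoring the more precise values in later editions of the standard):

```python
def rec709_oetf(L):
    """Linear scene value L in [0, 1] -> non-linear signal V."""
    return 4.500 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def rec709_oetf_inverse(V):
    """Non-linear signal V back to linear L, per the conversion quoted above."""
    return V / 4.5 if V < 0.081 else ((V + 0.099) / 1.099) ** (1 / 0.45)

# Round-trip values on both sides of the 0.018 / 0.081 breakpoint.
for L in (0.0, 0.01, 0.018, 0.2, 1.0):
    assert abs(rec709_oetf_inverse(rec709_oetf(L)) - L) < 1e-9
```

Note that the breakpoint $V = 0.081$ is just $4.5 \times 0.018$, so the two branches of the inverse line up with the two branches of the forward function.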
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which is measured at 4.5V by my multimeter) but the voltage multiplier doesn't seem to work. I tried first making a voltage doubler and it showed 9V (which is correct I suppose) but when I try a quadrupler for example, the voltage starts from like 6V and goes down around 0.1V per second. Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec. But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them. So what did the guys in the EE chat say... The voltage multiplier should be OK on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you... A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it. Hi all! There is a theorem that links the imaginary and the real part of a time dependent analytic function. I forgot its name. It's named after some Dutch(?) scientist and is used in solid state physics. Who can help? The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system.
The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names... I have a weird question: The output of an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V... and the multimeter averages it out and displays 4.5V). But then if I put that output into a voltage doubler, the voltage should be 18V, not 9V, right? Since the voltage doubler will output DC. I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked after that and the astable multivibrator still works. I searched the whole god damn internet, asked every god damn forum and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh a billion tons... something so "simple" turns out to be hard as duck. In Peskin's book on QFT the sum over zero point energy modes is an infinite c-number; fortunately, it doesn't show up in experiments, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state energy, isn't it? If so, it is always possible to subtract a finite number (a higher excited state, for example) from this zero point energy (which is infinite), and it would follow that, experimentally, we always obtain an infinite spectrum. @AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that.
They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics. I have recently come up with a design of a conceptual electromagnetic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ... I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array". @ACuriousMind What confuses me is the interpretation Peskin gives to this infinite c-number and the experimental fact. He said the second term is the sum over zero point energy modes, which is infinite as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference from the ground state of H". @ACuriousMind Thank you, I understood your explanations clearly.
However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level. It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale. According to the author, the energy difference is always infinite according to two facts. The first is that the ground state energy is infinite; secondly, the energy difference is defined by subtracting a higher level energy from the ground state one. @enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization. The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms, i.e. dividing by zero to get infinities. Also, the problem stems from the fact that $R_{aa}$ can be zero due to using point particles. Overall it's an infinite constant added to the particle's energy that we throw away, just as in QFT. @bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why experiment doesn't show the infinities that arise in the theory?
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if this were taken seriously. Relativity forbids the notion of a rigid body, so we have to model electrons as point particles and can't avoid these $R_{aa} = 0$ values.
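The divergence being discussed is easy to see numerically. A toy sketch in Python (Gaussian units, as in the formulas above; the thin-shell charge model is our illustrative choice, not something from the chat):

```python
def shell_self_energy(e, R):
    """Field energy of charge e spread over a thin shell of radius R
    (Gaussian units): e^2 / (2R)."""
    return e * e / (2.0 * R)

# Shrinking the 'size' of the particle makes the self-energy blow up,
# which is the R_aa -> 0 divergence of the point-particle limit.
energies = [shell_self_energy(1.0, 10.0 ** -k) for k in range(6)]
assert all(b > a for a, b in zip(energies, energies[1:]))
```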
The Baire Category Theorem Review We will now review some of the recent material regarding the Baire category theorem. On The Cantor Intersection Theorem for Complete Metric Spaces page we looked at the Cantor intersection theorem, which states the following. If $X$ is a complete metric space and if $(x_n)_{n=1}^{\infty}$ is a sequence of points in $X$ and $(r_n)_{n=1}^{\infty}$ is a sequence of positive real numbers such that $\displaystyle{\lim_{n \to \infty} r_n = 0}$ and: \begin{align} \quad ... \subseteq \overline{B}(x_{n+1}, r_{n+1}) \subseteq \overline{B}(x_n, r_n) \subseteq ... \subseteq \overline{B} (x_1, r_1) \end{align} Then there exists exactly one point $x \in X$ such that: \begin{align} \quad \bigcap_{n=1}^{\infty} \overline{B}(x_n, r_n) = \{ x \} \end{align} On The Baire Category Theorem for Complete Metric Spaces page we looked at another important theorem called the Baire category theorem, which states the following. If $X$ is a complete metric space and if $(D_n)_{n=1}^{\infty}$ is a countable collection of open and dense sets in $X$ then the intersection: \begin{align} \quad \bigcap_{n=1}^{\infty} D_n \end{align} Is dense in $X$. We then looked at a corollary to the Baire category theorem on the Corollary to the Baire Category Theorem for Complete Metric Spaces page. We saw that if $X$ is a complete metric space and if $(F_n)_{n=1}^{\infty}$ is a countable collection of closed and nowhere dense sets in $X$ then the union: \begin{align} \quad \bigcup_{n=1}^{\infty} F_n \end{align} Has empty interior.
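As a toy illustration of the Cantor intersection theorem in $X = \mathbb{R}$, the Python sketch below (our own construction, not from the review) builds nested closed balls with radii tending to $0$ and checks that the intersection pins down a single point:

```python
# Build nested closed intervals [x_n - r_n, x_n + r_n] with r_n -> 0.
centers, radii = [], []
x, r = 0.0, 1.0
for n in range(60):
    centers.append(x)
    radii.append(r)
    r /= 2.0               # radii tend to 0
    x += r / 2.0           # small enough shift to keep the balls nested

# Nestedness: each interval contains the next one.
for (x0, r0), (x1, r1) in zip(zip(centers, radii), zip(centers[1:], radii[1:])):
    assert x0 - r0 <= x1 - r1 and x1 + r1 <= x0 + r0

# The intersection collapses: sup of left endpoints meets inf of right endpoints.
left = max(x - r for x, r in zip(centers, radii))
right = min(x + r for x, r in zip(centers, radii))
assert abs(right - left) < 1e-12
```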
Solve the system of equations: $$ \begin{cases}\sqrt[4]{x}\left(\dfrac{1}{4}+\dfrac{2\sqrt{x}+\sqrt{y}}{x+y}\right)=2 \\[8pt] \sqrt[4]{y}\left(\dfrac{1}{4}-\dfrac{2\sqrt{x}+\sqrt{y}}{x+y}\right) =1\end{cases} $$ Divide the first equation by $\sqrt[4]{x}$ and the second by $\sqrt[4]{y}$. Then adding and subtracting the two resulting equations gives us the new pair of simultaneous equations: $$ \frac{2}{\sqrt[4]{x}}+\frac{1}{\sqrt[4]{y}}=\frac{1}{2} \\ \frac{2}{\sqrt[4]{x}}-\frac{1}{\sqrt[4]{y}}=2\frac{2\sqrt{x}+\sqrt{y}}{x+y} \, . $$ Multiplying these two equations together, we have: $$ \frac{4}{\sqrt{x}}-\frac{1}{\sqrt{y}}=\frac{2\sqrt{x}+\sqrt{y}}{x+y} \, . $$ Clearing denominators yields: $$ (4\sqrt{y}-\sqrt{x})(x+y)=2x\sqrt{y}+y\sqrt{x} $$ which after some algebra can be reduced to $$ (x+2y)(2\sqrt{y}-\sqrt{x})=0 \, . $$ So either $x=-2y$ or $\sqrt{x}=2\sqrt{y}$. If $x$ and $y$ are positive and real the first is clearly impossible and the second is equivalent to $x=4y$. (If they're not positive and real, we have to worry more about branch cuts than I really want to.) Now, if $x=4y$ we have $$ \frac{\sqrt{2}}{\sqrt[4]{y}}+\frac{1}{\sqrt[4]{y}}=\frac{1}{2} \, , $$ from the top equation in this answer. So $\sqrt[4]{y}=2(1+\sqrt{2})$, yielding the solution $$ x=64(1+\sqrt{2})^4 \\ y=16(1+\sqrt{2})^4 \, , $$ which upon expanding is precisely what Robert Israel got from Maple in the other answer. According to Maple, there is one real solution, $$x = 1088+768 \sqrt{2},\ y = 272+192 \sqrt{2}$$
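The closed-form solution can be verified numerically. A short Python check (not part of the original answers):

```python
import math

# Candidate solution from the answer above.
x = 1088 + 768 * math.sqrt(2)
y = 272 + 192 * math.sqrt(2)

# Plug back into the two original equations.
r = (2 * math.sqrt(x) + math.sqrt(y)) / (x + y)
eq1 = x ** 0.25 * (0.25 + r)   # should equal 2
eq2 = y ** 0.25 * (0.25 - r)   # should equal 1

assert abs(eq1 - 2) < 1e-9
assert abs(eq2 - 1) < 1e-9
assert abs(x - 4 * y) < 1e-9   # the relation x = 4y found along the way
```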
ddid2.Rmd We are interested in estimating Conditional Quantile Treatment Effects on the Treated (CQTT) with two periods of panel data (or repeated cross sections) under a Difference in Differences assumption. These are defined by \[ CQTT_x(\tau) = F^{-1}_{Y_{1t}|X=x,D=1}(\tau) - F^{-1}_{Y_{0t}|X=x,D=1}(\tau) \] for \(\tau \in (0,1)\), where \(Y_{1t}\) are treated potential outcomes in period \(t\), \(Y_{0t}\) are untreated potential outcomes in period \(t\), and \(D\) indicates whether an individual is a member of the treated group or not. We are also thinking about the case where \(X\) is discrete. The identification challenge is to obtain the counterfactual conditional distribution of untreated potential outcomes for the treated group: \(F_{Y_{0t}|X=x, D=1}(y)\). This method is built for the standard DID case where a researcher has access to two periods of data, no one is treated in the first period \(t-1\), and the treated group is treated in the last period \(t\). Assumption 1 (Distributional Difference in Differences) \[ \Delta Y_{0t} \perp D | X\] This is an extension of the conditional mean DID assumption (\(E[\Delta Y_{0t}|X=x, D=1] = E[\Delta Y_{0t}|X=x,D=0]\)) to full independence. Relative to DID assumptions that are not conditional on \(X\), this assumption is nice as it allows the path of outcomes to depend on covariates. For example, suppose \(Y\) is earnings. The path of earnings, in the absence of some treatment, is likely to depend on covariates such as education and age. If these are distributed differently across the treated and untreated groups, then an unconditional DID assumption is unlikely to hold, but Assumption 1 still can. Alone, Assumption 1 is not strong enough to identify the CQTT. We also impose the following additional assumption.
Assumption 2 (Copula Invariance Assumption) \[ C_{\Delta Y_{0t}, Y_{0t-1} | X=x,D=1}(u,v) = C_{\Delta Y_{0t}, Y_{0t-1} | X=x,D=0}(u,v) \] This assumption says that the dependence between the change in outcomes and the initial level of outcomes is the same for the treated group as for the untreated group. To make things concrete, consider the earnings example again. The Copula Invariance assumption says that if we observe the biggest gains in earnings for the untreated group going to those with the highest initial earnings, then, in the absence of treatment, we would observe the same thing for the treated group. Under Assumption 1 and Assumption 2, \[ F_{Y_{0t}|X=x,D=1}(y) = E[1\{\Delta Y_t + F^{-1}_{Y_{t-1}|X=x,D=1}(F_{Y_{t-1}|X=x,D=0}(Y_{t-1})) \leq y\} | X=x, D=0] \] and then we can invert this to obtain the CQTT. The ddid2 function implements this method. Here is an example.

##load the package
library(qte)

## Registered S3 methods overwritten by 'ggplot2':
##   method         from
##   [.quosures     rlang
##   c.quosures     rlang
##   print.quosures rlang

##
## Quantile Treatment Effect:
##
##  tau      QTE
## 0.05 10616.61
## 0.1   5019.83
## 0.15  2388.12
## 0.2   1033.23
## 0.25   485.23
## 0.3    943.05
## 0.35   931.45
## 0.4    945.35
## 0.45  1205.88
## 0.5   1362.11
## 0.55  1279.05
## 0.6   1618.13
## 0.65  1834.30
## 0.7   1326.06
## 0.75  1586.35
## 0.8   1256.09
## 0.85   723.10
## 0.9    251.36
## 0.95 -1509.92
##
## Average Treatment Effect: 2326.51
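To see the identification result in action, here is an illustrative Python sketch (the real implementation is the R function `ddid2` in the qte package; the data-generating process, helper names, and the single-cell simplification with no covariates are our own assumptions):

```python
import bisect
import random

random.seed(0)
n = 20000

# Untreated group (D = 0): first-period outcome and its change over time.
y0_pre = [random.gauss(0, 1) for _ in range(n)]
dy0 = [0.5 * y + random.gauss(0, 0.5) for y in y0_pre]
# Treated group (D = 1): first-period outcomes shifted up by 1.
y1_pre = [random.gauss(1, 1) for _ in range(n)]

def ecdf(sorted_xs, y):
    return bisect.bisect_right(sorted_xs, y) / len(sorted_xs)

def equantile(sorted_xs, tau):
    return sorted_xs[min(int(tau * len(sorted_xs)), len(sorted_xs) - 1)]

s0, s1 = sorted(y0_pre), sorted(y1_pre)

# Counterfactual Y_0t for D = 1, following the displayed formula:
# Delta Y_t (from D = 0) + F^{-1}_{Y_{t-1}|D=1}( F_{Y_{t-1}|D=0}(Y_{t-1}) ).
counterfactual = sorted(d + equantile(s1, ecdf(s0, y))
                        for d, y in zip(dy0, y0_pre))

# With a pure location shift, the counterfactual distribution should be
# (roughly) the untreated post-period distribution shifted up by 1.
post0 = sorted(y + d for y, d in zip(y0_pre, dy0))
assert abs(equantile(counterfactual, 0.5) - (equantile(post0, 0.5) + 1.0)) < 0.1
```

The quantile-quantile transform `equantile(s1, ecdf(s0, y))` maps the untreated group's initial outcomes onto the treated group's initial distribution, which is exactly the role of the inner term in the identification formula.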
The Completion of a Normed Algebra Definition: Let $\mathfrak{A}$ be a normed algebra. A Banach algebra $\mathfrak{B}$ is said to be a Completion of $\mathfrak{A}$ if there exists an isometric isomorphism $T$ from $\mathfrak{A}$ onto a dense subalgebra of $\mathfrak{B}$. The following theorem tells us that every normed algebra has a completion. Theorem 1: If $\mathfrak{A}$ is a normed algebra, then $\mathfrak{A}$ has a completion. Proof: We know that as a normed space, $\mathfrak{A}$ has a completion $\mathfrak{B}$, i.e., a Banach space such that there exists an isometric isomorphism $T$ from $\mathfrak{A}$ onto a dense subspace of $\mathfrak{B}$. All that remains to show is that, with the additional structure of a product on $\mathfrak{A}$, the space $\mathfrak{B}$ also carries an analogous product. Let $b, b' \in \mathfrak{B}$. Since $T(\mathfrak{A})$ is dense in $\mathfrak{B}$, there exist sequences $(a_n), (a_n') \subset \mathfrak{A}$ such that: \begin{align} b = \lim_{n \to \infty} T(a_n) \quad \mathrm{and} \quad b' = \lim_{n \to \infty} T(a_n') \end{align} Since $(T(a_n))$ converges to $b$, $(T(a_n))$ is a Cauchy sequence in $\mathfrak{B}$. That is, for all $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that if $m, n \geq N$ then $\| T(a_m) - T(a_n) \| < \epsilon$. But $T$ is an isometric isomorphism, so for all $m, n \in \mathbb{N}$, $\| T(a_m) - T(a_n) \| = \| T(a_m - a_n) \| = \| a_m - a_n \|$. Thus, for all $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that if $m, n \geq N$ then $\| a_m - a_n \| < \epsilon$, and so $(a_n)$ is a Cauchy sequence in $\mathfrak{A}$. A similar argument shows that since $(T(a_n'))$ converges to $b'$, the sequence $(a_n')$ is Cauchy in $\mathfrak{A}$ as well. Also, since $(T(a_n))$ converges to $b$, it is a bounded sequence, i.e., there exists an $M > 0$ such that $\| T(a_n) \| \leq M$ for all $n \in \mathbb{N}$.
Again, since $T$ is an isometry, this implies that $(a_n)$ is bounded with $\| a_n \| \leq M$ for all $n \in \mathbb{N}$. A similar argument shows that there exists an $M' > 0$ such that $\| a_n' \| \leq M'$ for all $n \in \mathbb{N}$. Consider the sequence $(a_na_n') \subset \mathfrak{A}$. For each $m, n \in \mathbb{N}$, the triangle inequality and the submultiplicativity of the norm give: \begin{align} \| a_ma_m' - a_na_n' \| &= \| a_ma_m' - a_ma_n' + a_ma_n' - a_na_n' \| \\ & \leq \| a_m(a_m' - a_n') \| + \| (a_m - a_n)a_n' \| \\ & \leq \| a_m \| \| a_m' - a_n' \| + \| a_m - a_n \| \| a_n' \| \\ & \leq M \| a_m' - a_n' \| + M' \| a_m - a_n \| \end{align} Given $\epsilon > 0$, since $(a_n)$, $(a_n')$ are both Cauchy, there exist $N_1, N_2 \in \mathbb{N}$ such that if $m, n \geq N_1$ then $\| a_m - a_n \| < \frac{\epsilon}{2M'}$ and if $m, n \geq N_2$ then $\| a_m' - a_n' \| < \frac{\epsilon}{2M}$. Then $N := \max \{ N_1, N_2 \}$ is such that if $m, n \geq N$ then $\| a_ma_m' - a_na_n' \| < \epsilon$ by the above inequality. So $(a_na_n')$ is a Cauchy sequence. For a third time, since $T$ is an isometry, this implies that $(T(a_na_n'))$ is a Cauchy sequence in $\mathfrak{B}$. Since $\mathfrak{B}$ is complete, this sequence converges to some $c \in \mathfrak{B}$. So define the multiplication on $\mathfrak{B}$ by: \begin{equation} bb' := c \end{equation} (An estimate of the same type shows that $c$ does not depend on the particular approximating sequences chosen, so this product is well defined.) The normed space isomorphism $T$ then extends to an algebra isomorphism onto a dense subalgebra of $\mathfrak{B}$: for each $a, a' \in \mathfrak{A}$, take the constant sequences $a_n = a$ and $a_n' = a'$ for all $n \in \mathbb{N}$, so that $b = T(a)$ and $b' = T(a')$, and: \begin{align} \quad T(aa') = \lim_{n \to \infty} T(a_na_n') = c = bb' = T(a)T(a') \end{align} It is straightforward to check that this multiplication on $\mathfrak{B}$ satisfies the algebra axioms, and the norm on $\mathfrak{B}$ becomes an algebra norm. So the normed algebra $\mathfrak{A}$ has a completion to a Banach algebra $\mathfrak{B}$.
$\blacksquare$
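The heart of the proof is the estimate showing that a product of bounded Cauchy sequences is Cauchy. As a purely illustrative sanity check (not part of the proof), one can test the inequality $\| a_ma_m' - a_na_n' \| \leq M \| a_m' - a_n' \| + M' \| a_m - a_n \|$ in a concrete normed algebra, say $2 \times 2$ real matrices under the operator norm:

```python
import numpy as np

def opnorm(A):
    """Operator (spectral) norm; submultiplicative, as the proof requires."""
    return np.linalg.norm(A, 2)

rng = np.random.default_rng(0)
A_lim, B_lim = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

# Two Cauchy sequences a_n -> A_lim and b_n -> B_lim
a = [A_lim + rng.normal(size=(2, 2)) / 2.0**n for n in range(30)]
b = [B_lim + rng.normal(size=(2, 2)) / 2.0**n for n in range(30)]

M = max(opnorm(x) for x in a)    # bound on ||a_n||
Mp = max(opnorm(x) for x in b)   # bound on ||a_n'||

for m in range(0, 30, 5):
    for n in range(0, 30, 5):
        lhs = opnorm(a[m] @ b[m] - a[n] @ b[n])
        rhs = M * opnorm(b[m] - b[n]) + Mp * opnorm(a[m] - a[n])
        assert lhs <= rhs + 1e-12    # the estimate from the proof
```

The inequality holds here by exactly the triangle-inequality-plus-submultiplicativity argument in the proof, so the assertions are guaranteed to pass.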
Since $a$ and $b$ are relatively prime, at least one of them is relatively prime to $p$. Without loss of generality, assume that $b$ and $p$ are relatively prime. Then the given equality becomes \begin{align*} a^{2^n}\equiv -b^{2^n} \pmod{p} \iff \left( \frac{a}{b}\right)^{2^n} \equiv -1 \pmod{p}. \end{align*} Squaring both sides, we obtain \[\left( \frac{a}{b}\right)^{2^{n+1}} \equiv 1 \pmod{p}.\] Now, we can think of these congruences as equalities of elements in the multiplicative group $(\Z/p\Z)^{\times}$ of order $p-1$: \[ \left( \frac{a}{b}\right)^{2^n} = -1 \text{ and } \left( \frac{a}{b}\right)^{2^{n+1}} =1 \text{ in } (\Z/p\Z)^{\times}.\] The second equality yields that the order of the element $a/b$ divides $2^{n+1}$. On the other hand, since $(a/b)^{2^n} = -1 \neq 1$, the order does not divide $2^n$, so no smaller power of $2$ can be the order of $a/b$. Thus, the order of the element $a/b$ is exactly $2^{n+1}$. In general, the order of each element divides the order of the group. (This is a consequence of Lagrange's theorem.) Since the order of the group $(\Z/p\Z)^{\times}$ is $p-1$, it follows that $2^{n+1}$ divides $p-1$. This completes the proof.
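A quick numerical check of the argument (the values $a=2$, $b=3$, $n=2$ are just an example): $a^{2^n}+b^{2^n}=16+81=97$ is itself prime, so taking $p=97$, the order of $a/b$ in $(\Z/97\Z)^{\times}$ should be exactly $2^{n+1}=8$, and $8$ should divide $p-1=96$:

```python
a, b, n, p = 2, 3, 2, 97
assert (a**(2**n) + b**(2**n)) % p == 0     # p divides a^{2^n} + b^{2^n}

x = a * pow(b, -1, p) % p                   # the element a/b in (Z/pZ)^x
order = next(k for k in range(1, p) if pow(x, k, p) == 1)

assert pow(x, 2**n, p) == p - 1             # (a/b)^{2^n} = -1 mod p
assert order == 2**(n + 1)                  # order is exactly 2^{n+1}
assert (p - 1) % order == 0                 # Lagrange: the order divides p - 1
```

(The three-argument `pow(b, -1, p)` computing a modular inverse requires Python 3.8 or later.)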
The Union and Intersection of Collections of Closed Sets Recall from The Union and Intersection of Collections of Open Sets page that if $\mathcal F$ is an arbitrary collection of open sets then $\displaystyle{\bigcup_{A \in \mathcal F} A}$ is an open set, and if $\mathcal F = \{ A_1, A_2, ..., A_n \}$ is a finite collection of open sets then $\displaystyle{\bigcap_{i=1}^{n} A_i}$ is an open set. We will now prove two analogous theorems regarding the union and intersection of collections of closed sets. Theorem 1: If $\mathcal F = \{ A_1, A_2, ..., A_n \}$ is a finite collection of closed sets then $\displaystyle{\bigcup_{i=1}^{n} A_i}$ is a closed set. Proof: Let $\mathcal F = \{ A_1, A_2, ..., A_n \}$ be a finite collection of closed sets and let: \begin{align} \quad S = \bigcup_{i=1}^{n} A_i \end{align} By applying the generalized De Morgan's law, we see that the complement $S^c$ is: \begin{align} \quad S^c = \left ( \bigcup_{i=1}^{n} A_i \right )^c = \bigcap_{i=1}^{n} A_i^c \end{align} For each $i \in \{1, 2, ..., n \}$ we have that $A_i$ is closed, so $A_i^c$ is open. The intersection of a finite collection of open sets is open, so $S^c$ is open and hence $(S^c)^c = S$ is closed. Therefore $\displaystyle{\bigcup_{i=1}^{n} A_i}$ is closed. $\blacksquare$ Theorem 2: If $\mathcal F$ is an arbitrary collection of closed sets then $\displaystyle{\bigcap_{A \in \mathcal F} A}$ is a closed set. Proof: Let $\mathcal F$ be an arbitrary collection of closed sets and let: \begin{align} \quad S = \bigcap_{A \in \mathcal F} A \end{align} By applying the generalized De Morgan's law, we see that the complement $S^c$ is: \begin{align} \quad S^c = \left( \bigcap_{A \in \mathcal F} A \right )^c = \bigcup_{A \in \mathcal F} A^c \end{align} For all $A \in \mathcal F$ we have that $A$ is closed, so $A^c$ is open. The union of an arbitrary collection of open sets is open, so $S^c$ is open.
Therefore $(S^c)^c = S$ is closed. $\blacksquare$
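Both proofs lean on the generalized De Morgan's laws; a toy check with finite sets (illustration only — the finite sets verify the set identities themselves, not the topology):

```python
# A small universe M and a finite collection of subsets
M = set(range(10))
A = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}]

def complement(S):
    return M - S

union = set().union(*A)
intersection = set.intersection(*A)

# (A_1 u ... u A_n)^c = A_1^c n ... n A_n^c
assert complement(union) == set.intersection(*[complement(Ai) for Ai in A])
# (A_1 n ... n A_n)^c = A_1^c u ... u A_n^c
assert complement(intersection) == set().union(*[complement(Ai) for Ai in A])
```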
The whole problem is really about knowing what the words mean. An event is a time and a place together as a single object. For instance, the event where a light sends its first, last, or only pulse; or the event where a beam or particle touches something and bounces. Anything you can describe with a time and place together is an event. Different people that are either using different coordinates or are moving might assign a different collection of four numbers for the $(t,x,y,z)$ describing the time and the spatial coordinates, but it will be the same event. Just like two people using spatial coordinates can refer to the same point in space with different triples of numbers if they use different coordinate systems. An observation is when you see things and measure things and then (if necessary) do a computation to find out the numerical values of the event, i.e. find the $(t,x,y,z)$ of the event. The word simultaneous means two events that happen at the same time, and this depends on frame: it means that in a frame, the observation of each event yields the same $t$ value. Practice reading that last sentence with all the correct meanings and you are almost there. A proper length or time is the most extreme version that can be measured in any frame. For a length (proper length) it is the longest length anyone measures; you observe a moving thing to be shorter than does the person at rest relative to it. For a duration (proper time) it is the shortest duration anyone measures; you observe a moving thing to age more slowly than does the person at rest relative to it. More mathematically, for two events you can compute $\sqrt{(\Delta x)^2+(\Delta y)^2+(\Delta z)^2-(c\Delta t)^2}$ and you get proper length, or you can compute $\frac{1}{c}\sqrt{(c\Delta t)^2-(\Delta x)^2-(\Delta y)^2-(\Delta z)^2}$ and you get proper time, whichever one gives a real (not imaginary) result.
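The claim that all frames agree on these quantities can be spot-checked numerically: boost a pair of events along $x$ and recompute the interval (a hedged sketch in units where $c=1$; the events and speeds are arbitrary):

```python
import math

def boost(event, v):
    """Lorentz boost along x with speed v (units with c = 1). event = (t, x, y, z)."""
    t, x, y, z = event
    g = 1.0 / math.sqrt(1.0 - v * v)
    return (g * (t - v * x), g * (x - v * t), y, z)

def interval(e1, e2):
    """Squared spacetime interval (Dx^2 + Dy^2 + Dz^2 - Dt^2), c = 1."""
    dt, dx, dy, dz = (b - a for a, b in zip(e1, e2))
    return dx * dx + dy * dy + dz * dz - dt * dt

e1, e2 = (0.0, 0.0, 0.0, 0.0), (1.0, 2.0, 0.5, -0.3)

for v in (0.0, 0.3, 0.6, 0.9):
    f1, f2 = boost(e1, v), boost(e2, v)
    # every inertial frame computes the same interval for the same two events
    assert abs(interval(f1, f2) - interval(e1, e2)) < 1e-9
```

Since the interval here is positive, its square root is the proper length between these two (spacelike-separated) events, and every boosted frame agrees on it.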
And everyone agrees on these values, so unlike simultaneity, proper time and proper length are things that different frames agree on. There is something else that different frames agree on: if something (even light) moves at speed $c$ in one (inertial) frame then it moves at speed $c$ in every other (inertial) frame. So that's SR in a nutshell, at least the vocabulary part. So let's assume that you and the train both move inertially and that you think the train is moving at speed $v$ (and the train thinks you are moving at speed $v$). Now the person on the ground thinks (observes, i.e. computes) the train has length $l<L$, and you want to consider two events that you "observe" to be simultaneous. How can you observe two events to be simultaneous? You'd have to compute when they happen. So what if there was a light source on the train that made a pulse. That would be an event, so in your frame that has a time it happened and a location. Since it is the first event, let's label the event with coordinates $(t_1,x_1,0,0)$. After an amount of time $\Delta t$ that pulse has expanded to a sphere of radius $c\Delta t$, and when that sphere finally touches the front of the train that is an event; when the sphere finally touches the back of the train that is an event too. Events are just times and locations, and we know where the train is at all times, so we know where the front and back of the train are at all times. So if after time $\Delta t$ the sphere finally gets to the front, it traveled a distance $c\Delta t$ and hits it at time $t_2=t_1+\Delta t$. Similarly, if after time $\Delta t$ the sphere finally gets to the back, it traveled a distance $c\Delta t$ and hits it at time $t_2=t_1+\Delta t$. And I used the same time $\Delta t$ because I wanted the event $(t_2,x_2,0,0)$ when-where the light hits the front to be simultaneous with the event $(t_2,x_3,0,0)$ when-where the light hits the back. Why?
Because I just wanted to consider two events, located at the back and front respectively, that the person on the ground says happened at the same time. In fact it could be a hypothetical light flash that started in a definite but hypothetical place. Really, you can imagine two simultaneous (to the ground person) events and then imagine a hypothetical pulse that would have hit those two locations at that one time. I did this because I wanted to be clear about which two events I was talking about, and it is clearest to refer to those two events by referring to a single event. In fact we know the two events; they are $(t_1+\Delta t,x_1+c\Delta t,0,0)$ and $(t_1+\Delta t,x_1-c\Delta t,0,0)$. So really we just picked any two events that were located at the ends of the train and were simultaneous to the person on the ground, and then we imagined a hypothetical light blast that would have hit those two locations at that common time. So we have one event (the light going off) that is clearly linked to the two events (the light hitting the front and back, simultaneously according to the person on the ground). And since events are really just the time-place when-where it happened, these events (the when-where) exist even if there was no light, even if the light was hypothetical. OK, so we know how to relate the events that are simultaneous to the person on the ground to a (potentially hypothetical) event where light flashes. But where does it happen? Let's say the train has meter and cm and mm marks painted on the floor. The person on the ground doesn't agree that they are calibrated correctly. But they do agree that they are equally spaced, and hence they can be used to mark out what percentage of the way across you are. So what if the light were placed in the center, would that work? By the time the light gets to where the front started, the front has since moved on.
But by that time the light has also gotten to where the back used to be, which has already moved on too; that means if we placed it there in the center, the light hit the back before it hit the front, so that wasn't the place to put it so that it hits the two ends at the same time. However, that was the right amount of time $\Delta t$ to wait, because that's when the expanding sphere was long enough to reach from one end of where the train used to be to the other end of where the train used to be. So it has a diameter that is the size of the train; it was just placed in the wrong spot. If you move the location of the light over by $v\Delta t$ towards the front of the train, then we can ask where it ended up after time $\Delta t$: it is a sphere of the right length whose front end is $v\Delta t$ in front of where the front of the train was originally. So that edge is right where the train's front is right then. That's the perfect spot to be touching both ends. So if the train has length $l<L$ then $c\Delta t =l/2$, because that is how long it takes for the expanding sphere to get long enough. So the location to start at isn't the center; it is $v\Delta t=v\,l/(2c)$ towards the front compared to the middle. So it is $(l/2)+(vl/(2c))$ away from the back, i.e. $l\left(\frac{c}{2c}+\frac{v}{2c}\right)=l\,\frac{c+v}{2c}$ from the back. Which is a fraction $(c+v)/(2c)$ of the way from the back towards the front. And the two observers disagree about how far from the back this magic place is, but they do agree about what percentage of the way it is. So now we are done with the observer on the ground. From here on out we are going to work with the train observer. Let's say you asked the guy on the ground where he wants to place a light, and he says to place it the fraction $(c+v)/(2c)$ of the way from the back towards the front (and he tells you which end is which, since you don't feel the train moving), with it closer to the front.
Then you see that magic location as $L(c+v)/(2c)$ from the back, hence $L-L(c+v)/(2c)=L(c-v)/(2c)$ from the front. So that's where that came from in your text. All that work was just to make it super crystal clear that there is one event (the light flashing) that clearly identifies two other events (the event of the light hitting the front and the event of the light hitting the back), and we've been asked what the time readings on the clocks at those events are. Now, on the train you placed two clocks, one at the front and one at the back. You think the train is at rest, and you think your clocks are synchronized and read the correct time. So to find out the readings of the clocks when the light strikes them, we just need to know the times when the light strikes them. So if the first event (light going off) happens at $(t_3,L(c+v)/(2c),0,0)$ (I had to use a different time because it is a different frame; same event but different frame, just like the same vector has different components in different coordinate systems), then the event of the light hitting the back happens $\frac{L(c+v)/(2c)}{c}$ seconds later at $t=t_3+\frac{L(c+v)/(2c)}{c}$, whereas the event of the light hitting the front happens $\frac{L(c-v)/(2c)}{c}$ seconds later at $t=t_3+\frac{L(c-v)/(2c)}{c}$. So the front was hit first (the light was closer to the front), and so the clock in the front reads $t_3+\frac{L(c-v)/(2c)}{c}$ and the clock at the back reads $t_3+\frac{L(c+v)/(2c)}{c}$, which is ahead of the one in the front by $\left(t_3+\frac{L(c+v)/(2c)}{c}\right)-\left(t_3+\frac{L(c-v)/(2c)}{c}\right)=Lv/c^2.$ Time to get to your questions. Why is it legitimate to use a light source to represent you seeing the two clocks? We used the one event of the light flash to clearly indicate which two events (the light hitting the ends) we were trying to talk about. Later you can use coordinates and Lorentz transformations to compute the same things with much less effort.
Everyone would agree that there is a time and place a light goes off. And everyone agrees there is a time and a place when-where that light reaches the front, and a time and a place when-where that light reaches the back. They just might not agree on those times, or places, or even about whether the times are the same. But everyone clearly agrees on the events. In a sense the events are labels for a region of spacetime (a 4d collection, like space is a 3d collection); people will disagree about what numbers to assign to them, but they will agree that this is the event where that did or could have happened, and so forth. How is it justified that the time difference between light hitting the ends in the train's frame is indeed the difference you see in your frame? That's why I had to write the whole answer. We literally computed what the clocks read at those two events. If the train had glass walls and traveled through a dark universe and that light went off, then those two clock readings would literally be what your eyeballs see from the clocks, since the only time you see a clock is when the pulse of light hits it. You can't see in the dark, so those two events would be the only times you saw the clocks. Now, if by "see" you mean in your frame: don't think you see those images at the same time. You don't see those clocks until the light gets from them to you, but I set it up so that after you collect the data you will compute that you saw the clock readings from the same time. We worked hard to place the light at exactly the right spot so the pulse from the light would hit the clocks on the train at a time that you (on the ground) compute to be simultaneous. So the answer here is all the effort I went to to find that magic spot, the spot $v\Delta t$ from the center of where the train was originally.
That effort was so that those two events are things you think happened at the same time, you think both events happened $\Delta t$ after the one event of the light going off.
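Plugging in concrete numbers reproduces the whole construction (a sketch with $c=1$ and arbitrarily chosen $L$ and $v$; the flash sits the fraction $(c+v)/(2c)$ of the way from the back):

```python
c = 1.0
L = 10.0     # proper length of the train (train frame)
v = 0.6      # relative speed as a fraction of c

# Flash position in the train frame, measured from the back
x_flash = L * (c + v) / (2 * c)

# Train-frame light travel times to the back and front clocks
t_back = x_flash / c             # distance to the back is x_flash
t_front = (L - x_flash) / c      # distance to the front is L - x_flash

# The front clock is struck first, and the back clock reads ahead by L*v/c^2
assert t_front < t_back
assert abs((t_back - t_front) - L * v / c**2) < 1e-12
```

With these numbers the back clock is ahead of the front clock by $Lv/c^2 = 6$ units of time, the standard "rear clock ahead" result.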
Criterion for a Set to be Closed in a Metric Subspace Recall from the Criterion for a Set to be Open in a Metric Subspace page that if $(M, d)$ is a metric space and $S \subseteq M$ such that $(S, d)$ is a metric subspace of $(M, d)$, then a set $X$ in the metric subspace, i.e., $X \subseteq S$, is open in $S$ if and only if there exists an open set $A$ in $M$ such that $X = A \cap S$. We will now look at an analogous theorem which shows that a set $Y \subseteq S$ is closed in $S$ if and only if there exists a closed set $B$ in $M$ such that $Y = B \cap S$. Theorem 1: Let $(M, d)$ be a metric space and let $(S, d)$ be a metric subspace of $(M, d)$. A set $Y \subseteq S$ is closed in $S$ if and only if there exists a closed set $B$ in $M$ such that $Y = B \cap S$. Proof: $\Rightarrow$ Suppose that $Y$ is closed in $S$. Then $S \setminus Y$ is open in $S$. Hence there exists an open set $A$ in $M$ such that $S \setminus Y = A \cap S$. Taking the complement (relative to $S$) of both sides gives us $Y = S \setminus (A \cap S) = (M \setminus A) \cap S$. Let $B = M \setminus A$. Then $B$ is a closed set in $M$ and $Y = B \cap S$. $\Leftarrow$ Suppose that $Y = B \cap S$ for some closed set $B$ in $M$. Since $B$ is closed in $M$ there exists an open set $A$ in $M$ such that $B = M \setminus A$. So $Y = (M \setminus A) \cap S = S \setminus (A \cap S)$. But $A \cap S$ is open in $S$, so $S \setminus (A \cap S)$ is closed in $S$. Thus $Y$ is closed in $S$. $\blacksquare$
The Annals of Statistics Ann. Statist. Volume 5, Number 5 (1977), 842-865. Estimation for Autoregressive Moving Average Models in the Time and Frequency Domains Abstract The autoregressive moving average model is a stationary stochastic process $\{y_t\}$ satisfying $\sum^p_{k=0} \beta_ky_{t-k} = \sum^q_{g=0} \alpha_g v_{t-g}$, where the (unobservable) process $\{v_t\}$ consists of independently identically distributed random variables. The coefficients in this equation and the variance of $v_t$ are to be estimated from an observed sequence $y_1, \cdots, y_T$. To apply the method of maximum likelihood under normality the model is modified (i) by setting $y_0 = y_{-1} = \cdots = y_{1-p} = 0$ and $v_0 = v_{-1} = \cdots = v_{1-q} = 0$ and alternatively (ii) by setting $y_0 \equiv y_T, \cdots, y_{1-p} \equiv y_{T+1-p}$ and $v_0 \equiv v_T, \cdots, v_{1-q} \equiv v_{T+1-q}$; the former lead to procedures in the time domain and the latter to procedures in the frequency domain. Matrix methods are used for a unified development of the Newton-Raphson and scoring iterative procedures; most of the procedures have been obtained previously by different methods. Estimation of the covariances of the moving average part is also treated. Article information Source Ann. Statist., Volume 5, Number 5 (1977), 842-865. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176343942 Digital Object Identifier doi:10.1214/aos/1176343942 Mathematical Reviews number (MathSciNet) MR448762 Zentralblatt MATH identifier 0368.62075 JSTOR links.jstor.org Subjects Primary: 62M10: Time series, auto-correlation, regression, etc. [See also 91B84] Secondary: 62H99: None of the above, but in this section Citation Anderson, T. W. Estimation for Autoregressive Moving Average Models in the Time and Frequency Domains. Ann. Statist. 5 (1977), no. 5, 842--865. doi:10.1214/aos/1176343942. https://projecteuclid.org/euclid.aos/1176343942
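Modification (i) — setting the pre-sample $y$'s and $v$'s to zero — yields the conditional sum-of-squares objective that the time-domain procedures minimize. Below is a hedged sketch for an ARMA(1,1) on simulated data; a generic simplex optimizer stands in for the paper's Newton-Raphson and scoring iterations, and all parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 2000
b1_true, a1_true = -0.6, 0.4            # y_t - 0.6 y_{t-1} = v_t + 0.4 v_{t-1}

# Simulate the model with beta_0 = alpha_0 = 1
v = rng.normal(size=T)
y = np.zeros(T)
for t in range(T):
    y[t] = -b1_true * (y[t - 1] if t else 0.0) + v[t] + a1_true * (v[t - 1] if t else 0.0)

def css(params, y):
    """Conditional sum of squares with y_0 = v_0 = 0 (modification (i))."""
    beta1, alpha1 = params
    e = np.zeros_like(y)
    for t in range(len(y)):
        # innovations from v_t = y_t + beta_1 y_{t-1} - alpha_1 v_{t-1}
        e[t] = (y[t] + beta1 * (y[t - 1] if t else 0.0)
                     - alpha1 * (e[t - 1] if t else 0.0))
    return float(np.sum(e * e))

res = minimize(css, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
beta1_hat, alpha1_hat = res.x
sigma2_hat = css(res.x, y) / T          # innovation variance estimate
```

With $T = 2000$ the estimates should land close to the true $(-0.6, 0.4)$ and an innovation variance near one.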
Consider a spherical triangle with 3 angles: $\pi/2$, $\pi/3$, $\pi/3$. All sides are geodesics of course. The sphere has radius $r=1$. Context: I want to know whether a given point in polar coordinates ($\phi$, $\theta$, $r$) where $r=1$ is in my triangle. (My convention for polar coordinates: $\phi$ is the azimuthal angle of rotation around the north pole and $\theta$ is the polar angle measured from it. $\theta=0$ is the north pole; $\theta=\pi$ is the south pole.) I will place the right angle at the north pole. One side of my triangle will be at $0$ longitude, and one will be at $\pi/2$ longitude. So clearly one condition must be: $0<\phi<\pi/2$. But I can't figure out the condition that determines whether I am north of the geodesic that forms the base/hypotenuse of the triangle. The condition would be in terms of $\theta$, which would in turn be a function of $\phi$, but that's as far as I can get. Thank you in advance for any help you can provide.
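One standard way to get the missing condition (offered as an illustrative sketch, not the only approach): convert to Cartesian unit vectors and require the point to lie on the inner side of each edge's great-circle plane. For this triangle, the spherical law of cosines with angles $\pi/2, \pi/3, \pi/3$ puts the two base vertices at colatitude $\theta_0 = \arccos(1/\sqrt{3})$:

```python
import numpy as np

theta0 = np.arccos(1 / np.sqrt(3))        # colatitude of the two base vertices

def cart(phi, theta):
    """Unit vector from longitude phi and colatitude theta (theta = 0 is the north pole)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

A = cart(0.0, 0.0)                        # right-angle vertex at the north pole
B = cart(0.0, theta0)                     # base vertex at longitude 0
C = cart(np.pi / 2, theta0)               # base vertex at longitude pi/2

def inside(p, eps=1e-12):
    """p is inside iff it lies on the positive side of each edge's great-circle plane."""
    return all(np.dot(p, np.cross(u, v)) >= -eps
               for u, v in ((A, B), (B, C), (C, A)))

# A point near the middle of the triangle is inside; the south pole is not.
assert inside(cart(np.pi / 4, theta0 / 2))
assert not inside(cart(0.0, np.pi))
```

Expanding the base-edge test $p \cdot (B \times C) \geq 0$ in these polar coordinates gives exactly the condition the question asks for: $\cot\theta \geq (\cos\phi + \sin\phi)\cot\theta_0 = \frac{\cos\phi + \sin\phi}{\sqrt{2}}$, i.e. the boundary colatitude is a function of $\phi$, as anticipated.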
Yes, there are generalizations of Rainich conditions to many other fields, including scalar fields, perfect fluids and null electromagnetic fields. (Indeed! The original form of the Rainich conditions was not conveniently formulated for electromagnetic fields \(F\in \bigwedge ^2M\) both of whose Poincaré invariants vanish: \(F \wedge \star F = 0\) and \(F \wedge F = 0\). This even led some very good relativists to doubt that null electromagnetic fields could be present in electrovacuum solutions of the Einstein-Maxwell equations. Cf. for instance this paper by Louis Witten: "Geometry of Gravitation and Electromagnetism" here. Then Peres and Bonnor found some plane-fronted wave solutions to Einstein-Maxwell and showed that they were perfectly consistent.) A complete review of all these types of Rainich conditions can be found in: https://arxiv.org/abs/1308.2323 https://arxiv.org/abs/1503.06311 PS: the Peres and Bonnor solutions (which describe some interesting coupled systems of electromagnetic-gravitational waves) are here: http://journals.aps.org/pr/abstract/10.1103/PhysRev.118.1105 http://projecteuclid.org/euclid.cmp/1103841572 (free access)
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
In Big Rudin, Theorem 2.7, Rudin states that if there is a compact set $K\subset U$ where $U$ is an open set in a locally compact Hausdorff space $X$, then there is an open set $V$ with compact closure such that: $K\subset V\subset \overline{V}\subset U$. In Theorem 2.13, he states that if we have $K\subset V_1\cup\ldots\cup V_n$ where $V_1,\ldots,V_n$ are open, then by Theorem 2.7, every $x\in K$ has a neighborhood $W_x$ with compact closure $\overline{W_x}\subset V_i$ for some $i$ depending on $x$. I don't see why this is true to be honest. All that Theorem 2.7 tells us is that $\overline{W_x}\subset V_1\cup\ldots\cup V_n$. What did I miss?
The basic Weyl ordering property generating all the Weyl ordering identities for polynomial functions is: $((sq+tp)^n)_W = (sQ+tP)^n$ Here $(q, p)$ are the commuting phase space variables and $(Q, P)$ are the corresponding noncommuting operators (satisfying $[Q,P] = i\hbar$). For example, for $n = 2$, the identity coming from the coefficient of the $st$ term is the known basic Weyl ordering identity: $(qp)_W = \frac{1}{2}(QP+PQ)$ By choosing the classical Hamiltonian as $h(p,q) = (sq+tp)^n$ and carefully performing the Fourier and inverse Fourier transforms, we obtain the Weyl identity: $\int {dx\over2\pi}{dk\over2\pi} e^{ixP + ikQ} \int dpdq\, e^{-ixp-ikq} (sq+tp)^n =(sQ+tP)^n$ The Fourier integral can be solved after the change of variables $l = sq+tp$, $m = tq-sp$ and using the identity $\int dl\, e^{-iul}\, l^n = 2\pi\, i^n \frac{\partial^n}{\partial u^n} \delta_D(u)$ where $\delta_D$ is the Dirac delta function.
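The $n=2$ case can be spot-checked with random matrices standing in for the noncommuting operators (illustrative only; generic matrices play the role of $Q$ and $P$, not the actual Heisenberg pair): expanding $(sQ+tP)^2$ must produce the symmetrized product $QP+PQ = 2(qp)_W$ as the $st$ coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 4))
P = rng.normal(size=(4, 4))          # generic noncommuting stand-ins

for s, t in [(1.0, 2.0), (-0.5, 3.0), (2.5, -1.5)]:
    lhs = (s * Q + t * P) @ (s * Q + t * P)
    # the st coefficient is QP + PQ, i.e. twice the Weyl-ordered (qp)_W
    rhs = s**2 * Q @ Q + s * t * (Q @ P + P @ Q) + t**2 * P @ P
    assert np.allclose(lhs, rhs)

# the check is nontrivial: Q and P do not commute
assert not np.allclose(Q @ P, P @ Q)
```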
Children usually ask questions like "How many hours have passed?" And they have no idea about the start time to be taken as a reference. Just like the zero of a measuring tape, a zero reference for time plays a crucial role in analyzing signal behaviour in the time and frequency domains. Until now, we assumed that reference time $0$ coincides with the start of a sine and a cosine wave to understand the frequency domain. Later, we will deal with the symbol timing synchronization problem in single-carrier systems and the carrier frequency synchronization problem in multicarrier systems, both of which address the problem of finding this reference to prevent signal distortion. To solve those problems, we study signals with arbitrary time shifts and want to know their effects on frequency domain behaviour. To begin with, let us shift a cosine signal by $T/4$ to the right, where $T=1/F$ is its time period. \begin{align*} \cos2\pi F\left(t-\frac{T}{4}\right) &= \cos \left(2\pi Ft -2\pi F\frac{T}{4}\right) = \cos \left(2\pi Ft - \frac{\pi}{2}\right) = \sin 2\pi Ft \end{align*} where a time shift of $-T/4$ is seen to impart a phase shift of $-\pi/2$ in the frequency domain. This makes sense because if a full period $T$ of the two complex sinusoids summing up in the time domain to produce a cosine spans $360^\circ$, then a shift of $-1/4$ of $360^{\circ}$ is $-90^{\circ}$. This is illustrated in Figure below. This is why you often see a phase description for a real cosine wave as in Figure below. Taken at the reference times, these phases are due to the orientation of the two complex sinusoids in the frequency $IQ$-plane and present a more interesting view (although simple trigonometric formulas like $\cos 0=1$, $\cos \pi/2=0$ can also be applied to see this phase relationship). Now evaluate a discrete-time complex sinusoid for a general time shift of $n_0$ samples. Since this is in the context of the DFT, the shift here is circular even if not explicitly stated.
\begin{align*} \cos 2\pi \frac{k}{N} \left(n + n_0\right) = \cos \left( 2\pi \frac{k}{N} n + 2\pi \frac{k}{N} n_0 \right) \\ \sin 2\pi \frac{k}{N} \left(n + n_0\right) = \sin \left( 2\pi \frac{k}{N} n + 2\pi \frac{k}{N} n_0 \right) \end{align*} There are two ways to look at this expression. Effect of Time Shift on DFT – Magnitude and Phase This complex sinusoid can be seen to undergo a phase shift equal to $2\pi(k/N)n_0$. Since the DFT of a signal is just a combination of $N$ such complex sinusoids with distinct values of $k/N$, the phase of the DFT at each $k$ becomes a linear function of discrete frequency $k/N$. \begin{equation*} \text{phase shift} = 2\pi \frac{k}{N}n_0 = \text{constant}\cdot \frac{k}{N} \end{equation*} where the constant, i.e., the slope, depends on the circular time shift $n_0$. Also, the bigger the index $k$, the larger the phase shift. Such a phase plot is drawn in the Figure below for a positive $n_0$. In conclusion, the DFT of $\tilde s[n]$ $=$ $s[(n+n_0) \:\text{mod}\: N]$ has its magnitude unchanged. However, its phase is rotated by $2\pi (k/N) n_0$ for each $k$. We denote the rotated DFT by $\widetilde{S}[k]$. \begin{equation}\label{eqIntroductionDFTShiftMagnitudePhase} \begin{aligned} |\widetilde{S}[k]| &= |S[k]| \\ \measuredangle \widetilde{S}[k] &= \measuredangle S[k] + 2\pi \frac{k}{N} n_0 \end{aligned} \end{equation} We can summarize the above result as \begin{equation*} \text{Time shift} \quad s[(n\pm n_0) \:\text{mod}\: N] \quad \rightarrow\quad \pm 2\pi \frac{k}{N} n_0 \quad \text{Phase shift} \end{equation*} This is the view mostly taken when describing the effect of a time shift, and it serves the mathematical purpose well. In my opinion, however, the effect of a time shift can be better visualized through the $IQ$ plots that follow. Effect of Time Shift on DFT – $I$ and $Q$ A more interesting view comes from focusing on the $I$ and $Q$ samples of the frequency domain. Let us shift an impulse signal $\delta[n]$ by $n_0$.
Plugging $s[n]$ $=$ $s_I[n]$ $=$ $\delta[(n+n_0) \:\text{mod}\: N]$ into the DFT definition and using the fact that it is $1$ at $n=(-n_0) \:\text{mod}\: N$ and $0$ everywhere else, the sums collapse to a single term: \begin{align*} S_I[k]\: &= \cos 2\pi\frac{k}{N}(-n_0) = \cos 2\pi\frac{k}{N}n_0\\ S_Q[k] &= - \sin 2\pi\frac{k}{N}(-n_0) = \sin 2\pi\frac{k}{N}n_0 \end{align*} for each $k$, which represents a frequency domain complex sinusoid with unit amplitude! Let us explore it with the help of an example. The Figure below shows a unit impulse signal $s[n]$ and its DFT $S[k]$ along with its circularly time shifted version $s[(n-1) \:\text{mod}\: 5]$ and its DFT $\widetilde{S}[k]$. This DFT is computed here. Note that the phase shift is different for each frequency bin $k$. To be exact, $\Delta \theta (k)$ $=$ $2\pi (k/N) \cdot (-1)$. For $k = -2$ to $k = 2$ and $N = 5$, it turns out to be \begin{align*} k &= ~~~0 \quad \rightarrow \quad \Delta \theta (0) ~~\:= 2\pi \frac{0}{5} \cdot (-1) \times \frac{180^\circ}{\pi} = 0^\circ \\ k &= \pm 1 \quad \rightarrow \quad \Delta \theta (\pm 1) = 2\pi \frac{\pm 1}{5} \cdot (-1) \times \frac{180^\circ}{\pi} = \mp 72^\circ \\ k &= \pm 2 \quad \rightarrow \quad \Delta \theta (\pm 2) = 2\pi \frac{\pm 2}{5} \cdot (-1) \times \frac{180^\circ}{\pi} = \mp 144^\circ \end{align*} The phase rotations of $72^\circ$ and $144^\circ$ are illustrated in the figure. Also observe that for a right shift, the phase rotations are clockwise for positive $k$ and anticlockwise for negative $k$. To understand this frequency domain complex sinusoid, recall that a time domain complex sinusoid rotating at frequency $F$ was shown to generate a helix in the time $IQ$-plane. Such a complex sinusoid has a frequency domain representation that consists of a single impulse at $+F$. This is how we defined each `tick’ on the frequency axis.
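The $N = 5$ example can be reproduced numerically; the sketch below uses NumPy's FFT, which follows the same $e^{-j2\pi kn/N}$ analysis convention assumed here.

```python
import numpy as np

N = 5
delta = np.zeros(N)
delta[0] = 1.0

# Right circular shift by one sample: s[(n-1) mod N].
shifted = np.roll(delta, 1)

S = np.fft.fft(shifted)

# Magnitude is unchanged (all ones for a unit impulse) ...
assert np.allclose(np.abs(S), 1.0)

# ... while the phase is linear in k: -72 deg at k = 1, -144 deg at k = 2,
# i.e. clockwise rotations for positive k under a right shift.
assert np.isclose(np.degrees(np.angle(S[1])), -72.0)
assert np.isclose(np.degrees(np.angle(S[2])), -144.0)
```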
\begin{equation*} \text{Time domain complex sinusoid} \quad \leftarrow\rightarrow \quad \text{Frequency domain impulse} \end{equation*} Since time and frequency are duals of each other, the time domain impulse in this example drawn in the Figure above must have a corresponding frequency domain complex sinusoid. \begin{equation}\label{eqIntroductionTimeImpulseFreqHelix} \text{Time domain impulse} \quad \leftarrow\rightarrow \quad \text{Frequency domain complex sinusoid} \end{equation} And that is exactly what a phase shift of $2\pi (k/N) n_0$ represents as a function of $k$. This is shown for $n_0=-1$ in the Figure below, a redrawn version of the bottom right of the Figure above. The important point here is that the frequency index $k$ is the variable of this frequency domain complex sinusoid, while the time shift $n_0$ determines its inverse period, playing a role similar to the frequency $F$ in the expression $2\pi Ft$. \begin{equation*} \text{phase shift} = 2\pi (n_0) \frac{k}{N} = 2\pi \big(\text{inverse period}\big) \frac{k}{N} \end{equation*} Now let us apply the time shift to a time domain complex sinusoid.
Using the identities $\cos (A+B)$ $=$ $\cos A \cos B$ $-$ $\sin A \sin B$ and $\sin (A+B)$ $=$ $\sin A \cos B$ $+$ $\cos A \sin B$, \begin{align*} \cos 2\pi \frac{k}{N} \left(n + n_0\right) &= \cos 2\pi \frac{k}{N} n \cdot \cos 2\pi \frac{k}{N} n_0 - \\ &\hspace{1.4in}\sin 2\pi \frac{k}{N} n \cdot \sin 2\pi \frac{k}{N} n_0 \\ \sin 2\pi \frac{k}{N} \left(n + n_0\right) &= \sin 2\pi \frac{k}{N} n \cdot \cos 2\pi \frac{k}{N} n_0 + \\ &\hspace{1.4in}\cos 2\pi \frac{k}{N} n \cdot \sin 2\pi \frac{k}{N} n_0 \end{align*} From the multiplication rule of complex numbers, $I \cdot I$ $-$ $Q \cdot Q$ and $Q \cdot I$ $+$ $I \cdot Q$, we can see that in the frequency domain our original complex sinusoid is again being multiplied with another frequency domain complex sinusoid given by \begin{align*} V_I[n]\: &= \cos 2\pi\frac{k}{N}n_0\\ V_Q[n] &= \sin 2\pi \frac{k}{N}n_0 \end{align*} Finally, what happens when $N$ analysis frequencies are present (the complex sinusoids that form the DFT)? Extending the same concept proved above and noting that a DFT is a combination of $N$ complex sinusoids, the DFT of $\tilde s[n]$ $=$ $s[(n+n_0) \:\text{mod}\: N]$ is given by the following expression, which is also straightforward to prove through the DFT definition. \begin{align} \widetilde{S}_I[k]\: &= S_I[k] \cos 2\pi \frac{k}{N} n_0 - S_Q[k] \sin 2\pi \frac{k}{N} n_0 \\ \widetilde{S}_Q[k] &= S_Q[k] \cos 2\pi \frac{k}{N} n_0 + S_I[k] \sin 2\pi \frac{k}{N} n_0 \end{align} i.e., the DFT is multiplied with a frequency domain complex sinusoid, as $n_0$ is fixed and $k$ varies (in terms of complex signals, this expression is written for a signal $s(t)$ as $s(t-t_0)$ $\rightarrow$ $S(F)\cdot \exp(-j2\pi Ft_0)$).
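The shift property is easy to confirm numerically for an arbitrary signal. A small sketch, assuming NumPy's FFT conventions, where `np.roll(s, -n0)` implements the left circular shift $s[(n+n_0) \:\text{mod}\: N]$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n0 = 8, 3
s = rng.standard_normal(N)

# DFT of the circularly shifted signal s[(n + n0) mod N].
S_tilde = np.fft.fft(np.roll(s, -n0))

# Same thing in the frequency domain: multiply S[k] by the complex
# sinusoid e^{+j 2 pi (k/N) n0}.
k = np.arange(N)
S_rot = np.fft.fft(s) * np.exp(2j * np.pi * k * n0 / N)

assert np.allclose(S_tilde, S_rot)
```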
This is the most important relation to understand for visualizing time and frequency domain transformations of a signal, and we summarize it as \begin{equation*} \text{Shift in one domain} \quad \rightarrow \quad \text{Multiplication by a complex sinusoid in the other domain} \end{equation*} The magnitude and direction of rotation for these frequencies are symbolically shown for a positive $n_0$ in the Figure below. Significance of Linear Phase A linear phase preserves the waveform shape, for the following reason. A simple delay preserves waveform integrity in the time domain. When the signal is composed of constituent sinusoids, all of them need to be delayed by the same time (and not by the same phase). Intuitively, if sinusoids with different frequencies get delayed by the same number of samples, then they naturally end up with different phases at the end of that common duration, as illustrated in the Figure below. However, each of those phases is the common delay multiplied by that frequency, and the overall sum of the sinusoids with different frequencies still remains the same. On the other hand, if the phase is non-linear, then the delay introduced in the waveform is not proportional to the frequency; different constituent sinusoids get delayed by varying amounts of time, which distorts the signal shape, a phenomenon commonly known as phase distortion. In wireless communications, the target is to maintain a linear phase of the signal for proper detection. True insight can only be developed by grasping the implications of phase rotations as well as time and frequency domain complex sinusoids. Next, we discuss a few examples of DFTs of some basic signals that will help not only in understanding the Fourier transform but also in comprehending the concepts discussed later.
I'm doing some classical field theory exercises with the Lagrangian $$\mathscr{L} = -\frac{1}{4}F_{\mu \nu}F^{\mu \nu}$$ where $F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$. To find the conjugate momenta $\pi^\mu_{\ \ \ \nu} = \partial \mathscr{L} / \partial(\partial_\mu A^\nu)$, I can use two methods. First method: directly apply this to $\mathscr{L}$. We get a factor of $2$ since there are two $F$'s, and another factor of $2$ since each $F$ contains two $\partial_\mu A_\nu$ terms, giving $$\pi^\mu_{\ \ \ \nu} = -F^\mu_{\ \ \ \nu}.$$ Second method: get $\mathscr{L}$ in terms of $A$ by expanding and integrating by parts, yielding $$\mathscr{L} = \frac{1}{2}(\partial_\mu A^\mu)^2 - \frac{1}{2}(\partial_\mu A^\nu)^2.$$ Differentiating this picks up factors of $2$ and gives $$\pi^\mu_{\ \ \ \nu} = \partial_\rho A^\rho \delta^\mu_\nu - \partial^\mu A_\nu.$$ These two answers are different! (They do give the same equations of motion, at least.) I guess that means doing the integration by parts changed the canonical momenta. Is this something I should be worried about? In particular, I have another exercise that wants me to show that one of the canonical momenta vanishes -- this isn't true for the ones I get from the second method! Plus, my stress-energy tensor is changed too. When a problem asks for "the" canonical momenta, am I forbidden from integrating by parts?
The Uniqueness of Limits of Sequences in Metric Spaces Recall from the Limits of Sequences in Metric Spaces page that if $(M, d)$ is a metric space then a sequence of elements from $M$ is of the form $(x_n)_{n=1}^{\infty}$ where $x_k \in M$ for all $k \in \{ 1, 2, ... \}$. We will now show that if the sequence $(x_n)_{n=1}^{\infty}$ is convergent then its limit $p \in M$ is unique. Theorem 1: Let $(M, d)$ be a metric space and let $(x_n)_{n=1}^{\infty}$ be a convergent sequence in $M$. Then the limit of $(x_n)_{n=1}^{\infty}$ is unique. Proof: Suppose that $\lim_{n \to \infty} x_n = p$ and $\lim_{n \to \infty} x_n = q$ for some $p, q \in M$. We want to show that $p = q$, which happens if and only if $d(p, q) = 0$. So, we will prove that $d(p, q) = 0$. Let $\epsilon > 0$ be given. Since $\lim_{n \to \infty} x_n = p$ we have $\lim_{n \to \infty} d(x_n, p) = 0$. So for $\epsilon_1 = \frac{\epsilon}{2} > 0$ there exists an $N_1 \in \mathbb{N}$ such that if $n \geq N_1$ then: \begin{align*} d(x_n, p) < \frac{\epsilon}{2}. \quad (*) \end{align*} Similarly, since $\lim_{n \to \infty} x_n = q$ we have $\lim_{n \to \infty} d(x_n, q) = 0$. So for $\epsilon_2 = \frac{\epsilon}{2} > 0$ there exists an $N_2 \in \mathbb{N}$ such that if $n \geq N_2$ then: \begin{align*} d(x_n, q) < \frac{\epsilon}{2}. \quad (**) \end{align*} Let $N = \max \{ N_1, N_2 \}$ so that $(*)$ and $(**)$ both hold for $n \geq N$, and consider $d(p, q)$. Since $d$ is a metric, the triangle inequality holds on $d$, and so for $n \geq N$ we have that: \begin{align*} d(p, q) \leq d(p, x_n) + d(x_n, q) < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. \end{align*} So for all $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that if $n \geq N$ we have that $d(p, q) < \epsilon$, which implies that $d(p, q) = 0$. So $p = q$. Hence if $(x_n)_{n=1}^{\infty}$ is convergent then its limit is unique. $\blacksquare$
Concentration of ground state solutions for quasilinear Schrödinger systems with critical exponents 1. School of Mathematical Sciences, Beijing Normal University, Beijing, 100875, China 2. Department of Mathematics, Tsinghua University, Beijing, 100084, China We consider the following quasilinear Schrödinger system in $ {\Bbb R}^N $: $$ \left\{\begin{array}{ll}-\Delta w+(\lambda a(x)+1)w-(\Delta|w|^2)w = \frac{p}{p+q}|w|^{p-2}w|z|^q+\frac{\alpha}{\alpha+\beta}|w|^{\alpha-2}w|z|^\beta,\\ -\Delta z+(\lambda b(x)+1)z-(\Delta|z|^2)z = \frac{q}{p+q}|w|^p|z|^{q-2}z+\frac{\beta}{\alpha+\beta}|w|^\alpha|z|^{\beta-2}z, \end{array}\right. $$ where $ \lambda>0 $, $ p>2, q>2, \alpha>2, \beta>2 $, $ 2\cdot(2^*-1) < p+q<2\cdot2^* $ and $ \alpha+ \beta = 2\cdot2^* $. As the parameter $ \lambda \to \infty $, ground state solutions concentrate on the set $ \Omega = \operatorname{int} \left\{a^{-1}(0)\right\}\cap \operatorname{int} \left\{b^{-1}(0)\right\} $. Mathematics Subject Classification: Primary: 35Q55; Secondary: 35J65. Citation: Yongpeng Chen, Yuxia Guo, Zhongwei Tang. Concentration of ground state solutions for quasilinear Schrödinger systems with critical exponents. Communications on Pure & Applied Analysis, 2019, 18 (5): 2693-2715. doi: 10.3934/cpaa.2019120
One of the properties of the Fourier Transform is that the derivative of a signal in the time domain gets translated to multiplication of the signal spectrum by $j2\pi f$ in the frequency domain. This property is usually derived as follows. For a signal $s(t)$ with Fourier Transform $S(f)$ \begin{equation*} s(t) = \int \limits _{-\infty}^{+\infty} S(f) e^{j2\pi ft}df, \end{equation*} we have \begin{align*} \frac{d}{dt} s(t) &= \int \limits_{-\infty}^{+\infty}S(f)\, j2\pi f\, e^{j2\pi ft}df\\ &= \int \limits_{-\infty}^{+\infty}\bigg\{j2\pi f S(f)\bigg\} e^{j2\pi ft}df \end{align*} which is the inverse Fourier Transform of $j2\pi f S(f)$. Now we want to understand this relation one level deeper, i.e., what is the reason behind the factor $j2\pi f$? There are two parts to this expression: one is $j$ and the other is $2\pi f$. We start with $2\pi f$. Notice from the definition of the Fourier Transform that this operation decomposes a signal into a continuum of complex sinusoids with frequencies ranging from $-\infty$ to $+\infty$. This is shown in Figure 1 below. Figure 1: Three complex sinusoids and their decomposition into sines and cosines By Euler’s formula, \begin{equation*} e^{j2\pi ft} = \cos 2\pi ft + j\sin 2\pi ft \end{equation*} Naturally, the higher the frequency, the steeper the slope and hence the larger the derivative. After all, a derivative is nothing but the slope of the line tangent to the curve at a point. This is where the factor $2\pi f$ comes from (simply put, the derivative of $\cos 2\pi ft$ is $-2\pi f\cdot \sin 2\pi ft$). The term $j$ is more interesting. The derivative of $\cos 2\pi ft$ is $-2\pi f\cdot \sin 2\pi ft$ while that of $\sin 2\pi ft$ is $2\pi f\cdot \cos 2\pi ft$.
So from Euler’s formula and using $j^2=-1$, \begin{align*} \frac{d}{dt} e^{j2\pi ft} &= 2\pi f\bigg\{-\sin 2\pi ft + j\cos 2\pi ft \bigg\}\\ &= 2\pi f\cdot j\bigg\{j\sin 2\pi ft + \cos 2\pi ft \bigg\}\\ &= j 2\pi f\bigg\{\cos 2\pi ft + j\sin 2\pi ft\bigg\} \end{align*} Remembering that $j=e^{j\pi/2}$, the factor $j$ is therefore necessary to rotate $\cos$ and $-\sin$ by their corresponding angles so that we get our basis signals $e^{j2\pi ft}$ back. The net result is the same spectrum $S(f)$ at the output, multiplied by $j2\pi f$.
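The derivative property can be checked numerically by spectral differentiation of one period of a sampled sinusoid. A small sketch using NumPy's FFT (exact here because the signal is band-limited):

```python
import numpy as np

# Sample one period of s(t) = sin(2*pi*t); its derivative is 2*pi*cos(2*pi*t).
N = 64
t = np.arange(N) / N                      # sample spacing 1/N over [0, 1)
s = np.sin(2 * np.pi * t)

# Multiply the spectrum by j*2*pi*f and transform back.
f = np.fft.fftfreq(N, d=1/N)              # frequencies in Hz
ds_spectral = np.fft.ifft(2j * np.pi * f * np.fft.fft(s)).real

assert np.allclose(ds_spectral, 2 * np.pi * np.cos(2 * np.pi * t))
```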
(a) Prove that every finitely generated subgroup of $(\Q, +)$ is cyclic. Let $G$ be a finitely generated subgroup of $(\Q, +)$ and let $r_1, \dots, r_n$ be nonzero generators of $G$. Let us express\[r_i=\frac{a_i}{b_i},\]where $a_i, b_i$ are integers. Let\[s:=\frac{1}{\prod_{j=1}^n b_j} \in \Q.\]Then we can write each $r_i$ as\[r_i=\frac{a_i}{b_i}=\left(\, a_i\prod_{\substack{j=1\\j\neq i}}^n b_j \,\right)\cdot s.\] It follows from the last expression that each element $r_i$ is contained in the subgroup $\langle s \rangle$ generated by the element $s$. Hence $G$ is a subgroup of $\langle s \rangle$. Since every subgroup of a cyclic group is cyclic, we conclude that $G$ is also cyclic. (b) Prove that $\Q$ and $\Q \times \Q$ are not isomorphic as groups. Seeking a contradiction, assume that $\Q$ is isomorphic to the direct product $\Q \times \Q$:\[\Q\cong \Q\times \Q.\] Then consider the subgroup $\Z\times \Z$ of $\Q\times \Q$. We claim that the subgroup $\Z\times \Z$ is not cyclic. If it were cyclic, then there would be a generator $(a,b)\in \Z\times \Z$. However, for example, the element $(b, -a)$ cannot be expressed as an integer multiple of $(a, b)$. To see this, suppose that\[n(a,b)=(b,-a)\]for some integer $n$. Then we have $na=b$ and $nb=-a$. Substituting the first equality into the second one, we obtain\[n^2a=-a.\]If $a\neq 0$, then this yields $n^2=-1$, which is impossible, and hence $a=0$. Then $na=b$ implies $b=0$ as well. However, $(a,b)=(0,0)$ is clearly not a generator of $\Z\times \Z$. Thus we have reached a contradiction, and $\Z\times \Z$ is a non-cyclic subgroup of $\Q\times \Q$. This implies via the isomorphism $\Q\cong \Q \times \Q$ that $\Q$ has a non-cyclic subgroup. We saw in part (a) that this is impossible. Therefore, $\Q$ is not isomorphic to $\Q\times \Q$.
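Part (a) can be illustrated computationally with Python's `fractions` module; the generators below are arbitrary example values, and the check confirms that each one is an integer multiple of the single generator $s$ from the proof.

```python
from fractions import Fraction
from math import prod

# Example generators r_i = a_i / b_i of a finitely generated subgroup of (Q, +).
gens = [Fraction(3, 4), Fraction(-5, 6), Fraction(7, 10)]

# The proof's single generator: s = 1 / (b_1 * ... * b_n).
s = Fraction(1, prod(r.denominator for r in gens))

# Each r_i is an integer multiple of s, so <gens> is contained in <s>.
for r in gens:
    assert (r / s).denominator == 1   # r / s is an integer
```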
Calculation of the Strings' Downforce on the Bridge: $T\small(kg_{F})\normalsize = T\small(lb_{F})\normalsize \times 0.45359$ $\cos\alpha + \cos\beta = 2\cos(\frac{\alpha + \beta}{2})\cos(\frac{\alpha - \beta}{2})$ $F_{d}\small(kg_{F})\normalsize = T\small(kg_{F})\normalsize \times 2\cos(\frac{\alpha + \beta}{2})\cos(\frac{\alpha - \beta}{2})$ $F_{d}\small(N)\normalsize = F_{d}\small(kg_{F})\normalsize \times 9.80665$ $T$ = String Tension $F_{d}$ = Downforce on the Bridge Among the many equations used to determine the downforce applied by the strings to the bridge, the equation above covers the more general case in which the string angles on the two sides of the bridge take different values. The picture shows the direction of the calculated force (along the central axis of the bridge); if a different direction is desired, a vector calculation is required. It is also clear that when the two angles are equal, the downforce reaches its maximum among all configurations with the same overall angle ($\alpha + \beta$), and the transverse forces cancel each other. Units of Force: $lb_{F}$ = Pound-Force $kg_{F}$ = Kilogram-Force ($1\ lb_{F} = 0.45359\ kg_{F}$) $N$ = Newton ($1\ kg_{F} = 9.80665\ N$) String Tensions in $lb_{F}$ ($kg_{F}$), according to: Violin Strings Review: E String / A String / D String / G String / Full Set Dominant: 17.2 (7.8017) / 12.1 (5.4884) / 9.1 (4.1277) / 9.9 (4.4905) / 48.3 (21.9084) Vision Solo: 17.8 (8.0739) / 12.1 (5.4884) / 9.9 (4.4905) / 10.1 (4.5813) / 49.9 (22.6341) Evah Prazzi Gold: 17.6 (7.9832) / 12.3 (5.5792) / 10.4 (4.7173) / 10.1 (4.5813) / 50.4 (22.8609) Pirastro: 17.2 (7.8017) / 12.8 (5.8060) / 10.6 (4.8081) / 11 (4.9895) / 51.6 (23.4052) Strings Downforce Calculator Tension can also be derived through this formula: $f = \frac{1}{2L}\sqrt[]{\frac{T}{\mu}}$ $f$ = Fundamental Frequency $L$ = Length of the Vibrating Part of the String $T$ = String Tension $\mu$ = Linear Density (Mass Per Unit Length)
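The conversion chain above can be wrapped in a small Python helper. This is a sketch: the function name and the example values (a full Dominant set at equal 60-degree angles) are illustrative assumptions, not figures from the text.

```python
import math

LBF_TO_KGF = 0.45359      # pound-force to kilogram-force
KGF_TO_N = 9.80665        # kilogram-force to newton

def downforce_kgf(tension_lbf, alpha_deg, beta_deg):
    """Downforce on the bridge in kgF for string angles alpha, beta (degrees)."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    t_kgf = tension_lbf * LBF_TO_KGF
    # F_d = T * (cos(alpha) + cos(beta)), written via the product identity.
    return t_kgf * 2 * math.cos((a + b) / 2) * math.cos((a - b) / 2)

# With alpha = beta the formula reduces to 2 * T * cos(alpha).
fd_kgf = downforce_kgf(48.3, 60, 60)
fd_newton = fd_kgf * KGF_TO_N
```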
Scattering in the weighted $ L^2 $-space for a 2D nonlinear Schrödinger equation with inhomogeneous exponential nonlinearity 1. Laboratoire Paul Painlevé UMR 8524, Université de Lille CNRS, 59655 Villeneuve d'Ascq Cedex, France 2. Laboratoire Paul Painlevé UMR 8524, Université de Lille CNRS, 59655 Villeneuve d'Ascq Cedex, France; Department of Mathematics, HCMC University of Pedagogy 3. Department of Mathematics, College of Science, Imam Abdulrahman Bin Faisal University, P. O. Box 1982, Dammam, Saudi Arabia 4. Basic & Applied Scientific Research Center, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, 31441, Dammam, Saudi Arabia We consider the Cauchy problem $$ i \partial_tu + \Delta u = |x|^{-b} \left({\rm e}^{\alpha|u|^2} - 1- \alpha |u|^2 \right) u, \quad u(0) = u_0, \quad x \in \mathbb{R}^2, $$ where $ 0<b<1 $ and $ \alpha = 2\pi(2-b) $, with initial data $ u_0 $ in the weighted space $ \Sigma(\mathbb{R}^2) = \{\,u\in H^1(\mathbb{R}^2) \ : \ |x|u\in L^2(\mathbb{R}^2)\,\} $. Scattering is obtained in $ \Sigma $, where the exponent $ \frac{2}{(1+b)(2-b)} $ enters the decay estimate. Keywords: Inhomogeneous nonlinear Schrödinger equation, virial identity, scattering, exponential nonlinearity, singular Moser-Trudinger inequality. Mathematics Subject Classification: Primary: 35L70, 35Q55, 35B40; Secondary: 35B33, 37K05, 37L50. Citation: Abdelwahab Bensouilah, Van Duong Dinh, Mohamed Majdoub. Scattering in the weighted $ L^2 $-space for a 2D nonlinear Schrödinger equation with inhomogeneous exponential nonlinearity. Communications on Pure & Applied Analysis, 2019, 18 (5): 2735-2755. doi: 10.3934/cpaa.2019122
Consider the following $2\times 2$ matrix $$ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}. $$ Let $\Delta = a_{11}a_{22}-a_{21}a_{12}$ be its determinant. Then $A$ has full rank iff $\Delta \ne 0.$ I noticed that it holds that $$ A \begin{pmatrix} a_{22} \\ -a_{21} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} a_{22} \\ -a_{21} \end{pmatrix} = \begin{pmatrix} \Delta \\ 0 \end{pmatrix}. $$ That is: a linear combination of the columns of $A$ with coefficients that, up to their signs, consist of the elements of the second row of $A$ yields a vector that consists of the determinant of $A$ and a zero. This immediately yields that $A$ is singular if $\Delta=0$: if all entries in the second row of $A$ are zero, $A$ is singular; otherwise we have found a nontrivial vector in the kernel. Question: Is there a similar intuition for the determinant of $n\times n$ matrices?
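As a quick sanity check, the identity can be verified numerically (the matrix below is a hypothetical example, not from the question):

```python
import numpy as np

# Hypothetical 2x2 example checking A @ (a22, -a21)^T = (det A, 0)^T.
A = np.array([[3.0, 5.0],
              [2.0, 7.0]])
a21, a22 = A[1]
v = np.array([a22, -a21])            # second row of A, signs swapped
result = A @ v
det = A[0, 0] * A[1, 1] - A[1, 0] * A[0, 1]
print(result, det)                   # -> [11.  0.] 11.0
```

For general $n$, the same observation is packaged by the adjugate identity $A\,\operatorname{adj}(A) = \Delta I$: the signed cofactors along a row give coefficients whose column combination produces $\Delta$ in one slot and zeros elsewhere.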
Consider the well-known fact that correlation is bounded between $-1$ and $1$: $$ -1 \le \text{corr}(X,Y) = \frac{E[(X - E[X])(Y - E[Y])]}{\sigma_X \sigma_Y} \le 1. $$ I've been trying to wrap my mind intuitively around why this is so. Question: Is this because (or is it true that) $$ \frac{E[(|X - E[X]|)(|Y - E[Y]|)]}{\sigma_X \sigma_Y} = 1? $$ (Notice the absolute value signs in the numerator). Example: I notice that this is true in case $X$ were to take on values $\{1, 3\}$ and $Y$ were to take on values $\{2, 6\}$ in a uniform distribution. That is: $$ \frac{\frac{(1-2) + (3-2)}{2} \cdot \frac{(2-4)+(6-4)}{2}}{1 \cdot 2} = \frac{0 }{2} = 0 $$ yet $$ \frac{\frac{|(1-2)| + |(3-2)|}{2} \cdot \frac{|(2-4)|+|(6-4)|}{2}}{1 \cdot 2} = 1. $$ So is it true in general? If so, this would make understanding why correlation is bounded between $-1$ and $1$ quite easy for my mind to wrap around. EDIT: The claim also seems to work on uniform $\{1,5\}$ and $\{1,7\}$: $$ \frac{\frac{(1-3) + (5-3)}{2} \cdot \frac{(1-4)+(7-4)}{2}}{2 \cdot 3} = \frac{0 \cdot 0}{6} = 0 $$ yet $$ \frac{\frac{|(1-3)| + |(5-3)|}{2} \cdot \frac{|(1-4)|+|(7-4)|}{2}}{2 \cdot 3} = \frac{2 \cdot 3}{6} = 1. $$
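The two worked examples can be checked numerically; this sketch (the `ratio` helper is a hypothetical name, using population standard deviations) also tries a third, made-up pair of samples, for which the ratio is no longer 1:

```python
import numpy as np

# Numeric check of the two examples from the question, plus a third
# (hypothetical) sample where the ratio differs from 1.
def ratio(x, y):
    dx, dy = x - x.mean(), y - y.mean()
    sx = np.sqrt((dx ** 2).mean())    # population standard deviations
    sy = np.sqrt((dy ** 2).mean())
    return (np.abs(dx) * np.abs(dy)).mean() / (sx * sy)

print(ratio(np.array([1.0, 3.0]), np.array([2.0, 6.0])))  # 1.0
print(ratio(np.array([1.0, 5.0]), np.array([1.0, 7.0])))  # 1.0
print(ratio(np.array([0.0, 1.0, 5.0]),
            np.array([0.0, 2.0, 3.0])))                   # < 1 here
```

By the Cauchy–Schwarz inequality this ratio is always at most 1, which is one standard route to the bound on correlation.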
It looks like a Voronoi diagram with a non-Euclidean distance metric. Probably not Manhattan L1 but something closely related, or maybe Mahalanobis with some kind of restriction on seed point generation and movement. A similar result may be calculated with Weight-proportional Space Partitioning Using Adaptive Voronoi Diagrams when reducing spatial resolution ... Lots of things here. "When reading papers": what papers? If the topic of the paper is about something other than the spatial partitioning structure, it could be fair to use whatever, knowing that the basic ideas will translate to other structures. Or not, hard to say. For example, for ray tracing an octree, near misses will cause you to iterate through a ... I don't have any direct experience doing this so I might be missing an obvious solution or tool. That said, what you describe is, in programming terms, comparatively easy to achieve. The basic structure of such custom processing would be: open the image file and get access to the array of pixels it contains; iterate over all the pixels and inspect/transform ... In the painter's algorithm, you first sort all graphics elements on depth (deepest first) and then one by one fully paint them into the image on top of each other. That way, deeper elements are obscured by less deep elements. (Intersecting graphics elements require special attention.) In the depth-buffering algorithm, you store the current depth of each pixel ... In short, the painter's algorithm can't deal with intersecting geometry. Suppose that you draw a plane angled away from the camera, and a plane angled towards the camera. The planes intersect in an 'X' shape: Camera ------> X. With the painter's algorithm no ordering exists that will render the shape exactly; you would only see whichever plane you decided ... My 2 cents from writing the Chipmunk2D physics engine is that spatial hashing is great when you have a lot of objects that are all the same size.
I had a demo 10 years ago that ran with 20k interacting particles on a Core 2 Duo in real time. The spatial hash worked great for that if you tuned it. I've since replaced the spatial hash with a binary AABB tree ... The Flurry screensaver written by Calum Robinson is available as a part of the XScreenSaver package. You can download its source code from https://www.jwz.org/xscreensaver/download.html. The flurry* files are in the hacks/glx directory. It's not an easy job to reverse engineer the algorithm from the source code, but debugging might help. In games and other 3D scenes, generally when the user clicks the mouse, a ray is cast into the scene in the direction the camera is facing, and a check is done to see what geometry in the scene it intersects. If it intersects nothing, it is ignored. If it intersects a single object, then the application processes a click on that object (at the location of ... Like Rahul said, that algorithm only covers the case where $0 \lt \Delta y \lt \Delta x$, so you need to adjust the coordinates to fit within that. There are 8 cases: $0 \lt \Delta y \lt \Delta x$, the normal base case; $0 \lt \Delta x \lt \Delta y$, swap x and y except for the drawPixel call; $\Delta x \lt 0 \lt \Delta y$ and $|\Delta y| \lt |\Delta x|$, use ... Yes, there is a fairly general algorithm to calculate this scaling factor, which works for all shapes with a known parametric representation. First, substitute the parametric equation of the shape (e.g. a torus, cylinder or cone) into the implicit equation of the sphere, $x^2 + y^2 + z^2 = S^2$. Then, solve this equation for the radius of the sphere. The ... An interesting problem. I've done a bit of work in texture compression and this sounds something like a generalisation of Campbell et al.'s "Color Cell Compression". It's also a little like a feature we were asked to include in the Dreamcast VQ compressor so that sub-palettes could be swapped to create different colour schemes on textures. I was thus ...
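The octant reduction for Bresenham described above can be sketched in a few lines. This is a hypothetical illustration (the function name `line` and the error-term bookkeeping are mine, not quoted from any answer), assuming integer endpoints: reflect any direction into the base case $0 \le \Delta y \le \Delta x$, run the standard integer loop, and undo the reflections per emitted pixel.

```python
# Map any (dx, dy) into the base octant 0 <= dy <= dx by flipping signs
# and swapping axes, run integer Bresenham, then undo per pixel.
def line(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    flip_x = dx < 0                     # reflect so dx >= 0
    if flip_x: dx = -dx
    flip_y = dy < 0                     # reflect so dy >= 0
    if flip_y: dy = -dy
    swap = dy > dx                      # reflect about y = x so dy <= dx
    if swap: dx, dy = dy, dx

    pixels, err, y = [], 2 * dy - dx, 0
    for x in range(dx + 1):
        px, py = (y, x) if swap else (x, y)   # undo the axis swap
        if flip_x: px = -px                   # undo the sign flips
        if flip_y: py = -py
        pixels.append((x0 + px, y0 + py))
        if err > 0:
            y += 1
            err -= 2 * dx
        err += 2 * dy
    return pixels

print(line(0, 0, 3, 1))   # [(0, 0), (1, 0), (2, 1), (3, 1)]
```

Swapping only the coordinate transform (not the inner loop) is exactly the "swap x and y except for the drawPixel call" advice above.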
The straightforward solution is to simply not render the high-poly model at all. Have a lower-detail model that you can switch to once the model is too far away for it to make a difference. There is no point in rendering 20 triangles that all end up in the same pixel. The next option is to partition the model and then cull the individual parts. You can also ... The problem you are seeing, i.e. "jaggies" or "staircasing", is an example of the more general problem known as "aliasing" and, in the graphics field, the term you want to search for is "Antialiasing". Aliasing occurs when you undersample a signal. If a signal contains frequencies at or above the Nyquist Frequency, which is 1/2 the sampling frequency, ... This seems to be alluding to a Marching Cubes LOD algorithm such as: place the entire volume in one giant cube; break that volume into NxNxN cubes; and continue doing so until the cubes are at the finest granularity needed for the density function (about 8 levels works). Each volume then responds with either: no voxels - stop processing that sub-volume - or yes ... Well, for the first octant you can either step EAST or NORTH-EAST. Depending on the distance to the actual line you choose the appropriate one. In many integer implementations, this is done with regard to the sign of D. Your code, as seen in your post at the time of posting this answer, looks sort of like this (the following was thrown together rather quickly and may or may not work): function fill ( x, y, touched, elem ) {if ( count <= 0 ) return;if ( isOutOfBounds(x,y) ) return;const idx = y*24 + x;if ( gArr[idx][0] != 0 || touched[idx] ) return;... Your understanding of the matrix structure in Q3 is correct. This code just does not construct a matrix explicitly; the matrix multiplication is applied implicitly. I think this part might cause your confusion. Instead of deciphering the code, I would rather derive the transform and compare it with the code. The affine (6 degrees of freedom) and ...
I guess your question is how to compute the inertia tensor automatically. A parametric equation may not always be available, but we could write a program to avoid hand calculations as much as possible. The inertia tensor $\mathbf{I}$ is defined as an integral over the object domain $\Omega$ (see Inertia tensor): $$\newcommand{\V}[1]{\mathbf{#1}}\begin{... Perlin noise is not a good fit for a realistic planet surface, because a planet surface is not random: planetary structure is created by geology/physics and the interaction between its different parts. This video shows a geology simulator named PlaTec (the link is in the text below the video): https://www.youtube.com/watch?v=bi4b45tMEPE Its source code is also available at the SourceForge web site.
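Along the lines of the inertia-tensor answer above, here is a hypothetical Monte Carlo sketch (not from the original answer) that approximates the integral $\sum_i m_i\,(|\mathbf r_i|^2\delta_{jk} - r_{ij} r_{ik})$ for a solid unit ball of total mass 1, whose exact inertia tensor is $\tfrac{2}{5}mr^2$ times the identity:

```python
import numpy as np

# Monte Carlo estimate of I_jk = sum_i m_i (|r_i|^2 delta_jk - r_ij r_ik)
# for a solid unit ball of total mass 1 (exact answer: 0.4 * identity).
rng = np.random.default_rng(42)
pts = rng.uniform(-1.0, 1.0, size=(400_000, 3))
pts = pts[np.sum(pts ** 2, axis=1) <= 1.0]   # rejection-sample the ball
m = 1.0 / len(pts)                           # equal point masses, total 1

r2 = np.sum(pts ** 2, axis=1)                # |r|^2 per sample point
I = m * (r2.sum() * np.eye(3) - pts.T @ pts)
print(np.round(np.diag(I), 2))               # ≈ [0.4 0.4 0.4]
```

The same sampling idea extends to any shape for which a point-membership test exists, which is often easier to write than a parametric description.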
see title. An algorithm is 'good' if it is able to distinguish between zero eigenvalues and nonzero eigenvalues. One method is to reduce the computation to that of computing matrix multiplication of $n \times n$ matrices. In particular, the determinant of a symbolic matrix can be computed in $O(n^{\omega})$ arithmetic operations, where $\omega < 2.376$ is the matrix multiplication exponent, and from a symbolic determinant of course one can recover all eigenvalues. However, since the operations here will be over polynomials of degree $n$ with coefficients in $m$ bits, this method would take about $O(n^{1+\omega} m)$ time to get $m$ bits of the eigenvalues. More complex methods can get you the eigenvalues in $O(n^3 + n^2 \log^2 n \log b)$ time, where the eigenvalues are approximated to within $2^{-b}$. For some structured matrices you can get about $O(n^{\omega})$. See Victor Y. Pan, Zhao Q. Chen: The Complexity of the Matrix Eigenproblem. STOC 1999: 507-516 (Actually it appears this paper never appeared in journal form, so study it very carefully if you are serious about this problem.) I don't see a simple way to exploit the fact that (a) it is symmetric and (b) you just want to find the smallest nonzero eigenvalue. It seems doubtful to me that you could do this much faster than $O(n^{\omega})$ (without finding all nonzero eigenvalues faster than this), but this is just based on intuition, not fact. Tao and Vu (1) have shown that the distribution of the smallest singular value of a random matrix is "universal", i.e. independent of the particular random variable populating the matrix. They are interested in analyzing distributions, not particular matrices, but it appears that it might be possible to use their machinery for addressing this problem. First, several caveats. 1. Their analysis assumes there are no zero eigenvalues.
With random matrices, this isn't much of a restriction, but it might be for your application. If there were at most a small number of zero eigenvalues, one could pick them off somewhat efficiently and then reduce to the invertible case. 2. The results hold with high probability, not certainty. 3. The results recover estimates of eigenvalues, not the exact values, although they can be made arbitrarily accurate. 4. Tao and Vu's analysis is not aimed at this particular application, so there very well may be some issue translating it to this domain. Now that the pussy-footing is out of the way, the basic idea is as follows: We wish to find the smallest eigenvalue $\lambda_n$ of a matrix $A$. Suppose that $A$ is invertible. Then the largest eigenvalue of $A^{-1}$ is $1/\lambda_n$. Since finding the largest eigenvalue is a much easier problem (by, e.g. the power method), we might be in a better position. But computing $A^{-1}$ is (generically) as difficult as computing all the eigenvalues! Tao and Vu dodge this problem by taking random subsets of the columns of $A$ and considering the orthogonal complement; this is inspired by "property testing" arguments in complexity theory. Then with high probability one can estimate the largest eigenvalue of $A^{-1}$ from these restrictions, and we are done. (1) Terence Tao and Van Vu. "Random matrices: the distribution of the smallest singular values". March, 2009. http://arxiv.org/abs/0903.0614v1 The QR algorithm quite rapidly gives a good approximation of the eigenvalues of a real symmetric (or complex Hermitian) matrix. Moreover, it finds the smallest eigenvalues first: the reason is that the ratio $\lambda_{n-1}/\lambda_n$ is the convergence rate. First, a disclaimer: I know absolutely nothing about numerical algorithms for finding the eigenvalues of a matrix, symmetric or not. So my feelings will not be hurt if my answer gets downvoted into oblivion.
It seems to me that an obvious but perhaps overly naive approach is the following: Let $A$ be the symmetric matrix in question. a) Use some standard minimization algorithm (maybe the conjugate gradient method?) to minimize $$\frac{x\cdot Ax}{x\cdot x}$$ over nonzero $x$ (an obvious thing to do is to restrict to $|x| = 1$ but you might save some arithmetic if you don't bother with this normalization). b) See what eigenvalue you get. If it's the eigenvalue you want, then you're done. If not, save the eigenvector you found and proceed to c). c) Repeat a), except restrict to the subspace orthogonal to all of the eigenvectors you've found so far. See what eigenvalue you get. If it's the one you want, you're done. Otherwise, save the eigenvector and repeat this step again. Eventually, you'll have all of the eigenvalues and eigenvectors. Depending on what "smallest" means, you may or may not be able to stop before you have found all of the eigenvectors. Actually, if "smallest" means "eigenvalue with the smallest nonzero absolute value", then just do the steps above with $A^2$ instead of $A$. For small matrices this seems like a practical approach to me. But, as I said, I don't know anything about this stuff. This is the method they use in LAPACK, which is usually the fastest for general problems (fastest noncommercial anyway), and here's a discussion regarding this computation. I honestly am not sure if you can hunt down the smallest eigenvalue without finding all of them. However, I would not under any circumstances do the symbolic determinant. So far as I know, if you're doing a numerical calculation, an introduction of symbols will give you a major slowdown. I don't have the time at the moment to look up the exact computation, but I think that outside of quantum computing symbolic factorization is NP-hard/NP-complete. In the multivariate case (using Groebner bases) it's doubly exponential.
Probably still your best bet, after of course reducing your original symmetric matrix to tridiagonal form, would be either bisection (with the help of Gerschgorin bounds) or an appropriate modification of the dqd/MRRR algorithm of Parlett, Fernando, and Dhillon. If your matrix has additional structure apart from symmetry (e.g. it is a Toeplitz or arrowhead matrix), there of course may be even more slick approaches. I suggest looking at the references given in LAPACK and other numerical linear algebra books. If smallest means closest to zero, then Rayleigh quotient iteration gives cubic convergence, but it still requires a matrix inverse, or actually solving a system of linear equations. A quick search led me to this paper, which deals specifically with sparse symmetric matrices, although some of its references might be useful. Jang, Ho-Jong, and Lee, Sung-Ho, "Numerical stability of update method for symmetric eigenvalue problem," J. Appl. Math. & Computing Vol. 22 (2006), No. 1-2, pp. 467-474. A PDF copy is available here: http://www.mathnet.or.kr/mathnet/kms_tex/986075.pdf I should also mention that "best" is a difficult superlative to qualify without knowing the structure of your matrices. Probably the best algorithm for a sparse symmetric matrix is not the best algorithm for a symmetric Toeplitz matrix. Here is a simple idea, with no complexity analysis. Compute a basis $v_1$, ..., $v_{n-r}$ for the kernel of $A$; this can be done with exact arithmetic in $n^3$ operations by Gaussian elimination. Compute a basis $w_1$, ..., $w_{r}$ for the orthogonal complement: again, doable in exact arithmetic with $n^3$ operations. $\mathrm{Span}(w_1, ..., w_r)$ is the orthogonal complement to the $0$-eigenspace of $A$ and hence, since $A$ is symmetric, it is the span of the nonzero eigenspaces of $A$. So $A$ maps $\mathrm{Span}(w_1, \ldots, w_r)$ to itself. Let $X$ be the $r \times r$ matrix of this map.
This is again a matrix of rational numbers, computable with exact arithmetic in reasonable time, whose eigenvalues are the same as the nonzero eigenvalues of $A$. Note, however, that $X$ is not symmetric. Invert $X$ and find its largest eigenvalue by one of the standard methods. What I am gambling here is that the advantages of working in exact arithmetic are greater than the disadvantages of passing to a nonsymmetric matrix.
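For modest dense problems, the brute-force version of what several answers describe (compute the full symmetric spectrum, then discard the numerically zero eigenvalues) takes only a few lines. A sketch, assuming a floating-point zero tolerance rather than the exact arithmetic suggested above:

```python
import numpy as np

def smallest_nonzero_eigenvalue(A, tol=1e-10):
    """Dense O(n^3) sketch: full symmetric spectrum via eigh, drop
    eigenvalues that are zero up to `tol`, and return the one of
    smallest absolute value."""
    w = np.linalg.eigh(A)[0]                 # real eigenvalues, ascending
    nonzero = w[np.abs(w) > tol]
    if nonzero.size == 0:
        raise ValueError("matrix is numerically zero")
    return nonzero[np.argmin(np.abs(nonzero))]

# Symmetric test matrix with known spectrum {0, 0.5, 2}.
rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal V
A = V @ np.diag([0.0, 0.5, 2.0]) @ V.T
print(smallest_nonzero_eigenvalue(A))              # ≈ 0.5
```

This is exactly the $O(n^\omega)$-or-worse baseline the thread is trying to beat; the interest of the answers above is in avoiding the full spectrum computation.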
Determine Whether Given Subsets in $\R^4$ are Subspaces or Not Problem 480 (a) Let $S$ be the subset of $\R^4$ consisting of vectors $\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$ satisfying \[2x+4y+3z+7w+1=0.\] Determine whether $S$ is a subspace of $\R^4$. If so, prove it. If not, explain why it is not a subspace. (b) Let $S$ be the subset of $\R^4$ consisting of vectors $\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$ satisfying \[2x+4y+3z+7w=0.\] Determine whether $S$ is a subspace of $\R^4$. If so, prove it. If not, explain why it is not a subspace. (These two problems look similar but note that the equations are different.) (The Ohio State University, Linear Algebra Final Exam Problem) Solution. (a) $2x+4y+3z+7w+1=0$ We claim that $S$ is not a subspace of $\R^4$. If $S$ is a subspace of $\R^4$, then the zero vector $\mathbf{0}=\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$ in $\R^4$ must lie in $S$. However, the zero vector $\mathbf{0}$ does not satisfy the equation \[2x+4y+3z+7w+1=0.\] So $\mathbf{0} \not \in S$, and we conclude that $S$ is not a subspace of $\R^4$. (b) $2x+4y+3z+7w=0$ In set theoretical notation, we have \[S=\left\{\, \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}\in \R^4 \quad \middle| \quad 2x+4y+3z+7w=0 \,\right\}.\] Let $A$ be the $1\times 4$ matrix defined by \[A=\begin{bmatrix} 2 & 4 & 3 & 7 \end{bmatrix}.\] Then the equation $2x+4y+3z+7w=0$ can be written as \[A\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}=0.\] So we have \begin{align*} S&=\left\{\, \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}\in \R^4 \quad \middle| \quad A\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}=0 \,\right\}\\ &=\calN(A), \end{align*} the null space of $A$. Recall that the null space of a matrix is always a subspace. Hence the subset $S$ is a subspace of $\R^4$ as it is the null space of the matrix $A$. Final Exam Problems and Solutions
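The null-space argument in part (b) can be illustrated numerically. A small sketch (the particular solutions `u` and `v` are hypothetical examples, not from the solution) checking that $S = \calN(A)$ is closed under addition and scalar multiplication:

```python
import numpy as np

# S = {v in R^4 : 2x + 4y + 3z + 7w = 0} is the null space of A.
A = np.array([[2.0, 4.0, 3.0, 7.0]])

def in_S(v, tol=1e-12):
    return bool(np.all(np.abs(A @ v) < tol))

u = np.array([2.0, -1.0, 0.0, 0.0])    # 2*2 + 4*(-1) = 0
v = np.array([0.0, 7.0, 0.0, -4.0])    # 4*7 + 7*(-4) = 0
print(in_S(u), in_S(v))                # True True
print(in_S(u + v), in_S(3.0 * u))      # True True  (closure)
print(in_S(np.array([1.0, 0, 0, 0])))  # False (not every vector is in S)
```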
(Linear Algebra Math 2568 at the Ohio State University) This problem is one of the final exam problems of Linear Algebra course at the Ohio State University (Math 2568). The other problems can be found from the links below. Find All the Eigenvalues of 4 by 4 Matrix Find a Basis of the Eigenspace Corresponding to a Given Eigenvalue Diagonalize a 2 by 2 Matrix if Diagonalizable Find an Orthonormal Basis of the Range of a Linear Transformation The Product of Two Nonsingular Matrices is Nonsingular Determine Whether Given Subsets in $\R^4$ are Subspaces or Not (This page) Find a Basis of the Vector Space of Polynomials of Degree 2 or Less Among Given Polynomials Find Values of $a , b , c$ such that the Given Matrix is Diagonalizable Idempotent Matrix and its Eigenvalues Diagonalize the 3 by 3 Matrix Whose Entries are All One Given the Characteristic Polynomial, Find the Rank of the Matrix Compute $A^{10}\mathbf{v}$ Using Eigenvalues and Eigenvectors of the Matrix $A$ Determine Whether There Exists a Nonsingular Matrix Satisfying $A^4=ABA^2+2A^3$
Some quick electoral prediction math Matthew Martin, 10/20/2015 12:17:00 PM Do the prediction market numbers make sense? A little probability analysis: Let [$]A[$] denote the event where Clinton wins the nomination, and [$]B[$] denote that a democratic nominee wins the presidency. Tankersley then provides the following probabilities from the prediction markets: \begin{align} p \left( A \right) &=0.77 \\ p \left( B \right) &=0.55 \\ p \left( A \cap B \right) &=0.47 \end{align} Thus prediction markets think that if Clinton is nominated, there's a [$$]p \left( B \vert A \right)=\frac{p \left( A \cap B \right)}{p \left( A \right)}=\frac{0.47}{0.77}=0.61[$$] chance of her beating the GOP candidate. Note that [$$]p \left( A \right) p \left( B \right)=0.42 \lt p \left( A \cap B \right) =0.47 [$$] so market participants do think that who gets nominated matters for which party wins the White House, and they think Clinton has a better shot than all the other democrats combined. By what margin though? It helps me to write it out. So let [$]\bar{A}[$] be the complement of [$]A[$], that is, the event that someone other than Clinton wins the nomination. The two are mutually exclusive complements so [$$]p \left( \left( A \cap B \right) \cup \left(\bar{A} \cap B \right)\right)=p\left( A \cap B \right) + p \left(\bar{A} \cap B \right)=p \left( B \right)=0.55[$$] which tells us that the entire row of Ptolemies¹ together has a probability of just [$]p \left(\bar{A} \cap B \right)=0.08[$] of winning the presidency, despite the 0.23 probability that one of them will be nominated. So based on prediction market figures, if the democrats nominate a Ptolemy, he'll have a [$$]p \left( B \vert \bar{A} \right)=\frac{p \left( \bar{A} \cap B \right)}{p \left( \bar{A} \right)}=\frac{0.08}{0.23}=0.35[$$] chance of winning the presidency. So Clinton, according to the people who bet on this stuff, is not-quite twice as likely to win the general election if nominated. 1.
Based on the democratic debate, I've started referring to all of the non-Clinton candidates collectively as the Ptolemies. This explains the reference.
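The conditional-probability arithmetic in the post is easy to reproduce; a minimal sketch using the three market numbers:

```python
# The three prediction-market numbers from the post.
p_A = 0.77         # Clinton wins the nomination
p_B = 0.55         # a Democrat wins the presidency
p_AB = 0.47        # both happen

p_B_given_A = p_AB / p_A                    # P(B | A)
p_notA_and_B = p_B - p_AB                   # P(not-A and B)
p_B_given_notA = p_notA_and_B / (1 - p_A)   # P(B | not-A)

print(round(p_B_given_A, 2),                # 0.61
      round(p_notA_and_B, 2),               # 0.08
      round(p_B_given_notA, 2))             # 0.35
```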
Transpose of a Matrix and Eigenvalues and Related Questions Problem 12 Let $A$ be an $n \times n$ real matrix. Prove the following. (a) The matrix $AA^{\trans}$ is a symmetric matrix. (b) The set of eigenvalues of $A$ and the set of eigenvalues of $A^{\trans}$ are equal. (c) The matrix $AA^{\trans}$ is non-negative definite. (An $n\times n$ matrix $B$ is called non-negative definite if for any $n$ dimensional vector $\mathbf{x}$, we have $\mathbf{x}^{\trans}B \mathbf{x} \geq 0$.) (d) All the eigenvalues of $AA^{\trans}$ are non-negative. Facts. Before the proofs, we first review several basic properties of the transpose of a matrix. For any matrices $A$ and $B$ such that the product $AB$ is defined, we have $(AB)^{\trans}=B^{\trans}A^{\trans}$. We have $(A^{\trans})^{\trans}=A$ for any matrix $A$. Also recall that the eigenvalues of a matrix $A$ are the solutions of the characteristic polynomial $p_A(x)=\det(A-xI)$. Proof. (a) The matrix $AA^{\trans}$ is a symmetric matrix. We compute $(AA^{\trans})^{\trans}=(A^{\trans})^{\trans}A^{\trans}=AA^{\trans}$ and thus $AA^{\trans}$ is a symmetric matrix. (We used Facts 1 and 2.) (b) The set of eigenvalues of $A$ and the set of eigenvalues of $A^{\trans}$ are equal. We show that the characteristic polynomials of $A$ and $A^{\trans}$ are the same, hence they have exactly the same eigenvalues. Let $p_A(x)$ and $p_{A^{\trans}}(x)$ be the characteristic polynomials of $A$ and $A^{\trans}$, respectively. Then we have \begin{align*} p_A(x)=\det(A-xI)=\det(A-xI)^{\trans} =\det(A^{\trans}-xI)=p_{A^{\trans}}(x). \end{align*} The first and last equalities are the definition of the characteristic polynomial. The second equality is true because in general we have $\det(B)=\det(B^{\trans})$ for a square matrix $B$. This completes the proof of (b). (c) The matrix $AA^{\trans}$ is non-negative definite. Let $\mathbf{x}$ be an $n$ dimensional vector.
Then we have \begin{align*} \mathbf{x}^{\trans}AA^{\trans}\mathbf{x}=(A^{\trans}\mathbf{x})^{\trans}(A^{\trans}\mathbf{x})=||A^{\trans}\mathbf{x}||^2 \geq 0, \end{align*} since the squared norm (length) of a vector is always non-negative. Thus $AA^{\trans}$ is non-negative definite. (d) All the eigenvalues of $AA^{\trans}$ are non-negative. Let $\lambda$ be an eigenvalue of $AA^{\trans}$ and let $\mathbf{x}$ be an eigenvector corresponding to the eigenvalue $\lambda$. Then we compute \begin{align*} \mathbf{x}^{\trans}AA^{\trans}\mathbf{x}=\mathbf{x}^{\trans}\lambda\mathbf{x}=\lambda ||\mathbf{x}||^2. \end{align*} Here the first equality follows from the definitions of the eigenvalue $\lambda$ and eigenvector $\mathbf{x}$. In part (c), we proved that $AA^{\trans}$ is non-negative definite, hence we have $\lambda ||\mathbf{x}||^2 \geq 0$. Since an eigenvector is by definition nonzero, $||\mathbf{x}||^2 > 0$, and therefore $\lambda \geq 0$. This completes the proof of (d). Related Question. For a solution, see the post "Positive definite real symmetric matrix and its eigenvalues".
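Parts (a), (b) and (d) can be illustrated numerically; a sketch with a hypothetical random $4\times 4$ matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))                 # arbitrary real matrix

# (b): A and A^T have the same spectrum (compare sorted eigenvalues).
eigs_A = np.sort_complex(np.linalg.eigvals(A))
eigs_At = np.sort_complex(np.linalg.eigvals(A.T))
print(np.allclose(eigs_A, eigs_At))             # True

# (a) + (d): A A^T is symmetric and its eigenvalues are non-negative.
S = A @ A.T
print(np.allclose(S, S.T))                      # True
print(np.all(np.linalg.eigvalsh(S) >= -1e-12))  # True
```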
Hyperplane Through Origin is Subspace of 4-Dimensional Vector Space Problem 371 Let $S$ be the subset of $\R^4$ consisting of vectors $\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$ satisfying \[2x+3y+5z+7w=0.\] Then prove that the set $S$ is a subspace of $\R^4$. (Linear Algebra Exam Problem, The Ohio State University) Proof. First, in set theoretical notation, the definition of $S$ can be written as \[S=\left\{\, \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}\in \R^4 \quad \middle| \quad 2x+3y+5z+7w=0 \,\right\}.\] Let $A=\begin{bmatrix} 2 & 3 & 5 & 7 \end{bmatrix}$ be the $1 \times 4$ matrix. Then the defining equation $2x+3y+5z+7w=0$ can be written as \[A\mathbf{x}=0,\] where \[\mathbf{x}=\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}.\] It follows that the set $S$ is the null space of $A$, that is, $S=\calN(A)$. Since every null space is a subspace, we see that $S$ is also a subspace of $\R^4$. Linear Algebra Midterm Exam 2 Problems and Solutions True or False Problems and Solutions: True or False problems of vector spaces and linear transformations Problem 1 and its solution: See (7) in the post “10 examples of subsets that are not subspaces of vector spaces” Problem 2 and its solution: Determine whether trigonometry functions $\sin^2(x), \cos^2(x), 1$ are linearly independent or dependent Problem 3 and its solution: Orthonormal basis of null space and row space Problem 4 and its solution: Basis of span in vector space of polynomials of degree 2 or less Problem 5 and its solution: Determine value of linear transformation from $R^3$ to $R^2$ Problem 6 and its solution: Rank and nullity of linear transformation from $R^3$ to $R^2$ Problem 7 and its solution: Find matrix representation of linear transformation from $R^2$ to $R^2$ Problem 8 and its solution (current problem): Hyperplane through origin is subspace of 4-dimensional vector space
The Archimedean Property We will now look at a very important property known as the Archimedean property, which tells us that for any real number $x$ there exists a natural number $n_x$ that is greater than or equal to $x$. This is formalized in the following theorem: Theorem 1 (The Archimedean Property): For every element $x \in \mathbb{R}$ there exists an element $n_x \in \mathbb{N}$ such that $x ≤ n_x$. Proof of Theorem: Consider the case where $x ≤ 0$. Then let $n_x = 1$. We've already proven that $0 < 1$, so any $x$ with $x ≤ 0$ also satisfies $x ≤ 1$. Now for the case where $x > 0$ we will do a proof by contradiction. Suppose that there exists an element $x > 0$ such that for all $n_x \in \mathbb{N}$, $x > n_x$. In other words, $x$ is an upper bound for the natural numbers and hence $\mathbb{N}$ would be bounded above. Since $\mathbb{N} \subset \mathbb{R}$, it follows by the axiom of completeness that there exists an element $s \in \mathbb{R}$ such that $s = \sup (\mathbb{N})$, that is, $\forall n \in \mathbb{N}$, $n ≤ s$. Now for every $n \in \mathbb{N}$ we have $n + 1 \in \mathbb{N}$, and so, since $s$ is the supremum of the natural numbers, $n + 1 ≤ s$, which implies that $n ≤ s - 1$ $\forall n \in \mathbb{N}$. Therefore $s - 1$ is an upper bound for $\mathbb{N}$. But $s - 1 < s = \sup \mathbb{N}$, which is a contradiction since the supremum is defined to be the least upper bound. So $\forall x > 0$ there exists a natural number $n_x \in \mathbb{N}$ such that $x ≤ n_x$. $\blacksquare$ We will now look at some important corollaries regarding the Archimedean property. Corollary 1: If $S := \left \{ \frac{1}{n} : n \in \mathbb{N} \right \}$ then $\inf S = 0$. Proof: We note that $S$ is a nonempty set since $1 \in S$. Furthermore, we note that $S$ is bounded below by $0$ since $\frac{1}{n} > 0$ for every $n \in \mathbb{N}$. Therefore this set has an infimum in $\mathbb{R}$. Let $w = \inf S$.
We note that $w ≥ 0$ since we have already deduced that $0$ is a lower bound for $S$. Now by the Archimedean property, for any $\epsilon > 0$ there exists a natural number $n$ with $n ≥ \frac{1}{\epsilon} + 1 > \frac{1}{\epsilon}$, so that $\frac{1}{n} < \epsilon$, and hence we have that $0 ≤ w ≤ \frac{1}{n} < \epsilon$. Now recall that if $0 ≤ w < \epsilon$ for every $\epsilon > 0$ then $w = 0$. Therefore $\inf S = w = 0$. $\blacksquare$ Corollary 2: If $a$ is a real number such that $a > 0$ then there exists a natural number $n_a \in \mathbb{N}$ such that $0 < \frac{1}{n_a} < a$. Proof: Let $S := \left \{ \frac{1}{n} : n \in \mathbb{N} \right \}$. We note that $\inf S = 0$, and since $a > 0$, $a$ is not a lower bound for $S$, and so there exists a natural number $n_a \in \mathbb{N}$ where $0 < \frac{1}{n_a} < a$. $\blacksquare$ Corollary 3: If $a$ is a real number such that $a > 0$ then there exists a natural number $n_a \in \mathbb{N}$ such that $n_a - 1 ≤ a < n_a$. Proof: Consider the set $S := \{ y \in \mathbb{N} : a < y \}$. This subset of $\mathbb{N}$ is nonempty by the Archimedean property. Furthermore, recall that the Well-Ordering Principle says that any nonempty subset of the natural numbers has a least element, which we will denote by $n_a$. Since $n_a$ is the least element of $S$, we have $(n_a - 1) \not \in S$, and so $n_a - 1 ≤ a < n_a$. $\blacksquare$
The Annals of Statistics Ann. Statist. Volume 39, Number 1 (2011), 82-130. ℓ1-penalized quantile regression in high-dimensional sparse models Abstract We consider median regression and, more generally, a possibly infinite collection of quantile regressions in high-dimensional sparse models. In these models, the number of regressors p is very large, possibly larger than the sample size n, but only at most s regressors have a nonzero impact on each conditional quantile of the response variable, where s grows more slowly than n. Since ordinary quantile regression is not consistent in this case, we consider ℓ1-penalized quantile regression (ℓ1-QR), which penalizes the ℓ1-norm of regression coefficients, as well as the post-penalized QR estimator (post-ℓ1-QR), which applies ordinary QR to the model selected by ℓ1-QR. First, we show that under general conditions ℓ1-QR is consistent at the near-oracle rate $\sqrt{s/n}\sqrt{\log(p\vee n)}$, uniformly in the compact set $\mathcal{U}\subset(0,1)$ of quantile indices. In deriving this result, we propose a partly pivotal, data-driven choice of the penalty level and show that it satisfies the requirements for achieving this rate. Second, we show that under similar conditions post-ℓ1-QR is consistent at the near-oracle rate $\sqrt{s/n}\sqrt{\log(p\vee n)}$, uniformly over $\mathcal{U}$, even if the ℓ1-QR-selected models miss some components of the true models, and the rate could be even closer to the oracle rate otherwise. Third, we characterize conditions under which ℓ1-QR contains the true model as a submodel, and derive bounds on the dimension of the selected model, uniformly over $\mathcal{U}$; we also provide conditions under which hard-thresholding selects the minimal true model, uniformly over $\mathcal{U}$. Article information Source Ann. Statist., Volume 39, Number 1 (2011), 82-130.
Dates First available in Project Euclid: 3 December 2010 Permanent link to this document https://projecteuclid.org/euclid.aos/1291388370 Digital Object Identifier doi:10.1214/10-AOS827 Mathematical Reviews number (MathSciNet) MR2797841 Zentralblatt MATH identifier 1209.62064 Citation Belloni, Alexandre; Chernozhukov, Victor. ℓ1-penalized quantile regression in high-dimensional sparse models. Ann. Statist. 39 (2011), no. 1, 82--130. doi:10.1214/10-AOS827. https://projecteuclid.org/euclid.aos/1291388370 Supplemental materials Supplementary material: Supplement to “ℓ1-penalized quantile regression in high-dimensional sparse models”. We included technical proofs omitted from the main text: examples of simple sufficient conditions, VC index bounds and Gaussian sparse eigenvalues.
\usepackage{mathtools, amssymb, amsthm}\begin{align}\label{eq1}\sum_{n}\lambda^{n}(i)-\sum_{n}\lambda^{n}(j) \nonumber\\=\begin{cases} \lambda_{n}, &\text{if } n = i\\ -\lambda_{n}, &\text{if } n = j\\ 0, &\text{otherwise} \end{cases} \text{ }\forall i, j\end{align} The first line of the equation must be left aligned (or at least adjustable; \hspace is not working). The first line is \sum_{n}\lambda^{n}(i)-\sum_{n}\lambda^{n}(j) \nonumber, and the cases should be aligned according to it. However, the equation number should be at the extreme right. As per the clarification request by @Mico: the first pic is from the answer. The alignment of the start of the equation is preferably left (or adjustable).
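One hedged way to get this layout (a sketch, assuming the standard amsmath toolkit that mathtools loads) is the flalign environment: its empty trailing column pushes the content to the left margin while the equation number stays at the extreme right.

```latex
\documentclass{article}
\usepackage{mathtools, amssymb, amsthm}
\begin{document}
\begin{flalign}\label{eq1}
& \sum_{n}\lambda^{n}(i)-\sum_{n}\lambda^{n}(j) \nonumber &\\
& = \begin{cases}
      \lambda_{n},  & \text{if } n = i\\
      -\lambda_{n}, & \text{if } n = j\\
      0,            & \text{otherwise}
    \end{cases} \quad \forall i, j &
\end{flalign}
\end{document}
```

The leading `&` starts each row at the left alignment point and the trailing `&` opens an empty column that flalign stretches to full width, so the material sits flush left without affecting the number placement.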
Injective and Surjective Linear Maps Examples 3 Recall from the Injective and Surjective Linear Maps page that a linear map $T : V \to W$ is said to be injective if: $T(u) = T(v)$ implies that $u = v$. $\mathrm{null} (T) = \{ 0 \}$. Furthermore, the linear map $T : V \to W$ is said to be surjective if: For every $w \in W$ there exists a $v \in V$ such that $T(v) = w$. $\mathrm{range} (T) = W$. We will now look at some more examples regarding injective/surjective linear maps. Example 1 Let $\{ v_1, v_2, ..., v_n\}$ be a linearly independent set of vectors in $V$ and let $T \in \mathcal L (V, W)$ be an injective linear map. Prove that then $\{ T(v_1), T(v_2), ..., T(v_n) \}$ is a linearly independent set of vectors in $W$. For $a_1, a_2, ..., a_n \in \mathbb{F}$, consider the following vector equation: $$a_1T(v_1) + a_2T(v_2) + ... + a_nT(v_n) = 0 \tag{1}$$ By the linearity of $T$ this gives $T(a_1v_1 + a_2v_2 + ... + a_nv_n) = 0 = T(0)$. Now since $T$ is injective, we have that $a_1v_1 + a_2v_2 + ... + a_nv_n = 0$. We are given that $\{ v_1, v_2, ..., v_n \}$ is a linearly independent set of vectors in $V$, which implies that $a_1 = a_2 = ... = a_n = 0$. Therefore $\{ T(v_1), T(v_2), ..., T(v_n) \}$ is a linearly independent set of vectors in $W$. Example 2 Reprove that if $T \in \mathcal L (V, W)$ then the linear map $T$ is injective if and only if $\mathrm{null} (T) = \{ 0 \}$. $\Rightarrow$ Suppose that $T$ is injective. We want to show that $\mathrm{null} (T) = \{ 0 \}$. Let $v \in \mathrm{null} (T)$. Then $T(v) = 0 = T(0)$. Now since $T$ is injective, $T(v) = T(0)$ implies that $v = 0$, so $v \in \{ 0 \}$. Furthermore, if we let $v \in \{ 0 \}$ then $v = 0$ and clearly $0 \in \mathrm{null} (T)$ since $T(0) = 0$. Therefore $\mathrm{null} (T) = \{ 0 \}$. $\Leftarrow$ Suppose now that $\mathrm{null} (T) = \{ 0 \}$. We want to show that $T$ is injective. Let $u, v \in V$ and suppose that $T(u) = T(v)$. Then we have that $T(u) - T(v) = 0$ so $T(u - v) = 0$, which implies that $(u - v) \in \mathrm{null} (T) = \{ 0 \}$, so $u - v = 0$ and so $u = v$. Therefore $T$ is injective.
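Example 1 can be illustrated numerically (a sketch with hypothetical matrices, not from the text): a full-column-rank matrix gives an injective map, and it sends independent columns to independent columns.

```python
import numpy as np

# Hypothetical instance: T : R^3 -> R^4 given by a full-column-rank matrix,
# which makes the map injective (null(T) = {0}).
T = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])

# A linearly independent set {v1, v2, v3} in R^3 (the columns of V)
V = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

assert np.linalg.matrix_rank(T) == 3      # T is injective
assert np.linalg.matrix_rank(V) == 3      # {v1, v2, v3} is independent
TV = T @ V                                # columns are T(v1), T(v2), T(v3)
assert np.linalg.matrix_rank(TV) == 3     # the images remain independent
print("independence preserved")
```

The rank checks are exactly the statement of Example 1 in matrix form: injectivity is full column rank of $T$, and independence of the images is full column rank of $TV$.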
Image shows a typical layer somewhere in a feed forward network: $a_i^{(k)}$ is the activation value of the $i^{th}$ neuron in the $k^{th}$ layer. $W_{ij}^{(k)}$ is the weight connecting $i^{th}$ neuron in the $k^{th}$ layer to the $j^{th}$ neuron in the $(k+1)^{th}$ layer. $z_j^{(k+1)}$ is the pre-activation function value for the $j^{th}$ neuron in the $(k+1)^{th}$ layer. Sometimes this is called the "logit", when used with logistic functions. The feed forward equations are as follows: $z_j^{(k+1)} = \sum_i W_{ij}^{(k)}a_i^{(k)}$ $a_j^{(k+1)} = f(z_j^{(k+1)})$ For simplicity, bias is included as a dummy activation of 1, implicitly included in the sums over $i$. I can derive the equations for back propagation on a feed-forward neural network, using the chain rule and identifying individual scalar values in the network (in fact I often do this as a paper exercise just for practice): Given $\nabla a_j^{(k+1)} = \frac{\partial E}{\partial a_j^{(k+1)}}$ as the gradient of the error function with respect to a neuron output. 1. $\nabla z_j^{(k+1)} = \frac{\partial E}{\partial z_j^{(k+1)}} = \frac{\partial E}{\partial a_j^{(k+1)}} \frac{\partial a_j^{(k+1)}}{\partial z_j^{(k+1)}} = \nabla a_j^{(k+1)} f'(z_j^{(k+1)})$ 2. $\nabla a_i^{(k)} = \frac{\partial E}{\partial a_i^{(k)}} = \sum_j \frac{\partial E}{\partial z_j^{(k+1)}} \frac{\partial z_j^{(k+1)}}{\partial a_i^{(k)}} = \sum_j \nabla z_j^{(k+1)} W_{ij}^{(k)}$ 3. $\nabla W_{ij}^{(k)} = \frac{\partial E}{\partial W_{ij}^{(k)}} = \frac{\partial E}{\partial z_j^{(k+1)}} \frac{\partial z_j^{(k+1)}}{\partial W_{ij}^{(k)}} = \nabla z_j^{(k+1)} a_{i}^{(k)}$ So far, so good. However, it is often better to recall these equations using matrices and vectors to represent the elements. I can do that, but I am not able to figure out the "native" representation of the equivalent logic in the middle of the derivations.
I can figure out what the end forms should be by referring back to the scalar version and checking that the multiplications have correct dimensions, but I have no idea why I should put the equations in those forms. Is there actually a way of expressing the tensor-based derivation of back propagation, using only vector and matrix operations, or is it a matter of "fitting" it to the above derivation? Using column vectors $\mathbf{a}^{(k)}$, $\mathbf{z}^{(k+1)}$, $\mathbf{a}^{(k+1)}$ and weight matrix $\mathbf{W}^{(k)}$ plus bias vector $\mathbf{b}^{(k)}$, the feed-forward operations are: $\mathbf{z}^{(k+1)} = \mathbf{W}^{(k)}\mathbf{a}^{(k)} + \mathbf{b}^{(k)}$ $\mathbf{a}^{(k+1)} = f(\mathbf{z}^{(k+1)})$ Then my attempt at the derivation looks like this: 1. $\nabla \mathbf{z}^{(k+1)} = \frac{\partial E}{\partial \mathbf{z}^{(k+1)}} = ??? = \nabla \mathbf{a}^{(k+1)} \odot f'(\mathbf{z}^{(k+1)})$ 2. $\nabla \mathbf{a}^{(k)} = \frac{\partial E}{\partial \mathbf{a}^{(k)}} = ??? = {\mathbf{W}^{(k)}}^{T} \nabla \mathbf{z}^{(k+1)}$ 3. $\nabla \mathbf{W}^{(k)} = \frac{\partial E}{\partial \mathbf{W}^{(k)}} = ??? = \nabla\mathbf{z}^{(k+1)} {\mathbf{a}^{(k)}}^T $ Where $\odot$ represents element-wise multiplication. I've not bothered showing the equation for the bias. Where I have put ???, I am not sure of the correct way to go from the feed-forward operations and knowledge of linear differential equations to establish the correct form of the equations. I could just write out some partial derivative terms, but I have no clue as to why some should use element-wise multiplication, others matrix multiplication, and why the multiplication order has to be as shown, other than that this clearly gives the correct result in the end. I am not even sure if there is a purely tensor derivation, or whether it is all just a "vectorisation" of the first set of equations. But my algebra is not that good, and I'm interested to find out for certain either way.
I feel it might do me some good in comprehending work in e.g. TensorFlow if I had a better native understanding of these operations, by thinking more with tensor algebra. Sorry about the ad-hoc/wrong notation. I understand now that $\nabla a_j^{(k+1)}$ is more properly written $\nabla_{a_j^{(k+1)}}E$, thanks to Ehsan's answer. What I really wanted there was a short reference variable to substitute into the equations, as opposed to the verbose partial derivatives.
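One way to convince oneself that the vectorized forms above match the scalar chain rule is a small numerical sketch (my own illustration, not from the original post; the layer sizes, the tanh activation, and the toy error $E = \tfrac{1}{2}\lVert \mathbf{a}^{(k+1)}\rVert^2$ are arbitrary assumptions):

```python
import numpy as np

# Toy layer: 4 inputs -> 3 outputs, tanh activation (all values arbitrary)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = rng.normal(size=(3, 1))
a_k = rng.normal(size=(4, 1))

f = np.tanh
f_prime = lambda z: 1.0 - np.tanh(z) ** 2

# Forward pass
z = W @ a_k + b
a_next = f(z)

# Toy error E = 0.5 * ||a_next||^2, so dE/da_next = a_next
grad_a_next = a_next

# The three vectorized backprop equations
grad_z = grad_a_next * f_prime(z)   # 1. element-wise (Hadamard) product
grad_a = W.T @ grad_z               # 2. transpose-matrix product
grad_W = grad_z @ a_k.T             # 3. outer product

# Finite-difference check of one entry of grad_W
eps = 1e-6
i, j = 1, 2
W_pert = W.copy()
W_pert[i, j] += eps
E0 = 0.5 * np.sum(f(W @ a_k + b) ** 2)
E1 = 0.5 * np.sum(f(W_pert @ a_k + b) ** 2)
numeric = (E1 - E0) / eps
print(abs(numeric - grad_W[i, j]))  # tiny: analytic and numeric gradients agree
```

The Hadamard product in step 1 appears because $f$ acts element-wise (its Jacobian is diagonal), while the transposes in steps 2 and 3 are exactly what makes the dimensions of the Jacobian-vector products line up.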
Sorry for the sparse updates, due to the mid-term examinations in General Relativity and Statistical Mechanics… (I won't mention that I only finished the General Relativity midterm this morning…) \(\quad\)We are going to deduce Einstein's equation in two ways: one based on physical analysis, the other utilizing variational methods. Sec 1.1 Energy-Momentum Tensor Def: \(\quad\)Given the action \(I\), the energy-momentum tensor of a physical system is defined as $$T_{\mu\nu}:=-\dfrac{2}{\sqrt{-g}}\dfrac{\delta I}{\delta g^{\mu\nu}},$$ where \(\displaystyle\dfrac{\delta I}{\delta g^{\mu\nu}}\) is the variational derivative and \(g\) represents the determinant of the matrix \((g_{\mu\nu})\). \(\quad\)Take electromagnetism as an example. In electrodynamics, the action is (there may be some puzzles for you about the coefficients, but that does not matter at all) \begin{equation}\label{1.1.1}I_{em}=-\dfrac{1}{4}\int d^{4}x\sqrt{-g}F^{\mu\nu}F_{\mu\nu}.\end{equation} \(\quad\)Here I would like to bring in the \(\text{electromagnetic vector potential }A_{\mu}\) such that $$F_{\mu\nu}\equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}.$$ It is a valuable exercise to check that this definition automatically satisfies one of the 4-dimensional Maxwell equations, \(\nabla_{[\alpha}F_{\mu\nu]}=0\). Additionally, you can re-express the Maxwell equations via the \(\text{Hodge star }*:\Lambda^{k}(V)\rightarrow\Lambda^{n-k}(V)\), which you can find in most textbooks on differential manifolds. Here I just skip it because it's far off the main beam of this chapter : ) To show that our energy-momentum tensor is compatible with what we are familiar with, we are to prove that \begin{equation}\label{1.1.2}T_{\rho\sigma}^{em}=-\dfrac{2}{\sqrt{-g}}\dfrac{\delta I_{em}}{\delta g^{\rho\sigma}}=-\dfrac{1}{4}g_{\rho\sigma}F^{2}+F_{\rho\nu}F_{\sigma}^{~\nu}.\end{equation} Proof: (temporarily skipped) Sec 1.2 Derivation Based on Functional Derivatives \(\quad\)Undoubtedly it was brilliant physical analysis by which Einstein first wrote down the Field Equation.
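The skipped proof can be sketched as follows (using the standard variations \(\delta\sqrt{-g}=-\tfrac{1}{2}\sqrt{-g}\,g_{\rho\sigma}\delta g^{\rho\sigma}\) and \(F^{2}\equiv F^{\mu\nu}F_{\mu\nu}=g^{\mu\rho}g^{\nu\sigma}F_{\rho\sigma}F_{\mu\nu}\), where only the explicit inverse metrics are varied since \(F_{\mu\nu}\) is metric-independent):

```latex
\begin{align*}
\delta I_{em}
  &= -\frac{1}{4}\int d^{4}x\left[(\delta\sqrt{-g})\,F^{2}
     + \sqrt{-g}\,\delta\!\left(g^{\mu\rho}g^{\nu\sigma}\right)F_{\rho\sigma}F_{\mu\nu}\right]\\
  &= -\frac{1}{4}\int d^{4}x\,\sqrt{-g}\left[-\frac{1}{2}g_{\rho\sigma}F^{2}
     + 2F_{\rho\nu}F_{\sigma}^{~\nu}\right]\delta g^{\rho\sigma},
\end{align*}
\text{so that}\quad
T^{em}_{\rho\sigma}
  = -\frac{2}{\sqrt{-g}}\frac{\delta I_{em}}{\delta g^{\rho\sigma}}
  = -\frac{1}{4}g_{\rho\sigma}F^{2} + F_{\rho\nu}F_{\sigma}^{~\nu}.
```

The factor of 2 in the second term comes from the two symmetric ways of varying \(g^{\mu\rho}g^{\nu\sigma}\), which give equal contributions after relabeling the dummy indices.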
The reason I give you the other approach first is pedagogical. Axiom (Hilbert Action): \(\quad\)The action of general relativity is given by Hilbert as $$I_{G}=\dfrac{1}{16\pi G}\int dx^{4}\sqrt{-g}R,$$ where \(R\) is the Ricci scalar. Proposition (Einstein's Field Equation): \begin{equation}\label{1.2.1}R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R=-8\pi GT_{\mu\nu}\end{equation} Proof: $$\delta I_{G}=\dfrac{1}{16\pi G}\int dx^{4}\bigg[(\delta\sqrt{-g})g^{\mu\nu}R_{\mu\nu}+\sqrt{-g}(\delta g^{\mu\nu})R_{\mu\nu}+\sqrt{-g}g^{\mu\nu}\delta R_{\mu\nu}\bigg].$$ There is no harm in carrying out the derivation in a local inertial coordinate system (think why!), that is, \(g_{\mu\nu}\rightarrow\eta_{\mu\nu}\) and \(\varGamma_{\mu\nu}^{\sigma}=0\). Thus, \begin{align*}\delta R_{\mu k}=\partial_{k}\delta\varGamma_{\mu\lambda}^{\lambda}-\partial_{\lambda}\delta\varGamma_{\mu k}^{\lambda}+0+0-0-0.\end{align*} And it is easy to check that \(\delta\varGamma\) is always a tensor. (Verify! Hint: consider two distinct connections defined on the manifold and the behavior of their affine connections under a coordinate transformation.) \(\quad\)In this way, \begin{align*}\sqrt{-g}g^{\mu\nu}\delta R_{\mu\nu}&=\sqrt{-g}g^{\mu\nu}(\partial_{\nu}\delta\varGamma_{\mu\lambda}^{\lambda})-\sqrt{-g}g^{\mu\nu}(\partial_{\lambda}\delta\varGamma_{\mu\nu}^{\lambda})\\&=\partial_{\nu}(\sqrt{-g}g^{\mu\nu}\delta\varGamma_{\mu\lambda}^{\lambda})-\partial_{\lambda}(\sqrt{-g}g^{\mu\nu}\delta\varGamma_{\mu\nu}^{\lambda}),\end{align*} which vanishes at the boundary of the integration region at \(\infty\) (Gauss's theorem).
\(\quad\)So \begin{align*}\delta I_{G}&=\dfrac{1}{16\pi G}\int dx^{4}\bigg[(\delta\sqrt{-g})g^{\mu\nu}R_{\mu\nu}+\sqrt{-g}(\delta g^{\mu\nu})R_{\mu\nu}\bigg]\\&=\dfrac{1}{16\pi G}\int dx^{4}\left[\dfrac{\sqrt{-g}}{2}g^{\rho\sigma}g^{\mu\nu}R_{\mu\nu}\delta g_{\rho\sigma}+\sqrt{-g}(-g^{\mu\rho}g^{\nu\sigma}\delta g_{\rho\sigma})R_{\mu\nu}\right]\\&=\dfrac{1}{16\pi G}\int dx^{4}\sqrt{-g}\delta g_{\rho\sigma}\left(\dfrac{1}{2}g^{\rho\sigma}R-R^{\rho\sigma}\right),\end{align*} or, using \(\delta g_{\rho\sigma}=-g_{\rho\mu}g_{\sigma\nu}\delta g^{\mu\nu}\) to trade \(\delta g_{\rho\sigma}\) for \(\delta g^{\rho\sigma}\), which flips the sign and lowers the indices (there are some nontrivial index-contraction tricks in the above computation that I believe you can handle, so I just leave them as exercises), $$\displaystyle\dfrac{\delta I_{G}}{\delta g^{\rho\sigma}}=\dfrac{\sqrt{-g}}{16\pi G}\left(R_{\rho\sigma}-\dfrac{1}{2}g_{\rho\sigma}R\right).$$ \(\quad\)On the other hand, \(\displaystyle\dfrac{\delta I_{G}}{\delta g^{\rho\sigma}}=-\dfrac{\sqrt{-g}}{2}T_{\rho\sigma}\). \(\quad\)Equating the two expressions, we get \(\text{Einstein's Field Equation}\): $$R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R=-8\pi GT_{\mu\nu}.$$
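As a quick consistency check (a sketch, using the sign conventions of this chapter), contract the field equation with \(g^{\mu\nu}\) in four dimensions:

```latex
g^{\mu\nu}\left(R_{\mu\nu}-\tfrac{1}{2}g_{\mu\nu}R\right)
  = R - 2R = -R = -8\pi G\,T,
\qquad T \equiv g^{\mu\nu}T_{\mu\nu},
```

so \(R = 8\pi G\,T\), and substituting back gives the trace-reversed form \(R_{\mu\nu} = -8\pi G\left(T_{\mu\nu}-\tfrac{1}{2}g_{\mu\nu}T\right)\), which is often more convenient, e.g. in vacuum it reduces immediately to \(R_{\mu\nu}=0\).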
Newform invariants Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2,\beta_3\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form. Basis of the coefficient ring in terms of a root \(\nu\) of \(x^{4} - 2x^{3} - 16x^{2} - 5x + 4\): \(\beta_{0} = 1\), \(\beta_{1} = \nu\), \(\beta_{2} = (\nu^{3} - \nu^{2} - 20\nu - 10)/3\), \(\beta_{3} = (-2\nu^{3} + 5\nu^{2} + 28\nu - 1)/3\). Conversely: \(1 = \beta_0\), \(\nu = \beta_{1}\), \(\nu^{2} = \beta_{3} + 2\beta_{2} + 4\beta_{1} + 7\), \(\nu^{3} = \beta_{3} + 5\beta_{2} + 24\beta_{1} + 17\). For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. This newform does not admit any (nontrivial) inner twists. Sign for \(p = 23\): \(1\). This newform can be constructed as the kernel of the linear operator \(T_{2}^{4} - 2T_{2}^{3} - 24T_{2}^{2} + 61T_{2} + 2\) acting on \(S_{4}^{\mathrm{new}}(\Gamma_0(23))\).
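The inverse relations above can be verified as exact polynomial identities in \(\nu\) (a quick stdlib check; no reduction modulo the quartic is even required):

```python
from fractions import Fraction as F

# Polynomials in nu as coefficient lists [c0, c1, c2, c3] (c_k is the nu^k coefficient)
beta1 = [F(0), F(1), F(0), F(0)]
beta2 = [F(-10, 3), F(-20, 3), F(-1, 3), F(1, 3)]   # (nu^3 - nu^2 - 20nu - 10)/3
beta3 = [F(-1, 3), F(28, 3), F(5, 3), F(-2, 3)]     # (-2nu^3 + 5nu^2 + 28nu - 1)/3
one   = [F(1), F(0), F(0), F(0)]

def lin(*terms):
    """Linear combination of (coefficient, polynomial) pairs."""
    out = [F(0)] * 4
    for c, p in terms:
        for k in range(4):
            out[k] += c * p[k]
    return out

# nu^2 = beta3 + 2*beta2 + 4*beta1 + 7
assert lin((F(1), beta3), (F(2), beta2), (F(4), beta1), (F(7), one)) == [F(0), F(0), F(1), F(0)]
# nu^3 = beta3 + 5*beta2 + 24*beta1 + 17
assert lin((F(1), beta3), (F(5), beta2), (F(24), beta1), (F(17), one)) == [F(0), F(0), F(0), F(1)]
print("basis relations verified")
```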
Preprints (rote Reihe) des Fachbereichs Mathematik. Year of publication: 1998. 319: The Kallianpur-Robbins law describes the long-term asymptotic behaviour of the distribution of the occupation measure of a Brownian motion in the plane. In this paper we show that this behaviour can be seen at every typical Brownian path by choosing either a random time or a random scale according to the logarithmic laws of order three. We also prove a ratio ergodic theorem for small scales outside an exceptional set of vanishing logarithmic density of order three. 306: In this paper we study the space-time asymptotic behavior of the solutions and derivatives of the incompressible Navier-Stokes equations. Using moment estimates we obtain that strong solutions to the Navier-Stokes equations which decay in \(L^2\) at the rate of \(||u(t)||_2 \leq C(t+1)^{-\mu}\) will have the following pointwise space-time decay \[|D^{\alpha}u(x,t)| \leq C_{k,m} \frac{1}{(t+1)^{\rho_o}(1+|x|^2)^{k/2}}, \] where \( \rho_o = (1-2k/n)( m/2 + \mu) + 3/4(1-2k/n)\) and \(|\alpha| = m\). The dimension \(n\) satisfies \(2 \leq n \leq 5\), with \(0\leq k\leq n\) and \(\mu \geq n/4\). 299: We propose a new discretization scheme for solving ill-posed integral equations of the third kind. Combining this scheme with Morozov's discrepancy principle for Landweber iteration, we show that for some classes of equations such a method requires a number of arithmetic operations of smaller order than the collocation method to approximately solve an equation with the same accuracy.
I have been trying to understand why one should look into $c=9, N=2$ superconformal models like the Gepner models or the Kazama-Suzuki models, and I am quite confused. This is what I understood from the arguments I've seen (mainly from [1]): The usual way to deal with the $6$ spacetime dimensions which we do not observe in reality is by assuming that we can write the $10$-dimensional spacetime manifold as a product $M_4 \times K$, where $M_4$ is Minkowski space together with the $4$-dimensional superstring action with $4$ free bosons and $4$ free fermions, forming a $c= 4\times \frac{3}{2} = 6$, $N=1$ superconformal field theory, and the six-dimensional manifold $K$ is called the internal manifold and has to be compactified. Its field theory must be supersymmetric and have central charge $c=15-6=9$. In order to ensure $\mathcal{N}=1$ spacetime supersymmetry, one forces both the internal and the external CFTs to actually be $N=2$ SCFTs: the spectral flow plays the part of the spacetime supersymmetry generator $Q$. Therefore, if we want to understand the internal manifold, we should study $c=9$, $N=2$ SCFTs. My problems with this: We deduced that $c=9$ because we used that the superstring action was just an action with free bosons and free fermions (the RNS action). This action is not $N=2$ supersymmetric (only $N=1$), so by demanding the theory to have $N=2$ SUSY don't we have to throw away all the results we obtained from the RNS superstring? In particular, $c\ne 15$ if $N=2$, because in that case we would have two fermionic superpartners for each boson, not one. This seems to ensure 10D spacetime SUSY, but don't we also want 4D spacetime SUSY? What is the connection between this and the Calabi-Yau approach? Is one approach more general than the other? Do these SCFTs somehow correspond to specific CY manifolds? Please answer even if you only know the answer to one of my questions. Thank you.
[1] Greene, B. (1997). String Theory on Calabi-Yau Manifolds. arXiv preprint hep-th/9702155.
$165=(3)(5)(11)$ $\sigma(p^a) = \frac{p^{a+1}-1}{p-1} = 3$ $p^{a+1}-1 = 3p-3$ $p^{a+1} = 3p-2$ Got stuck here. How do I proceed? There is a product-of-prime-series formula for $\sigma(n)$. If the prime factorisation of $n$ is $\prod_{i=1}^rp_i^{a_i}$: $$\sigma(n)=\prod_{i=1}^r\sum_{j=0}^{a_i}p_i^j$$ We make a table of $\sum_{k=0}^ap^k$ for small primes $p$ and exponents $a$. For each prime, we stop listing sums if they would surpass 165:

        a = 0    1    2    3    4    5    6
p = 2       1  (3)    7 (15)   31   63  127
p = 3       1    4   13   40  121
p = 5       1    6   31  156
p = 7       1    8   57
p = 11      1   12  133
p = 13      1   14
...
p = 163     1  164

We seek the numbers 3, 5, 11, 15, 33, 55 and 165 – the divisors of 165 except 1 – in the table. If we can find any subset of those numbers that lies in distinct rows and whose product is 165, we can use the multiplicative property of the divisor function to construct a $k$ with $\sigma(k)=165$. Except that we can't find any such subset: only 3 and 15 appear in the table (they are marked with brackets). Hence we conclude that there is no $k$ with $\sigma(k)=165$. Indeed, A007369, the numbers $n$ such that $\sigma(x)=n$ has no solution, contains 165. The easy way: look at a table of $\sigma(k)$ for $k \le 164$, e.g. the one at OEIS sequence A000203, and pick out the value $165$. Proceeding step-by-step to look for possible $n$ such that $\sigma(n) = 165$: For prime $p$, $\sigma(p)=p+1$. Since $164$ is not prime, this is not a possibility for $\sigma(n) = 165$. Next for a prime power $p^k$, $\sigma(p^k)=p^k + p^{k-1} + \cdots + p+1$. For $p=2$ we also know that $\sigma(2^k)=2^{k+1}-1$. Since $166$ is not a power of $2$, $n=2^k$ is not feasible.
For other primes, we can also observe that for odd $p$, this sum is odd iff $k$ is even. So we can quickly evaluate $\sigma(p^k)$ for the small even powers of primes $3,5,7,11$ and we will know that higher primes are not feasible (since $13^2>165$): \begin{array}{c|c} p & k & \sigma(p^k)\\\hline 3 & 2 & 13\\ 3 & 4 & 121\\ 5 & 2 & 31\\ 7 & 2 & 57\\ 11 & 2 & 133\\ \end{array} None of these numbers are $165$, showing that $n = p^k$ is not a feasible solution. Also, none of them divide $165$ (even though $\sigma(2)$ and $\sigma(8)$ do). Since $\sigma$ is multiplicative between prime powers (that is, $\sigma(p^kq^m) = \sigma(p^k)\sigma(q^m)$), this means that there are no solutions to $\sigma(n) = 165$, since we require more than one prime divisor. Since it seems that you are looking for the "hard" way: Let $n$ be a positive integer such that $\sigma(n)=165.$ Then $\prod_{p\mid n}(p^{\alpha_p+1}-1)/(p-1)=3\cdot5\cdot11,$ where the product is taken over all primes dividing $n$ and where $\alpha_p$ is the exponent of $p$ in the prime decomposition of $n.$ Now note that if $p\mid n$ then $\alpha_p+1\geqslant2$ so $(p^{\alpha_p+1}-1)/(p-1)>1.$ Hence, since the RHS of the equality above has exactly three prime factors, $n$ has at most three prime factors. If $n=p^a$ for some prime $p$ and some integer $a>0$ then $\sigma(n)=(p^{a+1}-1)/(p-1)$ so $164=2^2\cdot41=p(165-p^a)$ so $p\in\{2,41\}.$ It is easy to check that none of these works. Thus $n$ has either two or three prime factors. If $n=p^aq^b,$ with $p$ and $q$ primes, $p\neq q$ and $a,b>0$ then we have three possibilities. If we set $\alpha:=(p^{a+1}-1)/(p-1)$ and $\beta:=(q^{b+1}-1)/(q-1),$ the three possibilities are: $(i)$ $\alpha=15$ and $\beta=11.$ Then $14=2\cdot7=p(15-p^a)$ so $p\in\{2,7\}.$ In the same way we find that $q\in\{2,5\}.$ If $p=2$ then $a=3$ and also since $p\neq q$ and $10=2\cdot5=q(11-q^b)$ then $q=5$ so $5^b=9,$ which is impossible. If $p=7$ then $7^a=13,$ which is impossible. 
$(ii)$ $\alpha=33$ and $\beta=5.$ Then $32=2^5=p(33-p^a)$ so $p=2.$ Also $4=2^2=q(5-q^b)$ so $q=2,$ which contradicts the fact that $p\neq q.$ $(iii)$ $\alpha=3$ and $\beta=55.$ Then $2=p(3-p^a)$ so $p=2$ and $a=1.$ Also $54=2\cdot3^3=q(55-q^b)$ and since $q\neq p$ then $q=3$ and thus $3^b=37,$ which is impossible. It remains to see what happens when $n=p^aq^br^c,$ where $p,q,r$ are pairwise distinct primes and $a,b,c$ are positive integers. Now note that there is only one possibility, which is $(p^{a+1}-1)/(p-1)=3,$ $(q^{b+1}-1)/(q-1)=5$ and $(r^{c+1}-1)/(r-1)=11.$ Then $p=q=2,$ which contradicts the fact that $p\neq q. $ Thus there is no $n$ with $\sigma(n)=165.$
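The conclusion can also be confirmed by brute force: since $\sigma(n) \ge n+1$ for $n > 1$, any solution of $\sigma(n) = 165$ would satisfy $n \le 164$, so a short search settles it (a sketch assuming only the definition of $\sigma$):

```python
def sigma(n):
    """Sum of all positive divisors of n, via trial division up to sqrt(n)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d + (n // d if d != n // d else 0)
        d += 1
    return total

# sigma(n) >= n + 1 for n > 1, so any solution of sigma(n) = 165
# must satisfy n <= 164.
hits = [n for n in range(1, 165) if sigma(n) == 165]
print(hits)  # []
```

The empty result agrees with the membership of 165 in A007369 noted above.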
The reasoning of your proof follows: "if the limit $\lim_{n \to \infty}na_n$ exists and is equal to some $a$, then $a = 0$." This tells us nothing of the truth of the statement, because you need to prove that the first part is true, i.e. that the limit actually exists. To prove that $\lim_{n \to \infty}na_n$ exists, we will have to consider what we have available to us: $\sum_{n=1}^{\infty} a_n$ converges. For any $n$, we have $a_{n} > a_{n+1}$. (Note that the terms are positive: they are strictly decreasing, and since the series converges they must tend to $0$, so every term exceeds $0$.) Since the partial sums converge by list item 1, by the Cauchy criterion, for any $\frac{\epsilon}{2} > 0$ there exists an $N_0$ such that for any $n > N_0$ and $p \geq 1$, we have that: $$\left \lvert\sum_{i=n}^{n+p}a_i \right \rvert < \frac{\epsilon}{2}$$ in other words, there is some point $N_0$ after which you have: $$\left \lvert a_n + a_{n+1} + \cdots + a_{n+p} \right \rvert < \frac{\epsilon}{2}$$ for any choice of $n > N_0$ and $p \geq 1$. Now we want to introduce the $na_n$ sequence into this inequality somehow and maybe bound it. What can you say about the summation? Well, the sequence is decreasing, so we have $a_{n+p} < a_{n+i}$ for all $i < p$. So with our inequality, we get: $$\left \lvert pa_{n+p} \right \rvert = \left \lvert a_{n+p} + a_{n+p} + \cdots + a_{n+p} \right \rvert < \left \lvert a_n + a_{n+1} + \cdots + a_{n+p} \right \rvert < \frac{\epsilon}{2}$$ Since we want to investigate the sequence $na_n$, we can set $p = n$ and multiply everything by $2$ to get: $$\left \lvert 2na_{2n} \right \rvert < 2 \left \lvert a_n + a_{n+1} + \cdots + a_{2n} \right \rvert < \epsilon$$ So we have bounded the sequence $2na_{2n}$ by $\epsilon$ and thus it converges to $0$. However, $2na_{2n}$ is only a subsequence, i.e. just the even terms of $na_n$, so we aren't done. It is known that if both the even and odd terms of a sequence converge to the same limit, then the entire sequence converges to that limit. We have proved that the even terms converge to $0$. Can you extend this to the odd terms?
What can you say about the sequence $(2n+1)a_{2n+1}$ in terms of what it must be smaller or bigger than? Hint: use the information from the bullet points.
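Following the hint, one way to finish the odd case (a sketch, using the positivity and monotonicity of the terms):

```latex
0 \le (2n+1)\,a_{2n+1}
  \le (2n+1)\,a_{2n}
  = 2n\,a_{2n} + a_{2n}
  \xrightarrow[n\to\infty]{} 0 + 0 = 0,
```

since $2n\,a_{2n} \to 0$ was just shown and $a_{2n} \to 0$ because the series converges. By the squeeze theorem the odd subsequence $(2n+1)a_{2n+1}$ also tends to $0$, and hence $na_n \to 0$.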
The goal of this case study is to validate the Conjugate Heat Transfer analysis type on the SimScale platform, particularly focusing on applications with volumetric heat generation, i.e. the assignment of heat sources to solid regions. Validation is performed against experimental data and analytical calculations of Ventola et al. [1], who analyzed the thermo-fluid dynamic behavior of air-cooled, unshrouded plate-fin heat sinks (PFHS) under forced convection. The source of heat in that study is a transistor enclosed in a plastic casing. Heat generated by the transistor spreads through the aluminum body of the heat sink, which raises its temperature. Air with lower temperature, on the other side, is forced over the heat sink and between the fins in order to cool the solid. Sufficient airflow is usually ensured by a cooling fan. In the present case, both the heat conduction through the solid material and the convective heat transfer to the air are simulated, hence the word Conjugate in the name. The present case can serve as a reference for any material cooling/heating study that deals with similar physics. In the SimScale workbench, heat sources can be added to the simulation in the Advanced Concepts sub-menu as shown in Fig.1. Heat sources can be assigned in two ways: Both workflows are validated in the present study. Below you can find links to SimScale projects with the corresponding simulations. Fig.2. Geometric model of the heat sink and its dimensions [right figure is from Ventola et al. [1]] The geometry of the heat sink analyzed in this work is shown in Fig.2. It is characterized by its length (L), height (H), thickness (t), number of fins (N), spacing between neighboring fins (p), baseplate width (W) and baseplate thickness (tb). All relevant parameters are shown in Table 1.
L       | H       | t    | N  | p      | W       | tb
57.2 mm | 21.8 mm | 1 mm | 14 | 2.1 mm | 41.4 mm | 8.4 mm

Except for the contact surface area between the heat sink and the heat source, which is As=1.555 cm2, other spatial dimensions of the transistor heat source are not mentioned in Ventola et al. [1]. For that reason, an assumption is made that the heat source is cubic in shape, with a side length of a=12.45 mm. This ensures that the area requirement of the contact face is satisfied. Tool Type: OPENFOAM® Analysis Type: Conjugate heat transfer (Laminar) The computational domain's cross-section dimensions correspond to the experimental setup, where the channel width and height are 7.47 cm and 13.07 cm, respectively. The distance between the heat sink and the inlet and outlet surfaces is decided based on standard CFD guidelines, as shown in Fig.3. The Laminar analysis type is chosen, since in the part of the domain most relevant for this study, between the heat sink's fins, a flow with low Reynolds numbers develops; in this case Re<1000 is expected. Mesh and Element types: A hex-dominant unstructured grid was generated with appropriate refinements in the areas near the heat sink surface and in regions where high gradients in the air flow variables are expected. Figure 4 shows the mesh over the whole computational domain, while Fig.5 shows refinements around the heat sink. The heat sink is made of extruded aluminum alloy with conductivity κs=209 W/m/K. The cooling fluid is air, while the transistor heat source specifications given by the manufacturer are: No material properties of the heat source are provided, so making the assumption of a cubic heat source with side length a=12.45 mm, together with the given thermal resistance, the conductivity of the heat source is defined as: (1)\[ \kappa_{hs} = \frac{\frac{1}{2} a}{R_{jc} A_s},\] with its value being κhs=80 W/m/K. Additionally, the following boundary conditions are prescribed in the simulation: Automatic relaxation was used in all cases.
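Equation (1) can be checked with a few lines (a sketch; the side length is the cubic-source assumption stated above, and Rjc = 0.5 K/W is the value used in the sensitivity study):

```python
# Conductivity of the assumed cubic heat source from equation (1).
a = 12.45e-3     # side length of the assumed cube [m]
R_jc = 0.5       # junction-to-case thermal resistance [K/W]
A_s = 1.555e-4   # contact surface area [m^2]

kappa_hs = (0.5 * a) / (R_jc * A_s)   # [W/m/K]
print(round(kappa_hs, 1))  # ~80.1, i.e. the ~80 W/m/K quoted above
```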
In the reference study [1], experimental and analytical results are expressed as the overall heat resistance between the heat source and the ambient air, Rja. For illustration purposes, the complete array of thermal resistances between junction and air is shown in Fig.6. Rja is composed of four components (Fig.6.): Rja = Rjc + Rcs + Rsa + Rspr, where Rjc, Rcs, Rsa, and Rspr are the junction-to-case, case-to-sink, sink-to-ambient, and spreading resistances, respectively. A comparison of the simulation results for the intermediate mesh fineness (see project link: Rectangular fins – Intermediate at the bottom of the page) with experimental and analytical values for different inlet velocities can be seen in Table 2, as well as in Fig.7. Quantities in the table and result plots are: The value of Tj,s is obtained in post-processing using the following relation: \[T_{j,s} = T_{interface} + P \cdot R_{jc}\] where Tinterface is the area average of the temperature on the surface connecting the heat sink and the heat source, P is the thermal power output of the heat source, and Rjc is the thermal resistance between them, as provided by the manufacturer (see Simulation Setup). Tinterface is obtained using a Result Control Item in the SimScale Workbench. The overall thermal resistance between the cooling air and the junction is obtained by: \[R_{ja,s} = \frac{T_{j,s} - T_a}{P}\]

va [m/s] | Ta [K] | P [W] | Tj,e [K] | Rja,e [K/W] | Rja,t [K/W] | Tj,s [K] | Rja,s [K/W]
5.6      | 296.95 | 56.64 | 371.25   | 1.312       | 1.203       | 367      | 1.237
7.2      | 297.35 | 71.4  | 384.15   | 1.216       | 1.142       | 381.67   | 1.181
8.8      | 297.95 | 82.36 | 391.85   | 1.140       | 1.1         | 392.73   | 1.151
10.2     | 298.25 | 87.32 | 395.05   | 1.109       | 1.071       | 396.24   | 1.122
11.5     | 298.95 | 85.07 | 391.15   | 1.084       | 1.05        | 392.85   | 1.104
12.8     | 299.25 | 76.3  | 377.45   | 1.025       | 1.031       | 382.17   | 1.087
13.9     | 299.65 | 60.24 | 357.55   | 0.961       | 1.017       | 364.16   | 1.071

A trend of thermal resistance decrease can be observed. This can be attributed to increased convective heat transfer from the heat sink surface to the stream of air, which occurs at higher fluid velocities.
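The two post-processing relations can be sketched as follows (the numerical check uses the first row of Table 2, with Rjc = 0.5 K/W as in the sensitivity study):

```python
R_jc = 0.5   # junction-to-case thermal resistance [K/W], from the manufacturer

def junction_temperature(T_interface, P):
    """T_j,s = T_interface + P * R_jc"""
    return T_interface + P * R_jc

def overall_resistance(T_j, T_a, P):
    """R_ja,s = (T_j,s - T_a) / P"""
    return (T_j - T_a) / P

# Check against the first row of Table 2 (va = 5.6 m/s):
# Tj,s = 367 K, Ta = 296.95 K, P = 56.64 W should reproduce Rja,s = 1.237 K/W.
R = overall_resistance(367.0, 296.95, 56.64)
print(round(R, 3))  # 1.237
```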
The overall thermal resistance depends on both convective heat transfer and conduction through the solid material. If only one of the two heat transfer modes is intensified, the overall thermal resistance drops. Once the material is chosen and an object is designed and manufactured, conduction through the solid can't be influenced any more. The only way to enhance cooling at that point is to enhance the convective heat transfer. A mesh convergence study was performed to assess the grid independence of the simulation results. Mesh specifications and cell counts can be found in Table 3. The case with the largest difference between experimental and simulation results was chosen for the mesh convergence study (va=13.9 [m/s]). Simulations were done in projects with the corresponding names: Rectangular fins – Coarse, Rectangular fins – Intermediate and Rectangular fins – Fine, links to which you can find at the bottom of this page.

Fineness | Mesh Operation           | Number of cells | Mesh Type
Coarse   | Hex-dominant parametric  | 2 904 901       | 3D hexahedral
Medium   | Hex-dominant parametric  | 5 128 288       | 3D hexahedral
Fine     | Hex-dominant parametric  | 11 615 447      | 3D hexahedral

For the Coarse and Fine meshes, simulations were completed for all experimental conditions (see Table 2), the results of which are in Fig.9. Fig.9. Comparison of junction-to-air thermal resistance for Coarse and Fine meshes. In sections Geometry and Simulation Setup, the assumptions that had to be made about the dimensions and conductivity of the heat source are explained. With only the thermal resistance between heat source and heat sink, Rjc, given by the authors [1], the conductivity and thickness of the transistor heat source were calculated using equation (1). However, one can argue there is still a degree of freedom in doing that. Do we assume the thickness and calculate the conductivity, or vice versa?
A study was conducted in order to make sure that the result is invariant to thickness and conductivity changes, by varying both parameters in such a way that the thermal resistance Rjc remains 0.5 K/W, as it was in the experiment. It was found that as long as this condition is satisfied there is no significant difference in the result (up to ± 0.15%). Similarly, different numerical settings were compared, with both first-order and second-order schemes and with or without non-orthogonal correctors. Results for temperature and overall thermal resistance Rja,s varied by up to ± 0.25%. All simulation setups and results are available at the project links. Sensitivity studies were performed on the coarsest mesh (Project: CHT: Rectangular fins – Coarse), having in mind the mesh convergence study, which demonstrated its validity (Fig.8. and 9.). [1] Ventola, L., Curcuruto, G., Fasano, M., Fotia, S., Pugliese, V., Chiavazzo, E. and Asinari, P., Unshrouded Plate Fin Heat Sinks for Electronics Cooling: Validation of a Comprehensive Thermal Model and Cost Optimization in Semi-Active Configuration, Energies 8 (9), 2016, DOI:10.3390/en9080609. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software and owner of the OPENFOAM® and OpenCFD® trade marks. OPENFOAM® is a registered trade mark of OpenCFD Limited, producer and distributor of the OpenFOAM software.
A Group Homomorphism that Factors through Another Group

Problem 490

Let $G, H, K$ be groups. Let $f:G\to K$ be a group homomorphism and let $\pi:G\to H$ be a surjective group homomorphism such that the kernel of $\pi$ is contained in the kernel of $f$: $\ker(\pi) \subset \ker(f)$.

Define a map $\bar{f}:H\to K$ as follows. For each $h\in H$, there exists $g\in G$ such that $\pi(g)=h$ since $\pi:G\to H$ is surjective. Define $\bar{f}:H\to K$ by $\bar{f}(h)=f(g)$.

(a) Prove that the map $\bar{f}:H\to K$ is well-defined.
(b) Prove that $\bar{f}:H\to K$ is a group homomorphism.

(a) Prove that the map $\bar{f}:H\to K$ is well-defined.

Let $h\in H$. Suppose that there are two elements $g, g'\in G$ such that $\pi(g)=h$ and $\pi(g')=h$. Then we have
\begin{align*}
\pi(gg'^{-1})=\pi(g)\pi(g')^{-1}=hh^{-1}=1
\end{align*}
since $\pi$ is a homomorphism. Thus,
\[gg'^{-1}\in \ker(\pi) \subset \ker(f).\]
It follows that $f(gg'^{-1})=1$, and hence
\begin{align*}
1=f(gg'^{-1})=f(g)f(g')^{-1},
\end{align*}
so we have
\[f(g)=f(g').\]

Therefore, the definition of $\bar{f}$ does not depend on the choice of the element $g\in G$ with $\pi(g)=h$, and hence $\bar{f}$ is well-defined.

(b) Prove that $\bar{f}:H\to K$ is a group homomorphism.

Our goal is to show that for any elements $h, h'\in H$, we have
\[\bar{f}(hh')=\bar{f}(h)\bar{f}(h').\]

Let $g, g'$ be elements in $G$ such that
\[\pi(g)=h \text{ and } \pi(g')=h'.\]
Then by the definition of $\bar{f}$, we have
\[\bar{f}(h)=f(g) \text{ and } \bar{f}(h')=f(g') \tag{*}.\]

Since $\pi$ is a homomorphism, we have
\begin{align*}
hh'=\pi(g)\pi(g')=\pi(gg').
\end{align*}
By the definition of $\bar{f}$, we have
\[\bar{f}(hh')=f(gg').\]

Since $f$ is a homomorphism, we obtain
\begin{align*}
\bar{f}(hh')&=f(gg')\\
&=f(g)f(g')\\
&\stackrel{(*)}{=} \bar{f}(h)\bar{f}(h').
\end{align*}
This proves that $\bar{f}$ is a group homomorphism.
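As a concrete sanity check (not part of the original problem), the construction can be verified computationally for the hypothetical choice $G=\mathbb{Z}/12$, $H=\mathbb{Z}/4$ with $\pi(g)=g \bmod 4$, and $K=\mathbb{Z}/2$ with $f(g)=g \bmod 2$; here $\ker(\pi)=\{0,4,8\}$ is contained in $\ker(f)$, the even residues, as the problem requires:

```python
# Verify that f-bar is well-defined and a homomorphism for a small example:
# G = Z/12, H = Z/4 with pi(g) = g mod 4, K = Z/2 with f(g) = g mod 2.
# ker(pi) = {0, 4, 8} is contained in ker(f) = even residues, as required.

G = range(12)
pi = lambda g: g % 4      # surjective homomorphism G -> H
f = lambda g: g % 2       # homomorphism G -> K

# Well-definedness: all preimages of each h in H map to one value under f.
fbar = {}
for h in range(4):
    images = {f(g) for g in G if pi(g) == h}
    assert len(images) == 1, f"f-bar would be ill-defined at h = {h}"
    fbar[h] = images.pop()

# Homomorphism property: f-bar(h + h') = f-bar(h) + f-bar(h') in Z/2.
for h1 in range(4):
    for h2 in range(4):
        assert fbar[(h1 + h2) % 4] == (fbar[h1] + fbar[h2]) % 2

print("f-bar is well-defined and a homomorphism:", fbar)
```

If instead we took $f(g)=g \bmod 3$ (so $\ker(\pi) \not\subset \ker(f)$), the well-definedness assertion would fail, illustrating why the kernel hypothesis is essential.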
Integrals of Step Functions on General Intervals

Recall from the Step Functions on General Intervals page that a function $f$ is said to be a step function on the general interval $I$ if there exists a closed and bounded interval $[a, b] \subseteq I$ such that $f$ is a step function in the usual sense on $[a, b]$ and such that $f(x) = 0$ for all $x \in I \setminus [a, b]$.

Now let $f$ be a step function on the interval $I$. Then for some closed and bounded interval $[a, b]$ we have that $f$ is a usual step function on this interval, and so there exists a partition $P = \{ a = x_0, x_1, ..., x_n = b \} \in \mathscr{P}[a, b]$ such that $f$ is constant on each open subinterval $(x_{k-1}, x_k)$. The discontinuities of $f$ occur at the endpoints of these subintervals, so the set of discontinuities is finite, since the partition $P$ breaks $[a, b]$ into finitely many subintervals. In particular, the set of discontinuities has measure $0$, and so by Lebesgue's criterion for the Riemann integrability of a function, $\int_a^b f(x) \: dx$ exists. Furthermore, if $f(x) = c_k$ for $x \in (x_{k-1}, x_k)$ for each $k \in \{1, 2, ..., n \}$ we have that:

(1) $\displaystyle{\int_a^b f(x) \: dx = \sum_{k=1}^{n} c_k (x_k - x_{k-1})}$

Now, since $[a, b] \subseteq I$ and $f(x) = 0$ for all $x \in I \setminus [a, b]$, the integral of $f$ on $I$ should equal the integral above. We formally define this integral below.

Definition: Let $f$ be a step function on the interval $I$. Then there exists an $[a, b] \subseteq I$ such that $f$ is a step function in the usual sense on $[a, b]$ and such that $f(x) = 0$ for all $x \in I \setminus [a, b]$. The Integral of $f$ over $I$ is defined to be $\displaystyle{\int_I f(x) \: dx = \int_a^b f(x) \: dx}$. For brevity, the notation $\int_I f$ can be used in place of $\int_I f(x) \: dx$ when no ambiguity arises.

For example, consider the following step function $f$ on $[0, \infty)$ given by: (2)

Then the integral of $f$ over $I$ is: (3)
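The defining formula $\sum_{k=1}^{n} c_k (x_k - x_{k-1})$ is easy to sketch in code: the integral over the general interval $I$ reduces to a weighted sum over the partition of the bounded interval $[a, b]$ where $f$ is nonzero. The step function used below is a hypothetical example, not the one from the page:

```python
# Integral of a step function over a general interval I:
# only the bounded interval [a, b] where f is nonzero contributes, and
# there the integral is sum of c_k * (x_k - x_{k-1}) over the partition.

def step_integral(partition, values):
    """partition: a = x_0 < x_1 < ... < x_n = b; values: c_1, ..., c_n,
    the constants on the open subintervals (x_{k-1}, x_k).
    Returns sum of c_k * (x_k - x_{k-1})."""
    assert len(values) == len(partition) - 1
    return sum(c * (x1 - x0)
               for c, x0, x1 in zip(values, partition, partition[1:]))

# Hypothetical step function on I = [0, infinity): f = 2 on (0, 1),
# f = -1 on (1, 3), and f = 0 elsewhere, so [a, b] = [0, 3].
print(step_integral([0, 1, 3], [2, -1]))   # 2*1 + (-1)*2 = 0
```

Note that changing the values of $f$ at the finitely many partition points would not change the sum, which mirrors the fact that the Riemann integral is insensitive to the values at those discontinuities.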