Polar Coding Notes: A Simple Proof

For any B-DMC $W$, the channels $\{W_N^{(i)}\}$ polarize in the sense that, for any fixed $\delta \in (0, 1)$, as $N$ goes to infinity through powers of two, the fraction of indices $i \in \{1, \dots, N\}$ for which $I(W_N^{(i)}) \in (1 - \delta, 1]$ goes to $I(W)$ and the fraction for which $I(W_N^{(i)}) \in [0, \delta)$ goes to $1-I(W)$[1].

Mrs. Gerber's Lemma

Mrs. Gerber's Lemma provides a lower bound on the entropy of the modulo-$2$ sum of two binary random vectors[2][3]. Let $h^{-1} : [0, 1] \to [0, 1/2]$ be the inverse of the binary entropy function $h(p) = -p\log p - (1-p)\log(1-p)$, and let $p_0$ be the crossover probability of a binary symmetric channel. The binary convolution of $a$ and $b$ is denoted by $a \ast b = a(1-b) + (1-a)b$.

Convexity: the function $f(u) = h(h^{-1}(u)\ast p_0)$, $u \in [0,1]$, is convex in $u$ for every fixed $p_0 \in (0,1/2]$[2].

Scalar MGL: Let $X$ be a binary random variable and let $U$ be an arbitrary random variable. If $Z \sim \mathrm{Bern}(p)$ is independent of $(X, U)$ and $Y = X \oplus Z$, then $$H(Y|U) \ge h\big(h^{-1}(H(X|U)) \ast p\big).$$

Vector MGL: Let $X^n$ be a binary random vector and $U$ be an arbitrary random variable. If $Z^n$ is a vector of independent and identically distributed $\mathrm{Bern}(p)$ random variables independent of $(X^n, U)$ and $Y^n = X^n \oplus Z^n$, then $${H(Y^n|U) \over n} \ge h\!\left(h^{-1}\!\left({H(X^n|U) \over n}\right) \ast p\right).$$

The scalar case follows from [2]: writing $\beta_0(y) = p(X=1 \mid Y=y)$ and viewing it as a random variable, the conditional entropies become expectations of $h(\cdot)$ and $h(\cdot \ast p)$, and Jensen's inequality applied to the convex function above gives the bound. The vector case follows by conditioning successively, with $\beta_k = p(X_k=1 \mid X_1, \dots, X_{k-1})$, $1 \le k \le n$.

Strict Polarization for Binary-Input Channels

Since $H(X_1|Y_1) \in (0, 1)$, there exists an $\alpha \in (0, 1/2)$ such that $H(X_1|Y_1) = h(\alpha)$.
Thus, with $\Delta(W) := \frac{1}{2}\,[\,I(W^+) - I(W^-)\,]$, what we have concluded is that for every $\delta > 0$, there exists $\kappa(\delta) > 0$ such that if $I(W) \in (\delta, 1 - \delta)$, we have $\Delta(W) \ge \kappa(\delta)$.

Proof of Channel Polarization

Given $W$ and $\delta > 0$, define[4][5] $$\theta_n(\delta) = {1\over 2^n}\,\big|\{\, t \in \{\pm\}^n : I(W^t) \in (\delta, 1-\delta) \,\}\big|.$$ Let $$\mu_n = {1\over 2^n} \sum_{s \in \{\pm\}^n} I(W^s), \qquad \nu_n = {1\over 2^n} \sum_{s \in \{\pm\}^n} [I(W^s)]^2.$$ We have $\begin{align}\mu_{n+1} &= {1\over 2^{n+1}} \sum_{s \in \{\pm\}^{n+1}} I(W^s) \\&= {1\over 2^n} \sum_{t \in \{\pm\}^n} {1\over 2} [I(W^{t+})+I(W^{t-})] \\&= {1\over 2^n} \sum_{t \in \{\pm\}^n} I(W^t) \\&= \mu_n = \mu_0 = I(W) \\ \nu_{n+1} &= {1\over 2^{n+1}} \sum_{s \in \{\pm\}^{n+1}} [I(W^s)]^2 \\&= {1\over 2^n} \sum_{t \in \{\pm\}^n} {[I(W^{t+})]^2+[I(W^{t-})]^2 \over 2} \\&= {1\over 2^n} \sum_{t \in \{\pm\}^n} \big([I(W^t)]^2+[\Delta(W^t)]^2\big) \tag{${a^2+b^2 \over 2} = ({a+b \over 2})^2+({a-b \over 2})^2$} \\&\ge \nu_n + \theta_n(\delta)\kappa(\delta)^2 \tag{definition of $\theta_n(\delta)$} \end{align}$ The sequence $\nu_n$ is thus bounded and monotone and consequently convergent; in particular $\nu_{n+1}-\nu_n$ converges to zero. As $\theta_n(\delta)$ is sandwiched between $0$ and $(\nu_{n+1}-\nu_n)/\kappa(\delta)^2$, two quantities both convergent to zero, we conclude $\theta_n(\delta) \to 0$. This means that for large enough $n$, the fraction of mediocre channels (i.e., those with symmetric capacities in $(\delta, 1 - \delta)$) vanishes to zero. But by preservation of mutual information ($\mu_n = I(W)$), we also know that if we define $\gamma_n(\delta)$ as the fraction of indices with $I(W^s) \in (1-\delta, 1]$, we automatically have $\gamma_n(\delta) \to I(W)$, and the fraction with $I(W^s) \in [0, \delta)$ tends to $1 - I(W)$. This proof by M. Alsan and E. Telatar is much simpler than the martingale-convergence argument used by Arıkan[1].

Reference:
1. E. Arikan. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inform. Theory, vol. 55, no. 7, pp. 3051-3073, July 2009.
2. A. D. Wyner and J. Ziv. A theorem on the entropy of certain binary sequences and applications (Part I). IEEE Trans. Inform. Theory, vol. 19, no. 6, pp. 769-772, Nov. 1973.
3. Abbas El Gamal and Young-Han Kim. Network Information Theory. Cambridge University Press, 2011.
4. M. Alsan and E. Telatar.
A simple proof of polarization and polarization for non-stationary memoryless channels. IEEE Trans. Inform. Theory, vol. 62, no. 9, pp. 4873-4878, 2016.
5. Vincent Y. F. Tan. EE5139R: Information Theory for Communication Systems, 2016/7, Semester 1. https://www.ece.nus.edu.sg/
6. Eren Sasoglu. Polarization and Polar Codes. Foundations and Trends in Communications and Information Theory, vol. 8, no. 4 (2011), pp. 259-381.

Previous post by Lyons Zhang: Polar Coding Notes: Channel Combining and Channel Splitting
Next post by Lyons Zhang: 5G NR QC-LDPC Encoding Algorithm
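The polarization phenomenon above is easy to check numerically for the binary erasure channel, where the single-step transform has a closed form: BEC($\epsilon$) splits into BEC($\epsilon^2$) and BEC($2\epsilon-\epsilon^2$), so the symmetric capacities evolve as $I^- = I^2$ and $I^+ = 2I - I^2$. A minimal sketch (this closed-form recursion is specific to the BEC, not part of the general proof above):

```python
# Numerical check of polarization for the BEC: track I(W^s) over n levels
# of the recursion I(W-) = I^2, I(W+) = 2I - I^2 (exact for erasure channels).
def polarize(I0, n):
    levels = [I0]
    for _ in range(n):
        levels = [f(I) for I in levels
                  for f in (lambda x: x * x,          # minus branch
                            lambda x: 2 * x - x * x)]  # plus branch
    return levels

I0 = 0.5                     # BEC(0.5): I(W) = 0.5
levels = polarize(I0, 12)    # N = 2^12 synthetic channels
delta = 0.1
good = sum(I > 1 - delta for I in levels) / len(levels)
bad = sum(I < delta for I in levels) / len(levels)
# the mean is preserved exactly (mu_n = I(W)); the fractions of good/bad
# channels approach I(W) and 1 - I(W)
print(good, bad, sum(levels) / len(levels))
```

For BEC(0.5) the two branch maps are symmetric under $x \mapsto 1-x$, so the good and bad fractions agree and both creep toward 1/2 as $n$ grows.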
Avinash Singh — Articles written in Pramana – Journal of Physics

Volume 29 Issue 5 November 1987 pp 509-516 Condensed Matter Physics: The spin-correlation length is used to set up an RG analysis of the Hubbard model (within RPA). We demonstrate that an identical critical behaviour is obtained by performing the macroscopic renormalization group analysis with the antisymmetric Landau interaction parameter. The beta functions for the half-filled and quarter-filled band cases have been evaluated.

Volume 44 Issue 1 January 1995 pp 1- Rapid Communication: Static, non-magnetic impurities give rise to gap states in a doped Mott-Hubbard antiferromagnetic insulator. The spectral and spatial features of these gap states are discussed, and it is argued that these gap states are responsible for the observed local-moment behaviour in zinc-doped cuprates.

Volume 49 Issue 5 November 1997 pp 505-514: The three-band Hubbard model — both pure and with static non-magnetic impurities — has been studied within a self-consistent numerical Hartree-Fock (HF) scheme. The system shows nesting properties only in the absence of direct O-O hopping. Spin excitations in the system are gapless, with a Goldstone mode in the broken-symmetry state. The variation of spin-wave velocity with Cu-site Coulomb repulsion shows a $(1/(2d)+(1/\Delta))$ dependence in the strong-coupling limit. Each non-magnetic impurity in the system gives rise to two gap states for a particular spin, and the local moment produced is robust even at finite concentration of mobile hole doping. The gapless Goldstone mode is preserved even in the case of unequal concentrations of impurities on the two sublattices.

Volume 70 Issue 1 January 2008 pp 163-171 Research Articles: The recent neutron scattering data for spin-wave dispersion in HoMnO$_3$ are well described by an anisotropic Hubbard model on a triangular lattice with a planar (XY) spin anisotropy.
Best fit indicates that magnetic excitations in HoMnO$_3$ correspond to the strong-coupling limit $U/t \gtrsim 15$, with planar exchange energy $J = 4t^{2}/U \simeq 2.5$ meV and planar anisotropy $\Delta U \simeq 0.35$ meV.
M Fabiane — Articles written in Pramana – Journal of Physics

Volume 72 Issue 6 June 2009 pp 979-988 Research Articles: We consider corrections to scaling within an approximate theory developed by Mazenko for a nonconserved order parameter in the limits of low $(d \rightarrow 1)$ and high $(d \rightarrow \infty)$ dimensions. The corrections to scaling considered here follow from departures of the initial condition from the scaling morphology. Including corrections to scaling, the equal-time correlation function has the form $C(r, t) = f_{0}(r/L) + L^{-\omega} f_{1}(r/L) + \cdots$, where $L$ is a characteristic length scale (i.e. domain size). The correction-to-scaling exponent $\omega$ and the correction-to-scaling function $f_{1}(x)$ are calculated for both low and high dimensions. In both limits the value is found to be $\omega = 4$, similar to the 1D Glauber model and the OJK theory (the theory developed by Ohta, Jasnow and Kawasaki).
This is the first part of a series on bubbles in the U.S. equity market I will be publishing this summer.

Introduction

The problem with writing about bubbles is that bubbles are hard to define. More troubling is that bubbles are almost always identified ex-post. For example, some claim there was a bubble in the U.S. housing market from 2001-2007 (see Phillips and Yu, 2011) because prices went up a lot and subsequently fell. While mechanically this is true, I'm not convinced that a price rise and fall is enough to constitute a bubble. To justify the use of the word bubble, some argue that the asset somehow deviated from fundamental value. This, too, can be tricky: during the growth of the housing "bubble", the U.S. had low interest rates, large foreign capital inflows, and relaxed lending standards. These are all "fundamental" reasons why housing prices should have gone up during that time.

Asset Pricing Approach

In an introduction to asset pricing, you might see the following. Consider a 2-period asset pricing model in which an agent chooses an asset position $a_t$ to maximize: \begin{equation} u(c_t)+E_t[\beta u(c_{t+1})] \end{equation} subject to: \begin{equation} \begin{split} c_t&=e_t-a_t p_t \\ c_{t+1}&=e_{t+1}+a_t x_{t+1} \end{split} \end{equation} Solution: \begin{equation} p_t=E_t\Bigg[\beta\frac{u'(c_{t+1})}{u'(c_t)}x_{t+1}\Bigg] \end{equation} Now suppose the asset pays dividends, and we extend the model to infinitely many periods; then the solution is: \begin{equation} p_t=E_t \Big[\sum\limits_{j=1}^{\infty} \beta^j\frac{u'(c_{t+j})}{u'(c_t)}D_{t+j} \Big] \end{equation} conditional on imposing the transversality condition: \begin{equation} \lim_{j\rightarrow \infty}E_t\Big[\beta^j \frac{u'(c_{t+j})}{u'(c_t)} p_{t+j}\Big]=0 \end{equation} This is often called the "no-bubbles" condition, as with this assumption the price is determined entirely by fundamental value (dividends).
Another way to think about this is that the price is not going up so fast that people are buying just to re-sell at a higher price. Some authors, like Phillips and Yu (2011), use this idea to create a purely statistical definition of a bubble. If the "no-bubbles" condition holds, then prices should follow a random walk with drift, but if not, prices will exhibit exponential growth. The authors use right-sided Dickey-Fuller tests to determine whether stocks are exhibiting bubble-like behavior (the discussion here is cut short, as this will be the topic of a future blog post).

Back to Basics

The literature on bubbles is vast, and it will be discussed more in future posts. Rather than use econometric techniques or write down a theory model, I wanted to identify "bubble" stocks with very few assumptions. To start, I downloaded all the daily price data from the Center for Research in Security Prices (CRSP) for 2015. I then applied the following filters:

1) Remove all stocks that were not trading for the whole year.
2) Take a 10-day moving average of the price.
3) For each stock, split the year into two types of regimes: when the stock is trading above/below the 10-day moving average.
4) Take the worst return among all regimes, and call this the burst period (call the pre-burst period the "rise" and the post-burst period the "after").
5) Keep only those stocks with at least a 50-percent rise and at least a 30-percent fall.

As a first pass, these are the stocks we're interested in.

Results

Going in, I had a few names that I thought would be good bubble candidates. I was interested in SunEdison (SUNE) and Lumber Liquidators (LL); see below for the plots and the regimes the algorithm identified. To see if we have the right idea, I randomly picked one of the identified stocks; here is the chart: The chart seems promising, as we get a rapid rise and a rapid fall. Doing a little more research, we can see that RXDX is a biotechnology firm that produces oncology medicines.
They lost money in every quarter of 2015 (with the largest loss in the 4th quarter), so this does not explain the price behavior. The biggest decline during the burst was September 25 (Friday) to September 28 (Monday). Looking at their press releases, we can see that they released a clinical trial result on September 27 (Sunday), so it seems that investors were expecting a better result in the clinical trial (see press release) that did not materialize. It's not immediately obvious that this price rise and fall were not justified by fundamentals. Maybe investors thought the new drug was very promising, and given the potential upside, it was "rational" to buy at such a high price. We can only say they were wrong ex-post.

Future Work

The method discussed above identified 165 stocks in 2015 that exhibited bubble-like behavior. The next step is to look more closely into these stocks and determine what caused their rise and fall. The goal is to identify a systematic pattern among stocks that exhibit this kind of price behavior.
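For concreteness, the regime filter described in "Back to Basics" can be sketched as follows. This is a hypothetical reimplementation on a synthetic price path, not the original CRSP code; the function names and the toy rise-and-fall series are my own.

```python
# Sketch of the filter: 10-day moving average, maximal runs above/below it,
# worst regime return = the "burst". Synthetic prices stand in for CRSP data.

def moving_average(prices, w=10):
    out = []
    for i in range(len(prices)):
        window = prices[max(0, i - w + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def regimes(prices, ma):
    """Split indices into maximal runs of above/below the moving average."""
    runs, start = [], 0
    for i in range(1, len(prices)):
        if (prices[i] >= ma[i]) != (prices[i - 1] >= ma[i - 1]):
            runs.append((start, i - 1))
            start = i
    runs.append((start, len(prices) - 1))
    return runs

def burst(prices, w=10):
    ma = moving_average(prices, w)
    # regime return = price change over the run; the minimum is the burst
    rets = [(prices[b] / prices[a] - 1, (a, b)) for a, b in regimes(prices, ma)]
    return min(rets)

# synthetic path: ~60 days up 2%/day, then ~40 days down 3%/day
prices = [100 * 1.02 ** t for t in range(60)] + \
         [100 * 1.02 ** 59 * 0.97 ** t for t in range(1, 40)]
worst, (a, b) = burst(prices)
rise = prices[a] / prices[0] - 1
# candidate "bubble" if rise >= 50% and burst fall >= 30%
print(worst, rise, rise >= 0.5 and worst <= -0.3)
```

On this toy series the burst regime is the long below-average run during the decline, and the stock passes both the 50-percent-rise and 30-percent-fall screens.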
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p.

2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 533 (2004) 442-453

2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron-doped silicon diodes irradiated with alpha particles. It has been shown that self-interstitial-related defects which are immobile even at room temperature can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576

2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in the form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363

2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.)
; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity (Super-LHC and Very-LHC, respectively), as well as the requirements for detectors under the possible radiation-environment scenarios, are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities, where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material: it increases the leakage current of the detector, degrades the signal-to-noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys. 57 (2005), no. 3, pp. 342-348. External link: RORPE

2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented, for p-type and n-type silicon detectors respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976

2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.)
; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the ATLAS SCT detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365

2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors' radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
Title: Time duration for the yellow traffic light Post by: Fu-Kwun Hwang on July 12, 2008, 01:59:37 pm

A yellow light at a traffic intersection should last long enough that a car traveling at the suggested speed can either apply the brakes and decelerate to a stop prior to reaching the front of the intersection, or maintain the same speed and pass through the intersection before the yellow light turns red. If a driver traveling at the suggested speed cannot do either of the two options, then the traffic signal (specifically the time duration of the yellow light) is considered unsafe.

Suppose the speed of the car is $v$ and the friction coefficient between the tires of the car and the road is $\mu$. The maximum braking force is $F= -\mu N = -\mu m g$, and $F=ma$, so $a=-g\mu$. The distance required for the car to come to a full stop (after the brake is applied) is $s=\frac{v^2}{2g\mu}$. Assuming the reaction time of the driver is $\Delta t$, the total distance required to stop the car after the driver sees the yellow light turn on is $D_{min}=v \Delta t + \frac{v^2}{2g\mu}$. The car has to be at least $D_{min}$ away from the intersection to come to a full stop before entering it.

If the distance is smaller than $D_{min}$, the yellow light should last long enough for the car to pass through the intersection. Assume the width of the intersection is $W$ and the duration of the yellow light is $T$. Then it is required that $v T\ge D_{min}+W= v \Delta t + \frac{v^2}{2g\mu}+W$, so $T\ge \Delta t +\frac{v}{2 g\mu}+\frac{W}{v}$. This is the minimum time required for the car to pass the intersection.
However, if the car needs to stop before the traffic light, the minimum distance is $D_{min}=v\Delta t + \frac{v^2}{2g\mu}$. For the car to stop from initial velocity $v$ with acceleration $a=-g\mu$, it needs $t_{brake}=\frac{v}{g\mu}$, from $v(t)=v_0+at$. So the total time required is $T'_{min}=\Delta t+\frac{v}{g\mu}$.

This means the yellow-light duration $T_{yellow}$ needs to satisfy two conditions: $T_{yellow}\ge T_1= \Delta t+\frac{v}{2g\mu}+\frac{W}{v}$ and $T_{yellow}\ge T_2= \Delta t+\frac{v}{g\mu}$. So the yellow time should be larger than the maximum of $T_1,T_2$. The condition for $T_2>T_1$ is $\frac{v}{g\mu}>\frac{v}{2g\mu}+\frac{W}{v}$, which implies $\frac{v}{2g\mu}>\frac{W}{v}$, i.e. $\frac{v^2}{2g\mu}>W$. This condition is the same as the stopping distance being $\ge$ the width of the intersection, which is the case for normal speed limits and traffic lights. However, if $W \ge \frac{v^2}{2g\mu}$, then the minimum yellow time is $T_1 = \Delta t+\frac{v}{2g\mu}+\frac{W}{v}$.

For $v=72$ km/hr $=20$ m/s, $\mu=1$, $\Delta t=0.8$ s, $W=20$ m: $T_1= 0.8+ \frac{20}{2\times 10\times 1}+\frac{20}{20} =2.8$ s and $T_2= 0.8+ \frac{20}{10\times 1}=2.8$ s. So the minimum time required is 2.8 s. However, if the width of the intersection is less than 20 m, then $1.8\ \mathrm{s}<T_1<2.8\ \mathrm{s}$ and $T_2$ determines the minimum yellow time.

The following simulation lets you play as a traffic light control manager: you can change the width $W$ of the intersection, the reaction time of the driver, and the durations of the green and yellow lights. (If you click the rightmost checkbox, the program will show the suggested time for the yellow light.) Code for the car: green: moving at constant speed; red: decelerate; yellow: accelerate. *** The maximum speed and maximum acceleration for each car are randomly selected in the simulation, to make the simulation closer to the real case. I hope you can enjoy it! -*- [ejsapplet]

Let's apply physics principles to estimate the yellow light duration. Suppose the reaction time of the driver is RT, the speed of the car is V, the friction coefficient between tire and road is mu, the mass of the car is m, and gravity is g.
Then the friction force is Fr = -m*g*mu = m*a, so the deceleration is a = g*mu. The minimum stopping distance when the driver sees the light turn yellow is D_min = V*RT + V*V/(2*g*mu). You can adjust the deceleration a directly with the slider control. The friction coefficient is mu = 1.0-1.2 for a normal tire, but that is a strong brake. Normally we do not brake the car with maximum deceleration, so the default value is set to a = 0.5.

The above analysis ignores the length of the car. If the driver wants to stop before reaching the front of the intersection, the minimum distance is D_min. If the distance is less than D_min, the car has to pass the intersection before the end of the yellow light. Suppose the length of the car is d, the width of the intersection is W, and the duration of the yellow light is YT. Then V*YT >= D_min + W + d. For the car to pass the traffic light, the minimum yellow time should be $YT_{min}= \frac{D_{min}+W+d}{V} = \frac{W+d}{V} + RT + \frac{V}{2g\mu}$.

If the yellow light is too short, then some cars would not be able to pass the intersection safely. However, if the driver does not want to brake the car so abruptly (wants a more comfortable stop), replace 2*g*mu with 2*g*mu/k (the above simulation uses k=2 to estimate the yellow time). If the yellow light lasts too long, the driver might not want to stop the car, and when the light turns RED, s/he would not be able to fully stop before the intersection. If we want the car to stop before the traffic light, the minimum yellow time is $RT+\frac{v}{g\mu}$.

Summary: For a very long intersection, $W\ge\frac{v^2}{2g\mu}-d$ (i.e. width of intersection + length of car larger than the stopping distance of the car), the minimum time required is $RT+ \frac{v}{2g\mu}+\frac{W+d}{v}$: reaction time + half the braking time + time to pass the intersection. The extra time is required because we need to make the decision ahead of time.
For a short intersection, where $W\le\frac{v^2}{2g\mu}-d$, the minimum time required is $RT+ \frac{v}{g\mu}$: reaction time + full braking time.

You can check out Tale Of The 3-Second Yellow Light (http://www.cbsnews.com/stories/2003/06/12/eveningnews/main558431.shtml), Traffic Light Logic (http://www.pearsonified.com/2006/03/traffic_light_logic.php), and THE YELLOW LIGHT (http://www.glenbrook.k12.il.us/gbssci/phys/projects/q1/ylover.html) for more.

Title: Re: Time duration for the yellow traffic light Post by: enalice on March 16, 2009, 06:02:47 pm

How is the approach speed normally determined? If the 85th percentile speed is used, that's probably realistic. BUT, if an arbitrarily low posted speed limit is used, then the yellow interval is likely to be unreasonably short for actual traffic conditions, resulting in a high number of UNintentional red light runners. -*-

Title: Re: Time duration for the yellow traffic light Post by: Fu-Kwun Hwang on March 16, 2009, 06:52:12 pm

Yes. It depends on the speed of the car. You can adjust the maximum car speed (Vmax) with the slider at the lower right region.

Title: Re: Time duration for the yellow traffic light Post by: arnanbd on August 16, 2009, 02:16:19 am

Can you create a counter that will count the cars passing the junction? Is the red light duration equal to the sum of the green & the yellow durations? Thanks!

Title: Re: Time duration for the yellow traffic light Post by: Fu-Kwun Hwang on August 16, 2009, 07:38:51 am

The purpose of the above simulation is to find a suitable time for the yellow light. The maximum speed and maximum acceleration for each car are randomly selected in the simulation, so the number of cars passing the junction is not the same even with all the same parameters. The red light duration is set to twice the green light duration in the above simulation. It is not necessary that red light duration = green light duration + yellow light duration.
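The two bounds derived in the thread are easy to check numerically; this small sketch reproduces the worked example ($g = 10$ m/s² as in the thread's arithmetic):

```python
# Check of the two yellow-light bounds: T1 (go and clear the intersection)
# and T2 (brake to a full stop). g = 10 m/s^2 as in the worked example.
g = 10.0

def T_pass(v, mu, dt, W):
    """T1 = reaction time + v/(2*g*mu) + time to traverse the intersection."""
    return dt + v / (2 * g * mu) + W / v

def T_stop(v, mu, dt):
    """T2 = reaction time + full braking time v/(g*mu)."""
    return dt + v / (g * mu)

v, mu, dt, W = 20.0, 1.0, 0.8, 20.0   # 72 km/h, mu = 1, 0.8 s reaction, 20 m wide
T1, T2 = T_pass(v, mu, dt, W), T_stop(v, mu, dt)
print(T1, T2, max(T1, T2))   # both bounds give 2.8 s for these values
```

For a narrower intersection (smaller $W$) only $T_1$ shrinks, so $T_2 = 2.8$ s becomes the binding constraint, matching the discussion above.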
Independent and Identically Distributed (i.i.d.)

Consider a sequence of random variables $X_1, X_2, \ldots$. The $X_i$ are mutually independent if for any finite subset and any finite sequence of numbers $a_1, \ldots, a_n$:\begin{equation}P \left[ \bigcap\limits_{i=1}^{n} (X_i\leq a_i) \right] = \prod\limits_{i=1}^{n} P(X_i\leq a_i)\end{equation}This may look more familiar if you replace $(X_i\leq a_i)$ with $A_i$, an arbitrary event related to the realization of $X_i$. If each $X_i$ has the same probability distribution and all are mutually independent, then we say the $X_i$ are i.i.d. An example is a sequence of coin flips - the realization of each flip does not depend on any of the previous flips, nor will it affect future flips. If a coin comes up heads 100 times in a row, the probability of heads on the next flip is still 1/2 (assuming the coin is fair, but even if it isn't, the sequence of flips will still be i.i.d.!).

Are SPY Returns i.i.d.?

Many papers in finance treat stock returns as i.i.d. A simple example is modeling stock prices, $p_t$, as a random walk:\begin{equation}p_t=p_{t-1} + \epsilon_t\end{equation}where $\epsilon_t$, the returns, are i.i.d. with mean $\mu$ and variance $\sigma^2$ (not necessarily normal). SPY is a popular ETF that tracks the S&P 500 index. Looking at daily SPY returns (the $\epsilon_t$'s), we see that volatility, $\sigma_t$, is changing over time. Around the technology "bubble" in 2000 and the financial crisis in 2008, the magnitude of returns is much larger than in the rest of the sample (this is called volatility clustering). Compare this to a drawing from a Normal(0,1) distribution for each day in the sample (an i.i.d. series). There is no such volatility clustering:

Time Varying Volatility

The plots above suggest volatility is time varying. To account for this, we can model $\epsilon_t$ as a GARCH(1,1) process:\begin{equation}\epsilon_t=z_t \sigma_t\end{equation}\begin{equation}\sigma_t^2 = \omega + \alpha \sigma_{t-1}^2 + \beta \epsilon_{t-1}^2\end{equation}where $z_t$ is i.i.d. with zero mean and unit variance.
The unconditional variance of $\epsilon_t$ is: \begin{equation} Var(\epsilon_t)=E[\sigma_t^2 z_t^2] =E[\sigma_t^2] \end{equation} \begin{equation} = \omega + \alpha E[\sigma_{t-1}^2] + \beta E[\epsilon_{t-1}^2] \end{equation} \begin{equation} =\omega + (\alpha +\beta) E[\epsilon_{t-1}^2] \end{equation} where the last equality follows from $E[\sigma_{t-1}^2]=E[\epsilon_{t-1}^2]$. We then apply the stationarity of $\epsilon_t$ ($E[\epsilon_t^2]=E[\epsilon_{t-1}^2]$) and get: \begin{equation} Var(\epsilon_t)=\frac{\omega}{1-\alpha-\beta} \end{equation} The figure below shows simulated paths for a random walk with i.i.d. errors (blue), and with GARCH(1,1) errors (orange). The GARCH parameters are calibrated to daily SPY data. In both series, the unconditional variance of $\epsilon_t$ is the same. The GARCH series seems to exhibit a "crisis". This is not just cherry-picking, and can be replicated by seeding random numbers in MATLAB - use "rng(1,'twister')" for the i.i.d. case and "rng(2,'twister')" for the GARCH case. The figure below shows the innovations (the $\epsilon_t$'s) for both series. The GARCH(1,1) model generates volatility clustering, similar to that observed in the SPY returns, while the i.i.d. model does not.

Cleaning for Volatility

Stock returns are not i.i.d., but we can normalize them using intraday data to make this assumption roughly correct. Let $r_{i,t}^d$ denote the daily return for stock $i$ at time $t$, while $r_{i,w}$ denotes the return in a 5-minute window $w$. We normalize returns as follows:\begin{equation}r_{i,t}^{norm}=\frac{r_{i,t}^d}{\sqrt{\sum\limits_{w \in t} r_{i,w}^2}}\end{equation}5-minute returns are computed using TAQ trade data from 1993-2014. The following filters are applied: 1) Remove all trades with trade condition "Z" (out of sequence). 2) Only consider trades between 9:30 AM and 4:00 PM EST. 3) If there are no trades in a 5-minute window $w$, set $r_{i,w}=0$. The figure below shows the evolution of the realized (intraday) volatility $\sqrt{\sum_{w \in t} r_{i,w}^2}$: The figure below shows the distribution of returns before and after the normalization. The blue line is a normal distribution with mean and standard deviation corresponding to each series.
The raw data exhibit a lot of kurtosis (the peak around 0 is very "sharp"), while much of this is alleviated in the normalized data. If we superimpose the volatility-cleaned return series over the realized return series, we can see that the volatility clustering has essentially been eliminated.

Price Path in an i.i.d. World

Let $\mu_n$ and $\sigma_n$ be the mean and standard deviation of the normalized returns, $r_n$, while $\mu_r$ and $\sigma_r$ are the mean and standard deviation of daily returns. Apply the following transformation, so the normalized returns have the same mean and standard deviation as realized returns:\begin{equation}\tilde{r}_n=\frac{\sigma_r}{\sigma_n}\left[r_n - \mu_n + \frac{\sigma_n}{\sigma_r} \mu_r \right]\end{equation} Applying this transformation to monthly return data, we can recover the price path under the volatility-adjusted returns. The dip after the financial crisis is dampened by high realized (intraday) volatility, while recent returns have been amplified by low realized volatility.
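The GARCH(1,1) recursion above is easy to simulate. A minimal sketch with illustrative parameters (these are my own, not the SPY calibration, and $z_t$ is taken to be standard normal) shows the unconditional-variance formula at work:

```python
import random
import statistics

# Simulate eps_t = z_t * sigma_t with the variance recursion used above:
# sigma_t^2 = omega + alpha * sigma_{t-1}^2 + beta * eps_{t-1}^2.
# Parameters are illustrative; alpha + beta < 1 gives stationarity.
random.seed(1)
omega, alpha, beta = 0.05, 0.85, 0.10
T = 50_000

sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
eps = []
for _ in range(T):
    e = random.gauss(0.0, 1.0) * sigma2 ** 0.5
    eps.append(e)
    sigma2 = omega + alpha * sigma2 + beta * e * e

# sample variance should be close to omega / (1 - alpha - beta) = 1.0
print(statistics.pvariance(eps))
```

Because the squared innovations feed back into next period's variance, large shocks beget large variances, which is exactly the volatility clustering the plots illustrate.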
The only meaning I can attach to $\overline{xxxx}$ is that you expect a result made of 4 equal digits (I can't read your source since it seems to be in Arabic or something similar). If $n$ is our middle odd number, the sum of the squares of 3 consecutive odd numbers is $$(n-2)^2 +n^2+(n+2)^2 = 3n^2+8$$ and we want this to be equal to $k\cdot 1111$, where $1\le k\le 9$. We could simply try the 9 possible values for $k$ and solve for $n$, but we can restrict the possibilities. Since $n$ is odd, $3n^2+8$ is odd. Thus $k$ must be odd, otherwise $k\cdot 1111$ would be even. Then we can reduce the equation $3n^2+8 = k\cdot 1111$ modulo 3 and we obtain $k \equiv 2 \pmod 3$, since $3n^2$ is obviously $0$ mod 3, $1111\equiv 1\pmod 3$ and $8\equiv 2\pmod 3$. Thus $k$ can only be equal to 2, 5 or 8. But we have determined that $k$ must be odd, so it can only be $k=5$. Now we check $3n^2+8 = 5555$ and we find that indeed $n=43$ is the solution, and that $41^2 + 43^2+ 45^2=5555$ as requested.
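For anyone who wants to double-check, a brute-force scan over the nine candidate repdigit targets confirms the argument:

```python
# Which k in 1..9 gives 3n^2 + 8 = k*1111 with n a (positive) odd integer?
solutions = []
for k in range(1, 10):
    target = k * 1111
    n2, rem = divmod(target - 8, 3)     # candidate n^2, must divide evenly
    if rem == 0:
        n = round(n2 ** 0.5)
        if n * n == n2 and n % 2 == 1:  # perfect square and odd
            solutions.append((k, n))

print(solutions)  # [(5, 43)]
assert 41**2 + 43**2 + 45**2 == 5555
```

The modular restrictions above are exactly why only $k=5$ survives the scan.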
Let $X$ be a Hausdorff space that is locally compact at $x \in X$. Show that for each open nbd $U$ of $x$ there exists an open nbd $V$ of $x$ such that $\overline{V}$ is compact and $\overline{V} \subset U$. My work: Since $X$ is Hausdorff and locally compact, $X$ is regular. Let $U$ be an open nbd of $x$. By assumption $X$ is locally compact, so there exists some open nbd $W$ of $x$ such that $\overline{W}$ is compact. Now consider the open set $W \cap U$; this is non-empty since $x$ lies in the intersection. By regularity, find an open set $V$ such that: $x\in V \subset \overline{V} \subset W \cap U$ Then in particular $\overline{V} \subset U$. But also $\overline{V} \subset W \subset \overline{W}$. Since $\overline{W}$ is compact, $\overline{V}$ is a closed subset of a compact set, hence compact. Is the above OK? Thank you.
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would have not inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace, but I suspect the package makes some assumptions about what is safe. It offers the user "enter" and "exit" code for each block, but xetex only has a single insert, the interchartoken at a boundary; the package isn't clear what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g. worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know or not agree with a "-1" policy. I don't think it was ever really that regularly enforced, just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still, the general principle applies: when you're new and your question gets downvoted too much this might cause the wrong impressions.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion-based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer), that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes, but many people just join for a while and come from other sites where downvoting is more common, so I think it is impossible to expect there is no multiple downvoting; the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let k be a circle with centre M and radius r". @AlanMunn definitions.net/definition/describe gives a Websters definition of "to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle". If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com and create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possibly harder to read. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin is kind of a hobby. Needless to say, more than once the contemporary meaning didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx: \documentclass{book} \usepackage{showidx} \usepackage{imakeidx} \makeindex \begin{document} Test\index{xxxx} \printindex \end{document} generates the error: ! Undefined control sequence. <argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{... @EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters, as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date, not the date it was last run through tex. and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years, but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself calls from the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben" (engl. "describe") comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means roughly "construct". And that, in turn, from the original meaning of describe: "making a curved movement". In the literary style of the 19th and 20th centuries, and in the GDR, this language was used. You can have that in English too: scribe (verb): score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which holds the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with the base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude of the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the top. The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the corner. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described by $S$ of radius…
I am facing some problems in understanding the importance of a Killing vector field. I will be grateful if anybody provides an answer, or refers me to some review or books. In terms of classical general relativity: Einstein's equations $$ G_{ab} = 8\pi T_{ab} $$ can be formulated, in local coordinates, as a system of second-order partial differential equations for the unknown metric $g_{ab}$. The matter field equations further generate some family of partial differential equations. Given a continuous symmetry (as guaranteed by a Killing vector field), one has tools and tricks one can use to help solve the PDEs. Noether's theorem tells us that for Einstein's equation (which admits a Lagrangian formulation), associated to each Killing vector field $X^a$ is a conservation law. One can see this simply by considering the current $$ J^{(X)}_a = T_{ab} X^b $$ Its divergence is $$ \nabla^a J^{(X)}_a = (\nabla^a T_{ab}) X^b + T_{ab} \nabla^a X^b $$ The first term vanishes since the energy-momentum tensor is divergence free. Using that the energy-momentum tensor is symmetric, we write $$ \nabla^a J^{(X)}_a = \frac12 T_{ab} \left( \nabla^a X^b + \nabla^b X^a\right) $$ As a consequence of Killing's equation, if $X^a$ is a Killing field, the term inside the parentheses evaluates to 0. So $J^{(X)}$ is divergence free. Applying Stokes' theorem, we then obtain a conservation law. Symmetry reduction: given a continuous symmetry for a PDE, we can try to perform a symmetry reduction of the equations. This reduces the number of independent variables for the PDE, and often makes it easier to find an exact solution (or to examine features of symmetric solutions). For a survey of how symmetry can help, I recommend checking out Exact Solutions of Einstein's Field Equations by Stephani et al. (Cambridge University Press). Chapters 8, 9, 10, and all of Part II of that book address the use of symmetry groups to help solve Einstein's equations.
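To make Killing's equation concrete, here is a toy check in sympy: in the flat Euclidean plane in Cartesian coordinates the Christoffel symbols vanish, so the covariant derivatives in Killing's equation reduce to partials, and the rotation generator can be verified directly (this is purely an illustration of the equation, not tied to any particular spacetime):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
coords = [x, y]

# Flat Euclidean plane in Cartesian coordinates: the metric is the identity,
# all Christoffel symbols vanish, and covariant derivatives reduce to partials.
# Rotation generator X = -y d/dx + x d/dy; lowering the index with the
# identity metric leaves the components unchanged:
X = [-y, x]

# Killing's equation: nabla_a X_b + nabla_b X_a = 0 for all a, b
killing = [[sp.diff(X[b], coords[a]) + sp.diff(X[a], coords[b])
            for b in range(2)] for a in range(2)]
print(killing)  # [[0, 0], [0, 0]]
```

All four components vanish identically, so rotations are isometries of the flat metric, as expected.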
The application of Stokes' theorem in this case is slightly indirect, as the last equation contains a covariant derivative, as opposed to the partial derivative with respect to a coordinate in the standard conservation law. This is not a problem, however, because $$ 0 = \nabla_\nu (X^{\mu} T_{\mu}^{\nu}) = \frac{1}{\sqrt{-g}} \partial_\nu( \sqrt{-g} \ X^{\mu} T_{\mu}^{\nu}), $$ and so the conserved current simply includes the extra factor of $\sqrt{-g}$.
High Productivity Close to Maths SAC aims to help application programmers' productivity. The abstraction facilities in SAC enable programs that closely reflect the underlying mathematics of scientific problems. As an example, consider naive n-body simulation. Using $m$ for planet masses, $p$ for positions (in 3D space), $v$ for velocities and $a$ for accelerations, time discretisation over $k$ yields for all bodies $i$: \begin{eqnarray} \label{eq:num:pos} \overset{k+1}{p_i} &=& \overset{k}{p_i} + \overset{k+1}{v_i} dt \\ \label{eq:num:vel} \overset{k+1}{v_i} &=& \overset{k}{v_i} + \overset{k+1}{a_i} dt \\ \label{eq:num:acc} \overset{k+1}{a_i} &=& \sum\limits_{j \neq i}^{n} \dfrac{m_j (\overset{k}{p_j} - \overset{k}{p_i})} {\left|\overset{k}{p_j} - \overset{k}{p_i} \right|^3} \end{eqnarray} In SAC, this can be specified as: p = p + v * dt; v = v + a * dt; a = {[i] -> vsum ({[j] -> (i == j ? [0.0, 0.0, 0.0] : m[j] * (p[j] - p[i]) / (l2norm (p[j] - p[i]) ^ 3)) })}; Note here that $k$ is mapped onto actual compute time, and that the range of $i$ in the first two assignments, as well as the 3D nature of all positions, velocities and accelerations, are implicitly derived by the compiler! More details as well as further examples can be found in our case studies. Swift FFI (Foreign Function Interface) There is no need to re-write entire applications in SAC. SAC can be compiled into C libraries that can be called from any C-linking-based context. Dually, SAC can easily interface with existing libraries. Many examples, including parts of C's standard OS interface such as SDL, as well as interfaces to tools such as gnuplot or dislin, can be found in the current standard library of SAC. Here is an example, taken from our tutorial, for interactively navigating through the Mandelbrot set.
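For readers unfamiliar with SAC's set notation, the acceleration expression can be transliterated into NumPy roughly as follows. The function names and the deliberately naive $O(n^2)$ double loop are mine, not SAC's, and units are chosen so the gravitational constant is 1:

```python
import numpy as np

def accelerations(m, p):
    """a_i = sum_{j != i} m_j (p_j - p_i) / |p_j - p_i|^3  (units with G = 1)."""
    n = len(m)
    a = np.zeros_like(p)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = p[j] - p[i]
                a[i] += m[j] * d / np.linalg.norm(d) ** 3
    return a

def step(m, p, v, dt):
    # Semi-implicit Euler update matching the discretisation above:
    # v^{k+1} = v^k + a^{k+1} dt,  p^{k+1} = p^k + v^{k+1} dt
    a = accelerations(m, p)
    v = v + a * dt
    p = p + v * dt
    return p, v

m = np.array([1.0, 2.0, 3.0])
p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
a = accelerations(m, p)
p1, v1 = step(m, p, np.zeros_like(p), 0.01)

# Newton's third law: the mass-weighted accelerations sum to (near) zero
print((m[:, None] * a).sum(axis=0))
```

The antisymmetry of the pairwise force terms makes the mass-weighted accelerations cancel, which is a handy correctness check for any n-body kernel.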
Note here that all the code interacting with the SDL library is contained in this code snippet: initDisplay generates a display, drawArray displays a SaC array, and getSelection obtains a rectangular selection made with the mouse. All arrays that are passed into or obtained as results from these functions are normal SaC arrays; there is no special treatment required.

#define XRES 6
#define YRES 4
#define EXPAND 128
#define DEPTH 2048

int main ()
{
  disp = initDisplay (EXPAND * [YRES,XRES]);
  cmin = [toc (-2.2, 1.0)];
  cmax = [toc (0.8, -1.0)];
  while (true) {
    expand = EXPAND;
    cur_shape = [YRES, XRES];
    do {
      plane = genComplexArray (cur_shape, cmin[[0]], cmax[[0]]);
      ts, vs = escapeTimeAndValue (plane, DEPTH);
      nvs = normalizedIterationCount (ts, vs);
      rgbs = doubleArrayToRGB (nvs);
      drawArray (disp, stretchRgb (rgbs, expand));
      expand = expand / 2;
      cur_shape = 2 * cur_shape;
    } while (expand >= 1);
    zoom_coords = getSelection (disp);
    if (all ((zoom_coords[1] - zoom_coords[0]) > 0)) {
      cmin = [plane[zoom_coords[[0]]]] ++ cmin;
      cmax = [plane[zoom_coords[[1]]]] ++ cmax;
    } else {
      cmin = all (shape(cmin) > 1) ? drop ([1], cmin) : cmin;
      cmax = all (shape(cmax) > 1) ? drop ([1], cmax) : cmax;
    }
  }
  destroyDisplay (disp);
  return 0;
}

Further details of the FFI and examples of how to use it, e.g. how to call SAC libraries from FORTRAN, can be found in several FFI-related publications. High Performance Performance Rivals that of Hand-Optimised, Target-Architecture-Tuned C Codes It is the central goal of SAC to achieve performance levels that rival hand-optimised C codes. Key to achieving that goal is the highly optimising compiler sac2c, which translates highly abstract SAC programs into performance-tuned C code, CUDA code or similar, depending on the chosen target machine.
High Portability Single Source - Many Different Targets sac2c supports target-hardware-specific program transformations (see publications for details). From a single, unmodified source file, highly-tuned codes for a range of architectures can be generated. These include: Shared-Memory Multi-Cores, GPUs, clusters, FPGAs, as well as experimental platforms.
Here I have a free abelian group $A$ on $\Sigma$ and a grouping operator $()$ on $\Sigma^*$ which turns it into a magma. The grouping could mean arguments to a function (later), in which case this is just regular group cohomology. I came up with these formulas. We do have $\partial^3 \circ \partial^2 = 0$ but not for the higher ones (they have to compose down to $2$ for that to happen, I think): I've identified strings over $\Sigma$ with tuples, so that's why the weird string notation: $$ \partial^3(abcd) = a(bcd) + (abc)d \\ \partial^2(abc) = a(bc) - (ab)c \\ \partial^3\partial^2(abcd) = a\partial^2(bcd) + \partial^2(abc)d \\ = a(b(cd) - (bc)d) + (a(bc) - (ab)c)d = \\ (ab)(cd) - a(bc)d + a(bc)d - (ab)(cd) = 0 \\ \ \ \\ \partial^4(\partial^3)(abcde) = a\partial^3(bcde) - \partial^3(abcd)e = \\ a (b(cde) + (bcd)e) - (a(bcd) + (abc)d)e = \\ ab(cde) + a(bcd)e - a(bcd)e + (abc)de \neq 0 $$ The middle sign in $\partial^n$ is $+$ when $n$ is odd, else $-$. So is cohomology possible to do in this case, or not? Thanks. I guess a more standard way that still would end up grouping strings would be: $$ \partial^4(abcde) = a(bcde) - ((ab)cde) + (a(bc)de) - (ab(cd)e) + (abc(de)) - (abcd) $$ but it's not a very symmetric idea with respect to strings.
TLDR; The equations: $$r_{B} = \sqrt{\frac{F}{2.46\times 10^{-14}}}$$ or rearranging for$$F = r_{B}^{2} \times 2.46\times 10^{-14}$$ Where $F$ is the fraction of light blocked ($F=0.01$ gives your $1\%$) and $r_{B}$ is the radius of your satellite in meters which will achieve this. For one percent reduction, using the equations above, we need a satellite of radius $6.376 \times 10^{5}$ m, or $637.6$ km - pretty big to say the least! (roughly the size of Alaska). The Maths Initially you added a 'mathematics' tag onto this question - I'm assuming you wanted something more along the lines of a hard-science tag (rather than asking about building a mathematical system, as the tag is intended). Distance to $L_1$ The wiki for Lagrangian points gives this equation: $$d_{E} \approx D \sqrt[3]{\frac{M_{E}}{3M_{S}}} $$ Where $d_E$ is the distance $L_{1}$ is from Earth, $D$ is the distance between the Sun and Earth and $M_S$ and $M_{E}$ are the masses of the Sun and Earth respectively. Using: $$D = 149597870700 \text{ m}$$(This is 1 au, the average distance, so will change, but the equation is already approximate)$$M_S = 1.9885 \times 10^{30} \text{ kg}$$$$M_E = 5.9724 \times 10^{24} \text{ kg}$$As given by the NASA factsheet. Giving us $d_{E} \approx 1.49656 \times 10^{9} \text{ m}$ or about $1.5$ million kilometers. Now let's look at what this means for how large a satellite you'll need. The radius, $r_B$, of the Blocker projected onto the Earth gives a shadow with size $r_B^{'} = \frac{D}{d_{S}}r_B$ where $d_{S}$ is the distance the satellite is from the sun ($d_{S} = D - d_{E}$). If we want to know the fraction, $F$, of light the satellite will block we can compare the areas of circles presented (the earth is actually a sphere so this won't be exact).
$$F = \frac{\pi r_{B}^{'2}}{\pi r_{E}^{2}} = \frac{(r_{B} \frac{D}{D-d_{E}})^{2}}{r_{E}^{2}} = r_{B}^{2} \times 2.46 \times 10^{-14}$$ Which you can use to calculate how much light you would block out for a satellite of a particular radius, or rearrange to get the radius needed for a particular fraction ($r_{B} = \sqrt{\frac{F}{2.46\times 10^{-14}}}$).
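The numbers quoted above can be reproduced in a few lines (constants as cited from the NASA factsheet; $2.46\times 10^{-14}$ is the answer's own fitted constant, taken on trust):

```python
# Reproduce the L1 distance and blocker radius from the answer above
D   = 149_597_870_700.0      # Sun-Earth distance, m (1 au)
M_S = 1.9885e30              # solar mass, kg
M_E = 5.9724e24              # Earth mass, kg

d_E = D * (M_E / (3 * M_S)) ** (1 / 3)   # distance of L1 from Earth
print(d_E)                               # ~1.4966e9 m, i.e. ~1.5 million km

F = 0.01                                 # block 1% of sunlight
r_B = (F / 2.46e-14) ** 0.5              # radius needed, using the fitted constant
print(r_B)                               # ~6.376e5 m, i.e. ~637.6 km
```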
Difference between revisions of "NTS ABSTRACTSpring2019" Revision as of 23:50, 6 April 2019 Contents Jan 23 Yunqing Tang Reductions of abelian surfaces over global function fields For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves.
This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar. Jan 24 Hassan-Mao-Smith--Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and }4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$. Jan 31 Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindelöf Hypothesis, and work of Bettin, Chandee, and Radziwiłł. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities. Feb 7 Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields Abstract: There are many formulas that express interesting properties of a group $G$ in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$trace (\rho(g))/dim (\rho),$$ for an irreducible representation $\rho$ of $G$ and an element $g$ of $G$. For example, Diaconis and Shahshahani stated a formula of this type for analyzing $G$-biinvariant random walks on $G$.
It turns out that, for classical groups $G$ over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU). Feb 14 Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant, which parametrizes elliptic curves with 2-torsion ($X_0(2)$), has some interesting properties, some similar to those of the $j$-invariant, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of these properties. This is joint work with Hongbo Yin and Peng Yu. Feb 28 Brian Lawrence Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh. March 7 Masoud Zargar Sections of quadrics over the affine line Abstract: Suppose we have a quadratic form $Q(x)$ in $d\geq 4$ variables over $F_q[t]$ and $f(t)$ is a polynomial over $F_q$. We consider the affine variety $X$ given by the equation $Q(x)=f(t)$ as a family of varieties over the affine line $A^1_{F_q}$. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over $F_q((1/t))$. Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T. Sardari. March 14 Elena Mantovan p-adic automorphic forms, differential operators and Galois representations A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator Theta.
This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieties. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach, where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross. This talk is based on joint work with Eishen, and also with Fintzen--Varma, and with Flander--Ghitza--McAndrew. March 28 Adebisi Agboola Relative K-groups and rings of integers Abstract: Suppose that $F$ is a number field and $G$ is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer to both the inverse Galois problem for $F$ and $G$ and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of $F$. It also implies the weak Malle conjecture on counting tame $G$-extensions of $F$ according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when $G$ is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting. While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture.
April 4 Wei-Lun Tsai Hecke L-functions and $\ell$-torsion in class groups Abstract: The canonical Hecke characters in the sense of Rohrlich form a set of algebraic Hecke characters with important arithmetic properties. In this talk, we will explain how one can prove quantitative nonvanishing results for the central values of their corresponding L-functions using methods of an arithmetic-statistical flavor. In particular, the methods used rely crucially on recent work of Ellenberg, Pierce, and Wood concerning bounds for $\ell$-torsion in class groups of number fields. This is joint work with Byoung Du Kim and Riad Masri. April 11 Taylor McAdam Almost-prime times in horospherical flows Abstract: Equidistribution results play an important role in dynamical systems and their applications in number theory. Often in such applications it is desirable for equidistribution to be effective (i.e. the rate of convergence is known). In this talk I will discuss some of the history of effective equidistribution results in homogeneous dynamics and give an effective result for horospherical flows on the space of lattices. I will then describe an application to studying the distribution of almost-prime times in horospherical orbits and discuss connections of this work to Sarnak's Möbius disjointness conjecture.
Your score is simply the sum of difficulties of your solved problems. Solving the same problem twice does not give any extra points. Note that Kattis' difficulty estimates vary over time, and that this can cause your score to go up or down without you doing anything. Scores are only updated every few minutes – your score and rank will not increase instantaneously after you have solved a problem; you have to wait a short while. If you have set your account to be anonymous, you will not be shown in ranklists, and your score will not contribute to the combined score of your country or university. Your user profile will show a tentative rank, which is the rank you would get if you turned off anonymous mode (assuming no anonymous users with a higher score than you do the same). The combined score for a group of people (e.g., all users from a given country or university) is computed as a weighted average of the scores of the individual users, with geometrically decreasing weights (higher weights given to the larger scores). Suppose the group contains $n$ people, and that their scores, ordered in non-increasing order, are $s_0 \ge s_1 \ge \ldots \ge s_{n-1}$. Then the combined score for this group of people is calculated as \[ S = \frac{1}{f} \sum_{i=0}^{n-1} \left(1-\frac{1}{f}\right)^i \cdot s_i, \] where the parameter $f$ gives a trade-off between the contribution from having a few high scores and the contribution from having many users. In Kattis, the value of this parameter is chosen to be $f = 5$. For example, if the group consists of a single user, the score for the group is 20% of the score of that user. If the group consists of a very large number of users, about 90% of the score is contributed by the 10 highest scores. Adding a new user with a non-zero score to a group always increases the combined score of the group. Kattis has problems of varying difficulty. She estimates the difficulty for different problems by using a variant of the ELO rating system.
Broadly speaking, problems which are solved by many people using few submissions get low difficulty scores, and problems which are often attempted but rarely solved get high difficulty scores. Problems with very few submissions tend to get medium difficulty scores, since Kattis does not have enough data about their difficulty. The difficulty estimation process also assigns an ELO-style rating to you as a user. This rating increases when you solve problems, like your regular score, but is also affected by your submission accuracy. We use your rating to choose which problems to suggest for you to solve. If your rating is higher, the problems we suggest to you in each category (trivial, easy, medium, hard) will have higher difficulty values.
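The combined-score formula earlier in this section is easy to sketch in Python (a direct reading of the stated formula, not Kattis' actual implementation):

```python
def combined_score(scores, f=5):
    """Weighted average with geometrically decreasing weights; Kattis uses f = 5."""
    s = sorted(scores, reverse=True)
    return sum((1 - 1 / f) ** i * s_i for i, s_i in enumerate(s)) / f

# A lone user contributes 20% of their score to the group
print(combined_score([100.0]))        # 20.0

# Adding a user with a non-zero score always increases the combined score
print(combined_score([100.0, 10.0]))  # 21.6
```

With many users, the first ten weights already account for $1-(4/5)^{10}\approx 89\%$ of the total weight, matching the "about 90%" claim above.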
Let $G$ be an abelian group with all proper subgroups finite. Is $G$ then finite, or at least finitely generated? Let $G$ be the group of all $z\in\mathbb{C}$ such that $z^{2^n}=1$ for some $n\in\mathbb{Z}^+$ (under the usual multiplication). Then $G$ is infinite, but every proper subgroup is finite. If $p$ is a prime number, consider the ring $\mathbb{Z}[\frac{1}{p}]$ of all rational numbers whose denominator is a power of $p$. Taking quotients of the underlying additive group, the group $$ \mathbb{Z}\left[\frac{1}{p} \right] / \mathbb{Z} $$ is an example of a group with this property. This group is sometimes denoted $\mathbb{Z} / p^\infty$, since it is the "limit" (in a suitable sense) of the cyclic groups $\mathbb{Z} / p^k$ as $k \to \infty$. In the case of $p=2$, this group is isomorphic to the one described in the other answer.
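Every element of $\mathbb{Z}[1/p]/\mathbb{Z}$ has order a power of $p$ (the reduced denominator), which is at the heart of why every proper subgroup is finite. A quick illustration for $p=2$ in Python (the `order` helper is my own, not from any library):

```python
from fractions import Fraction

def order(x):
    """Order of x in Q/Z: the least m >= 1 with m*x an integer."""
    m = 1
    while (m * x) % 1 != 0:
        m += 1
    return m

order(Fraction(1, 8))    # → 8
order(Fraction(3, 16))   # → 16: the order is the reduced denominator
```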
My book gives the following definition: Let $A$ be an $(n\times n)$-matrix with real components. A nonzero vector $v\in\mathbb R^n$ is an eigenvector of $A$ if there exists $\lambda\in\mathbb R$ such that $A\cdot v=\lambda v$. This value $\lambda$ is the eigenvalue that belongs to the eigenvector $v$. The way I interpret this definition is that we are only given an eigenvalue when we are given an eigenvector. But does a scalar being an eigenvalue imply there is an eigenvector? How I see the definition: $$ v=\text{eigenvector}\implies\exists\lambda\in\mathbb R:A\cdot v=\lambda v. $$ But does the definition really say the following: $$ \lambda(\in\mathbb R)=\text{eigenvalue}\implies\exists v\in\mathbb R^n:A\cdot v=\lambda v? $$ I was thinking of using the contrapositive: assume that $A\cdot v\neq \lambda v$ for every nonzero $v\in\mathbb R^n$. Then no vector witnesses $\lambda$ as an eigenvalue, so $\lambda$ is not an eigenvalue. Am I correct that I have to use this contrapositive? Or is the definition lacking?
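For what it's worth, the implication the asker wants does hold: if $\det(A-\lambda I)=0$, then $A-\lambda I$ has a nontrivial null space, and any nonzero null vector is an eigenvector for $\lambda$. A numpy sketch with a made-up matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 3.0                       # an eigenvalue: det(A - 3I) = 0

# A nonzero null vector of A - lam*I is an eigenvector for lam.
# For a singular matrix, the right-singular vector belonging to the
# zero singular value spans the null space.
_, s, Vt = np.linalg.svd(A - lam * np.eye(2))
v = Vt[-1]                      # s[-1] ≈ 0, so (A - lam*I) @ v ≈ 0
np.allclose(A @ v, lam * v)     # → True
```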
The book I am using for my Introduction to Topology course is Principles of Topology by Fred H. Croom. Show that Hilbert space is not locally compact at any point. This is what I understand: A space $X$ is compact provided that every open cover of $X$ has a finite subcover. A space $X$ is locally compact at a point $x$ in $X$ provided that there is an open set $U$ containing $x$ for which $\overline{U}$ is compact. $X$ is locally compact provided that it is locally compact at each point. A metric space is compact if and only if it has the Bolzano-Weierstrass property. Local compactness does not imply compactness. My rough attempt: Seeking to prove Hilbert space $H$ is not locally compact at any point, by contradiction. Suppose $H$ is locally compact at a point $p = (x_1,x_2,...)$. Then there is an open set $U$ containing $p$ for which $\overline{U}$ is compact, and $\exists r>0:B(p, r)\subset U$. Then $\overline{B(p,r)} = B[p,r] \subset \overline{U}$. However, the set $P= \{p_n\}_{n=1}^{\infty}$ of points $p_n= (x_1,x_2,...,x_{n-1},x_n+r/2,x_{n+1},...)$ is an infinite subset of $B[p,r]$ with no limit point (any two distinct points of $P$ are at distance $r/\sqrt{2}$ from each other). Since compactness is equivalent to the Bolzano-Weierstrass property in metric spaces, we must conclude that $B[p,r]$ is not compact. But a closed subset of a compact set is compact, so $\overline{U}$ is not compact, a contradiction. Thus $H$ is not locally compact at any point. Is there anything I need to change regarding the proof? Any suggestions? Anything I need to clarify? I sincerely thank you for taking the time to read this question and my attempt at proving it. I greatly appreciate any assistance you may provide.
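The key computation in the argument — that the points $p_n$ all lie in $B[p,r]$ yet sit pairwise $r/\sqrt{2}$ apart, so $P$ has no limit point — can be checked numerically in a truncated version of $H$ (the finite dimension is chosen only for illustration):

```python
import numpy as np

r, dim = 1.0, 50
p = np.zeros(dim)                 # take p = 0 for simplicity

# p_n agrees with p except that its n-th coordinate is shifted by r/2
P = [p + (r / 2) * np.eye(dim)[n] for n in range(dim)]

# each p_n lies in the closed ball B[p, r] ...
all(np.linalg.norm(q - p) <= r for q in P)     # → True
# ... yet any two distinct points are exactly r/sqrt(2) apart
np.linalg.norm(P[0] - P[1])                    # → r/sqrt(2) ≈ 0.7071
```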
Electronic Research Announcements (ISSN: 1935-9179, eISSN: 1935-9179) Abstract: A compact Riemannian manifold may be immersed into Euclidean space by using high frequency Laplace eigenfunctions. We study the geometry of the manifold viewed as a metric space endowed with the distance function from the ambient Euclidean space. As an application we give a new proof of a result of Burq-Lebeau and others on upper bounds for the sup-norms of random linear combinations of high frequency eigenfunctions. Abstract: We improve a result in [9] by proving the existence of a positive measure set of $(3n-2)$-dimensional quasi-periodic motions in the spatial, planetary $(1+n)$-body problem away from co-planar, circular motions. We also prove that such quasi-periodic motions reach with continuity corresponding $(2n-1)$-dimensional ones of the planar problem, once the mutual inclinations go to zero (this is related to a speculation in [2]). The main tool is a full reduction of the SO(3)-symmetry, which retains symmetry by reflections and highlights a quasi-integrable structure, with a small remainder, independently of eccentricities and inclinations. Abstract: We calculate the sharp constant and characterize the extremal initial data in $\dot{H}^{\frac{3}{4}} \times \dot{H}^{-\frac{1}{4}}$ for the $L^4$ Sobolev--Strichartz estimate for the wave equation in four spatial dimensions. Abstract: In this paper, we study a combined incompressible and vanishing capillarity limit in the barotropic compressible Navier-Stokes-Korteweg equations for weak solutions. For well prepared initial data, the convergence of solutions of the compressible Navier-Stokes-Korteweg equations to the solutions of the incompressible Navier-Stokes equations is justified rigorously by adapting the modulated energy method. Furthermore, the corresponding convergence rates are also obtained.
Abstract: We announce an analogue of the celebrated theorem by Campbell, Baker, Hausdorff, and Dynkin for the $q$-exponential $\exp_q(x)=\sum_{n=0}^{\infty} \frac{x^n}{[n]_q!}$, with the usual notation for $q$-factorials: $[n]_q!:=[n-1]_q!\cdot(q^n-1)/(q-1)$ and $[0]_q!:=1$. Our result states that if $x$ and $y$ are non-commuting indeterminates and $[y,x]_q$ is the $q$-commutator $yx-q\,xy$, then there exist linear combinations $Q_{i,j}(x,y)$ of iterated $q$-commutators with exactly $i$ $x$'s, $j$ $y$'s and $[y,x]_q$ in their central position, such that $\exp_q(x)\exp_q(y)=\exp_q\!\big(x+y+\sum_{i,j\geq 1}Q_{i,j}(x,y)\big)$. Our expansion is consistent with the well-known result by Schützenberger ensuring that one has $\exp_q(x)\exp_q(y)=\exp_q(x+y)$ if and only if $[y,x]_q=0$, and it improves former partial results on $q$-deformed exponentiation. Furthermore, we give an algorithm which conjecturally produces a minimal generating set for the relations between $[y,x]_q$-centered $q$-commutators of any bidegree $(i,j)$, and which allows us to compute all possible $Q_{i,j}$. Abstract: We show that 3-dimensional polyhedral manifolds with nonnegative curvature in the sense of Alexandrov can be approximated by nonnegatively curved 3-dimensional Riemannian manifolds. Abstract: Loebl, Komlós and Sós conjectured that every $n$-vertex graph $G$ with at least $n/2$ vertices of degree at least $k$ contains each tree $T$ of order $k+1$ as a subgraph. We give a sketch of a proof of the approximate version of this conjecture for large values of $k$. For our proof, we use a structural decomposition which can be seen as an analogue of Szemerédi's regularity lemma for possibly very sparse graphs. With this tool, each graph can be decomposed into four parts: a set of vertices of huge degree, regular pairs (in the sense of the regularity lemma), and two other objects each exhibiting certain expansion properties.
We then exploit the properties of each of the parts of $G$ to embed a given tree $T$. The purpose of this note is to highlight the key steps of our proof. Details can be found in [arXiv:1211.3050].
When we have a covering $p:Y\to X$, choose $x\in X$; we then have two actions on the fiber $p^{-1}(x)$: 1) The monodromy action of $\pi_1(X,x)$, defined as follows: $[\gamma].\tilde x=\tilde \gamma (1)$; that is, the action gives the endpoint of the unique lift of $\gamma$ starting at $\tilde x$. 2) The action of the group of deck transformations $Deck(Y,X)$ on $p^{-1}(x)$, defined as follows: $\phi. \tilde x=\phi(\tilde x)$. When $Y=\tilde X$ is the universal covering, the two groups $Deck(\tilde X,X)$ and $\pi_1(X,x)$ are isomorphic. Now my problem is the following: when defining homology of $X$ with local coefficients in a $\mathbb Z[\pi_1(X)]$-module $A$, many authors say that $\pi_1(X,x)$ acts on $\tilde X$ by deck transformations and set $C_n(X,A):=C_n(\tilde X,\mathbb Z)\otimes A$. So, first: why do they not say that $\pi_1(X)$ acts on $\tilde X$ by the monodromy action defined above, instead of saying that it acts by deck transformations? And second: why take the universal cover, when they could take any cover of $X$ and still have both monodromy and deck actions?
In lecture, we went through solving a Taylor error bound for arcsine. I followed most of it except for where it talks about the odds divided by the evens, divided by $2n+1$, gaining a factor of 1/10 in accuracy for each successive term (see the bolded sentence below), which comes from this part of the series: $$ \sum_{n=1}^{\infty} \left( \prod_{k=1}^n \frac{2k-1}{2k} \right) \frac{x^{2n+1}}{2n+1} $$ Where does the last factor of 1/10 come from? This is not obvious to me from the above expression and I am wondering whether someone could help me see what's going on. Lecture notes: $$\arcsin(x) = x + \sum \limits_{n=1}^{\infty} \frac{ 1 \cdot 3 \cdot 5 \cdots (2n-1) } { 2 \cdot 4 \cdot 6 \cdots (2n) } \cdot \frac{ x^{2n+1} }{ 2n+1 } $$ Assuming we have the Taylor expansion for $\arcsin(x)$, $$\arcsin(\frac{1}{10}) = \frac{1}{10} + \frac{1}{2} \cdot \frac{1}{3} \cdot (\frac{1}{10})^3 + \frac{3}{8}\cdot \frac{1}{5} \cdot (\frac{1}{10})^5 + \frac{5}{16}\cdot\frac{1}{7}\cdot(\frac{1}{10})^7 + \cdots + E_N$$ The Taylor error bound is $$E_N < \frac{C}{(N+1)!} (\frac{1}{10})^{N+1} $$ $$\left( \frac{d}{dx} \right)^{N+1} \arcsin(x) < C$$ $$0 \le x \le \frac{1}{10}$$ $C$ is an upper bound for the $(N+1)$st derivative of $\arcsin(x)$ for all $x$ between 0 and 1/10. Even though we don't have a good bound for the $(N+1)$st derivative, if we look at the terms in this series, we can make a good guess. With each step we gain a 1/100-fold increase in accuracy because of the factor $\left( \frac{1}{10} \right)^{2n+1}$, whose exponent grows by 2 from term to term. If we look at the coefficients, the $2n+1$ and the product of odds over the product of evens, then we're picking up another factor of 10 in the denominator. We claim that $a_{n+2}$, the next term in the series, is less than the previous term, $a_n$, divided by 1000. We're picking up three decimal places of accuracy with each subsequent term. That means that if we want to get within $10^{-10}$, it's going to suffice to choose $N$ bigger than or equal to 7.
The first four terms suffice to approximate $\arcsin(\frac{1}{10})$ within $10^{-10}$ $$N \ge 7$$
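The three-decimal-places-per-term claim is easy to check numerically. A small Python sketch of the partial sums (the helper is my own):

```python
from math import asin

def arcsin_partial(x, N):
    """Partial sum of the arcsin series through the x^(2N+1) term."""
    total, coeff = x, 1.0   # coeff = (1*3*...*(2n-1)) / (2*4*...*(2n))
    for n in range(1, N + 1):
        coeff *= (2 * n - 1) / (2 * n)
        total += coeff * x ** (2 * n + 1) / (2 * n + 1)
    return total

abs(arcsin_partial(0.1, 3) - asin(0.1))   # → about 3e-11, under 1e-10
```

The first four terms (through $x^7$) indeed land within $10^{-10}$, matching the lecture's claim.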
Given a curve in a frictionless environment with parameterization $\displaystyle \mathbf{r}(\theta)=x(\theta)\hat{\mathbf{i}}+y(\theta)\hat{\mathbf{j}}$ for $\theta\in[0,\theta_f]$, how can I find the position of a particle, which starts at $\mathbf{r}(0)$ and slides down $\mathbf{r}$ under only the force of gravity, as a function of time? Furthermore, what if the particle has an initial velocity $v_i$ in the direction of travel? I attempted the first part, but as I am not well-versed in physics I was unsure how to do the second, and I am not even sure if my work for the first part is right. I did some hand-waving and said $\displaystyle v=\sqrt{2gy(\theta)}$ from the conversion of PE to KE (taking $y(\theta)$ to measure the drop below the starting height), and from the curve parameterization we have $\displaystyle v=\sqrt{{[x'(\theta)]}^2+{[y'(\theta)]}^2}\,\frac{d\theta}{dt}$. So simply solve $\displaystyle \frac{d\theta}{dt}=\frac{\sqrt{2gy(\theta)}}{\sqrt{{[x'(\theta)]}^2+{[y'(\theta)]}^2}}$ for $\theta$ in terms of $t$ and substitute this back into the parameterization of $\mathbf{r}$. Is there any better way of doing this? For one, this method rarely results in closed-form solutions (edit: which is not a requirement, but would be nice if other methods did have closed-form solutions); for another, I don't even know if it's right. I was then unsure how to do the second part because it would change the KE-PE equation, and as I was already hand-waving I wasn't sure if I would need to use $\displaystyle \Delta v$ and $\Delta y$ or what.
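For the second part, energy conservation gives $v=\sqrt{v_i^2+2gy(\theta)}$, with $y(\theta)$ measuring the drop below the start; the same ODE then applies with this $v$. It rarely has a closed form, but it integrates easily numerically. A sketch with a made-up test curve — a 45° ramp, whose known solution is $\theta(t)=v_i t/\sqrt{2}+gt^2/4$:

```python
import numpy as np

g, v_i = 9.81, 1.0

# Hypothetical test curve: a 45-degree ramp, y = drop below the start.
x  = lambda th: th
y  = lambda th: th
dx = lambda th: 1.0
dy = lambda th: 1.0

def dtheta_dt(th):
    # d(theta)/dt = sqrt(v_i^2 + 2 g y) / sqrt(x'^2 + y'^2)
    return np.sqrt(v_i**2 + 2 * g * y(th)) / np.hypot(dx(th), dy(th))

# March the ODE with classical RK4 up to t = 2 s.
th, dt = 0.0, 1e-3
for _ in range(2000):
    k1 = dtheta_dt(th)
    k2 = dtheta_dt(th + dt * k1 / 2)
    k3 = dtheta_dt(th + dt * k2 / 2)
    k4 = dtheta_dt(th + dt * k3)
    th += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# For this ramp: theta(2) = v_i*2/sqrt(2) + g*2**2/4 ≈ 11.224
```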
Actually, I think you can take the limit just fine. Consider $\lim_{E_i\to 0} (O_i-E_i)^2/E_i$ under two cases: Case 1: $O_i > 0$. In this case the term goes off to infinity in the limit, and the overall chi-square statistic goes with it. Case 2: $O_i = 0$. In this case the term equals $E_i^2/E_i = E_i$, which goes to $0$ in the limit. So if $O_i=0$, the term adds nothing to the chi-square statistic. Your statistic doesn't have a chi-squared distribution under the null, but it's perfectly possible to simulate it under the null (under which any cell with $E_i=0$ must have $O_i=0$), as long as you replace the contribution $(O_i-E_i)^2/E_i$ for the cell with $E_i=0$ with the limiting value $0$. Then if $O_i$ for that cell is 0, you can compute the overall chi-square and compare with the simulated distribution. If $O_i$ is anything but 0, you can reject the null immediately; it's not possible to observe that if the null were true. You can't simply run it through a canned routine as-is, but with a little bit of effort you can still do a test as described. An alternative might be to use something from the family of power-divergence statistics, $\frac{2}{\lambda(\lambda+1)}\sum_{i=1}^kO_i[(O_i/E_i)^\lambda-1]$, where $\lambda$ is chosen so the statistic will always exist. I believe an appropriate reference for these is Cressie and Read (1984)$^{[1]}$; e.g. something like the Freeman-Tukey statistic $F^2 =4\sum_i (\sqrt{O_i}-\sqrt{E_i})^2$. Here, instead of auto-rejecting, an $O_i$ of 1 in the $E_i=0$ cell would only contribute 4 to the statistic. You would likely need to simulate the null distribution of the statistic here; the asymptotic chi-square approximation may not be so good. A bigger issue, perhaps, is to note that your categories are ordered. I don't think it would make sense to use a chi-square-like statistic in any case, since it throws away a lot of power relative to tests that exploit the ordering.
You might use something like an Anderson-Darling statistic, but (again) with simulated distribution under the null (you have a similar issue to the above in applying this test, but it should have better power if you use an analogous approach to solving it). [I wouldn't use Kolmogorov-Smirnov because it won't automatically reject the impossible case. At least I wouldn't use it as-is, but a test of that form could be adapted to behave as you'd hope.] [1]: Cressie, N. A. C. and Read, T. R. C., (1984), "Multinomial goodness-of-fit tests," J. Roy. Statist. Soc. Ser. B, 46, 440-464
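The simulation approach described above can be sketched as follows (the expected proportions are made up; one cell has $E_i = 0$ under the null):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2, 0.0])   # null proportions; last cell impossible
n = 100
E = n * p

def stat(O):
    # chi-square terms; the E_i = 0 cell contributes its limiting value 0
    m = E > 0
    return np.sum((O[m] - E[m]) ** 2 / E[m])

# simulate the null distribution of the modified statistic
sims = np.array([stat(rng.multinomial(n, p)) for _ in range(10_000)])

O = np.array([45, 35, 20, 0])        # observed table
if O[E == 0].sum() > 0:
    pvalue = 0.0                     # impossible under the null: reject
else:
    pvalue = np.mean(sims >= stat(O))
```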
Last time I wrote about different ways of calculating distance in a vector space — say, a two-dimensional Euclidean plane like the streets of Portland, Oregon. I showed three ways to reckon the distance, or norm, between two points (i.e. vectors). As a reminder, using the distance between points u and v on the map below this time: $$ \|\mathbf{u} - \mathbf{v}\|_1 = |u_x - v_x| + |u_y - v_y| $$ $$ \|\mathbf{u} - \mathbf{v}\|_2 = \sqrt{(u_x - v_x)^2 + (u_y - v_y)^2} $$ $$ \|\mathbf{u} - \mathbf{v}\|_\infty = \mathrm{max}(|u_x - v_x|, |u_y - v_y|) $$ Let's think about all the other points on Portland's streets that are the same distance away from u as v is. Again, we have to think about what we mean by distance. If we're walking, or taking a cab, we'll need to think about \(\ell_1\) — the sum of the distances in x and y. This is shown on the left-most map, below. For simplicity, imagine u is the origin, or (0, 0) in Cartesian coordinates. Then v is (0, 4). The sum of the distances is 4. Looking for points with the same sum, we find the pink points on the map. If we're thinking about how the crow flies, or \(\ell_2\) norm, then the middle map sums up the situation: the pink points are all equidistant from u. All good: this is what we usually think of as 'distance'. The \(\ell_\infty\) norm, on the other hand, only cares about the maximum distance in any direction, or the maximum element in the vector. So all points whose maximum coordinate is 4 meet the criterion: (1, 4), (2, 4), (4, 3) and (4, 0) all work. You might remember there was also a weird definition for the \(\ell_0\) norm, which basically just counts the non-zero elements of the vector. So, again treating u as the origin for simplicity, we're looking for all the points that, like v, have only one non-zero Cartesian coordinate. These points form an upright cross, like a + sign (right). So there you have it: four ways to draw a circle. Wait, what? 
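All of these distances are one-liners in numpy; a quick sketch with a made-up difference vector:

```python
import numpy as np

d = np.array([3, 4])        # difference u - v for some pair of points

np.linalg.norm(d, 1)        # → 7.0   taxicab: |3| + |4|
np.linalg.norm(d, 2)        # → 5.0   crow-flies: sqrt(9 + 16)
np.linalg.norm(d, np.inf)   # → 4.0   largest coordinate
np.count_nonzero(d)         # → 2     the "l0 norm": count of non-zeros
```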
A circle is just a set of points that are equidistant from the centre. So, depending on how you define distance, the shapes above are all 'circles'. In particular, if we normalize the (u, v) distance as 1, we have the following unit circles: It turns out we can define any number of norms (if you like the sound of \(\ell_{2.4}\) or \(\ell_{240}\) or \(\ell_{0.024}\)...), but most of the time, these will suffice. You can probably imagine the shapes of the unit circles defined by these other norms. What can we do with this stuff? Let's think about solving equations. Think about solving this: $$ x + 2y = 8 $$ I'm sure you can come up with a solution in your head, x = 6 and y = 1 maybe. But one equation and two unknowns means that this problem is underdetermined, and consequently has an infinite number of solutions. The solutions can be visualized geometrically as a line in the Euclidean plane (right). But let's say I don't want solutions like (3.141590, 2.429205) or (2742, –1367). Let's say I want the simplest solution. What's the simplest solution? This is a reasonable question, but how we answer it depends how we define 'simple'. One way is to ask for the nearest solution to the origin. Also reasonable... but remember that we have a few different ways to define 'nearest'. Let's start with the everyday definition: the shortest crow-flies distance from the origin. The crow-flies, \(\ell_2\) distances all lie on a circle, so you can imagine starting with a tiny circle at the origin, and 'inflating' it until it touches the line \(x + 2y - 8 = 0\). This is usually called the minimum norm solution, minimized on \(\ell_2\). We can find it in Python like so:

```python
import numpy as np
A = np.array([[1, 2]])
b = np.array([8])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The result is the vector (1.6, 3.2). You could almost have worked that out in your head, but imagine having 1000 equations to solve and you start to appreciate numpy.linalg.
Admittedly, it's even easier in Octave (or MATLAB if you must) and Julia: A = [1 2] b = [8] A \ b But remember we have lots of norms. It turns out that minimizing other norms can be really useful. For example, minimizing the \(\ell_1\) norm — growing a diamond out from the origin — results in (0, 4). The \(\ell_0\) norm gives the same sparse* result. Minimizing the \(\ell_\infty\) norm leads to \( x = y = 8/3 \approx 2.67\). This was the diagram I wanted to get to when I started with the 'how far away is the supermarket' business. So I think I'll stop now... have fun with Norm! * I won't get into sparsity now, but it's a big deal. People doing big computations are always looking for sparse representations of things. They use less memory, are less expensive to compute with, and are conceptually 'neater'. Sparsity is really important in compressed sensing, which has been a bit of a buzzword in geophysics lately.
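The \(\ell_1\) minimization mentioned above can be reproduced by recasting it as a linear program: split each variable into positive and negative parts, all non-negative, and minimize their sum. A scipy sketch:

```python
import numpy as np
from scipy.optimize import linprog

# minimize |x| + |y|  subject to  x + 2y = 8
# variables: x+, x-, y+, y-  (all >= 0), with x = x+ - x-, y = y+ - y-
c = np.ones(4)
A_eq = [[1, -1, 2, -2]]
res = linprog(c, A_eq=A_eq, b_eq=[8], bounds=[(0, None)] * 4)
xp, xm, yp, ym = res.x
(xp - xm, yp - ym)    # → (0.0, 4.0), the sparse solution
```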
Value and Momentum Two of the most discussed effects in the asset pricing literature are Value and Momentum. Let's start with the definitions: 1) Value Effect: Stocks with high book-to-market have historically outperformed stocks with low book-to-market. Book-to-Market: (book value of common equity + deferred taxes and investment credit - book value of preferred stock)/(market value of equity). The higher the book-to-market (BM), the "cheaper" the stock, as you are getting more book value for every dollar you invest. 2) Momentum Effect: Stocks with high returns over the previous year (winners) have historically outperformed stocks with low returns over the previous year (losers). Past-year returns are calculated as the cumulative returns from $t-12$ to $t-2$ (the most recent month, $t-1$, is excluded). Bonus – Size Effect: Stocks with low market capitalization (price × shares outstanding) have historically outperformed stocks with high market capitalization. Both the value and momentum effects are stronger among small stocks. Trading on Value and Momentum Although there are many ways to construct portfolios based on the value and momentum effects, I will start with the following baseline for back-testing (see here for details): 1) Start with monthly CRSP data, and adjust for delisting returns. I do this by (1) setting the return to the delisting return in the month that the firm delists, if the delisting return is non-missing; (2) setting the delisting return to -0.3 (-30%) if the delisting return is missing and the delisting code is 500, 520, 551-573, 574, 580 or 584; (3) setting the delisting return to -1 (-100%) if the delisting return is missing and the delisting code does not belong to the list above. This is important for calculating returns in the extreme value and momentum portfolios, as these firms have a higher-than-average likelihood of delisting.
2) Each month, select ordinary common shares which have non-stale prices, non-missing returns, non-missing shares outstanding and are traded on the major exchanges (NYSE, AMEX and NASDAQ). 2a) For the value portfolios, select firms with a non-missing book-to-market. 2b) For the momentum portfolios, select firms with a non-missing/non-stale price at t-12, and no more than 4 missing returns between t-12 and t-2. 3) Select only NYSE firms, then calculate percentiles of the sorting variables among these firms each month – these are the breakpoints we are going to use to form portfolios. For example: If you want to form 5 value portfolios, calculate the 20th, 40th, 60th and 80th percentiles of book-to-market for every month in your sample. 4) Merge these breakpoints back into the rest of the data (all NYSE, AMEX and NASDAQ firms). Then sort into portfolios based on the breakpoints. Back to the five value portfolios example: Firms with a book-to-market below the 20th percentile will be put into portfolio one, firms with a book-to-market between the 20th and 40th percentiles will be put into portfolio 2, etc. Note: The portfolios will not all have the same number of firms. This is because the average NASDAQ/AMEX firm is different from the average NYSE firm, so the percentiles will not line up exactly. This prevents small firms from exerting an undue influence on the results. 5) Now you have the portfolio assignments at the end of each month. Portfolios are rebalanced monthly, so these assignments will be used for the following month to prevent a look-ahead bias. 6) All portfolios are value-weighted using last month's ending market capitalization. This also prevents a look-ahead bias. 7) Convert each portfolio return to an excess return by subtracting the monthly risk-free rate, which can be found at Ken French's data library. 8) Form a factor portfolio by subtracting the extreme portfolios from one another.
Using the 5 value portfolio example: Subtract the 1 portfolio (lowest BM) from the 5 portfolio (highest BM). This is also an excess return. Call these portfolios high-minus-low (HML). 9) Following the observation of Value and Momentum Everywhere, form a "Combo" portfolio which is an equal-weighted average of the value and momentum portfolios. Because the value and momentum effects deliver positive excess returns and are negatively correlated, a combination of the two should on average outperform either strategy on its own. Evaluating Portfolio Performance I am going to start by evaluating performance in the simplest way possible: CAPM alpha, which is a measure of average returns that cannot be explained by a portfolio's covariance with the market portfolio. To calculate this, run the following regression: \begin{equation} R^e_{p,t}=\alpha + \beta R^e_{m,t} + \epsilon_{p,t} \end{equation} where $R^e_{p,t}$ is the excess return on the portfolio of interest, $R^e_{m,t}$ is the excess return on the market, and $\alpha$ denotes the CAPM alpha we are trying to measure. The table below presents the CAPM alphas and corresponding t-statistics for 10 portfolios formed on value and momentum using data from 1970-2016. All quantities are annualized. For both value and momentum, the CAPM alpha is almost monotonically increasing from the low-BM/loser portfolios to the high-BM/winner portfolios. The high-minus-low portfolios both generate positive CAPM alphas, although the CAPM alpha for value is only marginally significant. As expected, the negative correlation between value and momentum gives the COMBO portfolio a higher Sharpe ratio (average excess return/standard deviation of excess returns) than either portfolio on its own. Conditioning on Size In this section I am going to use an alternative portfolio construction: At step 3 above, first divide NYSE firms into two groups – above and below median market capitalization. Then, calculate the breakpoints for value and momentum within each of these two groups.
This is useful for understanding how the value and momentum effects differ among large and small firms. I then run the same CAPM regression as above. The table below presents the CAPM alphas and corresponding t-statistics for portfolios formed on size and value/momentum, using data from 1970-2016. All quantities are annualized. As above, the CAPM alpha is almost monotonically increasing from the low-BM/loser portfolios to the high-BM/winner portfolios within both the small and large firm groups. As mentioned previously, the CAPM alphas for value and momentum are larger for the group of smaller firms. Next Steps With the basics established, you can refine the stock screener by: 1) Using less stale book-to-market data 2) Accounting for mis-measurement of book-to-market with intangible assets 3) Accounting for heterogeneity across industries 4) Accounting for "junky" stocks that get picked up by a value filter 5) Imposing restrictions, such as on value-weighted portfolio beta
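The CAPM regression from the evaluation step is a one-line OLS once the excess-return series are built. A sketch on simulated data (all parameters are made up; the real inputs would be the portfolio and market series from steps 1-8 above):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 564                                # months, 1970-2016

mkt = rng.normal(0.005, 0.045, T)      # market excess returns (simulated)
alpha_true, beta_true = 0.003, 1.1     # hypothetical parameters
port = alpha_true + beta_true * mkt + rng.normal(0.0, 0.02, T)

# regress R^e_p on a constant and R^e_m to recover alpha and beta
X = np.column_stack([np.ones(T), mkt])
alpha, beta = np.linalg.lstsq(X, port, rcond=None)[0]
annualized_alpha = 12 * alpha          # annualized, as in the tables
```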
In this class, we will deal only with integrals of complex functions of a real variable, integrated with respect to the real variable. This is identical to integration of real functions of real variables; the $j$ is simply treated as a constant in all these cases. In this class, we are typically interested in time $t$ or frequency $\omega$ being the independent variable. Hence, the integrals will be with respect to $t$ or $\omega$. Example: $\int_0^{\pi/4} e^{j 2 t} \ dt = \left[\frac{e^{j 2 t}}{2 j}\right]_{0}^{\pi/4} = \frac{j-1}{2j}$ Sometimes we will have to use integration by parts to evaluate integrals. The main result to recall is $$\int u \, dv = u\,v - \int v \, du.$$ Example: If we want to evaluate $\displaystyle{\int_0^1 t e^{-j \omega t} dt}$, where $\omega$ is any nonzero complex number, we choose $u = t$ and $dv = e^{-j \omega t}\,dt$, so that $du = dt$ and $v = \dfrac{e^{-j \omega t}}{-j\omega}$.
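The first worked example can be checked numerically; here is a sketch using a simple trapezoid rule (note that $(j-1)/(2j)$ simplifies to $(1+j)/2$):

```python
import numpy as np

# integral of e^{j2t} over [0, pi/4]; exact value (j-1)/(2j) = 0.5 + 0.5j
t = np.linspace(0, np.pi / 4, 200_001)
f = np.exp(2j * t)
h = t[1] - t[0]
val = h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)   # trapezoid rule
np.allclose(val, 0.5 + 0.5j)                        # → True
```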
There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the transformation theory proposed by Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics, matrix mechanics (invented by Werner Heisenberg)[2] and wave mechanics (invented by Erwin Schrödinger). In this formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom). Generally, quantum mechanics does not assign definite values to observables. Instead, it makes predictions about probability distributions; that is, the probability of obtaining each of the possible outcomes from measuring an observable. Naturally, these probabilities will depend on the quantum state at the instant of the measurement. There are, however, certain states that are associated with a definite value of a particular observable. These are known as "eigenstates" of the observable ("eigen" can be roughly translated from German as inherent or as a characteristic). In the everyday world, it is natural and intuitive to think of everything being in an eigenstate of every observable. Everything appears to have a definite position, a definite momentum, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values for the position or momentum of a certain particle in a given space in a finite time; rather, it only provides a range of probabilities of where that particle might be. Therefore, it became necessary to use different words for (a) the state of something having an uncertainty relation and (b) a state that has a definite value. 
The latter is called the "eigenstate" of the property being measured. For example, consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape extending over all of space, called a wave function. The position and momentum of the particle are observables. The uncertainty principle of quantum mechanics states that the position and the momentum cannot both be known with infinite precision at the same time. However, one can measure the position alone of a moving free particle, creating an eigenstate of position with a wavefunction that is very large at a particular position x and almost zero everywhere else. If one performs a position measurement on such a wavefunction, the result x will be obtained with almost 100% probability. In other words, the position of the free particle will be almost exactly known. This is called an eigenstate of position (mathematically more precise: a generalized eigenstate, or eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. An eigenstate of momentum, on the other hand, has the form of a plane wave. It can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate. If the particle is in an eigenstate of momentum, then its position is completely blurred out. Usually, a system will not be in an eigenstate of whatever observable we are interested in. However, if one measures the observable, the wavefunction will instantaneously become an eigenstate (or generalized eigenstate) of that observable. This process is known as wavefunction collapse. It involves expanding the system under study to include the measurement device, so that a detailed quantum calculation would no longer be feasible and a classical description must be used.
If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wavefunction that is a wave packet centered around some mean position x0, neither an eigenstate of position nor of momentum. When one measures the position of the particle, it is impossible to predict with certainty the result that we will obtain. It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x. Wave functions can change as time progresses. An equation known as the Schrödinger equation describes how wave functions change in time, a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity, like a classical particle with no forces acting on it. However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain. This also has the effect of turning position eigenstates (which can be thought of as infinitely sharp wave packets) into broadened wave packets that are no longer position eigenstates. Some wave functions produce probability distributions that are constant in time. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wavefunction surrounding the nucleus (Fig. 1). 
(Note that only the lowest angular momentum states, labeled s, are spherically symmetric.) The time evolution of wave functions is deterministic in the sense that, given a wavefunction at an initial time, it makes a definite prediction of what the wavefunction will be at any later time. During a measurement, by contrast, the change of the wavefunction into another one is not deterministic but unpredictable, i.e., random. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Interpretations of quantum mechanics have been formulated to do away with the concept of "wavefunction collapse"; see, for example, the relative state interpretation. The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wavefunctions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.

Mathematical formulation

Main article: Mathematical formulation of quantum mechanics. See also: Quantum logic.

In the mathematically rigorous formulation of quantum mechanics, developed by Paul Dirac and John von Neumann, the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors") residing in a complex separable Hilbert space (variously called the "state space" or the "associated Hilbert space" of the system), well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space.
The exact nature of this Hilbert space depends on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a maximally Hermitian (more precisely: a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues. The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian, the operator corresponding to the total energy of the system, generates time evolution. The inner product between two state vectors is a complex number known as a probability amplitude. During a measurement, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitude between the initial and final states. The possible results of a measurement are the eigenvalues of the operator, which explains the choice of Hermitian operators, for which all the eigenvalues are real. We can find the probability distribution of an observable in a given state by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute. The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the wave-like behavior of quantum states.
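The measurement postulate just described, with probabilities obtained from squared amplitudes of projections onto the eigenvectors of a Hermitian operator, can be sketched numerically. The two-level observable and state below are arbitrary toy examples of mine, not from the text:

```python
import numpy as np

# A Hermitian "observable" (here: the Pauli-x matrix, a 2-level toy example).
A = np.array([[0, 1],
              [1, 0]], dtype=complex)

# A normalized state vector (a unit vector in the Hilbert space C^2).
psi = np.array([1, 0], dtype=complex)

# Spectral decomposition: the eigenvalues are the possible measurement results.
eigvals, eigvecs = np.linalg.eigh(A)

# Probability of each outcome = squared absolute value of the amplitude
# <eigenvector, psi> between the initial state and each eigenstate.
probs = np.abs(eigvecs.conj().T @ psi) ** 2

print("possible results:", eigvals)
print("probabilities:   ", probs)
assert np.isclose(probs.sum(), 1.0)   # probabilities sum to one
```

For this state both outcomes (+1 and -1) come out equally likely, illustrating that a state need not be an eigenstate of the observable being measured.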
It turns out that analytic solutions of Schrödinger's equation are available for only a small number of model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the hydrogen molecular ion and the hydrogen atom are the most important representatives. Even the helium atom, which contains just one more electron than hydrogen, defies all attempts at a fully analytic treatment. There exist several techniques for generating approximate solutions. For instance, in the method known as perturbation theory one uses the analytic results for a simple quantum mechanical model to generate results for a more complicated model related to the simple model by, for example, the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces only weak deviations from classical behavior. The deviations can then be calculated based on the classical motion. This approach is important for the field of quantum chaos. An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over histories between initial and final states; this is the quantum-mechanical counterpart of action principles in classical mechanics.

Interactions with other scientific theories

The fundamental rules of quantum mechanics are very broad. They assert that the state space of a system is a Hilbert space and the observables are Hermitian operators acting on that space, but they do not tell us which Hilbert space or which operators. These must be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical physics when a system moves to higher energies or, equivalently, larger quantum numbers.
In other words, classical mechanics is simply the quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can therefore start from an established classical model of a particular system and attempt to guess the underlying quantum model that gives rise to the classical model in the correspondence limit.

Unsolved problems in physics: In the correspondence limit of quantum mechanics, is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the superposition of states and wavefunction collapse, give rise to the reality we perceive?

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator. Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field rather than a fixed set of particles. The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems.
A simpler approach, one employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential $-\frac{e^2}{4\pi\epsilon_0}\frac{1}{r}$. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of the subnuclear particles: quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory known as electroweak theory. It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity, the most accurate theory of gravity currently known, and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.

Derivation of quantization

The particle in a 1-dimensional potential energy box is the simplest example where restraints lead to the quantization of energy levels. The box is defined as zero potential energy inside a certain interval and infinite potential energy everywhere outside that interval.
For the 1-dimensional case in the x direction, the time-independent Schrödinger equation can be written as[3]:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi.$$

The general solutions are

$$\psi = Ae^{ikx} + Be^{-ikx}, \qquad E = \frac{k^2\hbar^2}{2m},$$

or, rewriting the exponentials,

$$\psi = C\sin kx + D\cos kx.$$

The presence of the walls of the box restricts the acceptable solutions to the wavefunction. At each wall,

$$\psi = 0 \quad \text{at} \quad x = 0,\; x = L.$$

Consider $x = 0$: since $\sin 0 = 0$ and $\cos 0 = 1$, satisfying $\psi = 0$ requires $D = 0$, removing the cosine term. Now consider $\psi = C\sin kx$: at $x = L$, $\psi = C\sin kL$. If $C = 0$ then $\psi = 0$ for all $x$, which would conflict with the Born interpretation; therefore $\sin kL = 0$ must be satisfied by

$$kL = n\pi, \qquad n = 1, 2, 3, \dots$$
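Substituting $k = n\pi/L$ back into $E = k^2\hbar^2/2m$ gives the quantized levels $E_n = n^2\pi^2\hbar^2/(2mL^2)$. A small numeric sketch; the choice of an electron in a 1 nm box is illustrative, not from the text:

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant, J*s
m = 9.109e-31          # electron mass, kg (illustrative choice of particle)
L = 1e-9               # box width, m (illustrative: 1 nm)

def energy(n):
    """E_n = n^2 pi^2 hbar^2 / (2 m L^2), from kL = n*pi and E = k^2 hbar^2 / 2m."""
    k = n * math.pi / L            # the allowed wavenumbers from the wall conditions
    return (k * hbar) ** 2 / (2 * m)

for n in (1, 2, 3):
    print(n, energy(n))            # levels grow as n^2

# The spacing is not uniform: E_2 = 4 E_1 and E_3 = 9 E_1.
assert abs(energy(2) / energy(1) - 4) < 1e-12
```

The $n^2$ scaling is the signature of the hard-wall box: unlike the harmonic oscillator, the gaps between adjacent levels widen with $n$.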
Publications of Torsten Hoefler

Copyright Notice: The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

E. Solomonik, G. Ballard, J. Demmel, T. Hoefler: A Communication-Avoiding Parallel Algorithm for the Symmetric Eigenvalue Problem (Nr. 11, In Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA'17), presented in Washington, DC, USA, pages 111--121, ACM, ISBN: 978-1-4503-4593-4, Jun. 2017)

Abstract: Many large-scale scientific computations require eigenvalue solvers in a scaling regime where efficiency is limited by data movement. We introduce a parallel algorithm for computing the eigenvalues of a dense symmetric matrix, which performs asymptotically less communication than previously known approaches. We provide analysis in the Bulk Synchronous Parallel (BSP) model with additional consideration for communication between a local memory and cache. Given sufficient memory to store $c$ copies of the symmetric matrix, our algorithm requires $\Theta(\sqrt{c})$ less interprocessor communication than previously known algorithms, for any $c \leq p^{1/3}$ when using $p$ processors. The algorithm first reduces the dense symmetric matrix to a banded matrix with the same eigenvalues.
Subsequently, the algorithm employs successive reduction to $O(\log p)$ thinner banded matrices. We employ two new parallel algorithms that achieve lower communication costs for the full-to-band and band-to-band reductions. Both of these algorithms leverage a novel QR factorization algorithm for rectangular matrices.

Documents: download article

BibTeX:
@inproceedings{solomonik--commav-symm-eigenvalue,
  author={E. Solomonik and G. Ballard and J. Demmel and T. Hoefler},
  title={{A Communication-Avoiding Parallel Algorithm for the Symmetric Eigenvalue Problem}},
  year={2017},
  month={Jun.},
  pages={111--121},
  number={11},
  booktitle={Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA'17)},
  location={Washington, DC, USA},
  publisher={ACM},
  isbn={978-1-4503-4593-4},
  source={http://www.unixer.de/~htor/publications/},
}
Here's my two cents worth. Why Lie Algebras? First I'm just going to talk about Lie algebras. These capture almost all information about the underlying group. The only information omitted is the discrete symmetries of the theory. But in quantum mechanics we usually deal with these separately, so that's fine. The Lorentz Lie Algebra It turns out that the Lie algebra of the Lorentz group is isomorphic to that of $SL(2,\mathbb{C})$. Mathematically we write this (using Fraktur font for Lie algebras) $$\mathfrak{so}(3,1)\cong \mathfrak{sl}(2,\mathbb{C})$$ This makes sense since $\mathfrak{sl}(2,\mathbb{C})$ is non-compact, just like the Lorentz group. Representing the Situation When we do quantum mechanics, we want our states to live in a vector space that forms a representation for our symmetry group. We live in a real world, so we should consider real representations of $\mathfrak{sl}(2,\mathbb{C})$. A bit of thought will convince you of the following. Fact: real representations of a Lie algebra are in one-to-one correspondence (bijection) with complex representations of its complexification. That sounds quite technical, but it's actually simple. It just says that we can have complex vector spaces for our quantum mechanical states! That is, provided we use complex coefficients for our Lie algebra $\mathfrak{sl}(2,\mathbb{C})$. When we complexify $\mathfrak{sl}(2,\mathbb{C})$ we get a direct sum of two copies of it. Mathematically we write $$\mathfrak{sl}(2,\mathbb{C})_{\mathbb{C}} = \mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$$ So Where Does $SU(2)$ Come In? So we're looking for complex representations of $\mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$. But these just come from a tensor product of two representations of $\mathfrak{sl}(2,\mathbb{C})$. 
These are usually labelled by a pair of numbers, like so $$|\psi \rangle \textrm{ lives in the } (i,j) \textrm{ representation of } \mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$$ So what are the possible representations of $\mathfrak{sl}(2,\mathbb{C})$? Here we can use our fact again. It turns out that $\mathfrak{sl}(2,\mathbb{C})$ is the complexification of $\mathfrak{su}(2)$. But we know that the real representations of $\mathfrak{su}(2)$ are the spin representations! So really the numbers $i$ and $j$ label the angular momentum and spin of particles. From this perspective you can see that spin is a consequence of special relativity! What about Compactness? This tortuous journey shows you that things aren't really as simple as Ryder makes out. You are absolutely right that $$\mathfrak{su}(2)\oplus \mathfrak{su}(2) \neq \mathfrak{so}(3,1)$$ since the LHS is compact but the RHS isn't! But my arguments above show that compactness is not a property that survives the complexification procedure. It's my "fact" above that ties everything together. Interestingly in Euclidean signature one does have that $$\mathfrak{su}(2)\oplus \mathfrak{su}(2) = \mathfrak{so}(4)$$ You may know that QFT is closely related to statistical physics via Wick rotation. So this observation demonstrates that Ryder's intuitive story is good, even if his mathematical claim is imprecise. Let me know if you need any more help!
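The split of the complexified Lorentz algebra into two commuting copies can be checked numerically in the defining (4x4) representation: the combinations $A_i = (J_i + iK_i)/2$ and $B_i = (J_i - iK_i)/2$ each close like one copy, and commute with each other. A sketch; the explicit matrix conventions below are my choice, not the answer's:

```python
import numpy as np

def comm(X, Y):
    """Matrix commutator [X, Y]."""
    return X @ Y - Y @ X

Z = np.zeros((4, 4))
# Rotation generators J_i on coordinates (t, x, y, z), convention [J_x, J_y] = J_z.
Jx, Jy, Jz = Z.copy(), Z.copy(), Z.copy()
Jx[2, 3], Jx[3, 2] = -1, 1
Jy[1, 3], Jy[3, 1] = 1, -1
Jz[1, 2], Jz[2, 1] = -1, 1
# Boost generators K_i (symmetric, mixing t with x, y, z respectively).
Kx, Ky, Kz = Z.copy(), Z.copy(), Z.copy()
Kx[0, 1], Kx[1, 0] = 1, 1
Ky[0, 2], Ky[2, 0] = 1, 1
Kz[0, 3], Kz[3, 0] = 1, 1

# Complex combinations A = (J + iK)/2 and B = (J - iK)/2.
pairs = ((Jx, Kx), (Jy, Ky), (Jz, Kz))
A = [(J + 1j * K) / 2 for J, K in pairs]
B = [(J - 1j * K) / 2 for J, K in pairs]

# Each set closes on itself like a copy of sl(2,C): [A_x, A_y] = A_z, etc.
assert np.allclose(comm(A[0], A[1]), A[2])
assert np.allclose(comm(B[0], B[1]), B[2])
# ...and the two copies commute with each other: the direct-sum structure.
for Ai in A:
    for Bj in B:
        assert np.allclose(comm(Ai, Bj), np.zeros((4, 4)))
print("complexified so(3,1) splits into two commuting copies")
```

The crucial role of the $i$ in $A_i = (J_i + iK_i)/2$ is exactly the complexification step from the "fact" above: the split exists only after complex coefficients are allowed.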
Mohammed Alatif, Puttaswamy, Laplacian Minimum Boundary Dominating Energy of Graphs, Asia Pac. J. Math., 3 (2016), 99-113. R. A. Rashwan, H. A. Hammad, Random Fixed Point Theorems for Random Mappings, Asia Pac. J. Math., 3 (2016), 114-135. K.P.R. Rao, Md. Mustaq Ali, A.S. Babu, Coincidence Point Theorem for Two Pairs of Hybrid Mappings in Complex Valued Metric Spaces, Asia Pac. J. Math., 3 (2016), 136-143. Ayazul Hasan, h-Purifiable Submodules and Isomorphism of h-Pure Hulls, Asia Pac. J. Math., 3 (2016), 144-152. M.P. Jeyaraman, T. K. Suresh, Certain Third Order Differential Subordination, Asia Pac. J. Math., 3 (2016), 153-160. Somayeh Khademloo, Raheleh Mohseni, Unique Positive Solution of Semilinear Elliptic Equations Involving Concave and Convex Nonlinearities In R^N, Asia Pac. J. Math., 3 (2016), 161-172. Sultan Senan Mahde, Veena Mathad, Some Results on the Edge Hub-Integrity of Graphs, Asia Pac. J. Math., 3 (2016), 173-185. Deborah Olufunmilayo Makinde, K. J. Afasinu, A.O. Afolabi, Some Properties of Certain Subclasses of Harmonic Univalent Functions, Asia Pac. J. Math., 3 (2016), 186-192. M.I. Quresh, M.S. Baboo, Summation Theorems for ${_p}F_{q+1}[(\alpha_p);g\pm m, (\beta_q); z]$ via Mellin-Barnes type contour integral and its Applications, Asia Pac. J. Math., 3 (2016), 193-207.
Orbit

Personal Recommendation: Two novels for the price of one. The second may be a little weaker than the first, but Larry Niven does not produce any bad writing. In common with the rest of his work, these contain rock hard science. Take a neutron star, put a planet round it and you have the recipe for a gas torus, a vast volume of micro-gravity with breathable air. These books bring orbital mechanics to life, if you notice it amidst the riveting story.

~ o ~

For a circular orbit:

(1) $Period = 2\pi\sqrt{\frac{r^3}{GM}}$

Where: Period is the orbital period in seconds, G is $6.673\times10^{-11}$, M is the mass of the planet, and r is the radius of the orbit (measured from the centre of the planet).

Since $V_{circular} = \frac{2 \pi r}{Period}$ we get

(2) $V_{circular} = \sqrt{\frac{GM}{r}}$

It is easily shown (by equating potential energy with kinetic energy if you want to know) that

(3) $V_{escape} = \sqrt{\frac{2GM}{r}}$

Therefore:

(4) $V_{escape} = \sqrt{2}\, V_{circular}$

~ o ~

Here I will only consider spacecraft (small bodies) orbiting planets (large bodies). Orbiting bodies rotate around their centre of mass, which in the case of similar sized bodies would have them rotating around a point in space midway between them. There are few cases of this in our solar system. Pluto and Charon is one, and the only large one. If you want to investigate such things, then you should look here. A spacecraft in orbit around a planet describes an ellipse with the planet at one focus. Such an ellipse has two ends, one at the closest approach to the planet, called periapsis 1, and one at the farthest distance from the planet, called apoapsis. The spacecraft will be travelling fastest at periapsis and slowest at apoapsis. A spacecraft in orbit around a planet is not in zero gravity. In fact, for low orbits, the gravity is still quite close to that on the planet's surface. 2 Astronauts float around because they are in free-fall; they are freely falling toward the planet.
You can think of it like this: the spacecraft is falling toward the planet, but it is also moving fast, so that by the time it would have hit the planet it actually misses it. See the picture to the right. The spacecraft starts at point A with a certain velocity. Gravity pulls it toward the planet, but it actually falls to point B. When the spacecraft is at point B its velocity will be downwards and the planet will be pulling to the right, which is the same situation as at A, just turned ninety degrees. Of course it is more complex than this, because the direction the planet is pulling changes all the time, but this is the basic mechanism. In order to understand any more details, we have to understand escape velocity. This is the speed that the spacecraft has to be going to escape from the planet altogether. If the spacecraft is going slower than this it will be in orbit, but if it is going too slowly then that orbit will intersect the surface of the planet. If the velocity is exactly right, then the spacecraft will be in a circular orbit; it will always be at the same altitude above the planet. It so happens that the relationship between the escape velocity and the circular orbit velocity is very simple. It is just a factor of $\sqrt 2$. See the right side-bar if you must know the details. Since Newton's laws are symmetrical, if a body drops from infinity, it will be going at the escape velocity when it hits the planet. The escape velocity depends on your altitude; higher up there's less gravity, so escape velocity is lower. So far as the maths is concerned, a planet's mass is concentrated in an infinitely small point at its centre. You only need to consider the size of the planet if you are worried that you might hit it. 3 That means that if you are calculating orbits, you have to remember that (for the maths) an orbit's radius is measured from the centre of the planet but (describing it) its altitude is measured from the surface.
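The $\sqrt 2$ relationship and the radius-versus-altitude point can be checked with a quick calculation. A sketch using standard Earth values; the 300 km altitude is an arbitrary example of mine:

```python
import math

G = 6.673e-11        # gravitational constant, N m^2 / kg^2
M = 5.972e24         # mass of the Earth, kg (example planet)
R = 6.371e6          # Earth's surface radius, m
alt = 300e3          # an example low-orbit altitude, m

# The orbit radius is measured from the planet's centre, not the surface.
r = R + alt

v_circ = math.sqrt(G * M / r)        # circular orbit velocity
v_esc = math.sqrt(2 * G * M / r)     # escape velocity at the same radius
period = 2 * math.pi * r / v_circ    # orbital period

print(f"v_circ = {v_circ:.0f} m/s, v_esc = {v_esc:.0f} m/s")
print(f"period = {period / 60:.1f} minutes")

# The simple relationship from the text: exactly a factor of sqrt(2).
assert abs(v_esc / v_circ - math.sqrt(2)) < 1e-12
```

The roughly 90-minute period this produces is the familiar figure for low Earth orbit, a useful sanity check that radius was measured from the centre.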
In order to get from one orbit to another we have to burn fuel. If we accelerate our spacecraft from a circular orbit it will go into an elliptical orbit with the periapsis at the point at which we did our burn. At some later point we can burn fuel again to put the spacecraft into a circular orbit wider than the original one. If we wait until apoapsis to do that, then we use the minimum amount of fuel, but we have to wait for half the orbital period. When we are talking about interplanetary travel that can be months. If instead of accelerating the spacecraft we decelerate it, then we go into a smaller elliptical orbit with the burn point as the apoapsis. To launch a spacecraft from the planet, we just accelerate it so that it reaches orbital velocity at the same time as it reaches a position on that orbit. Rockets appear to go straight up, but they curve over and most of their acceleration is horizontal. To land a spacecraft you just decelerate it so that its orbit falls within the atmosphere. On an airless world you have to kill all the orbital velocity, so that you are falling directly toward the planet, and then use your rockets vertically, directly against gravity. As we accelerate the spacecraft more, the apoapsis of the orbit gets higher. Eventually it reaches infinity. The ellipse will have the same basic shape, but its ends are open; it is a parabola. This is when the speed is the escape velocity. As the speed keeps increasing the ends open out more, giving a hyperbola. A hyperbolic flight is one past a planet, where the velocity is above the escape velocity. The spacecraft curves around the planet in a wide arc, but doesn't go into orbit around it.
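The accelerate-at-one-point, circularize-at-apoapsis manoeuvre described above is the classic two-burn (Hohmann) transfer, and its fuel cost can be sketched with the vis-viva equation. The name, the vis-viva step, and the Earth numbers below are my additions, not from the text:

```python
import math

G = 6.673e-11
M = 5.972e24                  # Earth mass, kg (example central body)
mu = G * M                    # standard gravitational parameter

def hohmann(r1, r2):
    """Total delta-v for a two-burn transfer between circular orbits r1 -> r2.

    Burn 1 (at r1) stretches the circle into an ellipse whose apoapsis is r2;
    burn 2, half an orbit later at apoapsis, circularizes at r2.
    """
    a = (r1 + r2) / 2                          # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                    # initial circular speed
    v2 = math.sqrt(mu / r2)                    # final circular speed
    v_peri = math.sqrt(mu * (2 / r1 - 1 / a))  # ellipse speed at periapsis (vis-viva)
    v_apo = math.sqrt(mu * (2 / r2 - 1 / a))   # ellipse speed at apoapsis
    return (v_peri - v1) + (v2 - v_apo)        # both burns are prograde

# Example: from a ~300 km low orbit out to geostationary radius.
dv = hohmann(6.671e6, 4.216e7)
print(f"total delta-v = {dv:.0f} m/s")
```

As the text says, the price of this minimum-fuel route is time: the coast between the burns lasts half the period of the transfer ellipse.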
This is an exercise (1.4.11) from Marker. Fix a language $\mathcal L$ and an $\mathcal L$-structure $\mathcal M$. For a subset $A \subseteq M$, an element of $M$ is algebraic over $A$ if it is a member of a finite $A$-definable subset of $M$. Let $\bar A$ denote the set of elements algebraic over $A$. We would like to show that $\bar {\bar A} = \bar A$. Here's a failed attempt to solve the problem. Suppose a formula $\psi(x, b)$ defines a finite set with a parameter $b$ from $\bar A$, and $\phi(y, a)$ defines a finite set with a parameter $a$ from $A$ (for simplicity we assume the number of parameters is one). Then, naively, the formula $\exists z (\psi(x, z) \wedge \phi(z, a))$ will do the job. However, this formula is not known to define a finite set a priori. I'd be grateful if you could help me with this problem.
This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT) by showing an implementation of how the parameters of a real pure tone can be calculated from just two DFT bin values. The equations from previous articles are used in tandem to first calculate the frequency, and then calculate the amplitude and phase of the tone. The approach works best when the tone is between the two DFT bins in terms of frequency. The Coding... There are many applications in which this technique is useful. I discovered a version of this method while analysing radar systems, but the same approach can be used in a very wide range of... This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT), but only indirectly. The main intent is to get someone who is uncomfortable with complex numbers a little more used to them and relate them back to already known trigonometric relationships done in real values. It is essentially a followup to my first blog article "The Exponential Nature of the Complex Unit Circle". Polar Coordinates The more common way of... Let's consider the following hypothetical situation: You have a sequence $x$ with $N/2$ points and a black box which can compute the DFT (Discrete Fourier Transform) of an $N$ point sequence. How will you use the black box to compute the $N/2$ point DFT of $x$? While the problem may appear to be a bit contrived, the answer(s) shed light on some basic yet insightful and useful properties of the DFT. On a related note, the reverse problem of computing an $N$... Channel Combining Channel combining is a step that combines copies of a given B-DMC $W$ in a recursive manner to produce a vector channel $W_N : {\cal X}^N \to {\cal Y}^N$, where $N$ can be any power of two, $N = 2^n,\ n \ge 0^{[1]}$. The notation $u_1^N$ is shorthand for the row vector $(u_1, \dots , u_N)$.
The vector channel $W_N$ is the virtual channel between the input sequence $u_1^N$ to a linear encoder and the output sequence $y^N_1$ of $N$... Most engineers have seen the moment-to-moment fluctuations that are common with instantaneous measurements of a supposedly steady spectrum. You can see these fluctuations in magnitude and phase for each frequency bin of your spectrogram. Although major variations are certainly reason for concern, recall that we don't live in an ideal, noise-free world. After verifying the integrity of your measurement setup by checking connections, sensors, wiring, and the like, you might conclude that the... Some might argue that measurement is a blend of skepticism and faith. While time constraints might make you lean toward faith, some healthy engineering skepticism should bring you back to statistics. This article reviews some practical statistics that can help you satisfy one common question posed by skeptical engineers: "How precise is my measurement?" As we'll see, by understanding how to answer it, you gain a degree of control over your measurement time. An accurate, precise... This part in the series will consider the signals, measurements, analyses and configurations for testing high-speed low-latency feedback loops and their controllers. Along with basic test signals, a versatile IFFT signal generation scheme will be discussed and implemented. A simple controller under test will be constructed to demonstrate the analysis principles in preparation for the design and evaluation of specific controllers and closed-loop applications. Additional design... This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT) by deriving exact formulas to calculate the phase and amplitude of a pure complex tone from several DFT bin values and knowing the frequency.
This article is functionally an extension of my prior article "Phase and Amplitude Calculation for a Pure Complex Tone in a DFT"[1] which used only one bin for a complex tone, but it is actually much more similar to my approach for real... This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT) by deriving exact formulas to calculate the phase and amplitude of a pure complex tone from a DFT bin value and knowing the frequency. This is a much simpler problem to solve than the corresponding case for a pure real tone which I covered in an earlier blog article[1]. In the noiseless single tone case, these equations will be exact. In the presence of noise or other tones... Introduction Quadrature signals are based on the notion of complex numbers and perhaps no other topic causes more heartache for newcomers to DSP than these numbers and their strange terminology of j operator, complex, imaginary, real, and orthogonal. If you're a little unsure of the physical meaning of complex numbers and the j = √-1 operator, don't feel bad because you're in good company. Why, even Karl Gauss, one of the world's greatest mathematicians, called the j-operator the "shadow of... Some time ago I reviewed the manuscript of a book being considered by the IEEE Press publisher for possible publication. In that manuscript the author presented the following equation: Being unfamiliar with Eq. (1), and being my paranoid self, I wondered if that equation is indeed correct. Not finding a stock trigonometric identity in my favorite math reference book to verify Eq. (1), I modeled both sides of the equation using software. Sure enough, Eq. (1) is not correct. So then I... The finite-word representation of fractional numbers is known as fixed-point. Fixed-point is an interpretation of a 2's complement number, usually signed but not limited to signed representation.
It extends our finite-word length from a finite set of integers to a finite set of rational real numbers [1]. A fixed-point representation of a number consists of integer and fractional components. The bit length is defined... This is an article to hopefully give an understanding to Euler's magnificent equation: $$ e^{i\theta} = \cos(\theta) + i \sin(\theta) $$ This equation is usually proved using the Taylor series expansion for the given functions, but this approach fails to give an understanding to the equation and the ramifications for the behavior of complex numbers. Instead an intuitive approach is taken that culminates in a graphical understanding of the equation. Complex... There are four ways to demodulate a transmitted single sideband (SSB) signal. Those four methods are: synchronous detection, phasing method, Weaver method, and filtering method. Here we review synchronous detection in preparation for explaining, in detail, how the phasing method works. This blog contains lots of preliminary information, so if you're already familiar with SSB signals you might want to scroll down to the 'SSB DEMODULATION BY SYNCHRONOUS DETECTION'... In the recent past, high data rate wireless communications is often considered synonymous with an Orthogonal Frequency Division Multiplexing (OFDM) system. OFDM is a special case of multi-carrier communication as opposed to a conventional single-carrier system. The concepts on which OFDM is based are so simple that almost everyone in the wireless community is a technical expert in this subject. However, I have always felt an absence of a really simple guide on how OFDM works which can... In the last posts I reviewed how to use the Python scipy.signal package to design digital infinite impulse response (IIR) filters, specifically, using the iirdesign function (IIR design I and IIR design II). In this post I am going to conclude the IIR filter design review with an example.
Previous posts: The following is an introduction on how to design infinite impulse response (IIR) filters using the Python scipy.signal package. This post mainly covers how to use the scipy.signal package and is not a thorough introduction to IIR filter design. For complete coverage of IIR filter design and structure see one of the references. Filter Specification Before providing some examples let's review the specifications for a filter design. A filter...
Introduction Quadrature signals are based on the notion of complex numbers, and perhaps no other topic causes more heartache for newcomers to DSP than these numbers and their strange terminology of j operator, complex, imaginary, real, and orthogonal. If you're a little unsure of the physical meaning of complex numbers and the j = √-1 operator, don't feel bad, because you're in good company. Why, even Karl Gauss, one of the world's greatest mathematicians, called the j operator the "shadow of... Some time ago I reviewed the manuscript of a book being considered by the IEEE Press publisher for possible publication. In that manuscript the author presented the following equation: Being unfamiliar with Eq. (1), and being my paranoid self, I wondered whether that equation is indeed correct. Not finding a stock trigonometric identity in my favorite math reference book to verify Eq. (1), I modeled both sides of the equation using software. Sure enough, Eq. (1) is not correct. So then I... This article relates to the Matlab / Octave code snippet: Delay estimation with subsample resolution. It explains the algorithm and the design decisions behind it. Introduction There are many DSP-related problems where an unknown timing between two signals needs to be determined and corrected, for example in radar, sonar,... Some common conceptual hurdles for beginning communications engineers have to do with "Pulse Shaping" or the closely related, even synonymous, topics of "matched filtering", "Nyquist filtering", "Nyquist pulse", "pulse filtering", "spectral shaping", etc. Some of the confusion comes from the use of terms like "matched filter", which has a broader meaning in the more general field of signal processing or detection theory. Likewise "Raised Cosine" has a different meaning or application in this... Hello, this article is meant to give a quick overview of polyphase filtering and Farrow interpolation. A good reference with more depth is, for example, Fred Harris' paper: http://www.signumconcepts.com/IP_center/paper018.pdf The task is as follows: Interpolate a band-limited discrete-time signal at a variable offset between samples. In other words: delay the signal by a given amount with sub-sample accuracy. Both mean the same. The picture below shows samples (black) representing... Introduction It seems to be fairly common knowledge, even among practicing professionals, that the efficiency of propagation of wireless signals is frequency dependent. Generally it is believed that lower frequencies are desirable, since pathloss effects will be less than they would be at higher frequencies.
As evidence of this, the Friis Transmission Equation[i] is often cited, the general form of which is usually written as: $$P_r = P_t G_t G_r \left( \frac{\lambda}{4\pi d} \right)^2 \qquad (1)$$ where the...
[ Disclaimer: I give my answer, however I am not an expert, so feel free to comment/edit if I have made some mistake ;-) ] Mathematically, you can think of evolution in classical mechanics as a (symplecto)morphism on the cotangent bundle $T^*M$ of some smooth $n$-dimensional manifold $M$. The cotangent bundle $T^* M$ is a symplectic manifold, and thus carries a natural volume form $\omega$, namely the $n$-th exterior power of the symplectic form (a $2n$-form on the $2n$-dimensional phase space). This volume form $\omega$ naturally induces a measure $\mu_\omega$, defining the measure of a Borel set $B\in \mathscr{B}$ on $T^*M$ as$$\mu_\omega(B)=\int_B \omega\; .$$Apart from the mathematical technicalities, the idea is the following: what in physics is called phase space is a particular geometrical object (the cotangent bundle) endowed with a "natural" measure. In the simplest case, where the coordinate space is $M=\mathbb{R}^n$ and the phase space is $T^*M = \mathbb{R}^{2n}$, the natural (symplectic) measure $\mu_\omega$ is exactly the Lebesgue measure. The dynamics is described by a one-parameter family of symplectomorphisms (i.e. maps that preserve the symplectic form) of the phase space, $\phi_t: T^* M \to T^*M$, also called the Hamiltonian flow. Each $\phi_t$ also preserves the symplectic measure:$$(\forall B\in\mathscr{K})\; (\phi_t)_\#\mu_\omega(B)=\mu_\omega(B)\; ,$$where $(\phi_t)_\#\mu_\omega$ is the push-forward of the measure by the flow $\phi_t$ and $\mathscr{K}$ is the set of compact Borel subsets of $T^*M$ (this is essentially a restatement of the relation you wrote). This is, however, a consequence of the special feature "the dynamics preserves the symplectic form", and is therefore not true for general measures. In my opinion, this is physically relevant, for the evolution being a symplectomorphism is closely related to the Hamilton-Jacobi form of the equations of motion (and in turn to the least action principle).
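As a small numerical aside (my own illustration, not part of the argument above, with the harmonic oscillator $H=(q^2+p^2)/2$ as the assumed system): a one-step map that preserves the symplectic form has Jacobian determinant exactly 1, so it preserves phase-space volume, while a non-symplectic discretization of the very same Hamiltonian does not.

```python
dt = 0.1  # time step

# Explicit Euler for H = (q**2 + p**2)/2:
#   q' = q + dt*p,  p' = p - dt*q
# One-step Jacobian in (q, p): [[1, dt], [-dt, 1]]
det_euler = 1 * 1 - dt * (-dt)  # = 1 + dt**2 > 1: volume grows at every step

# Semi-implicit (symplectic) Euler:
#   p' = p - dt*q,  q' = q + dt*p' = (1 - dt**2)*q + dt*p
# One-step Jacobian: [[1 - dt**2, dt], [-dt, 1]]
det_symplectic = (1 - dt**2) * 1 - dt * (-dt)  # = 1 in exact arithmetic
```

The non-symplectic map inflates every phase-space region by a factor $1+\mathrm{d}t^2$ per step, which is exactly the kind of behavior Liouville's theorem forbids for the true Hamiltonian flow.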
Concerning the last point, the idea is that in "chaotic" dynamical systems closed (periodic) phase-space trajectories are possible but "unstable", in the sense that they are isolated points in phase space. That means, physically, that a slight perturbation of the initial conditions (either position or momentum) of a periodic motion results in a non-periodic motion that never returns to the initial condition (and in suitable situations densely covers all of phase space, or a region of it). Of course each point of phase space is still an admissible initial condition, but only a set of measure zero of them gives rise to periodic solutions, while the others give rise to non-periodic ones.
ä is in the extended latin block and n is in the basic latin block, so there is a transition there, but you would have hoped \setTransitionsForLatin would not have inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace, but I suspect the package makes some assumptions about what is safe: it offers the user "enter" and "exit" code for each block, but xetex only has a single insert, the interchartoken at a boundary. The package isn't clear about what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor whether anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know of or agree with a "-1" policy. I don't think it was ever really that regularly enforced, just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies: when you're new and your question gets downvoted too much, this might cause the wrong impression.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for it, and I think that there should be a bit of reasoning with the opinion-based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer); that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes, but many people just join for a while and come from other sites where downvoting is more common, so I think it is impossible to expect there is no multiple downvoting; the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let $k$ be a circle with centre $M$ and radius $r$". @AlanMunn definitions.net/definition/describe gives a Webster's definition of: to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle. If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com and create an account there and ask ... at least the number of native speakers of English will be bigger there, and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possibly harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin is kind of a hobby. Needless to say, more than once the contemporary meanings didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx:

\documentclass{book}
\usepackage{showidx}
\usepackage{imakeidx}
\makeindex
\begin{document}
Test\index{xxxx}
\printindex
\end{document}

generates the error:

! Undefined control sequence.
<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{...

@EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters, as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission, as you want the authoring date, not the date it was last run through tex and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years, but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached. Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in the INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself reads the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year, \month and \day, which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. German "beschreiben" (English "describe") comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means roughly "construct". That in turn comes from the original meaning of describing as "making a curved movement". This usage appears in the literary style of the 19th and 20th centuries and in the GDR. You have that in English too: scribe (verb) — score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, on which the angle $\alpha$ is subtended, also contains an isosceles triangle $M_b P B$ with base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude to the base of the isosceles triangle bisects both $|M_b B|$, at $M_{s_b}$, and the angle $\alpha$ at the apex. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the vertex. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described about $S$ of radius…
On my calculator, I usually get a $0$ when I divide something by $2$ many times, if that makes sense, but I was just wondering why does $2^{-329} = 0?$ It doesn't. Your calculator can't handle a number of such a small magnitude. Specifically, $2^{-329}\approx 9.14\times 10^{-100}$, and I'm guessing that your calculator can only handle numbers of magnitude between $10^{-99}$ and $10^{99}$. Your calculator gives you $0$ for $2^{-329}$ for the same reason that it gives you $0$ when you divide by $2$ a lot: Because $2^{-329}$ is dividing by $2$ a lot. More precisely, $$2^{-329}=\dfrac{1}{2^{329}}=\dfrac{1}{\text{huge number}}=\text{number so small your calculator doesn't know it from }0.$$ Note that $2^{-329}$ is what you get if you start with $1$ and divide by $2$ repeatedly, a total of $329$ times. The calculator doesn't have enough precision to store the number, which is so small that the closest number it can represent is 0. Similarly, the same thing happens when you try repeatedly square rooting a number on your calculator. I.e. pick any positive number $n$ and evaluate $\sqrt{\sqrt{\sqrt{...\sqrt{\sqrt{n}}}}}$. You'll find the calculator eventually returns $1.0$ when you do it enough times. You know from your algebra course (hopefully) that $2^x \neq 0$ for any $x$, so this has something to do with the calculator doing something inappropriate. You were probably expecting something in scientific notation: $$ 2^{-329} = a \cdot 10^b $$ for some $1 \leq a < 10$ and an integer $b$ (so that neither the right hand side nor the left hand side are zeroes, of course). Can you still compute this on your calculator? In your algebra class, you should have learned a trick to make insanely big (or insanely small, like this one) numbers manageable. Let's put this trick to action: $$ b + \log_{10} a = \log_{10} 2^{-329} = -329 \log_{10} 2 = -329 \cdot 0.30103 = -99.038869 $$ If $1 \leq a < 10$, then $0 \leq \log_{10} a < 1$.
So $b$ is the integer floor part of the answer, and $\log_{10} a$ is the fractional part: $$ b = -100; \quad \log_{10} a = 1 - 0.038869 = 0.961131; \quad a = 10^{0.961131} = 9.14389 $$ Thus, $$ 2^{-329} = 9.14389 \cdot 10^{-100} $$ as of course pointed out by everybody else in this thread. The calculator is your (powerful) tool, but you still need to be smart in using that tool. Because the calculator has a limit, and your input crosses that limit. Your calculator calculates the value for the positive index first and then inverts it. Hence, it's underflowing its register values.
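The same trick is easy to check in software (a quick sketch of my own: IEEE-754 doubles handle magnitudes down to about $10^{-308}$, so unlike a $10^{\pm 99}$-limited calculator, Python can evaluate $2^{-329}$ directly):

```python
import math

# 2**-329 fits comfortably in a double (smallest normal ~2.2e-308),
# so no underflow here, unlike on a 10-digit pocket calculator.
x = 2.0 ** -329

# Recover the scientific-notation form a * 10**b via logarithms,
# exactly as done by hand above.
log_x = -329 * math.log10(2)   # ~ -99.0389
b = math.floor(log_x)          # exponent: -100
a = 10 ** (log_x - b)          # mantissa: ~ 9.1439
```

Both routes agree: the direct power and the mantissa-exponent reconstruction give the same number, about $9.144 \cdot 10^{-100}$.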
This is the second post in a series on bubbles in the U.S. Equity Market. The first part can be found here. Testing for Multiple Bubbles 1: Historical Episodes of Exuberance and Collapse in the S&P 500, Phillips, Shi and Yu, 2013 Summary As mentioned previously, most bubbles are identified ex post. This presents a problem for policy makers who would like to identify bubbles early and prevent them from growing too large. Whether or not policy makers should do this is an entirely different question and will be addressed in a future post. The focus here is to discuss a paper by Phillips et al. which implements a purely statistical technique for identifying the emergence and collapse of bubbles. The paper is designed to fix a specific problem - other statistical techniques (including Phillips 2011) fail when a series exhibits multiple bubbles of different time lengths and magnitudes. This is important, as we suspect the S&P 500 has experienced several bubble periods in the past 100 years. To this end, the authors implement the generalized sup augmented Dickey-Fuller test (GSADF), which will be discussed in more detail below. Unit roots Consider the ARMA(p,q) representation of a stochastic process: \begin{equation}\Phi(L) y_t = \Theta(L) \epsilon_t\end{equation}we say the process has a unit root when the lag polynomial has a root equal to 1; in other words, a solution to $\Phi(z) = 0$ is $z = 1$. Understanding unit roots is important, because a unit root can totally change the behavior of a stochastic process. Consider a simple AR(1) model \begin{equation} y_t = \rho y_{t-1} + \epsilon_t \end{equation} with $\epsilon_t$ a white-noise shock. Below I’ve simulated two AR(1) series, one that is stationary ($|\rho| < 1$), and one with a unit root ($\rho = 1$). As you can see, the stationary series tends to revert to its mean, while the unit-root series has no such tendency. Augmented Dickey-Fuller (ADF) Test Before getting into the paper, I think it is important to review the ADF test for a unit root. This will help us understand what exactly Phillips et al.
are doing with the GSADF test. The discussion of the ADF test follows closely the treatment in Hamilton (1994). Suppose we have an AR(p) process: \begin{equation} (1-\phi_1 L - \phi_2 L^2 - \dots - \phi_p L^p) y_t = \epsilon_t \end{equation} Now, doing some algebra, we can rearrange this as follows (with $\Delta$ denoting the first-difference operator): \begin{equation} y_t = \rho y_{t-1} + \psi_1 \Delta y_{t-1} + \dots + \psi_{p-1} \Delta y_{t-p+1} + \epsilon_t \end{equation} Now suppose our null hypothesis is that $y_t$ has a unit root without drift ($\rho = 1$ and drift $\alpha = 0$). This implies one of the roots of the lag polynomial is 1, and all the others are outside the unit circle. To implement the Dickey-Fuller test for a unit root, we estimate the following regression (with $\beta = \rho - 1$): \begin{equation} \Delta y_t = \alpha + \beta y_{t-1} + \psi_1 \Delta y_{t-1} + \dots + \psi_{p-1} \Delta y_{t-p+1} + \epsilon_t \end{equation} Under the null, the estimate of $\beta$ will converge, at rate $T$ rather than the usual $\sqrt{T}$, to a non-standard distribution, which is why the appropriate critical values are calculated by simulation. If the test statistic is sufficiently small, we reject the null of a unit root in favor of the left-tailed alternative ($\beta < 0$). The Paper Consider adding a bubble component to our standard asset pricing equation:\begin{equation}P_t=E_t \left[ \sum\limits_{j=1}^{\infty} \Bigg(\frac{1}{1+r_f}\Bigg)^j (D_{t+j} + U_{t+j}) \right]+ B_t\end{equation}where $r_f$ is the risk-free rate, $D_{t+j}$ is the dividend paid and $U_{t+j}$ is an unobserved fundamental component at time $t+j$. $B_t$ is the bubble component, which follows a submartingale: $E_t[B_{t+1}] = (1+r_f)B_t$. When $B_t = 0$, the asset price is controlled by dividends and fundamentals. Suppose $D_t$ is integrated of order 1, meaning it has a unit root. Denote this I(1). Suppose further that $U_t$ is integrated of order 0, I(0), or I(1). If this is true, then the asset price is at most I(1). If $B_t \neq 0$, the price is explosive. This implies that explosive asset price behavior can be used to detect bubbles. SADF Suppose we split the sample into different windows.
For example, consider running an ADF test using only data between sample fractions $r_1$ and $r_2$ (so the window size, as a fraction of the sample, is $r_2 - r_1$):\begin{equation}\Delta y_t = \alpha_{r1,r2} + \beta_{r1,r2} y_{t-1} + \sum\limits_{i=1}^k \psi_{r1,r2}^i \Delta y_{t-i} + \epsilon_t\end{equation}note this is equivalent to the formulation above under the null: when $\rho = 1$, we can subtract $y_{t-1}$ from both sides to get this equation. Now, rather than testing $\rho = 1$, we are testing $\beta_{r1,r2} = 0$. Now, consider expanding the window. Fix the smallest window size at $r_0$ (a particular fraction of the data). Calculate the ADF test with all window sizes between $r_0$ and 1, where 1 represents using the whole sample. Fix $r_1$ at 0 (the start of the sample). Define the sup augmented Dickey-Fuller test (SADF) as: \begin{equation} SADF(r_0)= \sup_{r_2\in[r_0,1]} ADF_0^{r_2} \end{equation} For those who haven’t seen it before, the $\sup$ is the least upper bound of a set. Think of it like the maximum in a more general setting. For example, consider an open interval $(a,b)$. The maximum is not well defined (as $b$ is never attained), but the $\sup$ is $b$. SADF finds the largest ADF statistic among those computed with expanding windows. If the SADF is sufficiently large (this is a right-tailed test), the series displays explosive behavior in at least one of the windows, which we take as evidence of a bubble. GSADF The innovation in this paper is the GSADF. Instead of starting all the windows at 0, allow $r_1$ to vary from 0 to $r_2 - r_0$ (so we still get a minimum window size of $r_0$). Define GSADF as: \begin{equation} GSADF(r_0)=\sup_{r_2 \in [r_0,1] , r_1 \in[0,r_2-r_0]} ADF_{r_1}^{r_2} \end{equation} The authors mention that this test is sensitive to the choice of $r_0$, and they choose 36 months for their empirical work (they have 1684 observations, so this is about 2% of the sample). As with the ADF test, the critical values need to be derived from simulations, as the statistic has a non-standard distribution under the null.
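As a rough illustration of the SADF recursion (my own sketch, not the authors' code: the lag length is fixed at zero for brevity, and the simulated critical values are omitted), the expanding-window sup statistic can be computed like this:

```python
import numpy as np

def df_tstat(y):
    """t-statistic on b in the Dickey-Fuller regression
    dy_t = a + b*y_{t-1} + e_t (lag length fixed at zero)."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])  # constant + lagged level
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - X.shape[1])      # residual variance
    se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se_b

def sadf(y, r0=0.2):
    """Sup of DF statistics over expanding windows [0, r2], r2 in [r0, 1]."""
    n = len(y)
    w0 = int(np.floor(r0 * n))                       # smallest window size
    return max(df_tstat(y[:end]) for end in range(w0, n + 1))
```

On a simulated random walk the statistic stays modest, while an explosive AR(1) with $\rho > 1$ drives it far above any plausible right-tail critical value, which is exactly the behavior the test exploits.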
BSADF The authors found that the test with windows expanding forward from the start of the sample failed to date-stamp bubble episodes, so to improve accuracy they conducted a backward sup ADF (BSADF) test. Fixing the endpoint $r_2$, the first window runs from $r_2 - r_0$ to $r_2$ and expands backwards, with the largest window running from 0 to $r_2$. Even though this is “backward”, it can still be used to detect bubbles in real time, as you can set $r_2$ to today and see if the series is in a bubble phase. Identifying Bubbles Start at $r_2 = r_0$, and at each iteration move $r_2$ toward the end of your series. Define the start of the bubble as the first observation $r_e$ (first value of $r_2$) whose BSADF statistic exceeds the critical value. Define the end of the bubble as the first observation after $r_e$ plus the minimum duration, call it $r_f$, whose BSADF statistic is below the critical value. The minimum duration is designed to capture the minimum length of a bubble phase, to avoid picking up short positive trends in the data. The authors use $\delta \log(T)$, where $\delta$ is based on the frequency of observation. Identifying multiple bubbles works the same way. Suppose we have two non-overlapping bubble periods. First, find the start of the first bubble, $r_{e,1}$, and then a period at least the minimum duration afterward for the burst, $r_{f,1}$. Then, to find the second bubble, we start looking for values of $r_2$ after $r_{f,1}$ the same way. Being able to identify multiple bubbles is important, as the authors believe the S&P 500 experienced multiple bubble phases over the past 100 years. Empirics The authors use their backward GSADF test to detect bubbles in the S&P 500. They actually use the price-dividend ratio, as opposed to just the price, to account for the price of the asset relative to the fundamentals. Setting the minimum bubble duration to 6 months, the tests identified the banking panic of 1907, the 1917 stock market crash, the crash of 1928-1929, the postwar boom of 1954, Black Monday in 1987, the dot-com bubble in 1995-2001 and the subprime crisis of 2008-2009. Other methods, such as the standard SADF, failed to identify many of the bubbles of interest.
Remarks All of the econometric reasoning and asymptotic theory behind the test makes sense, so I have nothing to add there. I will say, it’s pretty amazing that the test was able to identify pretty much every significant bubble you’ve ever heard of in U.S. history. Two points I would like to make are: 1) I’m not sure the price-dividend ratio is the right quantity to use for this test. Given that dividends are distributed quarterly (the same problem exists for earnings per share), the data is a bit stale by the third month. I would think that prices have more information by themselves, as they are forward looking and should include investor expectations of future dividends. I understand that the authors are trying to capture the idea that bubbles are deviations from fundamental value, but I’m not sure this is the right way to go. 2) This is not really a test for “bubbles”. At best, it is a test for bubble-like behavior. At worst, it is a test for periods of extreme return persistence that lasted 6 or more months. This raises issues of interpretation - when policy makers see an asset enter the “bubble” phase, all that says is that returns have been persistent recently. In practice, this test should be used in conjunction with other factors, to be sure the test is not providing a false positive. Future Work A natural next step is to replicate the results from this paper, and then use the code to implement the GSADF test on the candidate “bubble” stocks identified in Part 1. It will be interesting to see if the GSADF test agrees with my simple filter based on large price increases and declines.
The axiom of well-ordered choice, or $\sf AC_{\rm WO}$, is strictly weaker than the axiom of choice itself. If we start with $L$, add $\omega_1$ Cohen reals, and then pass to $L(\Bbb R)$, one can show that $\sf AC_{\rm WO}$ holds while $\Bbb R$ cannot be well-ordered there. Pincus proved in the 1970s that this axiom is equivalent to the following statement on Hartogs and Lindenbaum numbers: $\forall x.\aleph(x)=\aleph^*(x)$. Here, the Lindenbaum number, $\aleph^*(x)$, is the least ordinal onto which $x$ cannot be mapped. One obvious fact is that $\aleph(x)\leq\aleph^*(x)$. In the late 1950s or early 1960s Jensen proved that this assumption also implies $\sf DC$; this, too, is a very clever proof. The conjunction of these two consequences gives us that $\aleph_1\leq 2^{\aleph_0}$; by a theorem of Shelah from the 1980s, this implies there is a non-measurable set of reals. As far as Hahn–Banach, or other things of that sort, I do not believe that much is known on the topic. But to sum up, this axiom does not imply that the reals can be well-ordered, but it does imply there is a non-measurable set of reals, because there is a set of reals of size $\aleph_1$ and $\sf DC$ holds. Moreover, it is equivalent to saying that the Hartogs and Lindenbaum numbers are equal for all sets.
prettify-symbols-mode is a recent feature of Emacs and it’s very nice. And it looks like it can replace TeX-fold-mode in the future. But, at the time of writing, prettify-symbols-mode doesn’t seem to work well with AUCTeX unless you enable two workarounds together. Tested versions: AUCTeX 11.89.7 (latest version from GNU ELPA) GNU Emacs 25.1 Usual way to enable pretty symbols If you want to enable it for elisp buffers, you can add: (add-hook 'emacs-lisp-mode-hook 'prettify-symbols-mode) Then something like (lambda () (blah)) in elisp buffers should display as (λ () (blah)). If you want to enable it also for other lisp buffers, scheme mode buffers etc., you can adjust the following code: (dolist (mode '(scheme emacs-lisp lisp clojure)) (let ((here (intern (concat (symbol-name mode) "-mode-hook")))) ;; (add-hook here 'paredit-mode) (add-hook here 'prettify-symbols-mode))) If you want to enable it for all buffers, you can add: (global-prettify-symbols-mode 1) And then, for major modes of your interest, you may want to adjust the buffer-local prettify-symbols-alist accordingly, following the simple example code you can find in the documentation for prettify-symbols-mode. Expected way to use with AUCTeX The following code may be expected to work: (add-hook 'TeX-mode-hook 'prettify-symbols-mode) If it works, then \alpha, \beta, \leftarrow and so on should display as α, β, ←, … in TeX file buffers. I do not doubt that it will just work fine in future versions of AUCTeX, and if you are reading this as an old article, it is possible that just upgrading your AUCTeX package may be enough to make that line work as expected. If it doesn’t work, then try making the following two changes.
First change Instead of adding to the hook directly, try adding a delayed version, like so: (defun my-delayed-prettify () (run-with-idle-timer 0 nil (lambda () (prettify-symbols-mode 1)))) (add-hook 'TeX-mode-hook 'my-delayed-prettify) This way, (prettify-symbols-mode 1) is guaranteed to run after the style hooks and not before. I don’t know what style hooks do, but it looks like they may reset/erase font-lock stuff you have set up. If pretty symbols still don’t show up in AUCTeX buffers, then try the following change, in addition to the above one. Second change This one isn’t really about adding something. It is about removing. Remove the following line from your dotemacs if any: (require 'tex) tex.el will load anyway if you visit a TeX file with Emacs. This is a strange change to make, indeed. You should also remove the following line that is commonly used by miktex users: (require 'tex-mik) tex-mik.el is a good small library, but it loads tex.el. Feel free to copy parts of tex-mik.el and paste them into your dotemacs if you want. You can ensure you have removed every call of (require 'tex) from your dotemacs by appending the following line to the end of your dotemacs and then restarting Emacs to see if the warning message shows up: (if (featurep 'tex) (warn "(require 'tex) is still somewhere!")) If there was some code in your dotemacs that relied on the fact that (require 'tex) was called before, then you have to wrap that code with the with-eval-after-load macro, like this: (with-eval-after-load 'tex (add-to-list 'TeX-view-program-selection '(output-pdf "SumatraPDF")) (add-to-list 'TeX-view-program-list `("SumatraPDF" my--sumatrapdf)))
Starting with the boundary conditions for the parallel E and B fields for EM radiation at normal incidence on an interface, I am trying to derive the reflection coefficient in terms of refractive indices. I have got as far as $E_1=E_2\qquad \qquad $ (parallel $\vec E$ fields are equal) $E_{0i}+E_{0r}=E_{0t}\qquad \qquad $ Eq. (a) $\frac{B_1}{\mu_1} = \frac{B_2}{\mu_2}\qquad \qquad $ (parallel $\vec B/\mu$ fields are continuous) $\frac{E_{0i}}{\mu_0 v_1} - \frac{E_{0r}}{\mu_0 v_1} = \frac{E_{0t}}{\mu_0 v_2} \qquad $ (B fields written in terms of electric fields by dividing through by the wave speed; the reflected wave propagates backwards, hence the minus sign) $E_{0i}-E_{0r} = \frac{v_1}{v_2} E_{0t} \qquad $ (cancel $\mu_0$ and multiply by $v_1$) $E_{0i}-E_{0r} = \frac{n_2}{n_1} E_{0t} \qquad $ Eq. (b) (can swap $v$ for refractive index, since $v_1/v_2 = n_2/n_1$) $2E_{0i}=\left(1+\frac{n_2}{n_1}\right) E_{0t}\qquad $ Eq. (c) (I can then get to here by adding equations (a) and (b)) I know I'm really close but I can't quite get to the reflection or transmission coefficient; how can I finish this off?
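One way to finish (a sketch, assuming $\mu_1 = \mu_2 = \mu_0$ as above): Eq. (c) directly gives the transmission coefficient, and subtracting Eq. (b) from Eq. (a) gives the reflection coefficient:

$t = \frac{E_{0t}}{E_{0i}} = \frac{2n_1}{n_1+n_2}, \qquad 2E_{0r} = \left(1-\frac{n_2}{n_1}\right)E_{0t} \;\Rightarrow\; r = \frac{E_{0r}}{E_{0i}} = \frac{n_1-n_2}{n_1+n_2}$

which are the standard Fresnel amplitude coefficients at normal incidence.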
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for @JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default? @JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font. @DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma). @egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge. @barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually) @barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording? @barbarabeeton \overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash what did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us. @DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.) @barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow) if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.) @egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended. @barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really @DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts. @DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ... @DavidCarlisle I see no real way out.
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts. MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers... has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable? I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something. @baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!... @baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard-looking packages, then get halfway through and find \makeatletter and several hundred lines of tricky TeX macros copied from this site that are over-writing LaTeX format internals.
My question is about the challenge space size in the Schnorr protocol. To be precise, I feel I've read all the Internet (twice) and I still don't understand why it is bad to allow the challenge space to be large (say, why one shouldn't let $\text{Challenge} \in \mathbb{Z}_q$). I'm interested only in the situation of an honest prover $P$ and a malicious verifier $\tilde{V}$. To settle the notation, recall that (one round of) the Schnorr protocol has the following form: The prover generates random $r \in \mathbb{Z}_q$, calculates $\text{Commit}=g^r \pmod p$, then sends $\text{Commit}$ to the verifier. The verifier generates $m$ random bits and forms the number $\text{Challenge}$ from them (thus $\text{Challenge}$ is a random number ranging from $0$ to $2^{m} - 1$), then sends $\text{Challenge}$ to the prover. The prover calculates $\text{Response} = r+s \cdot \text{Challenge} \pmod q$, then sends $\text{Response}$ to the verifier. The verifier checks that $g^{\text{Response}} = \text{Commit} \cdot y^{\text{Challenge}} \pmod p$, where $y = g^s \pmod p$ and $s$ is the prover's secret. The simulator for this protocol is as follows. The algorithm generates random $\text{Response} \in \mathbb{Z}_q$. The algorithm asks the verifier for the number $\text{Challenge}$ and receives it. The algorithm sets $\text{Commit} = g^{\text{Response}} \cdot y^{-\text{Challenge}} \pmod p$. The algorithm appends $(\text{Commit}, \text{Challenge}, \text{Response})$ to the transcript. The standard answer to my question is: if $\text{Challenge}$ is pseudorandom -- namely, depends on $\text{Commit}$, say, is equal to $\text{hash} (\text{Commit} || M)$ -- this simulator cannot work, since at step 2 it requires knowledge of the parameter generated at step 3. There are no other simulators known for this protocol, so in the malicious-verifier case there's just no simulator. And since any ZK protocol has a simulator, the conclusion follows that the malicious-verifier case is not ZK. Question 1.
However, I still don't get how exactly a malicious verifier can extract some information. Okay, there's no known simulator for the malicious-verifier case -- well, how does that help him? Question 2. What about the challenge space size? Wenbo Mao, in his "Modern Cryptography", states that, because of this simulator argument, the $\text{Challenge}$ space size should NOT be large (I don't get the logic here either) ($\text{Challenge} \in \mathbb{Z}_q$ is prohibited), and states that the best choice is $\text{Challenge} \in [0, \log_2 p)$ (or, equivalently, $m = \log_2 \log_2 p$). Why such an odd and strange value?
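For concreteness, the round described in the question and its simulator can be sketched in a few lines of Python. The parameters below are toy values chosen for illustration only ($p = 2q+1$, with $g$ generating the order-$q$ subgroup); note that this simulator picks $\text{Challenge}$ itself, which is exactly why it breaks down when a malicious verifier derives $\text{Challenge}$ from $\text{Commit}$:

```python
import random

# Toy parameters (for illustration only): p = 2q + 1, and g generates
# the order-q subgroup of Z_p^*.
p, q, g = 23, 11, 4

s = 7              # prover's secret
y = pow(g, s, p)   # public key

def prove_round(challenge_bits):
    """One honest round of the Schnorr identification protocol."""
    r = random.randrange(q)
    commit = pow(g, r, p)
    challenge = random.randrange(2 ** challenge_bits)
    response = (r + s * challenge) % q
    return commit, challenge, response

def verify(commit, challenge, response):
    """Verifier's check: g^Response == Commit * y^Challenge (mod p)."""
    return pow(g, response, p) == (commit * pow(y, challenge, p)) % p

def simulate_round(challenge_bits):
    """Transcript simulator: picks Response and Challenge first,
    then derives a Commit that makes the check pass."""
    response = random.randrange(q)
    challenge = random.randrange(2 ** challenge_bits)
    commit = (pow(g, response, p) * pow(y, -challenge, p)) % p
    return commit, challenge, response

for _ in range(100):
    assert verify(*prove_round(4))      # honest transcripts verify
    assert verify(*simulate_round(4))   # simulated transcripts verify too
```

Both loops pass: a simulated transcript is distributed identically to an honest one, without ever using the secret $s$ (Python 3.8+ is assumed for `pow` with a negative exponent and a modulus).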
Yesterday I started to write a paper about the reformulation of the Riemann Hypothesis. My idea was to map the function such that all of the trivial zeros are outside of the unit disk, and the non-trivial zeros are on the circle. Iff RH is true, then the radius of convergence (the distance from the origin to the closest singularity of the Taylor series) of the Taylor series representing the reciprocal of the function is $1$. After some manipulations, I have got 2 conjectures: https://mathoverflow.net/questions/212289/riemann-hypothesis-reformulation-lim-n-to-infty-sum-k-lnka-kn-over-n-s. (Topic deleted from MO.) I would like to know if they really imply RH, or whether I went wrong somewhere. I post the reformulation here: EDIT: $\zeta(s)$ has its non-trivial zeros on the line $Re(s)=0.5$. This means that the Taylor series of $$Z(s)={1\over\zeta\left(\frac{1}{2}+\frac{1+s}{1-s}\right)}$$ has radius of convergence $1$. (I mapped the right half plane to the unit disc, so the trivial zeros are outside of the disk.) Its derivatives are given by Cauchy's integral formula; taking the right contour $C$ such that $C(t)=f^{-1}(t-(a-1/2)i)$ with $f(z)=i(z+1)/(z-1)$ the map from $\mathbb{D}\to\mathbb{\overline{H}}$, this becomes $${Z}^{(n)}(0)=\frac{n!}{2\pi i}\int_{-\infty}^{\infty}{1\over \zeta(a+it)}C_n(t)\;dt$$ with $C_n(t)=C'(t)/C(t)^{n+1}$.
WLOG letting $1.5>a>1$, and using the Dirichlet series for the reciprocal of the zeta function, I got \begin{align*}{Z}^{(n)}(0)&=\frac{n!}{2\pi i}\int_{-\infty}^{\infty}\left[\sum_{k=1}^\infty \frac{\mu(k)}{k^{a+it}}\right]C_n(t)\;dt\\&=\sum_{k=1}^\infty \frac{\mu(k)}{k^a}\int_{-\infty}^{\infty}\frac{n!}{2\pi i}\frac{C_n(t)}{k^{it}}\;dt\\&=\sum_{k=1}^\infty \frac{\mu(k)}{k^a}\int_{-\infty}^{\infty}\frac{n!}{2\pi i}g_k(C(t))C_n(t)\;dt\\&=\sum_{k=1}^\infty \frac{\mu(k)}{k^a}g^{(n)}_{k}(0).\end{align*} $$g_k(t)=1/k^{iC^{-1}(t)}$$ In the last 2 steps, I changed the contour integral to the derivatives of a function series, noticing that $g_k\circ C(t)=1/k^{it}$. The only singularity of $g_k$ is at $1$, but $g_k$ is bounded inside the contour, so I thought the integral and the derivatives are the same. To have the limits defined later, define the function $d\colon\mathbb{N}\to \mathbb{N}$ such that $d(n)$ gives the $n$th square-free integer. $$Z^{(n)}(0)=\sum_{k=1}^{\infty}\frac{\mu(d(k))}{d(k)^2}g^{(n)}_{d(k)}(0)$$ 1. conjecture: Using the ratio test led me to my first question here, such that given a series $$A(n)=\sum_{k=1}^{\infty}a_k(n),$$ $$a_k(n)=\frac{\mu(d(k))}{n!d(k)^2}g^{(n)}_{d(k)}$$ with $|a_k(n)/a_k(n+1)|\to 1$ (the Taylor series of $g_k$ about the origin has radius of convergence $1$) as $n\to \infty$, it is true that $$\lim_{n\to \infty}\left|\frac{A(n)}{A(n+1)}\right|=1.$$ I suppose it is true for series satisfying certain conditions, but I cannot prove it. 2. conjecture: Using the recurrence relation of the coefficients (due to WolframAlpha): $na_k(n)+(n+2)a_k(n+2)-2(n+1-\ln(k))a_k(n+1)=0$, $$\lim_{n\to \infty}{\sum_k\ln(k)a_k(n)\over\sum_k na_k(n)}=0$$ would imply RH. Does proving the 2 conjectures above prove the Riemann Hypothesis? I think it would also prove GRH for Dirichlet L-functions with a little change, and with a good choice of $a$ to ensure convergence.
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Ratio of Volume of Sphere Inside a Cube Inside a Sphere If the side of the cube is \[2x\] then the innermost sphere has radius \[x\]. The radius of the larger sphere is the distance from the centre of the cube to one of its vertices, and is equal to \[\sqrt{x^2+x^2+x^2} = \sqrt{3x^2} = x \sqrt{3}\] Then \[\frac{Volume \: of \: Large \: Sphere}{Volume \: of \: Small \: Sphere}= \frac{4/3 \pi (x \sqrt{3})^3}{4/3 \pi x^3} = 3 \sqrt{3}\]
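A quick numerical check of the ratio (the value of \[x\] is arbitrary; \[x=1\] is used here):

```python
import math

x = 1.0                      # half the cube's side; any positive value gives the same ratio
r_small = x                  # inscribed sphere touches the cube's faces
r_large = math.sqrt(3) * x   # circumscribed sphere passes through the vertices

ratio = (4/3 * math.pi * r_large**3) / (4/3 * math.pi * r_small**3)
assert math.isclose(ratio, 3 * math.sqrt(3))  # (sqrt(3))^3 = 3*sqrt(3)
```

Note that the factors of \[4/3 \pi\] cancel, so the ratio is simply the cube of the ratio of radii.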
Last Updated: May 11, 2019

Keywords: Boussinesq approximation, closure problem, Reynolds averaging

Reynolds Average

It is computationally expensive to resolve the wide range of time and length scales observed in turbulent flows. We now consider decomposing a flow property \(f\), such as velocity and pressure, into a mean component \(\overline{f}\) and a fluctuating component \(f’\). \begin{equation} f(\boldsymbol{x}, t) = \overline{f}(\boldsymbol{x}, t) + f'(\boldsymbol{x}, t), \tag{1} \label{eq:decomposition} \end{equation} where \(\boldsymbol{x}\) is the position vector and \(t\) is time. The Reynolds-averaged Navier-Stokes (RANS) turbulence models aim to solve the mean flow \(\overline{f}\) that changes more slowly in time and space than the original variable \(f\). The governing equations of the mean component will be derived later. There are many averaging operations defined in mathematics but the RANS models use the Reynolds average. It is briefly described in the newly published textbook by Kajishima and Taira [1]. For the discussion in this chapter, let us redefine the averaging operation such that it satisfies \begin{equation} \overline{f’} = 0,\;\;\overline{f’ \overline{f}} = 0,\;\;\overline{\overline{f}} = \overline{f}. \tag{7.2} \label{eq:reave} \end{equation} These relations in Eq. \eqref{eq:reave} are referred to as the Reynolds-averaging laws. The ensemble average that satisfies these laws is called the Reynolds average. This conceptual averaging operation conveniently removes fluctuating components from the flow field variables without explicitly defining the spatial length scale used in the averaging operation.
The ensemble average that appears in the above definition is defined as (and usually denoted as \(\langle f \rangle\)) \begin{equation} \langle f \rangle(\boldsymbol{x}, t) \equiv \lim_{N \to \infty}\frac{1}{N}\sum_{i=1}^{N}f_{i}(\boldsymbol{x}, t), \tag{2} \label{eq:ensembleAve} \end{equation} where \(f_i\) are the samples of \(f\) and \(N\) is the number of samples. In other words, it is the average of the instantaneous values of the property at a given point in space \(\boldsymbol{x}\) and time \(t\) over a large number of repeated identical experiments. In general, this ensemble average varies with space and time (time-dependent). For stationary random processes, we can define the time average \(f_T\): \begin{equation} f_{T}(\boldsymbol{x}) \equiv \frac{1}{T} \int_{0}^{T} f(\boldsymbol{x}, t) dt, \tag{3} \label{eq:timeAve} \end{equation} where \(T\) is the integration time. In the case of stationary random processes, the time averages equal the ensemble averages as stated in [3]: if the signal is stationary, the time average defined by equation \eqref{eq:timeAve} is an unbiased estimator of the true average \(\langle f \rangle\). Moreover, the estimator converges to \(\langle f \rangle\) as the time becomes infinite; i.e., for stationary random processes \begin{equation} \langle f \rangle(\boldsymbol{x}) = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} f(\boldsymbol{x}, t) dt, \tag{8.28} \label{eq:aveRelation} \end{equation} Thus the time and ensemble averages are equivalent in the limit as \(T \to \infty\), but only for a stationary random process. Interested readers might want to search by the keyword "ergodic hypothesis" on the relation between the ensemble and time averages.

RANS Equations

To Be Updated

Closure Problem – Reynolds Stress

The linear eddy viscosity models (LEVM) assume the linear stress-strain relationship and employ the eddy-viscosity concept (Boussinesq approximation) introduced by Joseph Valentin Boussinesq \begin{equation} -\rho\overline{u_i u_j} = \mu_t \left(\frac{\partial \overline{U}_i}{\partial x_j} + \frac{\partial \overline{U}_j}{\partial x_i} \right) -\frac{2}{3}\delta_{ij}\rho k. \tag{4} \label{eq:BoussinesqApprox} \end{equation}

RANS Models in OpenFOAM
Linear Eddy Viscosity Model (LEVM)
Nonlinear Eddy Viscosity Model (NLEVM)
Reynolds Stress Model (RSM)
Limitations of LEVM
Transition Models
\(k\)-\(kl\)-\(\omega\)
\(\gamma\)-\(Re_{\theta}\)
Differential Reynolds Stress model
SSG/LRR-\(\omega\)
JH-\(\omega^h\)

References
[1] T. Kajishima and K. Taira, Computational Fluid Dynamics: Incompressible Turbulent Flows. Springer, 2016.
[2] H. K. Versteeg and W. Malalasekera, An Introduction to Computational Fluid Dynamics: The Finite Volume Method. Pearson Prentice Hall, 1995.
[3] W. K. George, Lectures in Turbulence for the 21st Century.
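The equivalence of the ensemble average in Eq. (2) and the time average in Eq. (3) for a stationary process, together with the Reynolds rule \(\overline{f'} = 0\), can be illustrated numerically. A minimal sketch (the synthetic signal, sample counts, and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stationary "turbulent" signal: mean flow + zero-mean fluctuation.
mean_flow = 2.0
n_samples, n_steps = 500, 2000
f = mean_flow + rng.normal(0.0, 0.5, size=(n_samples, n_steps))

ensemble_avg = f.mean(axis=0)    # <f>(t): average over realizations, Eq. (2)
time_avg = f.mean(axis=1)        # f_T: average over time, Eq. (3)
fluctuation = f - ensemble_avg   # f' = f - <f>

# For a stationary process the two estimators agree (ergodicity),
# and the Reynolds rule <f'> = 0 holds to within sampling error.
assert abs(ensemble_avg.mean() - mean_flow) < 0.05
assert abs(time_avg.mean() - mean_flow) < 0.05
assert abs(fluctuation.mean()) < 1e-12
```

For a non-stationary signal (e.g. one with a drifting mean), the time average would no longer estimate the ensemble average, which is why the equivalence above is restricted to stationary processes.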
The ladder operator method is used to solve the one-particle Schrodinger equation with a harmonic potential. What other potentials for the one-particle Schrodinger equation may be solved with the ladder operator method? The hydrogen atom is an example where ladder operators can be used. There is a hidden SO(4) symmetry that explains the degeneracy in the principal quantum number, and one can use algebraic methods to get the eigenvalues. Here is a paper that does central force problems in general. The full symmetry group for the hydrogen atom is SO(4,2). Here is another resource. Some people think ladder operators are only for the harmonic oscillator or equally spaced eigenvalues, but this is because they are restricting themselves to the Heisenberg Lie algebra, which works for the harmonic oscillator; there are other problems with other Lie algebras and their own representation theory. As a trivial counterexample to the belief that you need equally spaced eigenvalues: suppose your Hamiltonian were just $L^2$. This is Hermitian, so legitimate; there is an algebraic method to solve for the eigenvalues that depends on the representation theory of $su(2)$, or equivalently $sl(2,\mathbb{C})$, but as we all know the eigenvalues of this operator are not equally spaced. Given a non-degenerate observable $A$, its ladder operator is an operator $L$ such that $[A, L] = \xi L $ for some $\xi \in \mathbb{R}$. We can then write $A (L \lvert a \rangle) = (LA + [A,L]) \lvert a \rangle = (a+\xi)(L \lvert a \rangle )$, which implies by non-degeneracy that $L \lvert a \rangle \propto \lvert a + \xi \rangle $ (up to normalization). Also note that since $A$ is Hermitian, $[A, L^\dagger] = -\xi L^\dagger $, so $L^\dagger \lvert a \rangle \propto \lvert a - \xi \rangle $ similarly. The Jacobi identity then implies that $[A, [L, L^\dagger]] = 0$.
You said you were interested in solving the 1-D Schrodinger Equation. In this case your observable $A$ is the Hamiltonian $H$. One-dimensional Hamiltonians are always non-degenerate when it comes to bound states, so this condition is always satisfied (see proof here). So if you can find an operator $L$ such that $[H, L] = \xi L $ for some $\xi \in \mathbb{R}$, then the problem can be solved by ladder operators. The energy of a 1-D system will also always be bounded below, since $P^2$ is positive definite and $V(X)$ has a minimum, as you are interested in bound states. One essential part of systems where ladder operators apply is that their spectrum (energy levels in the case of $A=H$, $\{a\}$ in general) is linearly spaced. If this makes qualitative sense then trying ladder operators may be the way to go. All that said, I'm fairly certain that you could prove that linearly spaced energy levels imply a harmonic potential. If this is true, then the harmonic oscillator is the only 1-D system whose Schrodinger Equation can be solved in this way.
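The defining relation $[A, L] = \xi L$ and the raising action can be checked numerically for the harmonic-oscillator number operator $N = a^\dagger a$ (so $H = \hbar\omega(N + 1/2)$ and $\xi = 1$). A minimal sketch with truncated matrices; the dimension and the chosen level are arbitrary:

```python
import numpy as np

d = 8                                     # truncation dimension (illustrative)
a = np.diag(np.sqrt(np.arange(1, d)), 1)  # annihilation operator
adag = a.T                                # creation operator
N = adag @ a                              # number operator, diag(0, 1, ..., d-1)

# Ladder commutation relation [N, a†] = a† (holds exactly in this truncation):
comm = N @ adag - adag @ N
assert np.allclose(comm, adag)

# So a† maps an eigenvector of N with eigenvalue k to one with eigenvalue k+1:
k = 3
ket = np.zeros(d)
ket[k] = 1.0
raised = adag @ ket
assert np.allclose(N @ raised, (k + 1) * raised)
```

Note that $[a, a^\dagger] = I$ fails in the last row of any finite truncation, but the ladder relation $[N, a^\dagger] = a^\dagger$ used above survives truncation exactly.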
3GPP 5G has been focused on structured LDPC codes known as quasi-cyclic low-density parity-check (QC-LDPC) codes, which exhibit advantages over other types of LDPC codes with respect to the hardware implementations of encoding and decoding using simple shift registers and logic circuits. 5G NR QC-LDPC Circulant Permutation Matrix A circulant permutation matrix ${\bf I}(P_{i,j})$ of size $Z_c \times Z_c$ is obtained by circularly shifting the identity matrix $\bf I$ of... For any B-DMC $W$, the channels $\{W_N^{(i)}\}$ polarize in the sense that, for any fixed $\delta \in (0, 1)$, as $N$ goes to infinity through powers of two, the fraction of indices $i \in \{1, \dots, N\}$ for which $I(W_N^{(i)}) \in (1 − \delta, 1]$ goes to $I(W)$ and the fraction for which $I(W_N^{(i)}) \in [0, \delta)$ goes to $1−I(W)^{[1]}$. Mrs. Gerber’s Lemma Mrs. Gerber’s Lemma provides a lower bound on the entropy of the modulo-$2$ sum of two binary random... Channel Combining Channel combining is a step that combines copies of a given B-DMC $W$ in a recursive manner to produce a vector channel $W_N : {\cal X}^N \to {\cal Y}^N$, where $N$ can be any power of two, $N=2^n, n\ge 0^{[1]}$. The notation $u_1^N$ is shorthand for denoting a row vector $(u_1, \dots , u_N)$. The vector channel $W_N$ is the virtual channel between the input sequence $u_1^N$ to a linear encoder and the output sequence $y^N_1$ of $N$... With the growth in the Internet of Things (IoT) products, the number of applications requiring an estimate of range between two wireless nodes in indoor channels is growing very quickly as well. Therefore, localization is becoming a red-hot market today and will remain so in the coming years. One question that is perplexing is that many companies nowadays are offering cm-level-accurate solutions using RF signals. The conventional wireless nodes usually implement synchronization...
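The circulant permutation matrix ${\bf I}(P_{i,j})$ described above is easy to sketch with NumPy. The 2x2 base matrix below is a made-up example (real 5G NR base graphs are much larger), and the convention that a negative shift value denotes an all-zero block is one common choice:

```python
import numpy as np

def circulant_permutation(Zc, shift):
    """I(P): the Zc x Zc identity with its columns circularly shifted by `shift`."""
    return np.roll(np.eye(Zc, dtype=int), shift, axis=1)

# Expand a (hypothetical) 2x2 base matrix of shift values into a
# parity-check-style block matrix; -1 denotes the all-zero block here.
Zc = 4
base = [[0, 2],
        [-1, 1]]
blocks = [[np.zeros((Zc, Zc), dtype=int) if s < 0 else circulant_permutation(Zc, s)
           for s in row] for row in base]
H = np.block(blocks)

assert H.shape == (2 * Zc, 2 * Zc)
# Each non-zero block is a permutation matrix: one 1 per row and per column.
assert (circulant_permutation(Zc, 2).sum(axis=0) == 1).all()
assert (circulant_permutation(Zc, 2).sum(axis=1) == 1).all()
```

This block-expansion structure is what allows QC-LDPC encoding and decoding to be implemented with simple cyclic shift registers, as the paragraph above notes.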
In the recent past, high data rate wireless communications has often been considered synonymous with an Orthogonal Frequency Division Multiplexing (OFDM) system. OFDM is a special case of multi-carrier communication as opposed to a conventional single-carrier system. The concepts on which OFDM is based are so simple that almost everyone in the wireless community is a technical expert in this subject. However, I have always felt an absence of a really simple guide on how OFDM works which can... Minimum Shift Keying (MSK) is one of the most spectrally efficient modulation schemes available. Due to its constant envelope, it is resilient to non-linear distortion and was therefore chosen as the modulation technique for the GSM cell phone standard. MSK is a special case of Continuous-Phase Frequency Shift Keying (CPFSK) which is a special case of a general class of modulation schemes known as Continuous-Phase Modulation (CPM). It is worth noting that CPM (and hence CPFSK) is a... Design Files: Part1.slx Hi everyone, In this series of tutorials on discrete-time PLLs we will be focusing on Phase-Locked Loops that can be implemented in discrete-time signal processors such as FPGAs, DSPs and of course, MATLAB. In the first part of the series, we will be reviewing the basics of continuous-time baseband PLLs and we will see some useful mathematics that will give us insight into the inner workings of PLLs. In the second part, we will focus on... Hi! For my first post, I will share some information about GPS - Global Positioning System. I will delve one step deeper than a basic explanation of how a GPS system works and introduce some terminology. GPS, as we all know, is the system useful for identifying one's position, velocity, & time using signals from satellites (referred to as SV or space vehicle in the literature). It uses the principle of trilateration (not triangulation, which is frequently misused) for... Octaveforge / Matlab design script.
Download: here
weighted numerical optimization of Laplace-domain transfer function
linear-phase design, optimizes vector error (magnitude and phase)
design process calculates and corrects group delay internally
includes sinc() response of the sample-and-hold stage in the ADC
optionally includes multiplierless FIR filter
Digital-to-analog conversion connects digital... Introduction Evaluating the performance of communication systems, and wireless systems in particular, usually involves quantifying some performance metric as a function of Signal-to-Noise-Ratio (SNR) or some similar measurement. Many systems require performance evaluation in multipath channels, some in Doppler conditions and other impairments related to mobility. Some have interference metrics to measure against, but nearly all include noise power as an impairment. Not all systems are...
The problem of "spectral inversion" comes up fairly frequently in the context of signal processing for communication systems. In short, "spectral inversion" is the reversal of the orientation of the signal bandwidth with respect to the carrier frequency. Rick Lyons' article on "Spectral Flipping" at http://www.dsprelated.com/showarticle/37.php discusses methods of handling the inversion (as shown in Figure 1a and 1b) at the signal center frequency. Since most communication systems process... Introduction It seems to be fairly common knowledge, even among practicing professionals, that the efficiency of propagation of wireless signals is frequency dependent. Generally it is believed that lower frequencies are desirable since pathloss effects will be less than they would be at higher frequencies. As evidence of this, the Friis Transmission Equation[i] is often cited, the general form of which is usually written as: $P_r = P_t G_t G_r \left( \frac{\lambda}{4 \pi d} \right)^2 \qquad (1)$ where the...
Some common conceptual hurdles for beginning communications engineers have to do with "Pulse Shaping" or the closely-related, even synonymous, topics of "matched filtering", "Nyquist filtering", "Nyquist pulse", "pulse filtering", "spectral shaping", etc. Some of the confusion comes from the use of terms like "matched filter" which has a broader meaning in the more general field of signal processing or detection theory. Likewise "Raised Cosine" has a different meaning or application in this... The topic of this article are the effects of radio frequency distortions on a baseband signal, and how to model them at baseband. Typical applications are use as a simulation model or in digital predistortion algorithms.Introduction Transmitting and receiving wireless signals usually involves analog radio frequency circuits, such as power amplifiers in a transmitter or low-noise amplifiers in a receiver.Signal distortion in those circuits deteriorates the link quality. When... Octaveforge / Matlab design script. Download: here weighted numerical optimization of Laplace-domain transfer function linear-phase design, optimizes vector error (magnitude and phase) design process calculates and corrects group delay internally includes sinc() response of the sample-and-hold stage in the ADC optionally includes multiplierless FIR filter Digital-to-analog conversion connects digital... Design Files: Part1.slx Hi everyone, In this series of tutorials on discrete-time PLLs we will be focusing on Phase-Locked Loops that can be implemented in discrete-time signal proessors such as FPGAs, DSPs and of course, MATLAB. In the first part of the series, we will be reviewing the basics of continuous-time baseband PLLs and we will see some useful mathematics that will give us insight into the inners working of PLLs. In the second part, we will focus on... The problem of "spectral inversion" comes up fairly frequently in the context of signal processing for communication systems. 
In short, "spectral inversion" is the reversal of the orientation of the signal bandwidth with respect to the carrier frequency. Rick Lyons' article on "Spectral Flipping" at http://www.dsprelated.com/showarticle/37.php discusses methods of handling the inversion (as shown in Figure 1a and 1b) at the signal center frequency. Since most communication systems process... Minimum Shift Keying (MSK) is one of the most spectrally efficient modulation schemes available. Due to its constant envelope, it is resilient to non-linear distortion and was therefore chosen as the modulation technique for the GSM cell phone standard. MSK is a special case of Continuous-Phase Frequency Shift Keying (CPFSK) which is a special case of a general class of modulation schemes known as Continuous-Phase Modulation (CPM). It is worth noting that CPM (and hence CPFSK) is a... Introduction Evaluating the performance of communication systems, and wireless systems in particular, usually involves quantifying some performance metric as a function of Signal-to-Noise-Ratio (SNR) or some similar measurement. Many systems require performance evaluation in multipath channels, some in Doppler conditions and other impairments related to mobility. Some have interference metrics to measure against, but nearly all include noise power as an impairment. Not all systems are... Some common conceptual hurdles for beginning communications engineers have to do with "Pulse Shaping" or the closely-related, even synonymous, topics of "matched filtering", "Nyquist filtering", "Nyquist pulse", "pulse filtering", "spectral shaping", etc. Some of the confusion comes from the use of terms like "matched filter" which has a broader meaning in the more general field of signal processing or detection theory. Likewise "Raised Cosine" has a different meaning or application in this... 
Introduction It seems to be fairly common knowledge, even among practicing professionals, that the efficiency of propagation of wireless signals is frequency dependent. Generally it is believed that lower frequencies are desirable since pathloss effects will be less than they would be at higher frequencies. As evidence of this, the Friis Transmission Equation[i] is often cited, the general form of which is usually written as: Pr = Pt Gt Gr ( λ / 4πd )2 (1) where the... In the recent past, high data rate wireless communications is often considered synonymous to an Orthogonal Frequency Division Multiplexing (OFDM) system. OFDM is a special case of multi-carrier communication as opposed to a conventional single-carrier system. The concepts on which OFDM is based are so simple that almost everyone in the wireless community is a technical expert in this subject. However, I have always felt an absence of a really simple guide on how OFDM works which can... Octaveforge / Matlab design script. Download: here weighted numerical optimization of Laplace-domain transfer function linear-phase design, optimizes vector error (magnitude and phase) design process calculates and corrects group delay internally includes sinc() response of the sample-and-hold stage in the ADC optionally includes multiplierless FIR filter Digital-to-analog conversion connects digital... Design Files: Part1.slx Hi everyone, In this series of tutorials on discrete-time PLLs we will be focusing on Phase-Locked Loops that can be implemented in discrete-time signal proessors such as FPGAs, DSPs and of course, MATLAB. In the first part of the series, we will be reviewing the basics of continuous-time baseband PLLs and we will see some useful mathematics that will give us insight into the inners working of PLLs. In the second part, we will focus on... Engineering is usually about managing efficiencies of one sort or another. 
One of my favorite working definitions of an engineer says, "An engineer is somebody who can do for a nickel what any damn fool can do for a dollar." In that case, the implication is that the cost is one of the characteristics being optimized. But cost isn't always the main efficiency metric, or at least the only one. Consider how a common transportation appliance, the automobile, is optimized... The topic of this article are the effects of radio frequency distortions on a baseband signal, and how to model them at baseband. Typical applications are use as a simulation model or in digital predistortion algorithms.Introduction Transmitting and receiving wireless signals usually involves analog radio frequency circuits, such as power amplifiers in a transmitter or low-noise amplifiers in a receiver.Signal distortion in those circuits deteriorates the link quality. When...
In the ACKS2 polarizable force field paper, I found a quantity called the atom-condensed softness matrix. In another paper, I found this expression for it: $$ \chi_{kl} = 2 \sum_{i}^{\text{occ MOs}} \sum_{j}^{\text{unocc MOs}} \frac{\langle\psi_{i}|g_{k}|\psi_{j}\rangle\langle\psi_{j}|g_{l}|\psi_{i}\rangle}{\epsilon_{i} - \epsilon_{j}} \delta_{\sigma_{i}\sigma_{j}}, $$ where $\epsilon_{i}$: orbital energy of the $i$th KS orbital $\chi$: (non-interacting) response matrix $\psi_{i}$: spatial orbital $g_{i}$: potential basis function What is the physical meaning of this matrix, or at least what information can we get from it? If in KS-DFT we consider the system as non-interacting, why do we consider an interaction between two species in this equation (if I understand it right, between molecules $i$ and $j$)?
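For intuition, here is a toy numerical evaluation of the sum-over-states formula above. All orbital energies and matrix elements below are invented for illustration (they come from no real calculation, and the spin factor $\delta_{\sigma_i\sigma_j}$ is taken to be 1 throughout), but the check exhibits two generic properties of this matrix: it is symmetric and negative semidefinite, i.e. it describes a linear, stabilizing response of the electron density to a change of the external potential expanded in the basis $\{g_k\}$:

```python
import numpy as np

# Toy sum-over-states evaluation of the non-interacting response matrix
#   chi_kl = 2 * sum_{i occ} sum_{j unocc} <i|g_k|j><j|g_l|i> / (eps_i - eps_j)
# Everything here (energies, matrix elements) is made up for illustration.

eps = np.array([-0.5, -0.3, 0.1, 0.4])   # orbital energies; first two occupied
occ, unocc = [0, 1], [2, 3]

# g[k][i, j] = <psi_i | g_k | psi_j> for two hypothetical potential basis
# functions g_0, g_1; real symmetric so the operators are Hermitian.
rng = np.random.default_rng(0)
g = [rng.standard_normal((4, 4)) for _ in range(2)]
g = [0.5 * (m + m.T) for m in g]

chi = np.zeros((2, 2))
for k in range(2):
    for l in range(2):
        for i in occ:
            for j in unocc:
                chi[k, l] += 2 * g[k][i, j] * g[l][j, i] / (eps[i] - eps[j])

# Since eps_i - eps_j < 0 for every occupied/unoccupied pair, chi is a
# negative-weighted sum of rank-one terms: symmetric, negative semidefinite.
print(np.allclose(chi, chi.T))
print(np.all(np.linalg.eigvalsh(chi) <= 1e-12))
```

Each negative eigenvalue direction is a combination of potential perturbations to which the non-interacting density responds, with magnitude controlled by the occupied-unoccupied energy gaps; that is the "softness" information the matrix carries.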
Following the excellent answer here, it is stated that Connection: Let $P^X(\cdot\mid\cdot)$ be a regular conditional probability of $P$ given $X$. Then for any $A \in \mathcal{F}$ we have $$ {\rm E}[1_A\mid X]=\varphi(X), $$ where $\varphi(x) = P^X(A\mid x)$. In short we write ${\rm E}[1_A\mid X]=P^X(A\mid X)$. I'm trying to prove this for myself, but I'd appreciate some help. We need to show two things: $P^X(A\mid X)$ is $\sigma(X)$-measurable $\rm{E}[1_C P^X(A\mid X)] = \rm{E}[1_C 1_A]$ for all $C \in \sigma(X)$. Since $P^X(\cdot\mid\cdot)$ is a regular conditional probability, we know that for fixed $A \in \mathcal{F}$, the mapping $x \mapsto P^X(A\mid x)$ is $(\mathcal{B}(\mathbb{R}),\mathcal{B}(\mathbb{R}))$-measurable. Hence the composition $P^X(A\mid \cdot) \circ X: \Omega \to \mathbb{R}$ is a random variable for fixed $A \in \mathcal{F}$, which the author writes as $P^X(A\mid \cdot) \circ X = P^X(A|X)$. Thus $P^X(A\mid X)$ is $\sigma(X)$-measurable, and we have 1. For 2, recall that for all $A \in \mathcal{F}$ and $B \in \mathcal{B}(\mathbb{R})$ the conditional probability is defined to satisfy$$\int_B P^X(A\mid x) \,d\mu_X(x) = P(A \cap \{X \in B\}),$$where $\mu_X$ is the distribution measure of $X$ on $\mathcal{B}(\mathbb{R})$. This is the part I'm having trouble with. Here's my attempt:\begin{align*}\rm{E}[1_C P^X(A\mid X)] & = \int_{\Omega} 1_C(\omega)P^X(A\mid X)(\omega) dP(\omega) \\& = \int_{\mathbb{R}} 1_{\{1\}}(x) P^X(A\mid x)d\mu_X(x) &\qquad& (2) \\& = \int_{\{1\}}P^X(A\mid x)d\mu_X(x) \\& = P(A \cap \{X = 1\}) \\& = \int_\Omega 1_A(\omega) 1_{\{X = 1\}}(\omega) dP(\omega) \\& = \int_\Omega 1_A(\omega) 1_C(\omega) dP(\omega) &\qquad& (6) \\& = \rm{E}[1_A 1_C].\end{align*} I'm almost certain lines (2) and (6) aren't true, but I'm not sure how else to write the indicator random variable when transitioning to the distribution measure in (2). For (6), I don't see why $\{X = 1\} = C$, either, but it's my best shot at it right now :/
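For reference, here is how I'd expect step 2 to go for a general $C \in \sigma(X)$ (my own sketch, so please check it). Any such $C$ has the form $C = \{X \in B\}$ for some $B \in \mathcal{B}(\mathbb{R})$, so $1_C(\omega) = 1_B(X(\omega))$ and\begin{align*}\rm{E}[1_C P^X(A\mid X)] &= \int_{\Omega} 1_B(X(\omega))\, P^X(A\mid X(\omega))\, dP(\omega) \\&= \int_{\mathbb{R}} 1_B(x)\, P^X(A\mid x)\, d\mu_X(x) \\&= \int_B P^X(A\mid x)\, d\mu_X(x) \\&= P(A \cap \{X \in B\}) \\&= \rm{E}[1_A 1_C],\end{align*}where the second equality is the change-of-variables theorem for the pushforward measure $\mu_X$, applied to the $\mathcal{B}(\mathbb{R})$-measurable function $x \mapsto 1_B(x) P^X(A\mid x)$, and the fourth equality is exactly the defining property of the regular conditional probability. Lines (2) and (6) of the attempt are the special case $B = \{1\}$, which is only valid when $C$ happens to equal $\{X = 1\}$.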
I would like to clarify something, and I will take the example of the $SO(2)$ Lie group. A Lie group is a manifold with a group structure. Thus we can define maps on it to be able to move on the manifold. In $SO(2)$, I can write the matrices as: $$ \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix} $$ As I understand things, I see $\theta$ as a map on the manifold. But when we want to find the Lie algebra of the group, we say that it is the tangent space at the identity. Thus, we have to take a curve from $\mathbb{R}$ to $SO(2)$ that passes through the identity, and we have to differentiate it at the identity to find the Lie algebra. From this viewpoint, we could say that $\theta$ is in fact the parametrisation of my curve. Thus an element of the Lie algebra is the derivative of the matrix with respect to $\theta$, and I find: $$ \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} $$ Finally: is $\theta$ a map on my manifold, or is it a curve on it?
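A worked version of the computation described in the question (my own sketch): regard the assignment $\theta \mapsto R(\theta)$ as a curve $\gamma : \mathbb{R} \to SO(2)$ with $\gamma(0) = I$. Then $$ \gamma'(0) = \left.\frac{d}{d\theta}\right|_{\theta=0} \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix} = \left.\begin{pmatrix} -\sin(\theta) & -\cos(\theta) \\ \cos(\theta) & -\sin(\theta) \end{pmatrix}\right|_{\theta=0} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, $$ recovering the generator above. So $\theta$ plays both roles at once: read forwards, $\theta \mapsto R(\theta)$ is a curve (in fact a one-parameter subgroup), while its local inverse $R(\theta) \mapsto \theta$ is a coordinate chart on the manifold, defined only modulo $2\pi$; the two viewpoints are consistent.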
@ José Hdz. Stgo.: Thank you for this great answer. I think I should add a few comments on the related materials I collected from different books. (Since I am neither a native speaker of German nor well-versed in it, I sometimes need Google Translate or DeepL for a better understanding of all these mysteries.) I believe Biermann used some hypotheses that originated with L. Schlesinger on Gauss's work on the AGM and elliptic functions (see Fragmente zur Theorie des arithmetisch-geometrischen Mittels aus den Jahren 1797-1799, L. Schlesinger, 1911). Schlesinger made a great effort to date some fragments on elliptic functions in Gauss's Leistenotizen (in which most of the records are dated no later than 1798), which is extremely important for understanding the development of Gauss's theory of elliptic functions. Schlesinger pointed out that the fragments in the Leistenotizen concerning the AGM consist of: a) the AGM and the rectification of ellipses; b) the AGM and series whose exponents are quadratic functions (i.e., theta functions); c) series expansions of elliptic integrals, as well as linear differential equations for the elliptic integrals of the first and second kind. In 1911 Schlesinger dated these materials, like the other records in the Leistenotizen, no later than 1798. It is very likely that this hypothesis contradicts what Gauss wrote in his diary on May 30th, 1799 (which implies that Gauss did not know the relation between the AGM and elliptic integrals before 1799), so Schlesinger later changed the date to the summer of 1799 (Gauss's Werke, X-1, pp. 273). In the same year (Nov. 1799), Gauss started his Scheda Ac, another important record of Gauss's own development of elliptic function theory. In the Scheda Ac one can see the encrypted word GALEN (Gauss's Werke, X-1, pp. 273).
I trust Biermann's interpretation of the GALEN, namely that Gauss realized the importance of the reciprocal of the AGM (which is equal to a certain elliptic integral of the first kind), and I do not reject Biermann's hypothesis that "Vicimus GEGAN" marks a discovery about the AGM. But I do not think "Vicimus GEGAN" marks a discovery about the AGM and elliptic integrals/theta functions: I) The date of the record "Vicimus GEGAN" is Oct. 21, 1796. Gauss started his research on the lemniscate months after that (Jan. 1797). It is highly improbable that Gauss had already known the relation between the general AGM and the elliptic integral of the first kind when he had not even started his research on a special case (the lemniscatic integral). II) I doubt whether the oral tradition that Schering recorded is reliable. Gauss already knew that $$\theta_4(e^{-\pi})=1-2e^{-\pi}+2e^{-4\pi}-\cdots=\sqrt{\frac{\varpi}{\pi}}$$ $$\theta_2(e^{-\pi})=2e^{-\pi/4}+2e^{-9\pi/4}+\cdots=\sqrt{\frac{\varpi}{\pi}}$$ $$\theta_3(e^{-\pi})=1+2e^{-\pi}+2e^{-4\pi}+\cdots$$ $$\theta_3^4=\theta_4^4+\theta_2^4$$ in 1798 (Scheda Aa, see Gauss's Werke, III, pp. 418), where $\varpi$ is the lemniscatic constant $$\varpi=2\int_{0}^1\frac{\mathrm{d}x}{\sqrt{1-x^4}}.$$ If he had already known the relation between the AGM and the theta functions in 1794, he could have proved $$AGM((\theta_3(q))^2,(\theta_4(q))^2)=1$$ without any effort, which would definitely have answered Gauss's question of May 30th, 1799. Pfaff's letter (Gauss's Werke, X-1, pp. 273, which is quoted in Hidden Harmony—Geometric Fantasies: The Rise of Complex Function Theory by Jeremy Gray) suggests that the proof was elusive to Gauss even in Nov. 1799. III) If Gauss had discovered anything about the AGM in 1796, it might have been something else, e.g. the asymptotic formula for the AGM in the Scheda Ac (Gauss's Werke, X-1, pp. 186; Gauss also said, in the 101st entry of his diary, that he had long before found that the AGM can be written as the quotient of two transcendental functions).
But it is likely that we can never know what Gauss discovered on Oct. 21, 1796.
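For what it's worth, the identities quoted in point II are easy to check numerically. The check below is mine, using the standard series $\theta_2(q)=2\sum_{n\ge 0} q^{(n+1/2)^2}$, $\theta_3(q)=1+2\sum_{n\ge 1} q^{n^2}$, $\theta_4(q)=1+2\sum_{n\ge 1}(-1)^n q^{n^2}$ and the closed form $\varpi=\Gamma(1/4)^2/(2\sqrt{2\pi})$ for the lemniscatic constant:

```python
import math

q = math.exp(-math.pi)

# Truncated theta series at q = e^{-pi}; 20 terms is far beyond machine precision.
theta2 = 2 * sum(q ** ((n + 0.5) ** 2) for n in range(20))
theta3 = 1 + 2 * sum(q ** (n * n) for n in range(1, 20))
theta4 = 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, 20))

# Lemniscatic constant varpi = 2 * int_0^1 dx / sqrt(1 - x^4).
varpi = math.gamma(0.25) ** 2 / (2 * math.sqrt(2 * math.pi))

assert abs(theta2 - theta4) < 1e-12                      # theta2 = theta4 at q = e^{-pi}
assert abs(theta4 - math.sqrt(varpi / math.pi)) < 1e-12  # both equal sqrt(varpi/pi)
assert abs(theta3 ** 4 - (theta2 ** 4 + theta4 ** 4)) < 1e-12  # Jacobi's identity

# AGM(theta3^2, theta4^2) = 1: quadratically convergent iteration.
a, b = theta3 ** 2, theta4 ** 2
for _ in range(10):
    a, b = (a + b) / 2, math.sqrt(a * b)
assert abs(a - 1) < 1e-12
```

The last assertion is the identity whose 1794 availability is disputed above; numerically it holds to machine precision, as it must, since one AGM step sends $(\theta_3^2(q), \theta_4^2(q))$ to $(\theta_3^2(q^2), \theta_4^2(q^2))$ and both tend to 1.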
Algebraic Geometry Seminar Fall 2016 The seminar meets on Fridays at 2:25 pm in Van Vleck B305. Here is the schedule for the previous semester. Contents Algebraic Geometry Mailing List Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link). Fall 2016 Schedule September 16: Alexander Pavlov (Wisconsin), Betti Tables of MCM Modules over the Cones of Plane Cubics (host: local) September 23: PhilSang Yoo (Northwestern), Classical Field Theories for Quantum Geometric Langlands (host: Dima) October 7: Botong Wang (Wisconsin), Enumeration of points, lines, planes, etc. (host: local) October 14: Luke Oeding (Auburn), Border ranks of monomials (host: Steven) October 28: Adam Boocher (Utah), Bounds for Betti Numbers of Graded Algebras (host: Daniel) November 4: Lukas Katthaen, Finding binomials in polynomial ideals (host: Daniel) November 11: Daniel Litt (Columbia), Arithmetic restrictions on geometric monodromy (host: Jordan) November 18: David Stapleton (Stony Brook), Hilbert schemes of points and their tautological bundles (host: Daniel) December 2: Rohini Ramadas (Michigan), TBA (hosts: Daniel and Jordan) December 9: Robert Walker (Michigan), TBA (host: Daniel) Abstracts Alexander Pavlov Betti Tables of MCM Modules over the Cones of Plane Cubics Graded Betti numbers are classical invariants of finitely generated modules over graded rings, describing the shape of a minimal free resolution. We show that for maximal Cohen-Macaulay (MCM) modules over the homogeneous coordinate ring of a smooth Calabi-Yau variety X, the computation of Betti numbers can be reduced to computing dimensions of certain Hom groups in the bounded derived category D(X). In the simplest case, a smooth elliptic curve embedded into the projective plane as a cubic, we use our formula to get explicit answers for the Betti numbers.
In this case we show that there are only four possible shapes of the Betti tables up to shifts in internal degree, and two possible shapes up to a shift in internal degree and taking syzygies. PhilSang Yoo Classical Field Theories for Quantum Geometric Langlands One can study a class of classical field theories in a purely algebraic manner, thanks to the recent development of derived symplectic geometry. After reviewing the basics of derived symplectic geometry, I will discuss some interesting examples of classical field theories, including the B-model, Chern-Simons theory, and Kapustin-Witten theory. Time permitting, I will make a proposal to understand quantum geometric Langlands and other related Langlands dualities in a unified way from the perspective of field theory. Botong Wang Enumeration of points, lines, planes, etc. It is a theorem of de Bruijn and Erdős that n points in the plane determine at least n lines, unless all the points lie on a line. This is one of the earliest results in enumerative combinatorial geometry. We will present a higher-dimensional generalization of this theorem. Let E be a generating subset of a d-dimensional vector space. Let [math]W_k[/math] be the number of k-dimensional subspaces that are generated by a subset of E. We show that [math]W_k\leq W_{d-k}[/math] when [math]k\leq d/2[/math]. This confirms a "top-heavy" conjecture of Dowling and Wilson from 1974 for all matroids realizable over some field. The main ingredients of the proof are the hard Lefschetz theorem and the decomposition theorem. I will also talk about a proof of Welsh and Mason's log-concave conjecture on the number of k-element independent sets. These are joint works with June Huh. Luke Oeding Border ranks of monomials What is the minimal number of terms needed to write a monomial as a sum of powers? What if you allow limits?
Here are some minimal examples: [math]4xy = (x+y)^2 - (x-y)^2[/math] [math]24xyz = (x+y+z)^3 + (x-y-z)^3 + (-x-y+z)^3 + (-x+y-z)^3[/math] [math]192xyzw = (x+y+z+w)^4 - (-x+y+z+w)^4 - (x-y+z+w)^4 - (x+y-z+w)^4 - (x+y+z-w)^4 + (-x-y+z+w)^4 + (-x+y-z+w)^4 + (-x+y+z-w)^4[/math] The monomial [math]x^2y[/math] has a minimal expression as a sum of 3 cubes: [math]6x^2y = (x+y)^3 + (-x+y)^3 -2y^3[/math] But you can use only 2 cubes if you allow a limit: [math]3x^2y = \lim_{\epsilon \to 0} \frac{x^3 - (x-\epsilon y)^3}{\epsilon}[/math] Can you do something similar with xyzw? Previously it was not known whether the minimal number of powers in a limiting expression for xyzw was 7 or 8. I will answer this, and the analogous question for all monomials. The polynomial Waring problem is to write a polynomial as a linear combination of powers of linear forms in the minimal possible way. The minimal number of summands is called the rank of the polynomial. The solution in the case of monomials was given in 2012 by Carlini--Catalisano--Geramita, and independently shortly thereafter by Buczynska--Buczynski--Teitler. In this talk I will address the problem of finding the border rank of each monomial. Upper bounds on border rank have been known since Landsberg--Teitler (2010) and earlier. We use symmetry-enhanced linear algebra to provide polynomial certificates of lower bounds (which agree with the upper bounds). This work builds on the idea of Young flattenings, which were introduced by Landsberg and Ottaviani, and which give determinantal equations for secant varieties and provide lower bounds for border ranks of tensors. We find special monomial-optimal Young flattenings that provide the best possible lower bound for all monomials up to degree 6. For degree 7 and higher these flattenings no longer suffice for all monomials.
To overcome this problem, we introduce partial Young flattenings and use them to give a lower bound on the border rank of monomials which agrees with Landsberg and Teitler's upper bound. I will also show how to implement Young flattenings and partial Young flattenings in Macaulay2 using Steven Sam's PieriMaps package. Adam Boocher Let R be a standard graded algebra over a field. The graded Betti numbers of R provide some measure of the complexity of the defining equations for R and their syzygies. Recent breakthroughs (e.g. Boij-Soederberg theory, the structure of asymptotic syzygies, Stillman's Conjecture) have provided new insights about these numbers, and we have made good progress toward understanding many homological properties of R. However, many basic questions remain. In this talk I'll discuss some conjectured upper and lower bounds for the total Betti numbers of different classes of rings. Surprisingly, little is known in even the simplest cases. Lukas Katthaen (Frankfurt) In this talk, I will present an algorithm which, for a given ideal J in the polynomial ring, decides whether J contains a binomial, i.e., a polynomial having only two terms. For this, we use ideas from tropical geometry to reduce the problem to the Artinian case, and then use an algorithm from number theory. This is joint work with Anders Jensen and Thomas Kahle. David Stapleton Fogarty showed in the 1970s that the Hilbert scheme of n points on a smooth surface is smooth. Interest in these Hilbert schemes has grown since it was shown that they arise in hyperkähler geometry, geometric representation theory, and algebraic combinatorics. In this talk we will explore the geometry of certain tautological bundles on the Hilbert scheme of points. In particular, we will show that these tautological bundles are (almost always) stable vector bundles.
We will also show that every sufficiently positive vector bundle on a curve C is the pullback of a tautological bundle under an embedding of C into the Hilbert scheme of points on the projective plane.
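The explicit identities in Oeding's abstract can be spot-checked numerically (this check is mine, not part of the seminar page). A polynomial identity that holds at many random points holds identically, so random evaluation is a cheap certificate; note also that the two-cube difference quotient converges to $3x^2y$, a scalar multiple of the monomial, which is all that matters for border rank:

```python
import random

random.seed(0)

for _ in range(200):
    x, y, z, w = (random.uniform(-3, 3) for _ in range(4))
    assert abs(4*x*y - ((x + y)**2 - (x - y)**2)) < 1e-8
    assert abs(24*x*y*z - ((x+y+z)**3 + (x-y-z)**3
                           + (-x-y+z)**3 + (-x+y-z)**3)) < 1e-8
    assert abs(192*x*y*z*w
               - ((x+y+z+w)**4 - (-x+y+z+w)**4 - (x-y+z+w)**4
                  - (x+y-z+w)**4 - (x+y+z-w)**4 + (-x-y+z+w)**4
                  + (-x+y-z+w)**4 + (-x+y+z-w)**4)) < 1e-7
    assert abs(6*x**2*y - ((x + y)**3 + (-x + y)**3 - 2*y**3)) < 1e-8

# The 2-cube limit: (x^3 - (x - eps*y)^3)/eps = 3x^2*y - 3x*y^2*eps + y^3*eps^2,
# which tends to 3x^2*y as eps -> 0.
x, y, eps = 2.0, 3.0, 1e-6
assert abs((x**3 - (x - eps*y)**3) / eps - 3*x**2*y) < 1e-3
```

The quartic identity for 192xyzw is the interesting one: eight fourth powers suffice exactly, matching the rank discussion in the abstract.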
No, I don't think auto-regulation explains much about the population sizes of predators. Group selection may explain such auto-regulation, but I don't think it is of any considerable importance for this discussion. The short answer is, as @shigeta said, that [predators] tend to starve to death when they are too many! To get a better understanding of what @shigeta said, you'll be interested in various models of predator-prey or consumer-resource interactions. For example, the famous Lotka-Volterra equations describe the population dynamics of two co-existing species, where one is the prey and the other is the predator. Let's first define some variables… $x$ : number of prey $y$ : number of predators $t$ : time $\alpha$, $\beta$, $\xi$ and $\gamma$ are parameters describing how one species influences the population size of the other. The Lotka-Volterra equations are: $$\frac{dx}{dt} = x(\alpha - \beta y)$$$$\frac{dy}{dt} = -y(\gamma - \xi x)$$ You can show that for some parameters the matrix for these equations has complex eigenvalues, meaning that the long-term behavior of this system is cyclic (periodic). If you simulate such a system, you'll see that the population sizes of the two species fluctuate like this: where the blue line represents the predators and the red line represents the prey. Representing the same data in phase space, with the population sizes of the two species on the $x$ and $y$ axes, you get: where the arrows show the direction in which the system moves. If the population size of the predators ($y$) reaches 0 (extinction), then $\frac{dx}{dt} = x(\alpha - \beta y)\space$ becomes $\frac{dx}{dt} = x\alpha \space$ (whose general solution is $x_t = e^{\alpha t}x_0$), and therefore the population of prey will grow exponentially.
If the population size of prey ($x$) reaches 0 (extinction), then $\frac{dy}{dt} = -y(\gamma - \xi x)\space$ becomes $\frac{dy}{dt} = -y\gamma \space$, and therefore the population of predators will decrease exponentially. Following this model, your question is actually: why are the parameters $\alpha$, $\beta$, $\xi$ and $\gamma$ not "set" in a way that predators cause the extinction of the prey (and therefore their own extinction)? One might equivalently ask the opposite question: why don't prey evolve to escape predators so that the predator population crashes? As shown, you don't need a complex model to allow the co-existence of predators and prey. You could describe your model a bit more accurately in another post and ask why in your model the prey always go extinct. But there are tons of possibilities for making your model more realistic, such as adding spatial heterogeneities (places to hide, for example, as suggested by @AudriusMeškauskas). One can also consider other trophic levels, stochastic effects, selection pressure varying through time (and other types of balancing selection), age-, sex- or health-specific mortality rates due to predation (e.g. predators may preferentially target young or diseased individuals), several competing species, etc. I would also like to mention other things that might be of interest in your model (two of them require you to allow evolutionary processes in your model): 1) Lineage selection: predators that eat too much end up disappearing because they cause their prey to go extinct. This hypothesis has nothing to do with some kind of auto-regulation for the good of the species. Of course, you'd need several species of predators and prey in your model. This kind of hypothesis is usually considered very unlikely to have any explanatory power. 2) Life-dinner principle: while the wolf runs for its dinner, the rabbit runs for its life.
Therefore, there is higher selection pressure on the rabbits, which leads rabbits to run, on average, slightly faster than wolves. This evolutionary process protects the rabbits from extinction. 3) You may consider..
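To make the cyclic behavior described above concrete, here is a minimal Euler-scheme simulation of the Lotka-Volterra equations. The parameter values and initial population sizes are invented for illustration, not fitted to any real system:

```python
# Lotka-Volterra: dx/dt = x(alpha - beta*y), dy/dt = -y(gamma - xi*x).
# Illustrative parameters; equilibrium sits at x* = gamma/xi, y* = alpha/beta.
alpha, beta, gamma, xi = 1.0, 0.1, 1.5, 0.075
x, y = 10.0, 5.0          # initial prey / predator numbers, off equilibrium
dt, steps = 0.001, 40000  # small explicit Euler steps over 40 time units

xs, ys = [x], [y]
for _ in range(steps):
    dx = x * (alpha - beta * y)
    dy = -y * (gamma - xi * x)
    x, y = x + dx * dt, y + dy * dt
    xs.append(x)
    ys.append(y)

# Both populations stay strictly positive and keep oscillating around the
# equilibrium: neither explodes nor goes extinct, the closed orbits of the
# phase plot described in the answer.
print(min(xs) > 0 and min(ys) > 0)
print(max(xs) > 1.5 * min(xs))   # prey numbers genuinely fluctuate
```

For serious work you would use an adaptive ODE solver rather than plain Euler (which slowly gains amplitude on this conservative system), but the qualitative cycle is already visible here.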
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? Do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory was developed based on limits. In modern times, using some quite deep ideas from logic, a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that $\lim_{x\to a} f(x) = f(a)$, but then to say that the gradient of the tangent curve is some value is like saying that when $x=a$, then $f(x) = f(a)$. The whole point of the limit, I thought, was to say, instead, that we don't know what $f(a)$ is, but we can say that it approaches some value. I have a problem showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equals $1$ as $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it, all it's going to do is confuse people. In fact there was a big controversy about it, since using it in obvious ways suggested by the notation leads to wrong results. @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than that the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user who you think can say something in particular, feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what I should do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h, so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon$ by picking some correct L (somehow). Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx} dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know?
Based on the Bekenstein-Hawking equation for entropy, hasn't the relationship between quantum mechanics and gravity already been established? The macroscopic Bekenstein-Hawking entropy formula $$ S_{BH} = \frac{k A}{4 l_p^2} $$ with the Planck length given by $$ l_p = \sqrt{\frac{G\hbar}{c^3}} $$ gives a hint that quantum gravity is needed to determine the entropy, because it contains both the gravitational constant $G$ and Planck's constant $\hbar$. However, this formula does NOT say what the correct quantum gravity theory is that is needed to describe the microstates of the black hole. Assuming a certain quantum gravity theory and calculating the entropy from a statistical mechanics point of view by counting the microstates, $$ S = -k \sum\limits_i P_i \ln P_i $$ where $P_i$ is the probability that the system is in the microstate $i$, the Bekenstein-Hawking formula must be reproducible. If it is not, the quantum gravity theory applied is wrong. In summary, the Bekenstein-Hawking formula is not a quantum gravity theory, but it can be used as a test of all wannabe quantum gravities. To add to Dilaton's correct answer: The black hole area law is a result in classical gravitational physics. It tells us something about the macroscopic behavior of gravity, but it doesn't tell us anything directly about quantum gravity. It isn't even formulated in quantum mechanical terms. (This is what makes quantum gravity such a puzzle. The best constraint we have only constrains the correspondence limit.) The Bekenstein-Hawking formula is obtained in so-called "black hole thermodynamics", which is based on pseudo-formal analogies with real thermodynamics. Even if we accept the formula as if it were correct, it does not establish "the relationship between quantum mechanics and gravity", because it precisely ignores quantum gravity effects and treats the black hole in a classical or 'semi-classical' fashion.
When quantum gravity corrections are included, the event horizon (a purely classical concept) disappears. An introduction to the kind of quantum gravity corrections expected is given in Small, dark, and heavy: But is it a black hole?
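As a purely numerical illustration of the formula above (my own back-of-the-envelope sketch, using rounded values of the physical constants), here is $S_{BH}/k = A/(4 l_p^2)$ evaluated for a Schwarzschild black hole of one solar mass:

```python
import math

# Rounded physical constants (SI units).
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8          # m/s, speed of light
hbar = 1.055e-34     # J s, reduced Planck constant
M_sun = 1.989e30     # kg, solar mass

r_s = 2 * G * M_sun / c**2      # Schwarzschild radius, about 3 km
A = 4 * math.pi * r_s**2        # horizon area
l_p2 = G * hbar / c**3          # Planck length squared, ~2.6e-70 m^2
S_over_k = A / (4 * l_p2)       # dimensionless entropy S_BH / k

print(f"r_s = {r_s:.0f} m, S/k = {S_over_k:.2e}")   # of order 1e77
```

The result, of order $10^{77}$, dwarfs the thermodynamic entropy of ordinary stellar matter; any candidate quantum gravity theory must account for this enormous number of microstates by direct counting, which is exactly the test described above.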
ISSN: 1531-3492 eISSN: 1553-524X All Issues Discrete & Continuous Dynamical Systems - B September 2011, Volume 16, Issue 2 A special issue Dedicated to Qishao Lu on the occasion of his 70th birthday Abstract: This issue of Discrete and Continuous Dynamical Systems - Series B is dedicated to our professor and friend, Qishao Lu, on the occasion of his 70th birthday and in honor of his important and fundamental contributions to the fields of applied mathematics, theoretical mechanics and computational neurodynamics. His pleasant personality and ready helpfulness have won our hearts as his admirers, students, and friends. For more information please click the "Full Text" above. Abstract: It is a central theme to study the Lyapunov stability of periodic solutions of nonlinear differential equations or systems. For dissipative systems, the Lyapunov direct method is an important tool to study the stability. However, this method is not applicable to conservative systems such as Lagrangian equations and Hamiltonian systems. In the last decade, a method that is now known as the 'third order approximation' has been developed by Ortega, and has been applied to particular types of conservative systems including time-periodic scalar Lagrangian equations (Ortega, J. Differential Equations, 128 (1996), 491-518). This method is based on Moser's twist theorem, a prototype of the KAM theory. Later, the twist coefficients were re-explained by Zhang in 2003 through the unique positive periodic solutions of the Ermakov-Pinney equation associated with the first-order approximation (Zhang, J. London Math. Soc., 67 (2003), 137-148). After that, Zhang and his collaborators obtained some important twist criteria and applied the results to some interesting examples of time-periodic scalar Lagrangian equations and planar Hamiltonian systems.
In this survey, we introduce the fundamental ideas in these works and review recent progress in this field, including applications to examples such as the swing, the (relativistic) pendulum, and singular equations. Some unsolved problems are posed for future study.

Abstract: In this paper, we study and classify the firing patterns in the Chay neuronal model by fast/slow decomposition and two-parameter bifurcation analysis. We show that the Chay neuronal model can display complex bursting oscillations, including "fold/fold" bursting, "Hopf/Hopf" bursting, and "Hopf/homoclinic" bursting. Furthermore, the dynamical properties of different firing activities of a neuron are closely related to the bifurcation structures of the fast subsystem. Our results indicate that the codimension-2 bifurcation points and the related codimension-1 bifurcation curves of the fast subsystem provide crucial information to predict the existence and types of bursting as parameters change.

Abstract: A global Hopf bifurcation analysis is carried out on a six-dimensional FitzHugh-Nagumo (FHN) neural network with a time delay. First, the existence of local Hopf bifurcations of the system is investigated, and explicit formulae that determine the direction of the bifurcations and the stability of the periodic solutions are derived using the normal form method and center manifold theory. Then sufficient conditions for the system to have multiple periodic solutions when the delay is far from the critical values of the Hopf bifurcations are obtained using Wu's global Hopf bifurcation theory and Bendixson's criterion. In particular, a synchronized scheme is used during the analysis to reduce the dimension of the system. Finally, numerical simulations are given to support the theoretical analysis.

Abstract: A problem of reducing a general three-dimensional (3-D) autonomous quadratic system to a Lorenz-type system is studied.
Firstly, under some necessary conditions for preserving the basic qualitative properties of the Lorenz system, the general 3-D autonomous quadratic system is converted to an extended Lorenz-type system (ELTS), which contains a large class of existing chaotic dynamical systems. Secondly, some different canonical forms of the ELTS are obtained with the aid of various nonsingular linear transformations and normalization techniques. Thirdly, the conjugate systems of the ELTS are defined and discussed. Finally, a sufficient condition for the nonexistence of chaos in such an ELTS is derived.

Abstract: This paper concerns the consensus of discrete-time multi-agent systems with linear or linearized dynamics. An observer-type protocol based on the relative outputs of neighboring agents is proposed. The consensus of such a multi-agent system with a directed communication topology can be cast into the stability of a set of matrices with the same low dimension as that of a single agent. The notion of a discrete-time consensus region is then introduced and analyzed. For neutrally stable agents, it is shown that there exists an observer-type protocol having a bounded consensus region in the form of an open unit disk, provided that each agent is stabilizable and detectable. An algorithm is further presented to construct a protocol achieving consensus with respect to all communication topologies containing a spanning tree. Moreover, for the case where the agents have no poles outside the unit circle, an algorithm is proposed to construct a protocol having an origin-centered disk of radius $\delta$ ($0<\delta<1$) as its consensus region. Finally, the consensus algorithms are applied to solve formation control problems of multi-agent systems.

Abstract: In this paper, a three-dimensional Ginzburg-Landau type equation is considered.
Firstly, two families of new traveling wave solutions in terms of explicit functions are presented by using the homogeneous balance method, in which one family consists of variable-amplitude solutions and the other of constant-amplitude solutions (namely, plane wave solutions). Moreover, the stability of the plane wave solutions is analyzed by using regular phase plane techniques.

Abstract: The release of ink in Aplysia californica occurs selectively in response to long-lasting stimuli. There is a good correspondence between features of the behavior and the firing pattern of the ink gland motor neurons. Indeed, the neurons do not fire for brief inputs, and there is a delayed firing for long-duration inputs. The biophysical mechanism for the long delay before firing is a transient potassium current which activates rapidly but inactivates more slowly. Based on voltage-clamp experiments, a nine-variable Hodgkin-Huxley-like model for the ink gland motor neurons was developed by Byrne. Here, fast/slow analysis and two-parameter dynamical analysis are used to investigate the contribution of different currents and to predict various firing patterns, including the long latency before firing.

Abstract: In this paper a class of generalized piecewise smooth maps is studied, which is linear on one side and nonlinear, with power dependence, on the other side. According to the value of the power in the term $x^z$, the bifurcations occurring in this map are classified into five types: $z>1$, $z=1$, $0<z<1$, $z=0$, and $z<0$. We derive the occurrence conditions of border collision bifurcations and smooth fold and flip bifurcations, especially the codimension-2 bifurcation points describing the interaction between border collision bifurcations and smooth bifurcations. The general results are then applied to specific cases of the power $z$, and different bifurcation scenarios are shown for the individual cases, from which the period-adding scenario is found to be general for any power.
Abstract: Reliability of spike timing has been a hot topic recently. However, reliability has not been considered for bursting behavior, as commonly observed in a variety of nerve and endocrine cells, including $\beta$-cells in intact pancreatic islets. In this paper, the reliability of $\beta$-cells with noise is considered. A method to numerically study the reliability of bursting cells is presented. The reliability of a single cell decreases as the noise level becomes larger. The reliability of networks of $\beta$-cells coupled by gap junctions or synaptic excitation is investigated. Simulations of the network of $\beta$-cells reveal that increasing the noise level decreases the reliability, but the reliability of the network is higher than that of a single cell. The effect of coupling strength on reliability is also investigated: reliability decreases when the coupling strength is small and increases when the coupling strength is large.

Abstract: In this paper, a constraint-stabilized numerical method is presented for the planar rigid multibody system with friction-affected translational joints, in which the sliders and the guides are treated as particles and bilateral constraints, respectively. The dynamical equations of the non-smooth system are obtained by using the first kind of Lagrange's equations and the Baumgarte stabilization method. The normal forces of the bilateral constraints are expressed by the Lagrange multipliers and described by a complementarity condition, while frictional forces are characterized by a set-valued force law of the type of Coulomb's law for dry friction. Using an event-driven scheme, the state transition problem of stick-slip and the normal forces of the bilateral constraints is formulated and solved as a horizontal linear complementarity problem (HLCP). Finally, a planar rigid multibody system with two translational joints is considered as an illustrative example. The results obtained also show that the drift of the constraints of the system remains bounded.
Abstract: We study the evolution of spatiotemporal dynamics and synchronization transitions on small-world Hodgkin-Huxley (HH) neuronal networks that are characterized by channel noise, ion channel blocking, and information transmission delays. In particular, we examine the effects of delay on spatiotemporal dynamics over neuronal networks when channel blocking of potassium or sodium is involved. We show that small delays can be detrimental to synchronization in the network due to a dynamic clustering anti-phase synchronization transition. We also show that regions of irregular and regular wave propagation related to synchronization transitions appear intermittently as the delay increases, and the delay-induced synchronization transitions manifest as well-expressed minima in the measure for spatial synchrony. In addition, we show that the fraction of sodium or potassium channels can play a key role in the dynamics of neuronal networks. Furthermore, we find that the fractions of sodium and potassium channels have different impacts on the spatiotemporal dynamics of neuronal networks. Our results thus provide insights that could facilitate the understanding of the joint impact of ion channel blocking and information transmission delays on the dynamical behaviors of realistic neuronal networks.

Abstract: In this paper we study the following problem
$$-\triangle_{p}u+|u|^{p-2}u=f(x,u)$$
in a bounded smooth domain $\Omega \subset {\bf R}^{N}$ with a nonlinear boundary condition $|\nabla u|^{p-2}\frac{\partial u}{\partial\nu}=g(x,u)$. Results on the existence of positive solutions are obtained by the sub-supersolution method and the Mountain Pass Lemma.

Abstract: In this paper, we investigate the dynamic behavior of a system of two coupled Hindmarsh-Rose (HR) neurons, based on bifurcation analysis of its fast subsystem.
The individual HR neuron has chaotic behavior, but the neurons can become regularized when coupled through synaptic coupling or joint electrical-synaptic coupling. Through numerical methods we first investigate the bifurcation structure of the fast subsystem. We show that the emergence of periodic patterns of neurons is related to topological changes of the underlying bifurcations. Lyapunov exponent calculations further reveal the pathway from chaotic bursting behavior to regular bursting of HR neurons. Finally, we include both electrical and synaptic coupling in the system and numerically calculate the time dynamics. Even though electrical coupling (gap junctions) alone usually does not regularize chaotic trajectories, joint coupling is more effective than synaptic coupling alone in producing stable rhythms. The main contribution of this paper is that we provide a mathematical description, using bifurcation analysis, of transitions of neuron dynamics from chaotic trajectories to regular bursting as synaptic and electrical-synaptic coupling strengthens.

Abstract: A proportionally-fair controller with time delay is considered to control Internet congestion. The time delay is chosen to be a controllable parameter. To represent the relation between the delay and congestion analytically, the method of multiple scales is employed to obtain the periodic solution arising from the Hopf bifurcation in the congestion control model. A new control method is proposed by perturbing the delay periodically, and the strength of the perturbation is predicted analytically so that the oscillation may disappear gradually. This implies that the proposed control scheme may decrease the possibility of congestion derived from the oscillation. The proposed control scheme is verified by numerical simulation.

Abstract: In this paper we are concerned with a class of nonlinear degenerate elliptic equations under natural growth.
We show that each bounded weak solution of $A$-harmonic type equations under natural growth is locally Hölder continuous, based on a density lemma and the Moser-Nash argument. Then we show that the weak solution has optimal regularity, with Hölder exponent $\gamma$ for any $0\le \gamma<\kappa$, where $\kappa$ is the Hölder index for homogeneous $A$-harmonic equations.
In an exercise I was asked to prove that:

(a) The structures $(\mathbb{R}^+,1,\cdot)$ and $(\mathbb{R},0,+)$ are elementarily equivalent.

(b) The two structures $(\mathbb{N},<)$ and $(\mathbb{Q},<)$ are not elementarily equivalent.

(c) The structure $(\mathbb{R},0,1,+,\cdot)$ is not an elementary substructure of the structure $(\mathbb{C},0,1,+,\cdot)$.

Can I check if what I am doing is correct? Hope it is okay that I put all three questions together, since they are sort of related. Sincere thanks!

(a) For (a), I tried to show that the two structures are in fact isomorphic. Let $\mathcal{N}=(\mathbb{R}^+,1,\cdot)=(\mathbb{R}^+,J)$ and $\mathcal{M}=(\mathbb{R},0,+)=(\mathbb{R},I)$. Define $e:\mathbb{R}\to\mathbb{R}^+$, $e(x)=e^x$. Then $e$ is a bijection. Also we have $e(I(c_0))=e(0)=1=J(c_1)$. Denote $I(F)=+$ and $J(F)=\cdot$. For each $a_1,a_2,a_3\in M$, we have $I(F)(a_1,a_2)=a_3 \iff a_1+a_2=a_3 \iff e^{a_1+a_2}=e^{a_3} \iff e^{a_1}\cdot e^{a_2}=e^{a_3} \iff J(F)(e(a_1),e(a_2))=e(a_3)$. Therefore $e$ is an isomorphism, so $\mathcal{M}$ and $\mathcal{N}$ are isomorphic and hence elementarily equivalent.

(b) Consider $\varphi= (\forall x_1 (\exists x_2 (P_< (x_2,x_1))))$. Then $(\mathbb{N},<)\not\models \varphi$ (since $0$ has no predecessor) but $(\mathbb{Q},<)\models \varphi$.

(c) Consider $\varphi=(\forall x_1 (\exists x_2 (F_\times (x_2, x_2)=x_1)))$. Then $(\mathbb{R},0,1,+,\cdot)\not\models\varphi$ (since the square root of $-1$ is not in $\mathbb{R}$) but $(\mathbb{C},0,1,+,\cdot)\models\varphi$.
Theorem 2.27: If $X$ is a metric space and $E \subset X$, then $\bar E$ (the closure of $E$) is closed.

The proof says: If $p \in X$ and $p \not \in \bar E$ then $p$ is neither a point of $E$ nor a limit point of $E$. Hence $p$ has a neighborhood which does not intersect $E$. The complement of $\bar E$ is therefore open. Hence $\bar E$ is closed.

I'm particularly puzzled by "Hence $p$ has a neighborhood which does not intersect $E$. The complement of $\bar E$ is therefore open." Should we also prove that the neighborhood of $p$ does not intersect $E'$ (the set of all limit points of $E$)? Here's what I tried to prove, by contrapositive: "For any $p \in {\bar E}^c$, if $N_r(p) \cap E' \ne \emptyset$ then $N_r(p) \cap E \ne \emptyset$."

Proof: For any $p \in {\bar E}^c$, if $N_r(p) \cap E' \ne \emptyset$, take $q \in N_r(p) \cap E'$. Then $\exists N_h(q)$ s.t. $N_h(q) \subset N_r(p)$. Since $q \in E'$ is a limit point, $N_h(q) \cap E \ne \emptyset$, and hence $N_r(p) \cap E \ne \emptyset$.

I'm not quite sure whether this is necessary. Or is there anything I missed from Rudin's proof?
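For what it's worth, the step in question can be written out as a short display (this is just a compact form of the contrapositive argument from the question, with the conclusion drawn at the end):

```latex
% Claim: if N_r(p) \cap E = \emptyset, then also N_r(p) \cap E' = \emptyset.
% Suppose instead that some q \in N_r(p) \cap E'. Since N_r(p) is open:
\begin{align*}
q \in N_r(p) &\implies \exists\, h > 0 \text{ with } N_h(q) \subset N_r(p), \\
q \in E'     &\implies N_h(q) \cap E \neq \emptyset
             && \text{($q$ is a limit point of $E$)} \\
             &\implies N_r(p) \cap E \neq \emptyset
             && \text{(since $N_h(q) \subset N_r(p)$),}
\end{align*}
% contradicting the assumption. Hence N_r(p) misses both E and E',
% i.e. N_r(p) \subset (\bar E)^c, so (\bar E)^c is open and \bar E is closed.
```

So the neighborhood automatically avoids $E'$ once it avoids $E$, which is presumably why Rudin leaves the step implicit.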
Let's say we write a standard call option on $S_t$ which pays $\max[0, S_T-K]$ at maturity $T$. Given that $\frac{dS}{S} = \mu \,dt + \sigma \,dW_t$, and $V_T = (S_T -K)_+$, we can solve this under the Black-Scholes framework as: $$V_t[S_t,K,\sigma,r,t] = S_t \varPhi[d_1] - K e^{-r (T-t)} \varPhi[d_1 - \sigma \sqrt{T-t}]$$ where $\varPhi[x]$ is the standard normal cumulative distribution function, and $$d_1 = \frac{\ln\left(\frac{S_t}{K}\right)+{(r+\sigma^2/2)(T-t)} }{\sigma \sqrt{T-t}}$$ What is the expected variance of this option's returns, $\sigma_V$? I.e., how does the process $E \left[ V_t \right]$ evolve with respect to time? Intuitively, the logarithmic variance should be defined if we constrain the option to take non-zero, positive values. I ask because I am trying to assess what might be called a compound option, in which the parameters are adapted for $V_t$.
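Since the question leans entirely on this formula, a minimal sanity-check implementation may be useful; this is only a sketch (the function name `bs_call` and the sample parameters are mine, not from the question):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF, Phi(x), via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, sigma, r, tau):
    """Black-Scholes price of a European call, with tau = T - t."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

# Example: at-the-money call, one year to maturity
price = bs_call(S=100.0, K=100.0, sigma=0.2, r=0.05, tau=1.0)
```

One way to probe the variance question empirically would be to simulate paths of $S_t$ under the assumed dynamics and revalue `bs_call` along each path, giving a Monte Carlo estimate of the distribution of the option's returns at each horizon.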
I have been doing quite a lot of reading recently about the theory of atoms in molecules and think that I have semi-satisfactory answers to your questions, so I will share them here; perhaps others will add complementary answers, as there actually seems to be quite an extensive literature on this subject, though there is still a lot of debate over these topics.

First, a quick note on how the charge density, $\rho$, can be used in the theory of atoms in molecules (AIM) to determine chemically intuitive properties. This is usually done by the definition of an index. For instance, the first index one might think to define is the electron number index, $N(\Omega)$, where $\Omega$ is the space defined by the critical points surrounding a single nucleus. Essentially AIM claims that $\Omega$ and the very idea of an atom are interchangeable, so $N(\Omega)$ is conceptually identical to answering how many electrons are associated with a particular atom. Hence, the AIM definition of the charge on an atom follows as,$$Q_{\Omega_i}=Z_{\Omega_i}-N(\Omega_i)$$where $Z_{\Omega_i}$ is the charge of the nucleus of atom $i$ in the basin $\Omega_i$. Not surprisingly, $N(\Omega_i)$ is defined as[1],$$N(\Omega_i)=\int_{\Omega_i}\rho(\textbf{r})d\tau$$which is clearly just the charge density integrated over a particular basin. This is interchangeable with a number density of electrons by division by the electric charge $e$. These indices are always defined in terms of the charge density. The reason these are called indices, rather than something more exact, is that we are really computing a quantity which dimensionally satisfies the property we are interested in, and then interpreting it as the thing itself. The ambiguity comes from the idea of a basin, $\Omega$, which is mathematically well-defined but is not guaranteed to be physically exact for any reason.

How dependent are computed charges using the quantum theory of atoms in molecules on the used level of theory?

Ref.
[2] is a review of the AIM theory with a specific emphasis on electron localization functions (ELFs) to describe aromaticity. More on these ELFs in a moment. They present the calculation of the index $N(\Omega)$ for several diatomic molecules using DFT, HF, and CISD calculations. The total charge in each of the basins stays more or less the same for all of these methods. The value of $N(\Omega)$ generally only changes in the first or second decimal place for the calculations they present (so tenths to hundredths of an electron). The trend when Coulomb correlation is included is that $N(\Omega)$ decreases for what would traditionally be considered the more electronegative atom. That is to say that Coulomb correlation makes the bonds more covalent and less ionic. For mostly ionic systems such as $\ce{LiF}$, the addition of Coulomb correlation has a small effect because the electrons are already mostly localized in the basins [2]. All of this is to say that the actual charges computed from $N(\Omega)$ seem to be relatively insensitive to Coulomb correlation. This makes sense given that the primary features of the spatial distribution of $\rho$ will be dictated by the requirement of antisymmetrization of the wavefunction, which is, of course, included in a HF description. That covers the question of charges based on $N(\Omega)$, but this does not actually give us a picture of how electrons are shared between atoms. In other words, how localized and how delocalized are the electrons? In order to answer this question, ref. [3] provides definitions of both localization and delocalization indices, which are unique because the localization index describes localization of electrons to a single basin, while the delocalization index describes the sharing of electrons with either a specific basin or with all other basins in a molecule.
While $N(\Omega)$ is not greatly affected by Coulomb correlation, the localization index, $\lambda(A)$, and the delocalization index, $\delta(A,B)$, are. For instance, $\delta(A,B)$, which is a measure of the sharing of electrons between the atoms $A$ and $B$, changes by almost $1.0$ when going from the HF description of $\ce{N2}$ to the CISD description of $\ce{N2}$. In general, for homodiatomics, Coulomb correlation has the effect of increasing the density in individual basins. And, consistent with the statement above about charge, the localization index for polar diatomics tends to increase the charge on the less electronegative element when a correlated wavefunction is used. I would highly recommend reading ref. [3] as a place to gain some intuition about the behavior of these different indices.

Do calculated charges converge to a specific value with higher level of theory?

I have not been able to find any explicit discussion of this, but of course any method which approaches the exact wavefunction, and hence the exact density, must have the same limiting value for any index defined in terms of the density. This is really the power of the AIM model. It is completely quantum mechanical in the sense that the exact wavefunction will yield the exact basins. Whether or not these basins mean what AIM wants them to mean, however, is what people debate. Another related point is how the various indices are affected by the particular basis set used. I have seen a couple of papers which mention this and note that the results really do not depend much on the particular basis set chosen [1]. The reason speculated for this behavior is that the partitioning used in AIM is in real space rather than in the Hilbert space to which the basis set belongs. Unsurprisingly, however, the indices are sensitive to the basis set when the density itself is very sensitive to the basis set. Open-shell molecules seem to be more complicated on this front [2].
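To make the basin bookkeeping concrete, here is a toy one-dimensional sketch of the $N(\Omega)$ and $Q_\Omega$ definitions. Everything in it is invented for illustration (the exponential model densities, the positions, and the "nuclear charges" are not from the cited papers): the boundary between the two basins is taken at the minimum of the total density, mimicking the zero-flux partitioning in one dimension.

```python
import numpy as np

# Toy 1-D "diatomic": two normalized Slater-like model densities (illustrative only)
Z1, Z2 = 3.0, 9.0        # hypothetical nuclear charges
x1, x2 = 0.0, 3.0        # nuclear positions
a1, a2 = 1.0, 2.0        # decay constants

x = np.linspace(-10.0, 13.0, 20001)
dx = x[1] - x[0]

def model_density(x, x0, a, n_elec):
    # 0.5 * a * exp(-a|x - x0|) integrates to 1, so this carries n_elec electrons
    return n_elec * 0.5 * a * np.exp(-a * np.abs(x - x0))

rho = model_density(x, x1, a1, Z1) + model_density(x, x2, a2, Z2)

# Basin boundary: minimum of rho between the nuclei (1-D analogue of a zero-flux surface)
between = (x > x1) & (x < x2)
xb = x[between][np.argmin(rho[between])]

# Basin populations N(Omega) by grid integration, and AIM-style charges Q = Z - N(Omega)
N1 = rho[x <= xb].sum() * dx
N2 = rho[x > xb].sum() * dx
Q1, Q2 = Z1 - N1, Z2 - N2
```

The point of the sketch is only that the charges follow mechanically from $\rho$ and the partition; the debate in the literature is about whether that partition means what AIM says it means.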
Are charges from calculated electron densities comparable to charges from measured electron densities?

The primary purpose of ref. [1] is to compare the value of properties determined from AIM and from a density determined by x-ray diffraction of p-nitroaniline. This molecule was chosen, I believe, because it has a large dipole moment. One complication in using the experimental density is that it is not as simple as just measuring the density and then having the whole scalar field in front of you. Rather, from what I understand, you measure the diffraction data and then use certain theoretical models to translate this diffraction data into a charge density. Thus, there is ambiguity in what charge density you actually measure. Ref. [1] uses the UMM and KRMM multipole formalisms, which are discussed in this Wikipedia article. The point of bringing this up is that the answer for the charges actually depends on which method you use to construct the experimental density. Ref. [1] indicates that the KRMM multipole method, analyzed with AIM, agrees much more closely with an AIM analysis of the charges from a theoretical density found from periodic-HF and periodic-DFT calculations (this is all solid phase). The p-HF and, more so, the p-DFT charges agree very well with experimental charges calculated from KRMM using the AIM method. See Table 2 of [1]. The dipole moments also agree very well, but only after various refinements are made both to the partitioning of the experimental density and to the use of the KRMM method. See Table 4 of ref. [1]. One point mentioned in this book [4] (the relevant chapter is available on Google Books, ch. 3) is that the location of bond critical points, the saddle points of the density between two atoms, can be very different for theoretical and experimental densities. Thus, properties derived from the basins can, in principle, be very different for the two densities.
It seems, however, that despite this problem of locating the bond critical point accurately, properties derived from the theoretical and experimental basins are not very different. This is because in the region of the saddle point the density is usually quite flat, so shifting the bond critical point does not have much effect on how much density is in each basin.

Are there more reliable methods to obtain partial charges?

Only briefly, I will point out that various population analyses are always possible for determining charges on atoms. The answer to this really depends on what you mean by reliable. In a sense, AIM is the only reliable method of determining charges on atoms in molecules, because it is the only method I am aware of which bothers to define what an atom in a molecule actually is. Additionally, it is reliable in the sense that it exists in a completely self-contained mathematical framework. Population analyses, however, can be very dependent on the particular basis set used, and are also not invariant to unitary transformations of the orbitals. In this sense they are very unreliable. Mulliken charges are a common form of finding charges from a population analysis, but these are known to be very basis set dependent. A better scheme, I think, is the CHELPG scheme, which fits the molecular electrostatic potential of a system and hence conserves the dipole moment when it assigns charges to atoms. This is a very nice property for a population analysis. Nonetheless, it will be sensitive to method and basis set depending on how much the electrostatic potential changes upon changing either of these. I think that generally the AIM charges and charges from population analyses will not be too different, but one notable exception is given in ref. [4] on page 162, where it is noted that the AIM charge on boron in $\ce{(H3BNH3)2}$ is positive, $q(\Omega_{\ce{B}})=2.15$, while the Mulliken population analysis gives a negative charge of $-0.26$.
So that's a very stark difference. I guess I would trust AIM, but I don't really know. As a concluding point, everything I have discussed above calculates properties from indices based on $\rho$, but there is no reason to limit the definition of indices to the density. Rather, some authors have defined indices, for physically motivated reasons, based on the gradient of the density, $\nabla\rho$, and the Laplacian of the density, $\nabla^2\rho$. An interesting application of one of these, based on the gradient, is for some alkanediols where there is no bond critical point between the two $\ce{O-H}$ groups despite the obvious presence of an internal hydrogen bond [5]. Ref. [5] is a really cool paper. Definitely give it a read.

[1] Volkov, A., Gatti, C., Abramov, Y., & Coppens, P. (2000). Evaluation of net atomic charges and atomic and molecular electrostatic moments through topological analysis of the experimental charge density. Acta Crystallographica Section A: Foundations of Crystallography, 56(3), 252-258.

[2] Poater, J., Duran, M., Sola, M., & Silvi, B. (2005). Theoretical evaluation of electron delocalization in aromatic molecules by means of atoms in molecules (AIM) and electron localization function (ELF) topological approaches. Chemical Reviews, 105(10), 3911-3947.

[3] Fradera, X., Austen, M. A., & Bader, R. F. (1999). The Lewis model and beyond. The Journal of Physical Chemistry A, 103(2), 304-314.

[4] Popelier, P. L. A., Aicken, F. M., & O'Brien, S. E. (2000). Atoms in molecules. Chemical Modelling: Applications and Theory, 1, 143-198.

[5] Lane, J. R., Contreras-García, J., Piquemal, J. P., Miller, B. J., & Kjaergaard, H. G. (2013). Are bond critical points really critical for hydrogen bonding? Journal of Chemical Theory and Computation, 9(8), 3263-3266.
I am trying to understand the definition of a point process while reading its Wikipedia article:

Let $S$ be a locally compact second countable Hausdorff space equipped with its Borel σ-algebra $B(S)$. Write $\mathfrak{N}$ for the set of locally finite counting measures on $S$ and $\mathcal{N}$ for the smallest σ-algebra on $\mathfrak{N}$ that renders all the point counts $$ \Phi_B : \mathfrak{N} \to \mathbb{Z}_{+}, \quad \varrho \mapsto \varrho(B)$$ for relatively compact sets $B$ in $B(S)$ measurable. A point process on $S$ is a measurable map $ \xi: \Omega \to \mathfrak{N} $ from a probability space $(\Omega, \mathcal F, P)$ to the measurable space $(\mathfrak{N},\mathcal{N})$.

My questions are:

Is the counting measure the one that gives the cardinality of a measurable subset, as defined in its Wikipedia article? If yes, isn't there only one counting measure on a measurable space, and why does "write $\mathfrak{N}$ for the set of locally finite counting measures on $S$" in the definition of a point process imply that there is more than one counting measure on $S$?

It has been noted[citation needed] that the term point process is not a very good one if $S$ is not a subset of the real line, as it might suggest that $\xi$ is a stochastic process.

Is a point process a stochastic process? If not, when can it be? How are the two related? Thanks and regards!
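A concrete special case may help with the first question: a homogeneous Poisson process on $S=[0,1]^2$. Each realization is a different locally finite counting measure (the one counting how many of that realization's random points fall in a set), which is why $\mathfrak{N}$ contains many counting measures rather than one. A small sketch, with invented helper names (`poisson_point_process`, `count_in`):

```python
import math
import random

def poisson_point_process(rate, seed=None):
    """One realization: a random finite set of points in the unit square."""
    rng = random.Random(seed)
    # Draw N ~ Poisson(rate) by inverting the CDF
    n, p, u = 0, math.exp(-rate), rng.random()
    c = p
    while u > c:
        n += 1
        p *= rate / n
        c += p
    return [(rng.random(), rng.random()) for _ in range(n)]

def count_in(points, box):
    """The point count Phi_B(rho) = rho(B) for a rectangle B = (x0, x1, y0, y1)."""
    x0, x1, y0, y1 = box
    return sum(1 for (px, py) in points if x0 <= px < x1 and y0 <= py < y1)

pts = poisson_point_process(rate=50.0, seed=1)
left = count_in(pts, (0.0, 0.5, 0.0, 1.0))
right = count_in(pts, (0.5, 1.0, 0.0, 1.0))
```

The map from the underlying randomness (here, the seeded generator) to the realized counting measure plays the role of the measurable map $\xi$ in the definition; $\Phi_B$ then just evaluates each realized measure on a set, and additivity over disjoint sets holds realization by realization.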
Let $\sigma(n)$ be the sum-of-divisors function, with the divisors raised to the power $1$. If the Riemann Hypothesis is false, Robin proved there are infinitely many counterexamples to the inequality $$\sigma(n)<e^\gamma n \log \log n.$$ There are 27 small counterexamples, but the conjecture is that it holds for every $n>5040$. Akbary and Friggstad showed the least counterexample to it must be a superabundant number, i.e. a number $a$ such that $\frac{\sigma(a)}{a}>\frac{\sigma(b)}{b}$ for all $b<a$. Now, it is a virtual certainty that if the inequality fails (for some $n>5040$), the maximum of the ratio $\frac{\sigma(n)}{n \log \log n}$ will be reached by a colossally abundant number, namely a number $c$ such that $\frac{\sigma(c)}{c^{1+\epsilon}}>\frac{\sigma(d)}{d^{1+\epsilon}}$ for all $d<c$ and for some $\epsilon>0$. Since it could lead me to something on the subject, what I'm asking is: if the inequality fails, will only a finite number of colossally abundant numbers satisfy Robin's inequality?

For the benefit of those who may not be familiar with all this: In 1915 Ramanujan proved that if the Riemann Hypothesis is true, then for all sufficiently large $n$ we have an inequality on $\frac{\sigma(n)}{n}$, where $\sigma(n)$ is the sum of the divisors of the positive integer $n$. The inequality was $$\sigma(n)<e^\gamma n \ln\ln n$$ In 1984 Robin elaborated this to show that if there is a single exception to this for $n>5040$ (the largest currently known exception and a "colossally abundant number" - hereafter a "CA"), then the Riemann Hypothesis is false. Because of the importance of the RH, this attracted a good deal of attention. But 30 years later nothing seems to have come of it. Obviously the most plausible candidates to break the inequality are numbers with lots of divisors. I believe, though I am weak on the history, that the concept, if not the name, of CA came from Ramanujan during his 1915 work.
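The inequality itself is easy to probe numerically. A small sketch (the search bound $10^4$ and the helper names are my own arbitrary choices) confirms that $n=5040$ violates it while the next few thousand integers satisfy it:

```python
from math import exp, log

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def sigma(n):
    """Sum of divisors of n, enumerating divisor pairs up to sqrt(n)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def robin_holds(n):
    """Robin's inequality sigma(n) < e^gamma * n * log(log(n)), for n >= 3."""
    return sigma(n) < exp(EULER_GAMMA) * n * log(log(n))

fails_at_5040 = not robin_holds(5040)  # 5040 is the largest known exception
violations_above = [n for n in range(5041, 10001) if not robin_holds(n)]
```

Of course, since any counterexample above 5040 would disprove RH, no brute-force range like this is expected to find one; exhaustive search only illustrates why attention shifts to superabundant and colossally abundant candidates.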
To give a little perspective: few people are interested in CA per se. But vast numbers of people are interested in RH, even if only a tiny number do serious work on it (because of the risk to one's reputation). So the immediate interest of the inequality was that it provided another way, superficially at least totally different, to disprove the RH by computation. People had got fed up with results that the first zillion zeros were on the line, particularly when analysts quoted Littlewood's "Miscellany" on the Skewes' number (which is now a somewhat less compelling point :) ). So this was something else to try. However, after 30 years nothing has so far come of that. In the meantime people have been working on CA as objects of interest in their own right. The question is whether if the RH is false (so that the inequality fails - Robin's result was an iff type result), then only a finite number of CA will satisfy the Robin inequality. [Added later - the precise question having been clarified] If I had realised that would be the question, I would never have started to answer it! I had earlier understood it to be a quite different question. But there are a few points to be made. I have never read Robin's paper - my interest is in RH, and I do not regard Robin's inequality as a useful way of tackling the RH (a judgment which of course is of zero interest to anyone else). So I am at a serious disadvantage - in not having read the paper and to compound that, I cannot immediately lay my hands on it. It is fairly easy to show that if $a,b$ are coprime counterexamples to the inequality, then so is $ab$ provided $a,b$ are sufficiently big (which they would be). It is also fairly clear that unless something weird happens at huge values, counterexamples are likely to be CA. So it seems a fairly safe guess that if RH is false then there will be infinitely many CA not satisfying the Robin inequality. 
But unfortunately the question asks for something much stronger than that, namely will all but finitely many CA fail to satisfy it? Short answer: good question; I have no idea and should delete this entire answer. But pending a little digging early in the coming week I will leave it here until a better answer comes. In my defence, I would only say that I have only been using this site for less than 3 weeks. I have answered lots of daft questions, and had fun competing putting up answers fast. I failed to adjust adequately when this one came along. But it does illustrate the wisdom of the concept of clarifying the question with comments before writing Answers. I had started to do that, but got impatient when I could not immediately grasp the clarifications. That was entirely my fault. I apologise unreservedly.
I've always wondered why one uses squared or absolute returns to determine if volatility modeling is required for a return series. We understand that there are various tests for autocorrelation and conditional heteroskedasticity. However, I don't quite grasp the concept behind it. Can anyone kindly explain the statistical intuition behind using squared/absolute returns to determine if a volatility representation is needed? Thank you. To simplify, consider the errors rather than the returns. The variance is effectively the average of the squared errors, while the absolute deviation is the average of the absolute errors. So plotting the squared errors or absolute errors over time can give an indication of whether the variance or absolute deviation is constant over time. Since variance is more commonly the practical focus, one approach is to simply regress the squared errors on $p$ of their lags. This is the ARCH(p) model. GARCH(p,q) introduces an additional term, which has the effect of reducing the need for $p$ to be large. Simple... because you are interested in deviations from a metric, and not whether it deviates above or below. The very definition of volatility is a "measure of deviation". Squaring returns or using the absolute values just eases the calculation of a deviation measure. Otherwise volatility would have to be calculated in other ways, as positive and negative returns would partly cancel and introduce side effects into the volatility computation. Also, since we can often assume the average of short-term returns in the long run to be zero, the historical volatility is equal to $\hat{\sigma}_T^2=\frac{\sum_{i=1}^T{r_i^2}}{T-1}$. So to study the volatility process we therefore study the squared return process, which is a good proxy.
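The intuition can be made concrete with a small simulation: for a conditionally heteroskedastic series, the returns themselves are (nearly) serially uncorrelated, but the squared returns are not. This is a minimal sketch with an assumed ARCH(1) data-generating process, not any particular estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an ARCH(1) process: sigma_t^2 = 0.1 + 0.5 * r_{t-1}^2
n = 5000
r = np.zeros(n)
for t in range(1, n):
    sigma2 = 0.1 + 0.5 * r[t - 1] ** 2
    r[t] = np.sqrt(sigma2) * rng.standard_normal()

def lag1_corr(x):
    """Sample lag-1 autocorrelation."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_corr(r))       # near 0: raw returns look like white noise
print(lag1_corr(r ** 2))  # clearly positive: volatility clustering
```

The raw returns pass a white-noise eyeball test, while the squared returns are strongly autocorrelated; that gap is exactly what the ARCH-type diagnostics formalize.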
2019-09-12 / LHCb Collaboration: LHCB-FIGURE-2019-008 (pending). Geneva : CERN.

2019-09-10 / Smog2 Velo tracking efficiency / LHCb Collaboration: The LHCb fixed-target programme is facing a major upgrade (Smog2) for Run 3 data taking, consisting in the installation of a confinement cell for the gas covering $z \in [-500, -300] \, mm$. Such a displacement of the $p$-gas collisions with respect to the nominal $pp$ interaction point requires a detailed study of the reconstruction performance. [...] LHCB-FIGURE-2019-007. Geneva : CERN.

2019-09-09 / Background rejection study in the search for $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ / LHCb Collaboration: A background rejection study has been made using LHCb simulation in order to investigate the capacity of the experiment to distinguish between $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ and its main background $\Lambda^0 \rightarrow p^+ \pi^-$. Two variables were explored, and their rejection power was estimated by applying selection criteria. [...] LHCB-FIGURE-2019-006. Geneva : CERN.

2019-09-06 / Tracking efficiencies prior to alignment corrections from 1st data challenges / LHCb Collaboration: These plots show the first results on tracking efficiencies, before application of alignment corrections, as obtained from the 1st data-challenge tests. In this challenge, several tracking detectors (the VELO, SciFi and Muon) have been misaligned and the effects on the tracking efficiencies are studied. [...] LHCB-FIGURE-2019-005. Geneva : CERN, 2019.

2019-09-02 / First study of the VELO pixel 2-half alignment / LHCb Collaboration: A first look into the 2-half alignment for the Run 3 Vertex Locator (VELO) has been made. The alignment procedure has been run on a minimum-bias Monte Carlo Run 3 sample in order to investigate its functionality. [...] LHCB-FIGURE-2019-003. Geneva : CERN.

2019-07-09 / Variation of VELO Alignment Constants with Temperature / LHCb Collaboration: A study of the variation of the alignment constants has been made in order to investigate the variations of the LHCb Vertex Locator (VELO) position under different set temperatures between $-30^\circ$ and $-20^\circ$. Alignment for both the translations and rotations of the two halves and of the modules, with certain constraints on the module positions, was performed for each run corresponding to a different temperature. [...] LHCB-FIGURE-2019-001. Geneva : CERN.
First, I propose to rewrite your original system (by multiplying numerator and denominator by $x\,y$) as\begin{align} x' &= \frac{d_2 r_1\,x + (r_1 a_{22} - r_2 a_{12})\,x\,y}{a_{12}a_{21}\, x\, y - (d_1 + a_{11}\,x)(d_2 + a_{22}\,y)}, \tag{1a}\\ y' &= \frac{d_1 r_2\,y + (r_2 a_{11} - r_1 a_{21})\,x\,y}{a_{12}a_{21}\,x\,y - (d_1 + a_{11}\,x)(d_2 + a_{22}\,y)}. \tag{1b}\end{align}If I understand you correctly, you want to see whether the following holds: CLAIM: Given an arbitrary pair of initial conditions $x(0) = x_0$, $y(0) = y_0$ such that $x_0 > 0$ and $y_0 < 0$, we have that $x(t) > 0$ and $y(t) < 0$ for all time $t$. System $(1)$ is a nonlinear dynamical system, so explicitly solving it will be difficult. However, we can apply some dynamical systems techniques to see whether the claim is true. As a first step, we determine the equilibria of system $(1)$, i.e. the points $(x,y)$ where $x'=y'=0$. It turns out (I invite you to check this) that the origin $(x,y)=(0,0)$ is the only equilibrium of the system. Next, we determine the stability of this equilibrium. We do this by taking the Jacobian of the right-hand side of system $(1)$ and evaluating it at $(x,y)=(0,0)$. This yields the matrix\begin{equation} J((0,0)) = \begin{pmatrix} r_1/d_1 & 0 \\ 0 & r_2 / d_2\end{pmatrix}.\tag{2}\end{equation}The eigenvalues and eigenvectors of this matrix can readily be read off: we have $\lambda_1 = r_1/d_1 > 0$ with eigenvector $(1,0)^T$, and $\lambda_2 = r_2/d_2 < 0$ with eigenvector $(0,1)^T$. This means (by the Grobman-Hartman theorem) that we can approximate the phase plane of system $(1)$ around the origin by the linearised system $(x',y')^T = J((0,0))(x,y)^T$; from the eigenvalues, we see that the origin is a saddle. Moreover, the stable manifold of the origin is given by the line $\{ x=0 \}$, and the unstable manifold of the origin is given by the line $\{ y=0 \}$.
These manifolds act as separatrices in the phase plane: in other words, because they consist of orbits, other orbits cannot cross these manifolds and are therefore `caught' between them. In particular, any orbit which starts in the lower right quadrant will stay there. So, the claim seems to be true. However, this linear approximation of the system only holds locally, that is, sufficiently close to the origin. Because the origin is a saddle, every orbit (except the ones on the stable manifold) will flow away from the origin, where the local linear approximation does not hold anymore. Of course, the stable and unstable manifolds still act as separatrices, but these are only locally straight. If we zoom out a little bit, can we still determine what will happen? Generally, one would now do a so-called manifold expansion, but in the case of system $(1)$, it turns out to be quite easy, because: OBSERVATION: The line $\{ x=0 \}$ and the line $\{ y=0\}$ are both invariant under the flow of system $(1)$. So, as you zoom out, you see that the unstable manifold of the origin is exactly equal to the line $\{ y = 0 \}$; also, the stable manifold of the origin is exactly equal to the line $\{x=0\}$. To reiterate, both manifolds consist of orbits, and orbits cannot cross each other, so orbits that are caught between these two manifolds in the lower right quadrant will stay in the lower right quadrant for all time -- which is exactly the content of the claim. Obviously, the same statement holds for every other quadrant, so this system indeed preserves some kind of 'monotonicity'. Question: I'm curious, where does this system come from?
It has the form\begin{equation} \vec{x}' = M R \vec{x},\end{equation}with $R = \text{diag}(r_1,r_2)$ and $M = \frac{1}{\text{det} B} B^T$, where$B = S^{-1} \left(D + A\right) S$, with\begin{equation}S = \begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix} \quad\text{and}\quad A = \begin{pmatrix} a_{11} x & a_{12} x \\ a_{21} y & a_{22} y \end{pmatrix},\end{equation}which seems suggestive.
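The OBSERVATION about the invariance of the axes is easy to verify numerically from system $(1)$: setting $x=0$ kills the numerator of $(1a)$, and $y=0$ kills the numerator of $(1b)$. A quick sketch with assumed parameter values (the cancellation itself is parameter-independent):

```python
# Assumed illustrative parameters: r1 > 0, r2 < 0 as in the saddle analysis.
d1, d2, r1, r2 = 1.0, 1.0, 1.0, -1.0
a11, a12, a21, a22 = 1.0, 0.5, 0.5, 1.0

def rhs(x, y):
    """Right-hand side of system (1a)-(1b)."""
    den = a12 * a21 * x * y - (d1 + a11 * x) * (d2 + a22 * y)
    xp = (d2 * r1 * x + (r1 * a22 - r2 * a12) * x * y) / den
    yp = (d1 * r2 * y + (r2 * a11 - r1 * a21) * x * y) / den
    return xp, yp

# On the line {x = 0}, x' vanishes identically, so the line is invariant;
# likewise y' = 0 on {y = 0}.  (Points chosen to keep the denominator nonzero.)
for y in (-3.0, -1.5, 2.0):
    assert rhs(0.0, y)[0] == 0.0
for x in (0.5, 1.0, 4.0):
    assert rhs(x, 0.0)[1] == 0.0
print("axes are invariant")
```

Since both numerators carry an overall factor of the corresponding coordinate, the invariance is exact, not a numerical artifact.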
Given a matrix $F \in \mathbb{C}^{m \times n}$ with $m>n$, another matrix $A$ (non-symmetric) of size $n \times n$, and the spectral norm $$\|A-F^*\operatorname{diag}(b)F\|_2 = \sigma_{\max}(A-F^*\operatorname{diag}(b)F) = \sqrt{\lambda_{\max} \left( (A-F^*\operatorname{diag}(b)F)^* (A-F^*\operatorname{diag}(b)F) \right)},$$ how do I compute analytically $\nabla_b \|A-F^*\operatorname{diag}(b)F\|_2$, where $b \in \mathbb{C}^{m \times 1}$ is some vector and $*$ denotes the conjugate transpose? I need the gradient because I want to find $b$ by minimizing $\|A-F^*\operatorname{diag}(b)F\|_2$, as I would like to find the optimum using gradient descent. Is it possible?
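One standard route (a sketch, not a full answer): when the largest singular value of $M(b) = A - F^*\operatorname{diag}(b)F$ is simple, $\sigma_{\max}$ is differentiable and $d\sigma = u^\top (dM)\, v$ for the leading singular pair $(u,v)$, which gives $\partial\sigma/\partial b_i = -(Fu)_i (Fv)_i$ in the real case. The complex case needs Wirtinger calculus, so the check below is deliberately real-valued, with assumed random data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 4
F = rng.standard_normal((m, n))
A = rng.standard_normal((n, n))
b = rng.standard_normal(m)

def s_max(b):
    """Spectral norm of M(b) = A - F^T diag(b) F."""
    return np.linalg.norm(A - F.T @ np.diag(b) @ F, 2)

# Analytic (sub)gradient via the leading singular pair, valid when
# sigma_max is simple: d sigma = u^T dM v, with dM = -F^T diag(db) F,
# so d sigma / d b_i = -(F u)_i (F v)_i.
U, S, Vt = np.linalg.svd(A - F.T @ np.diag(b) @ F)
u, v = U[:, 0], Vt[0, :]
grad = -(F @ u) * (F @ v)

# Central finite differences as a sanity check
eps = 1e-6
fd = np.array([(s_max(b + eps * e) - s_max(b - eps * e)) / (2 * eps)
               for e in np.eye(m)])
print(np.max(np.abs(grad - fd)))  # should be tiny
```

When $\sigma_{\max}$ is repeated the norm is only subdifferentiable, which is one reason subgradient or smoothed (e.g. squared-norm) formulations are often preferred for descent.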
Shortcomings of the MAPE The MAPE, as a percentage, only makes sense for values where divisions and ratios make sense. It doesn't make sense to calculate percentages of temperatures, for instance, so you shouldn't use the MAPE to calculate the accuracy of a temperature forecast. If just a single actual is zero, $A_t=0$, then you divide by zero in calculating the MAPE, which is undefined. It turns out that some forecasting software nevertheless reports a MAPE for such series, simply by dropping periods with zero actuals (Hoover, 2006). Needless to say, this is not a good idea, as it implies that we don't care at all about what we forecasted if the actual was zero - but a forecast of $F_t=100$ and one of $F_t=1000$ may have very different implications. So check what your software does. If only a few zeros occur, you can use a weighted MAPE (Kolassa & Schütz, 2007), which nevertheless has problems of its own. This also applies to the symmetric MAPE (Goodwin & Lawton, 1999). MAPEs greater than 100% can occur. If you prefer to work with accuracy, which some people define as 100%-MAPE, then this may lead to negative accuracy, which people may have a hard time understanding. (No, truncating accuracy at zero is not a good idea.) If we have strictly positive data we wish to forecast (and per above, the MAPE doesn't make sense otherwise), then we won't ever forecast below zero. The MAPE unfortunately treats overforecasts differently than underforecasts: an underforecast will never contribute more than 100% (e.g., if $F_t=0$ and $A_t=1$), but the contribution of an overforecast is unbounded (e.g., if $F_t=5$ and $A_t=1$). This means that the MAPE may be lower for biased than for unbiased forecasts. Minimizing it may lead to forecasts that are biased low. Especially the last bullet point merits a little more thought. For this, we need to take a step back. To start with, note that we don't know the future outcome perfectly, nor will we ever. 
So the future outcome follows a probability distribution. Our so-called point forecast $F_t$ is our attempt to summarize what we know about the future distribution (i.e., the predictive distribution) at time $t$ using a single number. The MAPE then is a quality measure of a whole sequence of such single-number-summaries of future distributions at times $t=1, \dots, n$. The problem here is that people rarely explicitly say what a good one-number-summary of a future distribution is. When you talk to forecast consumers, they will usually want $F_t$ to be correct "on average". That is, they want $F_t$ to be the expectation or the mean of the future distribution, rather than, say, its median. Here's the problem: minimizing the MAPE will typically not incentivize us to output this expectation, but a quite different one-number-summary (McKenzie, 2011, Kolassa, 2019). This happens for two different reasons. Asymmetric future distributions. Suppose our true future distribution follows a stationary $(\mu=1,\sigma^2=1)$ lognormal distribution. The following picture shows a simulated time series, as well as the corresponding density. The horizontal lines give the optimal point forecasts, where "optimality" is defined as minimizing the expected error for various error measures. We see that the asymmetry of the future distribution, together with the fact that the MAPE differentially penalizes over- and underforecasts, implies that minimizing the MAPE will lead to heavily biased forecasts. (Here is the calculation of optimal point forecasts in the gamma case.) Symmetric distribution with a high coefficient of variation. Suppose that $A_t$ comes from rolling a standard six-sided die at each time point $t$. The picture below again shows a simulated sample path: In this case: The dashed line at $F_t=3.5$ minimizes the expected MSE. It is the expectation of the time series. Any forecast $3\leq F_t\leq 4$ (not shown in the graph) will minimize the expected MAE. 
All values in this interval are medians of the time series. The dash-dotted line at $F_t=2$ minimizes the expected MAPE. We again see how minimizing the MAPE can lead to a biased forecast, because of the differential penalty it applies to over- and underforecasts. In this case, the problem does not come from an asymmetric distribution, but from the high coefficient of variation of our data-generating process. This is actually a simple illustration you can use to teach people about the shortcomings of the MAPE - just hand your attendees a few dice and have them roll. See Kolassa & Martin (2011) for more information.

R code

Lognormal example:

mm <- 1
ss.sq <- 1
SAPMediumGray <- "#999999"; SAPGold <- "#F0AB00"
set.seed(2013)
actuals <- rlnorm(100,meanlog=mm,sdlog=sqrt(ss.sq))
opar <- par(mar=c(3,2,0,0)+.1)
plot(actuals,type="o",pch=21,cex=0.8,bg="black",xlab="",ylab="",xlim=c(0,150))
abline(v=101,col=SAPMediumGray)
xx <- seq(0,max(actuals),by=.1)
polygon(c(101+150*dlnorm(xx,meanlog=mm,sdlog=sqrt(ss.sq)),
    rep(101,length(xx))),c(xx,rev(xx)),col="lightgray",border=NA)
(min.Ese <- exp(mm+ss.sq/2))
lines(c(101,150),rep(min.Ese,2),col=SAPGold,lwd=3,lty=2)
(min.Eae <- exp(mm))
lines(c(101,150),rep(min.Eae,2),col=SAPGold,lwd=3,lty=3)
(min.Eape <- exp(mm-ss.sq))
lines(c(101,150),rep(min.Eape,2),col=SAPGold,lwd=3,lty=4)
par(opar)

Dice rolling example:

SAPMediumGray <- "#999999"; SAPGold <- "#F0AB00"
set.seed(2013)
actuals <- sample(x=1:6,size=100,replace=TRUE)
opar <- par(mar=c(3,2,0,0)+.1)
plot(actuals,type="o",pch=21,cex=0.8,bg="black",xlab="",ylab="",xlim=c(0,150))
abline(v=101,col=SAPMediumGray)
min.Ese <- 3.5
lines(c(101,150),rep(min.Ese,2),col=SAPGold,lwd=3,lty=2)
min.Eape <- 2
lines(c(101,150),rep(min.Eape,2),col=SAPGold,lwd=3,lty=4)
par(opar)

References

Gneiting, T. Making and Evaluating Point Forecasts. Journal of the American Statistical Association, 2011, 106, 746-762

Goodwin, P. & Lawton, R. On the asymmetry of the symmetric MAPE.
International Journal of Forecasting, 1999, 15, 405-408 Hoover, J. Measuring Forecast Accuracy: Omissions in Today's Forecasting Engines and Demand-Planning Software. Foresight: The International Journal of Applied Forecasting, 2006, 4, 32-35 Kolassa, S. Why the "best" point forecast depends on the error or accuracy measure (Invited commentary on the M4 forecasting competition). International Journal of Forecasting, 2019 Kolassa, S. & Martin, R. Percentage Errors Can Ruin Your Day (and Rolling the Dice Shows How). Foresight: The International Journal of Applied Forecasting, 2011, 23, 21-29 Kolassa, S. & Schütz, W. Advantages of the MAD/Mean ratio over the MAPE. Foresight: The International Journal of Applied Forecasting, 2007, 6, 40-43 McKenzie, J. Mean absolute percentage error and bias in economic forecasting. Economics Letters, 2011, 113, 259-262
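The dice example above translates directly into a few lines of code (Python here rather than R, purely for brevity); a brute-force grid search over candidate point forecasts recovers the optimal values quoted above:

```python
# Expected errors for a fair six-sided die, over a grid of candidate forecasts F
outcomes = [1, 2, 3, 4, 5, 6]

def e_mse(F):
    """Expected squared error of forecasting F."""
    return sum((F - a) ** 2 for a in outcomes) / 6

def e_mape(F):
    """Expected absolute percentage error of forecasting F."""
    return sum(abs(F - a) / a for a in outcomes) / 6

grid = [f / 10 for f in range(10, 61)]  # F in [1.0, 6.0] in steps of 0.1
best_mse = min(grid, key=e_mse)
best_mape = min(grid, key=e_mape)
print(best_mse)   # 3.5, the expectation of the die
print(best_mape)  # 2.0, biased low
```

Minimizing expected MSE points at the mean (3.5), while minimizing expected MAPE points at 2, illustrating the low bias induced by the MAPE's asymmetric penalty.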
If the integral kernel $k(x, y)$ of an operator $T : C^\infty_c(M) \to \mathcal{D}'(M)$ is symmetric ($M$ is a compact manifold), then the operator $T$ is symmetric. Is the converse true? That is, given a self-adjoint $T$ which has an integral kernel $k(x, y)$, is $k(x, y) = k(y, x)$? A reference would be really appreciated. The following standard argument works if $k$ is assumed bounded. I don't think that assumption should be necessary, but maybe this is at least a helpful start. If $T$ is symmetric then for every $f,g \in C^\infty(M)$ we have $$\int \int k(x,y) f(x) g(y)\,dx \,dy = \int \int k(x,y) g(x) f(y)\,dy\,dx$$ or changing variables and using Fubini's theorem (here we use the assumption that $k$ is bounded), $$\iint k(x,y) f(x) g(y) \,dx\,dy = \iint k(y,x) f(x) g(y)\,dx\,dy.$$ In other words, for every $F : M \times M \to \mathbb{R}$ which is of the form $F(x,y) = \sum_{i=1}^n f_i(x) g_i(y)$ where $f_i, g_i \in C^\infty(M)$, we have $$\iint (k(x,y) - k(y,x)) F(x,y) \,dx\,dy = 0.$$ Now using a monotone class argument, show that the same holds for all bounded measurable $F: M \times M \to \mathbb{R}$. Taking $F(x,y) = k(x,y) - k(y,x)$ you get $$\iint |k(x,y) - k(y,x)|^2 \,dx\,dy = 0.$$ (If $k$ is assumed continuous you can instead use Stone-Weierstrass in the last step.)
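A finite-dimensional analogue may help make the argument concrete: discretize $M$ to $n$ grid points, so $T$ becomes an $n \times n$ matrix $K$ and the double integrals become sums over test vectors. Symmetry of the bilinear form against all $f, g$ then forces $K = K^T$, mirroring the final step with $F = k(x,y) - k(y,x)$. A small numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
K = rng.standard_normal((n, n))
K = (K + K.T) / 2  # impose symmetry of the discrete "kernel"

# Discrete analogue of operator symmetry: <K f, g> = <f, K g> for all f, g
for _ in range(100):
    f, g = rng.standard_normal(n), rng.standard_normal(n)
    assert abs((K @ f) @ g - f @ (K @ g)) < 1e-10

# The final step of the argument, testing against F = k(x,y) - k(y,x):
D = K - K.T
print(np.sum(D * D))  # 0.0: the "kernel" is symmetric
```

The analytic content of the answer is of course the monotone class (or Stone-Weierstrass) step, which has no finite-dimensional counterpart; the sketch only illustrates the algebra.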
Edited: Just realised my first post was somewhat misleading and not precise. Thanks to the two commentators who pointed it out. I am working on an article and ended up wondering for which values of $\gamma>0$ the following inequality holds: $$\frac{\sum_iN_ic_{i,t}^\gamma}{\left(\sum_iN_ic_{i,t}\right)^\gamma}<\frac{\sum_iN_ic_{i,0}^\gamma}{\left(\sum_iN_ic_{i,0}\right)^\gamma}$$ for $N_i \in [0,1]$ and $\sum_i N_i=1$. This can be rewritten as: $$\frac{\mathbb{E}_i \left[ c_{i,t}^\gamma\right]}{\mathbb{E}_i \left[c_{i,t}\right]^\gamma}<\frac{\mathbb{E}_i \left[ c_{i,0}^\gamma\right]}{\mathbb{E}_i \left[c_{i,0}\right]^\gamma}$$ where $c_{i,t}\geq c_{i,0}$ for every $i$. Of course I already noticed that the expression holds with equality when $\gamma =1$. Somewhat inspired by Jensen's inequality, my intuition is that it should hold for $\gamma \in (0,1)$ but I haven't been able to prove it. Any suggestions? Note: Jensen's inequality states that $\mathbb{E}\left( \varphi(x) \right) \geq \varphi \left(\mathbb{E}x \right)$ if $\varphi(x)$ is a convex function.
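Not a proof either way, but it is cheap to probe the inequality numerically before hunting for one. In this small assumed instance (two indices with equal weights, each $c_{i,t} = c_{i,0} + 1$, $\gamma = 1/2$), the ratio actually moves in the opposite direction, while a purely multiplicative change $c_{i,t} = \lambda c_{i,0}$ leaves the ratio unchanged, so the answer seems to depend on how the $c_i$ grow:

```python
gamma = 0.5
N = [0.5, 0.5]        # weights, summing to 1
c0 = [1.0, 2.0]       # assumed initial values
ct = [2.0, 3.0]       # each component shifted up by 1, so c_t >= c_0

def ratio(c):
    """E[c^gamma] / E[c]^gamma under weights N."""
    num = sum(n * x ** gamma for n, x in zip(N, c))
    den = sum(n * x for n, x in zip(N, c)) ** gamma
    return num / den

print(ratio(c0))  # ~0.9856
print(ratio(ct))  # ~0.9949 -- larger, so "<" fails for this additive shift
```

By Jensen, both ratios are at most 1 for concave $x^\gamma$; an additive shift pushes the ratio toward 1, while scaling $c$ by $\lambda$ cancels ($\lambda^\gamma$ in numerator and denominator alike).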
$$\int_0^\pi x \ln(\sin (x))dx $$ I tried integrating this by parts but I end up getting an integral that doesn't converge, which is this: $$ \int_0^\pi \dfrac{x^2\cos (x)}{\sin(x)} \ dx$$ So can anyone help me on this one? By making the change of variable $$ u=\pi -x $$ you get that $$ I=\int_0^\pi x \ln(\sin x)\:dx=\int_0^\pi (\pi-u) \ln(\sin u)\:du=\pi\int_0^\pi \ln(\sin u)\:du-I $$ giving $$ I=\frac{\pi}2\int_0^\pi \ln(\sin u)\:du=\pi\int_0^{\pi/2} \ln(\sin u)\:du. $$ Then conclude with the classic evaluation of the latter integral: see many answers here.
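Since $\int_0^{\pi/2}\ln(\sin u)\,du = -\frac{\pi}{2}\ln 2$, the argument gives $I = -\frac{\pi^2}{2}\ln 2$, which is easy to confirm numerically. A midpoint-rule sketch (chosen because the midpoints avoid the integrable log singularities at the endpoints):

```python
import math

# Midpoint rule for I = integral of x * ln(sin x) over [0, pi]
N = 200_000
h = math.pi / N
total = sum((k + 0.5) * h * math.log(math.sin((k + 0.5) * h))
            for k in range(N)) * h

exact = -math.pi ** 2 / 2 * math.log(2)
print(total, exact)  # both approximately -3.4205
```

The agreement to several digits is a reassuring check that no divergence was swept under the rug by the symmetry trick.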
Before answering the question more or less directly, I'd like to point out that this is a good question that provides an object lesson and opens a foray into the topics of singular integral equations, analytic continuation and dispersion relations. Here are some references of these more advanced topics: Muskhelishvili, Singular Integral Equations; Courant & Hilbert, Methods of Mathematical Physics, Vol I, Ch 3; Dispersion Theory in High Energy Physics, Queen & Violini; Eden et.al., The Analytic S-matrix. There is also a condensed discussion of `invariant functions' in Schweber, An Intro to Relativistic QFT Ch13d. The quick answer is that, for $m^2 \in\mathbb{R}$, there's no "shortcut." One must choose a path around the singularities in the denominator. The appropriate choice is governed by the boundary conditions of the problem at hand. The $+i\epsilon$ "trick" (it's not a "trick") simply encodes the boundary conditions relevant for causal propagation of particles and antiparticles in field theory. We briefly study the analytic form of $G(x-y;m)$ to demonstrate some of these features. Note, first, that for real values of $p^2$, the singularity in the denominator of the integrand signals the presence of (a) branch point(s). In fact, [Huang, Quantum Field Theory: From Operators to Path Integrals, p29] the Feynman propagator for the scalar field (your equation) may be explicitly evaluated:\begin{align}G(x-y;m) &= \lim_{\epsilon \to 0} \frac{1}{(2 \pi)^4} \int d^4p \, \frac{e^{-ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon} \nonumber \\&= \left \{ \begin{matrix}-\frac{1}{4 \pi} \delta(s) + \frac{m}{8 \pi \sqrt{s}} H_1^{(1)}(m \sqrt{s}) & \textrm{ if }\, s \geq 0 \\ -\frac{i m}{ 4 \pi^2 \sqrt{-s}} K_1(m \sqrt{-s}) & \textrm{if }\, s < 0.\end{matrix} \right.\end{align}where $s=(x-y)^2$. The first-order Hankel function of the first kind $H^{(1)}_1$ has a logarithmic branch point at $x=0$; so does the modified Bessel function of the second kind, $K_1$. 
(Look at the small $x$ behavior of these functions to see this.) A branch point indicates that the Cauchy-Riemann conditions have broken down at $x=0$ (or $z=x+iy=0$). And the fact that these singularities are logarithmic is an indication that we have an endpoint singularity [e.g. Eden et al., Ch 2.1]. (To see this, consider $m=0$; then the denominator of the integrand, $p^{2}$, has a zero at the lower limit of integration in $dp^2$.) Coming back to the question of boundary conditions, there is a good discussion in Sakurai, Advanced Quantum Mechanics, Ch4.4 [NB: "East Coast" metric]. You can see from the above expression that for large values of $s>0$ we have an outgoing wave, from the asymptotic form of the Hankel function. Connecting it back to the original references I cited above, the $+i\epsilon$ form is a version of the Plemelj formula [Muskhelishvili]. And the expression for the propagator is a type of Cauchy integral [Muskhelishvili; Eden et al.]. And these notions lead quickly to the topics I mentioned above -- certainly a rich landscape for research.
Chapter 01: Number System Notes (Solutions) of Chapter 01: Number System, Text Book of Algebra and Trigonometry Class XI (Mathematics FSc Part 1 or HSSC-I), Punjab Text Book Board, Lahore. Contents & summary Rational numbers and irrational numbers Properties of real numbers Complex numbers Operations on complex numbers Complex numbers as ordered pairs of real numbers Properties of the fundamental operations on complex numbers A special subset of $\mathbb{C}$ The real line Real plane or coordinate plane Geometrical representation of complex numbers, the complex plane To find real and imaginary parts of (i) $(x+iy)^n$ (ii) $\left(\frac{x_1+iy_1}{x_2+iy_2}\right)^n, x_2+iy_2\neq 0$ The square root of a complex number has two distinct values. $\sqrt{-1}$ has two distinct values, $i$ and $-i$. This fact can be verified by taking the square of $i$ and $-i$, which gives the same answer, $-1$. It is different to say $i^2=-1$ and $\sqrt{-1}=i$. Solutions Short Questions The following short questions of this chapter were sent by Mr. Akhtar Abbas.
Hello, I've never ventured into chat before but cfr suggested that I ask in here about a better name for the quiz package that I am getting ready to submit to ctan (tex.stackexchange.com/questions/393309/…). Is something like latex2quiz too audacious? Also, is anyone able to answer my questions about submitting to ctan, in particular about the format of the zip file and putting a configuration file in $TEXMFLOCAL/scripts/mathquiz/mathquizrc Thanks. I'll email first but it sounds like a flat file with a TDS included is the right approach. (There are about 10 files for the package proper and the rest are for the documentation -- all of the images in the manual are auto-generated from "example" source files. The zip file is also auto-generated so there's no packaging overhead...) @Bubaya I think luatex has a command to force “cramped style”, which might solve the problem. Alternatively, you can lower the exponent a bit with f^{\raisebox{-1pt}{$\scriptstyle(m)$}} (modify the -1pt if need be). @Bubaya (gotta go now, no time for followups on this one …) @egreg @DavidCarlisle I already tried to avoid ascenders. Consider this MWE: \documentclass[10pt]{scrartcl}\usepackage{lmodern}\usepackage{amsfonts}\begin{document}\noindent If all indices are even, then all $\gamma_{i,i\pm1}=1$. In this case the $\partial$-elementary symmetric polynomials specialise to those from at $\gamma_{i,i\pm1}=1$, which we recognise as the ordinary elementary symmetric polynomials $\varepsilon^{(n)}_m$. The induction formula from indeed gives \end{document} @PauloCereda -- okay. poke away. (by the way, do you know anything about glossaries? i'm having trouble forcing a "glossary" that is really an index, and should have been entered that way, into the required series style.) @JosephWright I'd forgotten all about it but every couple of months it sends me an email saying I'm missing out.
Oddly enough Facebook and LinkedIn do the same, as did ResearchGate before I spam-filtered RG :-) @DavidCarlisle Regarding github.com/ho-tex/hyperref/issues/37, do you think that \textNFSSnoboundary would be okay as a name? I don't want to use the suggested \textPUnoboundary as there is a similar definition in pdfx/l8uenc.def. And textnoboundary isn't imho good either, as it is more or less only an internal definition and not meant for users. @UlrikeFischer I think it should be OK to use @, I just looked at puenc.def and for example \DeclareTextCompositeCommand{\b}{PU}{\@empty}{\textmacronbelow}% so @ needs to be safe @UlrikeFischer that said I'm not sure it needs to be an encoding-specific command, if it is only used as \let\noboundary\zzznoboundary when you know the PU encoding is going to be in force, it could just be \def\zzznoboundary{..} couldn't it? @DavidCarlisle But puarenc.def is actually only an extension of puenc.def, so it is quite possible to do \usepackage[unicode]{hyperref}\input{puarenc.def}. And while I used a lot of @ in the chess encodings, since I saw you do \input{tuenc.def} in an example I'm not sure if it was a good idea ... @JosephWright it seems to be the day for merge commits in pull requests. Does github's "squash and merge" make it all into a single commit anyway, so the multiple commits in the PR don't matter, or should I be doing the cherry-picking stuff (not that the git history is so important here) github.com/ho-tex/hyperref/pull/45 (@UlrikeFischer) @JosephWright I really think I should drop all the generation of README and ChangeLog in html and pdf versions it failed there as the xslt is version 1 and I've just upgraded to a version 3 engine, and it's dropped 1.0 compatibility :-)
$\textit{Square Root}$ Until now, the exponents (indices) have all been integers. In theory, an exponent (index) can be any number. We will confine ourselves to the case of exponents (indices) which are rational numbers (fractions). The symbol $\sqrt{x}$ means the square root of $x$: find the number that, multiplied by itself, gives the original number $x$. A square root is the opposite of squaring (power $2$). For this reason a square root is equivalent to an index of $\dfrac{1}{2}$. Suppose we are asked to find $\sqrt{x^2}$. This can be written as $(x^2)^\frac{1}{2}$. We multiply the indices to obtain a result of $x$. The exponent laws used previously can also be applied to $\textit{rational exponents}$, or exponents which are written as a fraction. $$x^{\frac{1}{2}} \times x^{\frac{1}{2}} = x^{\frac{1}{2} + \frac{1}{2}} = x^1 = x$$ $$\sqrt{x} \times \sqrt{x} = x$$ More simply, we can take the square root of an algebraic term by halving the index. $$\sqrt{x} = x^{\frac{1}{2}}$$ Note that we write $\sqrt{x}$, rather than $\sqrt[2]{x}$, for $x^{\frac{1}{2}}$. Example 1 Find $\sqrt{16x^4}$. \( \begin{align} \displaystyle \sqrt{16x^4} &= \sqrt{16} \times \sqrt{x^4} &\text{We need to find the square root of both } 16 \text{ and }x^4. \\ &= 4 \times (x^4)^{\frac{1}{2}} &\text{Which number is multiplied by itself to give 16?} \\ &&\text{Replace the square root sign with a power of } \dfrac{1}{2}. \\ &= 4 \times x^{4 \times \frac{1}{2}} &\text{Apply the Exponent Law.} \\ &= 4 \times x^2 \\ &= 4x^2 \\ \end{align} \) $\textit{Cube Root}$ The symbol $\sqrt[3]{x}$ means the cube root of $x$: find the number that, written three times and multiplied together, gives the original number. A cube root is equivalent to a power of $\dfrac{1}{3}$. We can take the cube root of an algebraic term by taking one third of the exponent (index).
$$x^{\frac{1}{3}} \times x^{\frac{1}{3}} \times x^{\frac{1}{3}} = x^{\frac{1}{3} + \frac{1}{3} + \frac{1}{3}} = x^1 = x$$ $$\sqrt[3]{x} \times \sqrt[3]{x} \times \sqrt[3]{x} = x$$ $$\sqrt[3]{x} = x^{\frac{1}{3}}$$ Example 2 Find $\sqrt[3]{27x^6}$. \( \begin{align} \displaystyle \sqrt[3]{27x^6} &= \sqrt[3]{27} \times \sqrt[3]{x^6} &\text{We need to find the cube root of both } 27 \text{ and }x^6. \\ &= 3 \times (x^6)^{\frac{1}{3}} &\text{Which number, written 3 times and multiplied, gives 27?} \\ &&\text{Replace the cube root sign with a power of } \dfrac{1}{3}. \\ &= 3 \times x^{6 \times \frac{1}{3}} &\text{Apply the Exponent Law.} \\ &= 3 \times x^2 \\ &= 3x^2 \\ \end{align} \) $\textit{Rational Exponents}$ $$\sqrt[n]{x} = x^{\frac{1}{n}}$$ As can be seen from the above identity, the denominator of the fraction, $n$, indicates the power or type of root. That is, $n=3$ implies cube root, $n=4$ implies fourth root, etc. We can now determine that $$\overbrace {\sqrt[n]{x} \times \sqrt[n]{x} \times \sqrt[n]{x} \times \cdots \times \sqrt[n]{x}}^{n} = x$$ $$\sqrt[n]{x^m} = (x^m)^{\frac{1}{n}} = x^{\frac{m}{n}}$$ Example 3 Write $\sqrt[4]{3}$ as a single power of 3. \( \begin{align} \displaystyle \sqrt[4]{3} &= 3^{\frac{1}{4}} \\ \end{align} \) Example 4 Write $\sqrt[5]{8}$ as a single power of 2. \( \begin{align} \displaystyle \sqrt[5]{8} &= (2^3)^{\frac{1}{5}} \\ &= 2^{3 \times \frac{1}{5}} \\ &= 2^{\frac{3}{5}} \\ \end{align} \) Example 5 Write $\dfrac{1}{\sqrt[6]{16}}$ as a single power of 2. \( \begin{align} \displaystyle \dfrac{1}{\sqrt[6]{16}} &= \dfrac{1}{16^{\frac{1}{6}}} \\ &= 16^{-\frac{1}{6}} \\ &= (2^4)^{-\frac{1}{6}} \\ &= 2^{4 \times -\frac{1}{6}} \\ &= 2^{-\frac{2}{3}} \\ \end{align} \)
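The worked examples can be spot-checked numerically at a particular value of $x$; a short sketch (the choice $x = 3$ is arbitrary):

```python
import math

x = 3.0

# Example 1: sqrt(16 x^4) = 4 x^2
assert math.isclose(math.sqrt(16 * x ** 4), 4 * x ** 2)

# Example 2: cube root of 27 x^6 = 3 x^2
assert math.isclose((27 * x ** 6) ** (1 / 3), 3 * x ** 2)

# Example 4: 8^(1/5) = 2^(3/5)
assert math.isclose(8 ** (1 / 5), 2 ** (3 / 5))

# Example 5: 1 / 16^(1/6) = 2^(-2/3)
assert math.isclose(1 / 16 ** (1 / 6), 2 ** (-2 / 3))

print("all identities check out")
```

Floating-point roots are only approximate, which is why `math.isclose` is used rather than exact equality.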
Standard Gibbs free energy of formation of liquid water at 298 K is −237.17 kJ/mol and that of water vapour is −228.57 kJ/mol. Therefore, $$\ce{H2O(l)->H2O(g)}~~\Delta G=8.60~\mathrm{kJ/mol}$$ Since $\Delta G>0$, it should not be a spontaneous process, but from common observation, water does turn into vapour from liquid over time without any apparent interference. Your math is correct but you left out a very important symbol from your equations. There is a big difference between $\Delta G$ and $\Delta G^\circ$. Only $\Delta G^\circ$ means the Gibbs energy change under standard conditions, and as you noted in the question, the free energy values you quoted are the standard Gibbs free energies of liquid water and water vapor. Whether or not something is spontaneous under standard conditions is determined by $\Delta G^\circ$. Whether something is spontaneous under other conditions is determined by $\Delta G$. To find $\Delta G$ for real conditions, we need to know how they differ from standard conditions. Usually "standard" conditions for gases correspond to one bar of partial pressure for that gas. But the partial pressure of water in our atmosphere is usually much lower than this. Assuming water vapor is an ideal gas, the free energy as a function of partial pressure is given by $G = G^\circ + RT \ln{\frac{p}{p^\circ}}$. If the atmosphere were perfectly 100% dry, then the water vapor partial pressure would be 0, so $\ln{\frac{p}{p^\circ}}$ would be negative infinity. That would translate to an infinitely negative -- i.e. highly spontaneous -- $\Delta G$ for the water evaporation reaction. As long as the partial pressure of water stays below its equilibrium vapor pressure, the free energy of the vapor remains below that of the liquid, so $\Delta G$ for evaporation is negative. So water evaporation is still spontaneous.
Extra credit: given the standard formation energies you found, and assuming water is an ideal gas, you could calculate the partial pressure of water vapor at which $\Delta G = 0$ for water evaporation. And the answer had better be the vapor pressure of water, or else there is a thermodynamic inconsistency in your data set!
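The extra-credit calculation sketched above is a one-liner. Here it is in Python, recomputing $\Delta G^\circ$ from the formation energies quoted in the question; the comparison figure of roughly 3.17 kPa for water's vapor pressure at 298 K is a literature value I am quoting from memory:

```python
import math

R = 8.314          # J/(mol K), gas constant
T = 298.0          # K
# Standard Gibbs energies of formation quoted in the question (kJ/mol):
dGf_liquid = -237.17
dGf_vapor = -228.57
dG0 = (dGf_vapor - dGf_liquid) * 1000.0   # J/mol for H2O(l) -> H2O(g)

# Delta G = dG0 + RT ln(p/p0) = 0  =>  p = p0 * exp(-dG0 / (R T))
p_over_p0 = math.exp(-dG0 / (R * T))
p_kPa = p_over_p0 * 100.0                 # p0 = 1 bar = 100 kPa

print(f"equilibrium vapor pressure ~ {p_kPa:.2f} kPa")   # ~ 3.11 kPa
```

This lands close to the measured vapor pressure of water at 25 °C (~3.17 kPa), so the data set is indeed roughly self-consistent.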
Silver is not as inert as gold. Tarnish is the name we give to the phenomenon when silver metal is oxidized and becomes a salt. Surfaces made of silver tend to disinfect themselves pretty quickly. As for disinfecting water poured into a silver cup, I imagine that would take a little longer since you have to wait for silver to diffuse away from the surface ... Circulated Coin: I'll assume that the coin is circulated, so a very gentle cleaning won't be a problem. You want to use some sort of organic solvent to loosen the glue, then very gently rub the residue off. You don't want to use any sort of silverware polish or anything abrasive when rubbing the coin. Good solvents might be olive oil or nail polish remover. ... Well, $\ce{Ag2O}$ is just a basic oxide. As such, it would dissolve in suitable acids ($\ce{HNO3}$ would do), but I guess that's not quite what you want. Well, some metal oxides ($\ce{ZnO}$, for instance) are amphoteric and thus would dissolve in $\ce{KOH}$ as well, via formation of hydroxo complexes. Sadly, this is not the case with $\ce{Ag2O}$. Then we ... As far as I know, a chunk of solid silver will not spontaneously react with water. But if you pass an electrical current through silver electrodes immersed in water, the silver will be oxidized according to the following equation: $$\ce{2H2O(l) + 2Ag(s) -> 2Ag+(aq) + H2(g) + 2OH-(aq)}\qquad E^\circ=-1.63\ \mathrm V$$ That will get you the ions you ... You're right: the silver is reacting with sulfur compounds in the food to form a tarnish of silver sulfide. This is most commonly observed, in my experience, using silver teaspoons with boiled eggs, which are pretty rich in sulfur. There are a number of reactions that can take place depending on the sulfur-containing species - the abstract from this paper ... Put them into a pot with some club soda and a piece of aluminium foil and pour hot water over them. I am familiar with this process. I am not familiar with the other.
Perhaps someone else can provide an answer for it. What happens chemically during these procedures - why does it work, and what are the byproducts of these reactions? This process cleans ... As to the above answers, I also want to include the mechanism of action of silver as an antimicrobial agent. The exact mechanism of action of silver as an antimicrobial agent is not known; the current hypothesis is that silver is converted to silver ions, and these positively charged ions attack the cell membrane, DNA or proteins, which are negatively ... A glance through the table of the isotopes in the venerable CRC handbook reveals that the longest-lived radioactive isotope of Ag is $\ce{^{108m}Ag}$, made by neutron capture by $\ce{^{107}Ag}$. This has a listed half-life of "> 5 y", but the capture cross section is only $35\pm5$ barns, pretty small for thermal neutrons. So, it would be really hard to get a lot of ... The solubilities of silver halides decrease down the periodic table: $\ce{AgF}: K_{sp} = 205$; $\ce{AgCl}: K_{sp} = 1.8\times10^{-10}$; $\ce{AgBr}: K_{sp} = 5.2\times10^{-13}$; $\ce{AgI}: K_{sp} = 8.3\times10^{-17}$. The rationale for this trend is typically described using the concept of hard/soft acids and bases. (See J. Chem. Educ., 1968, 45, ... To give some numbers to MaxW's comment, you can see on this webpage the X-ray absorption energies of basically all the elements. For example, the K-edge of potassium is $3.61~\mathrm{keV}$. The L-II edge of silver is $3.52~\mathrm{keV}$ and the L-I edge of silver is $3.81~\mathrm{keV}$. So, it seems like you are somehow automatically assigning the peaks and ... Barium nitrate has a water solubility of $\pu{10.5g/100mL}$ at $\pu{25^oC}$. It isn't specified in the question what concentration of sulfate you suspect might be present, but given that you are trying to check for sulfate by precipitating with barium, barium nitrate should be the way to go.
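The halide solubility trend quoted above can be turned into molar solubilities with the usual 1:1 dissolution idealization, $s = \sqrt{K_{sp}}$ (a Python sketch; this ignores activity corrections and complexation, so it is only an order-of-magnitude estimate):

```python
import math

# Ksp values for the 1:1 silver halides quoted above
ksp = {"AgCl": 1.8e-10, "AgBr": 5.2e-13, "AgI": 8.3e-17}

# For AgX(s) <=> Ag+(aq) + X-(aq), molar solubility s satisfies s^2 = Ksp
for salt, k in ksp.items():
    s = math.sqrt(k)
    print(f"{salt}: s ~ {s:.2e} mol/L")
# AgCl ~ 1.34e-05, AgBr ~ 7.21e-07, AgI ~ 9.11e-09 mol/L
```

Eight orders of magnitude separate AgF (freely soluble) from AgI, which is why iodide is so effective at precipitating silver.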
Your choices are constrained, as precipitation of $\ce{SO4^{2-}}$ as $\ce{BaSO4}$ is the classical way to quantify the former, and an electrochemical determination (in aqueous solution) is not practical. Electing $\ce{Ba(OH)2}$ may lead to the formation of silver hydroxide, equally poorly soluble in water. $\ce{Ba(PO3)2}$ itself is very poorly soluble, as ... To put the other answer in a graphical context, I took the X-ray emission spectra of K and Ag from some place online*, and plotted them together, crudely resized to fit the same scale. Some of the Ag L-lines (I think it's Lβ1 and/or Lβ2) are at exactly the same position as the K Kɑ line. So, the software sees a certain amount of counts at K, and identifies it.... Silver in silver oxide is no more oxidized than in $\ce{AgNO3}$. So you should ask yourself the same question earlier, even before the reaction. Yes, noble metals are somewhat resistant to oxidation. But still they can be oxidized, and thus can form compounds, of which $\ce{AgNO3}$ and $\ce{Ag2O}$ are examples. Even gold can be oxidized, though that ... The most important radioactive silver nuclide is Ag-110m (half-life: 249.9 d), which is generated in nuclear reactors. The effective dose coefficient of Ag-110m for ingestion by adult members of the public is 2.8E−09 Sv/Bq. However, this committed dose is evaluated over 50 years. In order to estimate the deterministic short-term effects of radiation ... Silver's pretty soft—it's possible to just scrape it off with a metal spatula or something. Also, not all of the silver deposits on the glass. You might be able to adjust the conditions of the reaction to make more silver precipitate into the liquid where it's easier to get at. You can, of course, redissolve the silver in nitric acid or the like. Since the ... Manishearth points in the right direction; the crucial parameters to look at are the standard electrode potentials.
One should however take into account that the relevant species that get reduced under the conditions of the Tollens' reagent are not the aquo but the ammine complexes, namely $\ce{[Ag(NH3)2]+}$ and $\ce{[Cu(NH3)4]^{2+}}$, respectively.... For the particular reaction where you get a silver mirror with an aldehyde, I doubt it'll work for copper, due to the reduction potentials. I'll take acetaldehyde $\ce{<->}$ acetate as an example here, but I believe that the reduction potentials of most aldehydes will be similar. The nitrate is a spectator ion here, so we need not consider its presence.... I suggest to do the following: Line the bottom of a pan with aluminum foil. Put the silver piece on top of the aluminum foil. The silver piece and aluminum foil must be in contact with each other. To about 2 L of boiling water add about half a cup of baking soda (Be careful!). Then add the mixture to the pan. Make sure to cover the whole silver piece. ... The concerned silver nuclide with 61 neutrons is $\ce{^{108}_{47}Ag}$. This nuclide with 47 protons and 61 neutrons lies in the so-called valley of β-stability. Image taken from Choppin, Liljenzin, Rydberg: Radiochemistry and Nuclear Chemistry, third edition (2002), p. 42. Nuclides on the right side of the valley (higher neutron numbers) are unstable to decay by β− ... You would get silver at the cathode, and oxygen at the anode. Since silver is below hydrogen in the electrochemical series, it tends to get reduced over hydrogen; similarly, since $\ce{NO3-}$ is above $\ce{OH-}$, it tends not to get oxidised, therefore giving oxygen. $\ce{Ag+ + e- -> Ag}$ (at cathode) $\ce{4OH- -> O2 + 2H2O + 4e-}$ (at anode) AgO (silver(I,III) oxide) is unstable and decomposes to produce $\ce{O2}$ in aqueous solutions. Hydrogen peroxide is thermodynamically unstable too and slowly decomposes to form water and oxygen. Decomposition of hydrogen peroxide can be catalyzed by different compounds, including transition metals (such as Ag) and their compounds.
Probably, the silver(I) ... Hydroxide is bad for this process. You will have additional problems if hydroxide anion is present. Silver cation reacts with hydroxide anion to form silver hydroxide, which spontaneously decomposes into silver oxide: $$\begin{aligned}\ce{Ag+ + OH-}&\ce{ -> AgOH}\\\ce{2AgOH}&\ce{ -> H2O + Ag2O}\end{aligned}$$ In addition to shutting down the ... The energy in the electromagnetic radiation decomposes AgCl into its components, silver and chlorine. This produces finely divided silver particles, which look dark because, while solid silver (in the form of an ingot, for example) has a typical metallic 'colour', silver powder is dark. If one needs to work with organic solvents, silver(I) oxide can be dissolved in trifluoroacetic acid (TFA), producing silver(I) trifluoroacetate $\ce{AgOCOCF3}$, which is a versatile reactant in organic synthesis (compact overview: [1]); also a precursor or chloride precipitant in the synthesis of metal complexes (see, e.g. [2]). References: Wistrand, L.-G.; ... If you had included the units in your calculation, you would have noticed why your equation is not correct. Molar mass $M$ is defined as $$M=\frac mn\tag1$$ where $m$ is mass and $n$ is amount of substance. Since the Avogadro constant $N_\mathrm A$ is $$N_\mathrm A=\frac Nn\tag2$$ where $N$ is the number of particles, the mass $m$ of one atom $(N=1)$ is $$m=... Since the lab is past due now, I'll give what I think is the answer. $\ce{Ag2SO4}$ is somewhat soluble in water. It is most likely that you simply didn't get enough to cause precipitation. The other possibility, which I don't think applies here, is that you have a supersaturated solution. There are some precipitates for which crystals are just stubborn to ... You can use hydrated tri-sodium citrate; it's readily soluble in water.
As for its chemical reactivity, it's identical to tri-sodium citrate. You should take into consideration the increase in the molecular weight in the hydrated form due to the presence of water molecules. Is any one of these reactions more "true" (occurring more often naturally) than the others, or is it the case that a little bit of everything is happening? I think the best answer is "a little bit of everything". Silver sulfide forms faster but requires exposure of the silver to sulfur-containing materials (like human skin, food, etc.). Silver that isn't ...
Assume that we have a multiplicative cyclic group $\mathbb Z_p^*$ of order $q=p-1$, where $p$ is a prime number. Is it possible to create a bilinear function $\hat{e}: \mathbb Z_p^* \times \mathbb Z_p^* \rightarrow \mathbb G_2$ with the following property: $$\hat{e}(g^{a},g^{sb})=\hat{e}(g^{b},g^{sa})$$ where $g$ is a generator of $\mathbb Z_p^*$, and $\mathbb G_2$ is also a multiplicative cyclic group of order $q$? It seems that if such a function can be constructed in polynomial time, then there should be new attacks on some cryptographic protocols with the security property of anonymity.
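As a sanity check of the stated property itself (not of the hardness question), here is a toy "pairing" on a small group, sketched in Python. It computes discrete logs by brute force, which is exactly what one cannot do in polynomial time at cryptographic sizes, so this only illustrates the algebra; the parameters $p = 23$, $g = 5$ are my own choices, and $\mathbb G_2$ is taken to be $\mathbb Z_p^*$ itself for simplicity:

```python
p = 23           # small prime, so Z_p^* has order q = 22
q = p - 1
g = 5            # a generator of Z_23^*

def dlog(x):
    """Brute-force discrete log base g in Z_p^* (infeasible at crypto sizes)."""
    acc = 1
    for k in range(q):
        if acc == x:
            return k
        acc = (acc * g) % p
    raise ValueError("not in the group")

def e_hat(x, y):
    """Toy bilinear map: e_hat(g^a, g^b) = g^(a*b mod q)."""
    return pow(g, (dlog(x) * dlog(y)) % q, p)

a, b, s = 7, 11, 13
lhs = e_hat(pow(g, a, p), pow(g, (s * b) % q, p))   # e(g^a, g^{sb})
rhs = e_hat(pow(g, b, p), pow(g, (s * a) % q, p))   # e(g^b, g^{sa})
assert lhs == rhs                                   # both equal g^{abs mod q}
print("property holds:", lhs == rhs)
```

Any map of the form $\hat e(g^a, g^b) = h^{ab}$ satisfies the property by symmetry of $ab$; the open issue in the question is whether such a map can be *evaluated* efficiently without recovering the exponents.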
Archive for the "Number Theory" Category By DC Ipsen The purpose of this publication is to show how a full understanding of units and dimensions provides a simple basis for explaining features of physical description that otherwise might seem confusing or mysterious. The way units and dimensions behave, in particular the way their properties influence the mathematical description of physical behavior, makes sense only once certain basic notions and conventions are recognized. Once these are recognized, the strange quirks and uncanny powers of units and dimensions at once become easy to understand and simple to manage. By Hugh L. Montgomery, Robert C. Vaughan ISBN-10: 0521849039 ISBN-13: 9780521849036 Prime numbers are the multiplicative building blocks of natural numbers. Understanding their overall influence and especially their distribution gives rise to central questions in mathematics and physics. In particular, their finer distribution is closely connected with the Riemann hypothesis, the most important unsolved problem in the mathematical world. This book comprehensively covers all the topics met in first courses on multiplicative number theory and the distribution of prime numbers. The text is based on courses taught successfully over many years at the University of Michigan, Imperial College London, and Pennsylvania State University. By Aderemi Kuku ISBN-10: 142001112X ISBN-13: 9781420011128 ISBN-10: 158488603X ISBN-13: 9781584886037 Representation Theory and Higher Algebraic K-Theory is the first book to present higher algebraic K-theory of orders and group rings as well as to characterize higher algebraic K-theory as Mackey functors that lead to equivariant higher algebraic K-theory and their relative generalizations.
Thus, this book makes computations of higher K-theory of group rings more accessible and provides novel techniques for the computations of higher K-theory of finite and some infinite groups. Authored by a premier authority in the field, the book begins with a careful review of classical K-theory, including clear definitions, examples, and important classical results. Emphasizing the practical value of the usually abstract topological constructions, the author systematically discusses higher algebraic K-theory of exact, symmetric monoidal, and Waldhausen categories with applications to orders and group rings and proves numerous results. He also defines profinite higher K- and G-theory of exact categories, orders, and group rings. Providing new insights into classical results and opening avenues for further applications, the book then uses representation-theoretic techniques, especially induction theory, to study equivariant higher algebraic K-theory, their relative generalizations, and equivariant homology theories for discrete group actions. The final chapter unifies the Farrell-Jones and Baum-Connes isomorphism conjectures via Davis-Lück assembly maps. By Dickson L.E. By Saban Alaca, Kenneth S. Williams This book provides an introduction to algebraic number theory suitable for senior undergraduates and beginning graduate students in mathematics. By Jean-Pierre Serre, Marvin J. Greenberg ISBN-10: 0387904247 ISBN-13: 9780387904245 The mathematical content and exposition are at the high level typical of Serre. I haven't finished reading the whole book, but here are some misprints I have found that may serve as a useful warning. NB: most of these errors are not in the 3rd French edition... Chapter 1: Section 4, pg. 14, second centered display: the ramification indices should be e_{\beta}, not e_{p}, in the product. Section 5, pg.
15, first formula should be N: I_{B} -> I_{A}, not the other way around. Section 6, pg. 17, last sentence of first paragraph: replace the inclusion symbol $\in$ with the word "in". Namely, f is an element of A[X] and not an element of k[X]. In the French ed. Serre correctly used "dans" and did not use the symbol $\in$. Section 7, pg. 22, in the proof of Prop. 21, second paragraph, third sentence: replace "contain" with "contains". 4th sentence: should be "... we must have \bar{L}_{S} = \bar{K}_{T}", not \bar{L}. [The separability result comes later, namely in the Corollary(!)] Chapter 2: Sec 1, pg. 28: 3rd sentence should be "one sees that E is the union of (A:xA) cosets of the module xE,...". As it stands in the book, the sentence doesn't make grammatical sense. Sec 2, pg. 29: the def. of w must carry a v', not just v, that is: w = (1/m) v' is a discrete valuation of L. Sec 3, Theorem 1, (i): change K to \hat{K}; so the completion of L_i has degree n_i over the completion of K. Sec 3, Exercise 1: the suggested reference should say Part 3 of Bourbaki Algebra, not 7. (going by Hermann Paris 1958 as usual) Chapter 4: Sec 1, pg. 63, Prop 3: need K' (not K) in the def. of e', that is: e' = e_{L/K'}. In the proof of Prop 3, the s and t in "st, t in H" should be italicized. Sec 2, Prop 6, first line of proof: gothic beta should be gothic p, that is: to every x in p^{i}_{L}. Sec 3, Lemma 3, last line of proof: upper case Phi is nowhere defined, need lower case phi, that is: phi'(u)....so theta and phi must coincide. Sec 3, statement of Lemma 5: again phi, not Phi. Some tips for the beginner: - Know how localization behaves as a functor via, say, Atiyah-Macdonald. - For a clean and clear proof that separable <=> nondegenerate Tr(,), see Roman's "Field Theory" (Bourbaki uses étale algebras to get this result, a bit more than needed). - P.
Samuel's "Algebraic Theory of Numbers" (Dover publ. now!) has a very elegant exposition of the proof of quadratic reciprocity that is alluded to at the end of Section 8. By Professor Aleksandar Ivić ISBN-10: 1107028833 ISBN-13: 9781107028838 Hardy's Z-function, related to the Riemann zeta-function ζ(s), was originally utilised by G. H. Hardy to show that ζ(s) has infinitely many zeros of the form ½+it. It is now among the most important functions of analytic number theory, and the Riemann hypothesis, that all complex zeros lie on the line ½+it, is perhaps the best known and most important open problem in mathematics. Today Hardy's function has many applications; among others it is used for extensive calculations regarding the zeros of ζ(s). This comprehensive account covers many aspects of Z(t), including the distribution of its zeros, Gram points, moments and Mellin transforms. It features an extensive bibliography and end-of-chapter notes containing comments, remarks and references. The book also presents many open problems to stimulate readers interested in further research. By C. Stanley Ogilvy The theory of numbers is an ancient and fascinating branch of mathematics that plays an important role in modern computer theory. It is also a popular topic among amateur mathematicians (who have made many contributions to the field) because of its accessibility: it does not require advanced knowledge of higher mathematics. This delightful volume, by well-known mathematicians, invites readers to join a challenging expedition into the mystery and magic of number theory. No special preparation is required: just high school mathematics, a fondness for figures, and an inquisitive mind. Such a person will soon be absorbed and intrigued by the ideas and problems presented here.
Beginning with familiar notions, the authors skillfully but painlessly transport the reader to higher realms of mathematics, developing the necessary concepts along the way, so that complex subjects can be more easily understood. Included are thorough discussions of prime numbers, number patterns, irrationals and iterations, and calculating prodigies, among other topics. Much of the material presented is not to be found in other popular treatments of number theory. Moreover, there are many important proofs (presented with simple and elegant explanations) often lacking in similar volumes. In sum, Excursions in Number Theory offers an excellent compromise between highly technical treatments inaccessible to lay readers and popular books with too little substance. Its stimulating and challenging presentation of important aspects of number theory may be read lightly for amusement or studied closely for an exciting mental challenge. By Peter Roquette ISBN-10: 3037191139 ISBN-13: 9783037191132 The 20th century was a time of great upheaval and great progress in mathematics. In order to get the overall picture of trends, developments, and results, it is illuminating to examine their manifestations locally, in the personal lives and work of mathematicians who were active during this time. The university archives of Göttingen harbor a wealth of papers, letters, and manuscripts from several generations of mathematicians--documents which tell the story of the historical developments from a local point of view. This book offers a number of essays based on documents from Göttingen and elsewhere--essays that have not yet been included in the author's collected works. These essays, independent of each other, are intended as contributions to the imposing mosaic of the history of number theory.
They are written for mathematicians, but there are no special background requirements. The essays discuss the works of Abraham Adrian Albert, Cahit Arf, Emil Artin, Richard Brauer, Otto Grün, Helmut Hasse, Klaus Hoechsmann, Robert Langlands, Heinrich-Wolfgang Leopoldt, Emmy Noether, Abraham Robinson, Ernst Steinitz, Hermann Weyl, and others. A publication of the European Mathematical Society (EMS). Distributed within the Americas by the American Mathematical Society. By Marco Brunella The text presents the birational classification of holomorphic foliations of surfaces. It discusses at length the theory developed by L.G. Mendes, M. McQuillan and the author to study foliations of surfaces in the spirit of the classification of complex algebraic surfaces.
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
What about this: we have two people A and B, both born in February. I will make four cases. Case 1: Both born in a year where February has 28 days. For each person this has probability 3/4. Hence we have the set which represents their possible pairs of birthdays, $\{(1,1) , (1, 2) , \cdots (28 , 28)\}$. This set has $28 \cdot 28$ elements, of which $28$ have equal coordinates. This case has probability ${9 \over 16}$. So for the probability that they share a birthday we get $\frac{28}{28\cdot 28} \times \frac{9}{16}$. Case 2: A born in a 28-day-February year, B in a 29-day-February year. This has probability ${3\over 16}$. Again we make the set $\{ (1,1) , \cdots , (28,28), (1, 29) ,\cdots , (28 , 29)\}$. This set has $28\cdot 29$ elements, of which $28$ have equal coordinates, hence we get the probability $\frac{3}{16} \times \frac{28}{28 \cdot 29}$. Case 3: A in a 29-day year, B in a 28-day year. This has the same probability as case 2. Case 4: A and B both in a year with 29 days in February. This contributes probability $\frac{1}{16} \times \frac{29}{29 \cdot 29}$. Thus the probability is the sum of the cases: $ \frac{9}{16} \cdot \frac{1}{28} + 2 \left(\frac{3}{16} \cdot \frac{1}{29} \right)+ \frac{1}{16} \cdot \frac{1}{29}$
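The case analysis above can be totalled exactly with rational arithmetic (a Python sketch; the 3/4 vs 1/4 leap-year weighting follows the post's simplifying assumption):

```python
from fractions import Fraction as F

# Probability the two February birthdays coincide, by year-type cases
p_match = (
    F(3, 4) * F(3, 4) * F(1, 28)        # both 28-day years: 28/(28*28)
    + 2 * F(3, 4) * F(1, 4) * F(1, 29)  # mixed years: 28/(28*29), twice
    + F(1, 4) * F(1, 4) * F(1, 29)      # both 29-day years: 29/(29*29)
)
print(p_match, "~", float(p_match))     # 457/12992 ~ 0.03518
```

So under these assumptions the answer is exactly 457/12992, slightly below the naive 1/28.4 one might guess.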
We draw balls (without putting them back) from a box that has $r$ red balls, $y$ yellow balls, $g$ green balls, $b$ blue balls and $w$ white balls. The game stops when we have drawn two balls of the same color. We are interested in the event A: "The game stops after $k$ draws and we drew two red balls." What is the probability space that describes this experiment such that all elementary events have the same probability? Attempts Q1) What do they mean by "elementary event"? Now, I would say that $$\Omega =\{(x_1,...,x_{k-2},R,R)\mid x_i\neq x_{i+1}\text{ for }0\leq i\leq k-3\},$$ but I don't really take into account the number of each ball. Q2) So how can I do that here? Also, for the event space, we always have $2^{\Omega }$, but the teacher says that here it's wrong, and I don't understand why. Q3) So what's the event space?
Prove that the equation $n^a + n^b = n^c$, with $a,b,c,n$ positive integers, has infinitely many solutions if $n=2$, and no solution if $n\ge3$. So this is Fermat's last theorem upside down? It occurs to me that if we have two equal powers of two written in binary, we may add them to get another power of two, 1000000 1000000+ -------- 10000000 but if we had two such numbers in base 3, say 1000000 1000000+ ------- 2000000 we would not have so much luck. Wlog $\,a \le b$. Dividing by $n^a$ yields $\,1 + n^{b-a} = n^{c-a}$ $\Rightarrow$ $b=a\ $ (else $\,n\mid1)\,$ $\Rightarrow$ $\, n = 2,\, c = a\!+\!1$. If $n=2$ we can take $a=k, b=k, c=k+1$ for any $k \in \mathbb{N}$. Let $n \ge 3$. We can assume that $a, b, c \ge 0$, because if not we could multiply both sides by $n^k$ to make the exponents nonnegative. Now it's clear that $c \ge a$ and $c \ge b$. Then we have $n^a \mid n^c$, hence $n^a \mid n^a + n^b$ and $a \le b$. In the same way $b \le a$. So $a = b$. Hence $2n^a = n^c$ and $n=2$. Assuming $b>a$: $$n^b<n^a+n^b<n^{b+1}$$
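The claim is easy to corroborate by brute force over a small range before proving it (a Python sketch; the search bounds are arbitrary):

```python
# Search n^a + n^b = n^c for small n, a, b, c.
solutions = [(n, a, b, c)
             for n in range(2, 6)
             for a in range(1, 8)
             for b in range(1, 8)
             for c in range(1, 10)
             if n**a + n**b == n**c]

# Every solution found has n = 2, a = b and c = a + 1, as the proofs predict.
assert all(n == 2 and a == b and c == a + 1 for n, a, b, c in solutions)
assert (2, 3, 3, 4) in solutions      # e.g. 8 + 8 = 16
print(len(solutions), "solutions, all with n = 2")
```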
For instance in Blumenhagen's CFT book, there is a standard argument which determines the globally defined conformal transformations on the Riemann sphere, where $$l_n = -z^{n+1} \partial_z$$ is an element of the Witt algebra. In this argument we note that $l_n$ is non-singular at $z=0$ only for $n\geq -1$. Also, substituting $z=-\frac1w$, we find $$l_n = -\left(-\frac1w\right)^{n-1}\partial_w,$$ which is non-singular at $w=0$ only for $n\leq +1$. Therefore the global transformations are generated by $\{l_{-1},l_0,l_1\}$. Why is the substitution $z=-\frac1w$ special? For instance if I use $z = -\frac{1}{w^2}$ I could repeat the argument above and conclude $n \leq 1/2$.
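The quoted $w$-form of $l_n$ follows from the chain rule, $\partial_z = (\mathrm{d}w/\mathrm{d}z)\,\partial_w$ with $\mathrm{d}z/\mathrm{d}w = 1/w^2$, and can be spot-checked numerically (a Python sketch; the sample point $w_0$ is arbitrary):

```python
# Check the w-coordinate coefficient of l_n = -z^{n+1} d/dz under z = -1/w.
w0 = 0.7 + 0.3j                     # arbitrary nonzero sample point (assumption)
for n in range(-3, 4):
    z0 = -1 / w0
    dz_dw = 1 / w0**2               # d(-1/w)/dw = 1/w^2
    coeff = -z0**(n + 1) / dz_dw    # -z^{n+1} * (dw/dz)
    expected = -(-1 / w0)**(n - 1)  # the formula quoted in the question
    assert abs(coeff - expected) < 1e-9
print("l_n = -(-1/w)^(n-1) d/dw verified for n in [-3, 3]")
```

The coefficient is holomorphic at $w=0$ exactly when $n-1 \le 0$... with the convention above, when $1-n \ge 0$, i.e. $n \le 1$, matching the argument in the question.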
Quantum integer Set context $f:{\mathbb N}\to{\mathbb R}$ definiendum $[n]_q \in \mathrm{it}$ inclusion $[n]_q:{\mathbb N}\to{\mathbb C}^*\to{\mathbb R}$ definition $[n]_q:=q^{-f(n)/2}\frac{1-q^n}{1-q}$ Discussion These are $q$-deformations of the integers, so that arithmetic coincides at $q=1$. $[n]_{q} = q^{-f(n)/2}q^{-1}\sum_{k=1}^n q^k = n+\tfrac{n}{2}(n-1-f(n))\cdot(q-1)+\mathcal{O}\left((q-1)^2\right)$ In fact this doesn't require $n$ to be an integer. The case $f=0$ is often considered. Quantum aspect: $f(n)=n-1$ gives $[n]_{q^2} = n + \mathcal{O}\left((q-1)^2\right)$. (The $q^2$ isn't necessary.) In the imaginary direction, $q\propto\mathrm{e}^{i\varphi}$, this corresponds to $\lim_{\varphi\to 0}\frac{\sin(n\varphi)}{\sin(\varphi)}=n$. With $q=r\mathrm{e}^{i\varphi}$, along the positive real axis $[n]_q$ is a valley with bottom at $q=1$, where $[n]_{1}=n$, and along $\varphi$ there are harmonic oscillations with period depending on $n$. I might change the exponent in $-f(n)/2$ to something else later. I see one can also capture it as K[q_, a_, b_, c_, d_] = q^(b - c) (1 - q^(a + b))/(1 - q^(1 + c + d)) and then K[q, n, 0, 0, 0] K[q, -3 n, n, 1, -4] Reference Wikipedia: q-analog
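The "quantum aspect" claim, that $f(n)=n-1$ with $q$ on the unit circle reproduces $\sin(n\varphi)/\sin(\varphi)$, can be checked numerically (a Python sketch; the branch $q^{-f(n)}$ is written out as an explicit exponential, and the sample values are my own):

```python
import cmath
import math

def qint(n, phi):
    """[n]_{q^2} with f(n) = n-1 and q = exp(i*phi): the prefactor
    (q^2)^{-(n-1)/2} is taken as exp(-i*(n-1)*phi), fixing the branch."""
    q2 = cmath.exp(2j * phi)
    return cmath.exp(-1j * (n - 1) * phi) * (1 - q2**n) / (1 - q2)

for n in (2, 3, 5):
    for phi in (0.1, 0.5, 1.0):
        expected = math.sin(n * phi) / math.sin(phi)
        assert abs(qint(n, phi) - expected) < 1e-12

# As phi -> 0 (q -> 1), the ordinary integer n is recovered.
assert abs(qint(4, 1e-6) - 4) < 1e-9
print("q-integers reduce to sin(n*phi)/sin(phi) on the unit circle")
```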
Definition:Morphism Property Definition Let $\phi: \left({S, \circ}\right) \to \left({T, *}\right)$ be a mapping from one algebraic structure to another. Then $\circ$ has the morphism property under $\phi$ iff: $\forall x, y \in S: \phi \left({x \circ y}\right) = \phi \left({x}\right) * \phi \left({y}\right)$ Also known as Some sources call this property the homomorphism condition.
I am trying to derive the expression for the GRS test of the CAPM. I am following the book: The Econometrics of Financial Markets by Campbell, Lo, MacKinlay (1997). Define $Z_t$ as an $N\times1$ vector of excess returns for $N$ assets. We assume that the excess returns can be described by the following excess-return market model: $$Z_t = \alpha + \beta Z_{mt} + \epsilon_t$$ We assume that excess returns are jointly normal, with: $$E[\epsilon_t]=0$$ ($N\times1$ vector) $$E[\epsilon_t \epsilon_t']=\Sigma$$ Accordingly, because excess returns are normally distributed conditionally on the excess return of the market, and assuming they are temporally IID, given $T$ observations we get the following log-likelihood function: $$L(\alpha,\beta,\Sigma)=-\frac{NT}{2}\log(2\pi)-\frac{T}{2}\log(\det(\Sigma))-\frac{1}{2} \sum_{t=1}^{T} (Z_t-\alpha-\beta Z_{mt})'\Sigma^{-1}(Z_t-\alpha-\beta Z_{mt})$$ The partial first derivative w.r.t. $\alpha$ is: (1) $$\partial L/\partial \alpha=\Sigma^{-1}\sum_{t=1}^{T}(Z_t-\alpha-\beta Z_{mt})$$ From which, by setting it equal to 0, we get the MLE of $\alpha$: $$\hat{\alpha}=\hat{\mu}-\hat{\beta}\hat{\mu}_{m}$$ where $\hat{\mu}=\frac1T\sum_{t=1}^{T} Z_t$ and $\hat{\mu}_m=\frac1T\sum_{t=1}^{T} Z_{mt}$. The authors claim that the variance of the MLE estimator of $\alpha$ is $$Var[\hat{\alpha}]=\frac1T\left[1+\frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]\Sigma$$ where $\hat{\sigma}_m^2=\frac1T\sum_{t=1}^{T} (Z_{mt}-\hat{\mu}_m)^2$, so that the GRS test is simply the Wald statistic $$J= \hat{\alpha}'[Var[\hat{\alpha}]]^{-1}\hat{\alpha}=T\left[1+\frac{\hat{\mu}_m^2}{\hat{\sigma}_m^2}\right]^{-1}\hat{\alpha}'\Sigma^{-1}\hat{\alpha}$$ for the null hypothesis that the alphas are jointly zero. I know that the variance of the estimates can be derived using the inverse of the Fisher information matrix. However, if I compute the derivative of (1), namely the second derivative of the log-likelihood w.r.t. $\alpha$, change sign and then take its expectation, I cannot obtain the expression of the variance claimed by the authors. Can you help me with this last step, please?
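One way to convince yourself that the claimed variance is right before chasing the Fisher-information algebra is a small Monte Carlo, conditioning on a fixed market path so that the sample moments in the formula are held constant (a numpy sketch; all parameter values are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, reps = 200, 2, 10000
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])

Zm = rng.normal(0.05, 0.10, size=T)        # one fixed market path Z_mt
X = np.column_stack([np.ones(T), Zm])      # regressors [1, Z_mt]
A = np.linalg.solve(X.T @ X, X.T)          # OLS weights; row 0 extracts alpha-hat

# With true alpha = 0, alpha-hat = (row 0 of A) applied to the residual draws,
# because A @ X = I kills the intercept and beta*Zm contributions exactly.
eps = rng.multivariate_normal(np.zeros(N), Sigma, size=(reps, T))
alpha_hat = np.einsum('t,rtn->rn', A[0], eps)

emp_cov = np.cov(alpha_hat.T)
mu_m = Zm.mean()
sig2_m = Zm.var()                          # MLE variance (divides by T)
theory = (1 + mu_m**2 / sig2_m) / T * Sigma

assert np.allclose(emp_cov, theory, rtol=0.15, atol=1e-5)
print("empirical and theoretical Var[alpha-hat] agree")
```

This matches the textbook formula because, asset by asset, the OLS intercept variance is $\sigma^2(1/T + \bar{Z}_m^2/\sum_t(Z_{mt}-\bar{Z}_m)^2)$, which rearranges to the quoted $\frac1T[1+\hat\mu_m^2/\hat\sigma_m^2]$ factor.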
Both panel data and mixed effects models deal with doubly indexed random variables $y_{ij}$. The first index is for the group, the second for individuals within the group. For panel data the second index is usually time, and it is assumed that we observe individuals over time. When time is the second index of a mixed effects model, the models are called longitudinal models. The mixed effects model is best understood in terms of 2-level regressions. (For ease of exposition assume only one explanatory variable.) The first-level regression is the following: $$y_{ij}=\alpha_i+x_{ij}\beta_i+\varepsilon_{ij}.$$ This is simply an individual regression for each group. The second-level regression tries to explain the variation in the regression coefficients: $$\alpha_i=\gamma_0+z_{i1}\gamma_1+u_i$$$$\beta_i=\delta_0+z_{i2}\delta_1+v_i$$ When you substitute the second pair of equations into the first one you get $$y_{ij}=\gamma_0+z_{i1}\gamma_1+x_{ij}\delta_0+x_{ij}z_{i2}\delta_1+u_i+x_{ij}v_i+\varepsilon_{ij}$$ The fixed effects are what is fixed, meaning $\gamma_0,\gamma_1,\delta_0,\delta_1$. The random effects are $u_i$ and $v_i$. Now for panel data the terminology changes, but you can still find common points. The panel data random effects model is the same as a mixed effects model with $$\alpha_i=\gamma_0+u_i$$$$\beta_i=\delta_0$$ with the model becoming $$y_{it}=\gamma_0+x_{it}\delta_0+u_i+\varepsilon_{it},$$ where $u_i$ are random effects. The most important difference between mixed effects models and panel data models is the treatment of the regressors $x_{ij}$. For mixed effects models they are non-random variables, whereas for panel data models it is always assumed that they are random. This becomes important when stating what the fixed effects model is for panel data. For the mixed effects model it is assumed that the random effects $u_i$ and $v_i$ are independent of $\varepsilon_{ij}$ and also of $x_{ij}$ and $z_i$, which is always true when $x_{ij}$ and $z_i$ are fixed.
If we allow for stochastic $x_{ij}$ this becomes important. So the random effects model for panel data assumes that $x_{it}$ is not correlated with $u_i$. But the fixed effects model, which has the same form $$y_{it}=\gamma_0+x_{it}\delta_0+u_i+\varepsilon_{it},$$ allows correlation between $x_{it}$ and $u_i$. The emphasis then is solely on consistently estimating $\delta_0$. This is done by subtracting the individual means: $$y_{it}-\bar{y}_{i.}=(x_{it}-\bar{x}_{i.})\delta_0+\varepsilon_{it}-\bar{\varepsilon}_{i.},$$ and using simple OLS on the resulting regression problem. Algebraically this coincides with the least squares dummy variable regression problem, where we assume that $u_i$ are fixed parameters. Hence the name fixed effects model. There is a lot of history behind the fixed effects and random effects terminology in panel data econometrics, which I have omitted. In my personal opinion these models are best explained in Wooldridge's "Econometric Analysis of Cross Section and Panel Data". As far as I know there is no such history behind mixed effects models, but on the other hand I come from an econometrics background, so I might be mistaken.
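As a quick illustration of why the within transformation works, here is a minimal simulation (the numbers and variable names are mine, not from the answer above): even when $x_{it}$ is correlated with $u_i$, demeaning by group recovers $\delta_0$, while pooled OLS is biased.

```python
import numpy as np

rng = np.random.default_rng(0)
G, T = 500, 10            # groups and time periods
delta0 = 2.0              # true slope

u = rng.normal(size=G)                             # group effects
x = 0.8 * u[:, None] + rng.normal(size=(G, T))     # x correlated with u
eps = rng.normal(size=(G, T))
y = 1.0 + delta0 * x + u[:, None] + eps

def slope(x, y):
    """OLS slope of y on x (with intercept), on flattened arrays."""
    x, y = x.ravel(), y.ravel()
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

pooled = slope(x, y)                       # biased: picks up u through x
xd = x - x.mean(axis=1, keepdims=True)     # within transformation
yd = y - y.mean(axis=1, keepdims=True)
within = slope(xd, yd)                     # consistent for delta0

print(pooled, within)
```

The pooled estimate is pushed away from $2$ because $\operatorname{cov}(x_{it}, u_i) \ne 0$; the within estimate is not, since demeaning wipes out $u_i$ entirely.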
Here is a report of the Ray Tracer written by myself, Christopher Chedeau. I've taken the file format and most of the examples from the Ray Tracer of our friends Maxime Mouial and Clément Bœsch. The source is available on Github. Check out the demo, or click on any of the images. Objects Our Ray Tracer supports 4 object types: Plane, Sphere, Cylinder and Cone. The core idea of the Ray Tracer is to send rays that will be reflected on items. Given a ray (origin and direction), we need to know if it intersects an object in the scene, and if it does, how to compute the reflected ray. Knowing that, we open up our high school math book and come up with all the following formulas. Legend: Ray Origin \(O\), Ray Direction \(D\), Intersection Position \(O'\), Intersection Normal \(N\) and Item Radius \(r\). Intersection Normal Plane \[t = \frac{O_z}{D_z}\] \[ N = \left\{ \begin{array}{l} x = 0 \\ y = 0 \\ z = -sign(D_z) \end{array} \right. \] Sphere \[ \begin{array}{l l l} & t^2 & (D \cdot D) \\ + & 2t & (O \cdot D) \\ + & & (O \cdot O) - r^2 \end{array} = 0\] \[ N = \left\{ \begin{array}{l} x = O'_x \\ y = O'_y \\ z = O'_z \end{array} \right. \] Cylinder \[ \begin{array}{l l l} & t^2 & (D_x D_x + D_y D_y) \\ + & 2t & (O_x D_x + O_y D_y) \\ + & & (O_x O_x + O_y O_y - r^2) \end{array} = 0\] \[ N = \left\{ \begin{array}{l} x = O'_x \\ y = O'_y \\ z = 0 \end{array} \right. \] Cone \[ \begin{array}{l l l} & t^2 & (D_x D_x + D_y D_y - r^2 D_z D_z) \\ + & 2t & (O_x D_x + O_y D_y - r^2 O_z D_z) \\ + & & (O_x O_x + O_y O_y - r^2 O_z O_z) \end{array} = 0\] \[ N = \left\{ \begin{array}{l} x = O'_x \\ y = O'_y \\ z = - O'_z \, r^2 \end{array} \right.
\] In order to solve the equation \(at^2 + bt + c = 0\), we use \[\Delta = b^2 - 4ac \]\[ \begin{array}{c c c} \Delta \geq 0 & t_1 = \frac{-b - \sqrt{\Delta}}{2a} & t_2 = \frac{-b + \sqrt{\Delta}}{2a} \end{array} \] And here is the formula for the reflected ray: \[ \left\{ \begin{array}{l} O' = O + tD + \varepsilon D' \\ D' = D - 2 (D \cdot N) * N \end{array} \right. \] In order to fight numerical precision errors, we move the origin of the reflected ray a little bit in the direction of the reflected ray (\(\varepsilon D'\)). This avoids falsely detecting a collision with the current object. Coordinates, Groups and Rotations We want to move and rotate objects. In order to do that, we compute a transformation matrix (and its inverse) for each object in the scene using the following code: \[ T = \begin{array}{l} (Identity * Translate_g * RotateX_g * RotateY_g * RotateZ_g) * \\ (Identity * Translate_i * RotateX_i * RotateY_i * RotateZ_i) \end{array} \]\[ I = T^{-1} \] \[Translate(x, y, z) = \left(\begin{array}{c c c c} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{array}\right)\] \[RotateX(\alpha) = \left(\begin{array}{c c c c} 1 & 0 & 0 & 0 \\ 0 & cos(\alpha) & -sin(\alpha) & 0 \\ 0 & sin(\alpha) & cos(\alpha) & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)\] \[RotateY(\alpha) = \left(\begin{array}{c c c c} cos(\alpha) & 0 & sin(\alpha) & 0 \\ 0 & 1 & 0 & 0 \\ -sin(\alpha) & 0 & cos(\alpha) & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)\] \[RotateZ(\alpha) = \left(\begin{array}{c c c c} cos(\alpha) & -sin(\alpha) & 0 & 0 \\ sin(\alpha) & cos(\alpha) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)\] We have written the intersection and normal calculations in the object's coordinate system instead of the world's coordinate system. It makes them easier to write. We use the transformation matrix to do object -> world and the inverse matrix to do world -> object.
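To make the quadratic and the reflection formula concrete, here is a small Python sketch (my own illustration, not the project's code) of the ray-sphere intersection and of $D' = D - 2(D \cdot N)N$:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ray_sphere(O, D, r):
    """Smallest t >= 0 with |O + t*D| = r, or None if the ray misses."""
    a = dot(D, D)
    b = 2.0 * dot(O, D)
    c = dot(O, O) - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    t1 = (-b - math.sqrt(disc)) / (2.0 * a)
    t2 = (-b + math.sqrt(disc)) / (2.0 * a)
    return t1 if t1 >= 0 else (t2 if t2 >= 0 else None)

def reflect(D, N):
    """Reflected direction D' = D - 2 (D . N) N."""
    k = 2.0 * dot(D, N)
    return tuple(d - k * n for d, n in zip(D, N))

# Ray from (0, 0, -5) toward +z, unit sphere at the origin:
O, D = (0.0, 0.0, -5.0), (0.0, 0.0, 1.0)
t = ray_sphere(O, D, 1.0)                       # hits at t = 4
hit = tuple(o + t * d for o, d in zip(O, D))    # (0, 0, -1)
N = hit                                         # sphere normal is the hit point
print(t, reflect(D, N))                         # 4.0 (0.0, 0.0, -1.0)
```

The ray hits the front of the sphere and bounces straight back, as expected for a head-on hit.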
\[ \left\{\begin{array}{l} O_{world} = T * O_{object} \\ D_{world} = (T * D_{object}) - (T * 0_4) \end{array}\right. \] \[ \left\{\begin{array}{l} O_{object} = I * O_{world} \\ D_{object} = (I * D_{world}) - (I * 0_4) \end{array}\right. \] \[0_4 = \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right) \] Bounding Box The previous equations give us objects with infinite dimensions (except for the sphere), whereas objects in real life have finite dimensions. To simulate this, it is possible to provide two points that will form a bounding box around the object. On the intersection test, we are going to use the nearest point that is inside the bounding box. This gives us the ability to make various objects such as mirrors, table surfaces and legs, light bubbles and even a Pokeball! Light An object is composed of an Intensity \(I_o\), a Color \(C_o\) and a Brightness \(B_o\). Each light has a Color \(C_l\) and there is an ambient color \(C_a\). Using all those properties, we can calculate the color of a point using the following formula: \[ I_o * (C_o + B_o) * \left(C_a + \sum_{l}{(N \cdot D) * C_l}\right) \] Only the lights visible from the intersection point are used in the sum. In order to check this, we send a shadow ray from the intersection point to the light and see if it intersects any object. The following images are examples to demonstrate the lights. Textures In order to put a texture on an object, we need to map a point \((x, y, z)\) in the object's coordinate system into a point \((x, y)\) in the texture's coordinate system. For planes, it is straightforward: we just drop the \(z\) coordinate (which is equal to zero anyway). For spheres, cylinders and cones it is a bit more involved. Here is the formula, where \(w\) and \(h\) are the width and height of the texture.
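The distinction between transforming a point and transforming a direction (the $- T * 0_4$ term above) can be sketched in a few lines of Python (illustrative code, not the project's):

```python
def mat_vec(M, v):
    """4x4 matrix times homogeneous 4-vector."""
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(4))

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

ZERO4 = (0, 0, 0, 1)  # the 0_4 vector: the origin in homogeneous coordinates

def xform_point(T, p):
    return mat_vec(T, (*p, 1))[:3]

def xform_dir(T, d):
    # Directions must ignore translation: subtract the transformed origin.
    td = mat_vec(T, (*d, 1))
    t0 = mat_vec(T, ZERO4)
    return tuple(a - b for a, b in zip(td, t0))[:3]

T = translate(5, 0, 0)
print(xform_point(T, (1, 2, 3)))  # (6, 2, 3): points move
print(xform_dir(T, (0, 0, 1)))    # (0, 0, 1): directions don't
```

Subtracting $T * 0_4$ cancels the translation column, which is exactly why the report applies it only to directions.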
\[ \begin{array}{c c} \phi = acos(\frac{O'_y}{r}) & \theta = \frac{acos\left(\frac{O'_x}{r * sin(\phi)}\right)}{2\pi} \end{array} \]\[ \begin{array}{c c} x = w * \left\{\begin{array}{l l} \theta & \text{if } O'_x < 0 \\ 1 - \theta & \text{else}\end{array}\right. & y = h * \frac{\phi}{\pi} \end{array} \] Once we have the texture coordinates, we can easily create a checkerboard or apply a texture. We added options such as scaling and repeat in order to control how the texture is placed. We also support an alpha mask in order to make a color from a texture transparent. Progressive Rendering Ray tracing is a slow technique. At first, I generated pixels line by line, but I found out that the first few lines do not hold much information. Instead, what we want to do is to get a fast overview of the scene and then improve on the details. In order to do that, during the first iteration we only generate 1 pixel for each 32x32 square. Then we generate 1 pixel for each 16x16 square and so on. We generate the top-left pixel and fill all the unknown pixels with it. In order not to regenerate pixels we have already seen, I came up with a condition to know if a pixel has already been generated. \(size\) is the current square size (32, 16, ...). \[\left\{\begin{array}{l} x \equiv 0 \pmod{size * 2}\\ y \equiv 0 \pmod{size * 2} \end{array}\right. \] Supersampling Aliasing is a problem with Ray Tracing, and we solve this issue using supersampling. Basically, we send more than one ray for each pixel. We have to choose representative points from a square. There are multiple strategies: in the middle, in a grid or at random. Check the result of various combinations in the following image: Perlin Noise We can generate random textures using Perlin Noise. We can control several parameters such as \(octaves\), the number of basic noises, the initial scale \(f\) and the factor of contribution \(p\) of the high frequency noises.
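Here is the sphere texture-mapping formula above transcribed directly into Python (a sketch; the function and variable names are mine):

```python
import math

def sphere_uv(hit, r, w, h):
    """Map a point on a radius-r sphere to texture coordinates, per the
    formulas phi = acos(y/r) and theta = acos(x / (r sin phi)) / 2pi."""
    x, y, _ = hit
    phi = math.acos(y / r)
    theta = math.acos(x / (r * math.sin(phi))) / (2 * math.pi)
    u = w * (theta if x < 0 else 1 - theta)
    v = h * phi / math.pi
    return u, v

# A point on the equator at x = r maps to the texture edge at half height:
print(sphere_uv((1.0, 0.0, 0.0), 1.0, 1.0, 1.0))  # approximately (1.0, 0.5)
```

Note the formula is singular at the poles ($\sin\phi = 0$), which a real implementation has to special-case.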
\[ noise(x, y, z) = \sum_{i = 0}^{octaves}{p^i * PerlinNoise(\frac{2^i}{f}x, \frac{2^i}{f}y, \frac{2^i}{f}z)} \] \[noise\] \[noise * 20 - \lfloor noise * 20 \rfloor\] \[\frac{cos(noise) + 1}{2}\] As seen in the examples, we can apply additional functions after the noise has been generated to create interesting effects. Portal Last but not least, Portals, from the game of the same name. They are easy to reproduce in a Ray Tracer and yet I haven't seen any done before. If a ray enters portal A, it will go out from portal B. It is trivial to implement: it is just a coordinate system transformation. Like we did for the world and object transformations, we do it between A and B using their transformation matrices. \[ \left\{\begin{array}{l} O_{a}' = T * O_{b} \\ D_{a}' = (T * D_{b}) - (T * 0_4) \end{array}\right. \] \[ \left\{\begin{array}{l} O_{b}' = T * O_{a} \\ D_{b}' = (T * D_{a}) - (T * 0_4) \end{array}\right. \] Scene Editor In order to create scenes more easily, we have defined a scene description language. We developed a basic CodeMirror syntax highlighting script. Just write your scene down and press Ray Trace 🙂
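The octave-summation formula can be sketched as follows; note I substitute a toy hash-style noise for real Perlin noise (gradient tables omitted), so only the octave/persistence structure comes from the formula above:

```python
import math

def toy_noise(x, y, z):
    """Stand-in for PerlinNoise: a deterministic value in [-1, 1]."""
    return math.sin(12.9898 * x + 78.233 * y + 37.719 * z)

def fractal_noise(x, y, z, octaves=4, f=1.0, p=0.5, base=toy_noise):
    """noise = sum_{i=0}^{octaves} p^i * base(2^i/f * x, 2^i/f * y, 2^i/f * z)."""
    total = 0.0
    for i in range(octaves + 1):
        s = 2.0 ** i / f
        total += p ** i * base(s * x, s * y, s * z)
    return total

# The amplitude is bounded by the geometric series sum_i p^i:
bound = sum(0.5 ** i for i in range(5))
print(abs(fractal_noise(0.3, 0.7, 0.1)) <= bound)  # True
```

Each octave doubles the frequency (via $2^i/f$) and shrinks the contribution (via $p^i$), which is what makes the high-frequency detail subtle rather than dominant.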
This might look like a copy of another question, but what I'm about to propose here is new. There's this question: Find the least positive integral value of $n$ for which $(\frac{1+i}{1-i})^n = 1$. While solving, if we multiply what is within the bracket by the conjugate of the denominator and divide by the same thing, we get $i$ in the bracket. That means the question boils down to $i^n = 1$. We know that the least positive value $n$ can have is $4$, for $i^n$ to be $1$. Done. Now, IF I were to solve it by taking the mod on both sides of the given equation, I would get $\Big(\frac{|1+i|}{|1-i|}\Big)^n = |1|$ $\Big(\frac{\sqrt{2}}{\sqrt{2}}\Big)^n = 1$ $1^n = 1$ NOTE that the least positive value of $n$ changes from $4$ to $1$. Why is it so? I read an answer on Stack Exchange that it is valid to do anything to the equation as long as it maintains the equality. I didn't destroy the equality, so why does the answer vary? Is there any restriction as to where to use the "taking-mod-both-sides" thing?
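A quick numerical check makes the issue visible: taking the modulus throws away the argument of the complex number, so $|z|^n = 1$ is only a necessary condition for $z^n = 1$, not a sufficient one (sketch using Python's built-in complex type):

```python
z = (1 + 1j) / (1 - 1j)
print(z)            # i: multiplying by the conjugate of the denominator gives i

# |z| = 1, so |z**n| = 1 for EVERY n >= 1 -- the mod equation holds for all n.
print(abs(z))       # 1.0

# z**n itself is 1 only when n is a multiple of 4:
least = next(n for n in range(1, 10) if abs(z ** n - 1) < 1e-12)
print(least)        # 4
```

Taking the modulus is not a reversible step: many different complex numbers share the same modulus, so solutions of the modulus equation need not solve the original one.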
So I'm following Szabo's book "An Introduction to String Theory and D-brane Dynamics" (2nd ed, 2011); still on the canonical treatment in chapter 3. After doing a mode expansion, we get (up to a constant) a set of nice ladder operators \begin{equation}[\alpha_m^\mu, \alpha_n^\nu] = m\delta_{m+n}\eta^{\mu\nu}\end{equation} where $\delta_{m+n}$ is unity if $m=-n$, zero otherwise; $\eta$ is the Minkowski metric in $D$ dimensions, $m$ and $n$ are integer labels and the $\alpha$s are Fourier coefficients. I'm omitting the LM/RM sets and concentrating on the open string. We use these to define these composite operators: \begin{equation} L_n = \frac{1}{2}\sum\limits_{m=-\infty}^\infty{\alpha_{n-m}\cdot\alpha_m}\end{equation} which we want to use to generate conformal transformations. Cool. So we need some Witt-like algebra, and because we're quantising we're expecting some weirdness, and it turns out you get the Virasoro algebra. But I just can't get the central term. For a start, it doesn't look right: each pair of $\alpha$s can give me a factor of $m$, so in $[L_m,L_n]$ we have four $\alpha$s and hence we'd expect to be able to extract some central term of order $n^2$. But the Virasoro central term is $n(n^2-1)$, so where does the extra $n$ come from? There are no conveniently-placed sums that might give rise to this. Every time I attempt to plug through the algebra I end up with something silly, for example $\sum\limits_{p=-\infty}^\infty p$. So no central term at all! So the root of my question is: where does the extra $n$ come from? If I can understand that, hopefully I'll be able to do the rest of the derivation myself (don't take away my fun :)).
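A hint at where the extra $n$ comes from (a sketch of the standard normal-ordering argument, not from the book): the divergent sums like $\sum_p p$ appear precisely because the $L_n$ must be normal ordered, $L_n = \frac{1}{2}\sum_m :\alpha_{n-m}\cdot\alpha_m:$. Once that is done, the central term of $[L_n, L_{-n}]$ comes only from the finitely many terms in which reordering the $\alpha$s produces a $c$-number, each contributing $m(n-m)$ per spacetime dimension:

```latex
\frac{D}{2}\sum_{m=1}^{n-1} m(n-m)
  = \frac{D}{2}\left[n \cdot \frac{n(n-1)}{2} - \frac{(n-1)n(2n-1)}{6}\right]
  = \frac{D}{12}\, n(n^2 - 1).
```

So the cubic growth is not "order $n^2$ from two pairs of $\alpha$s": the finite range of the sum (only $m = 1, \dots, n-1$ contributes) supplies the extra power of $n$.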
I decided to write this article based on a question I saw on stackoverflow.com. Here is the link to the question. The questioner tries to write an algorithm to identify whether a "word" is a palindrome or not. The algorithm is written in the Java programming language. I do not want to analyze the algorithm proposed by the questioner; instead, I want to analyze the algorithm of the most voted answer. The latter has 73 votes versus the 65 votes of the accepted answer (as of today's date, June 6, 2016). Here is the link to the algorithm that I want to analyze. Here the code: The algorithm works correctly and the logic is very intuitive. It basically compares the input word for equality against its reverse. The problem with this algorithm is that it is very inefficient compared to the optimal algorithm. I made a comment to the author of the algorithm on Stackoverflow: "Compare the complexity of your algorithm with respect to others." The user @aioobe replied: "I think it's the same complexity as the other solutions, no?" He is right. I was not very specific in my comment. @aioobe surely assumed that I was referring to the asymptotic computational complexity, and he is right, because usually, when we say "complexity" without specifying anything else, it is assumed that we are referring to the asymptotic computational complexity. Asymptotic Computational Complexity Asymptotic computational complexity describes how algorithms behave in time and space as the input grows. It is usually associated with the O-notation introduced by Paul Bachmann in his book Die Analytische Zahlentheorie in 1894. [1] This way we can measure the scalability of algorithms without relying on machine architecture, CPU speed, the programming language in which the algorithm is implemented, etc. While it is very useful in many circumstances, this method of measurement is not exact, but approximate. For more information on O-notation and asymptotic complexity, see [2].
Concrete Computational Complexity Another way to measure algorithms is to use not approximations but the concrete number of operations, as a function of the input to the algorithm. For example, imagine the algorithm to find the minimum (or maximum) of \( n \) elements. We can say that the algorithm has linear complexity (in time), or that the algorithm is \( O(n) \). But concretely, the algorithm needs \( n - 1 \) comparisons to find the minimum element. Back to Palindromes Again, the code of the algorithm. I call the previous code Algorithm I. ("I" as in inefficient.) We could say that Algorithm I is \( O(n) \), but how can we guarantee it without knowing the complexity of the components on which the algorithm is based? To do this, we must review the Java documentation. For example, consider the String.equals() function. [3] As you may have noticed on the previous page, the Java documentation does not include the time and space complexity of its algorithms and data structures. I consider this a failure, because it hinders us from specifying the complexity of our algorithms, at least of the algorithms that are based on classes provided by Java. To continue trying to specify the complexity of Algorithm I, we have no choice: we have to review the source code of the Java classes. Consider the source code of String.equals() (click on the link and find the equals function). As you can verify, the code of String.equals() has linear time complexity, \( O(n) \). Specifically, String.equals() does \( n \) (in)equality comparisons. (Beyond the noise imposed by Java, such as casts, instanceof, etc.) The space complexity of String.equals() is constant, that is, \( O(1) \). This means that it uses a constant amount of memory beyond the input of the algorithm. Determining the complexity We will determine the complexity of Algorithm I by analyzing each component. String.equals(). Linear time, \( n \) inequality comparisons. Constant space. StringBuilder.reverse().
Linear time, \( 2 \left\lfloor\dfrac{n}{2}\right\rfloor \) assignments. Constant space. StringBuilder.toString(). Linear time, \( n \) assignments. Linear space, \( n \) elements. StringBuilder constructor. Linear time, \( n + 16 \) assignments. Linear space, \( n + 16 \) elements. So the overall complexity of Algorithm I is: Time: In the worst case, when the word is a palindrome, this algorithm takes \( n \) inequality comparisons and \( 2n + 2 \left\lfloor\dfrac{n}{2}\right\rfloor + 16 \) assignments. Space: \( 2n + 16 \) elements. As you can see, this algorithm is very inefficient: it uses a lot of memory (unnecessarily) and, as discussed below, performs over \( 8\times \) the operations of the optimal algorithm. Improving the algorithm (naïve version) Call the following code Algorithm N (N as in naïve). Algorithm N presents a substantial improvement over Algorithm I. Time: In the worst case, when the word is a palindrome, this algorithm takes \( n \) inequality comparisons. Space: Constant. No extra memory usage, and it runs approximately \( \dfrac{1}{4} \) of the operations of Algorithm I. While the code is a bit more complex now, it is still easily understandable; the increase in code complexity is negligible compared to the improvement in efficiency. Optimal Algorithm As we can see in the picture below, there is no need to do \( n \) comparisons to determine whether a word is a palindrome. It is enough to do just (about) half the comparisons: If \( n \) is even, it performs \( \dfrac{n}{2} \) comparisons. If \( n \) is odd, it performs \( \dfrac{n - 1}{2} \) comparisons. The following code will be called Algorithm O (O as in optimal). Time: In the worst case, when the word is a palindrome, this algorithm performs \( \left\lfloor\dfrac{n}{2}\right\rfloor \) inequality comparisons. Space: Constant. Algorithm I, detailed analysis As explained earlier, Algorithm I is much more inefficient than Algorithms N and O.
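The article's Java snippets are not reproduced in this text; for reference, here is a Python sketch of the three algorithms as I understand them from the descriptions (Algorithm I: compare against the reverse; Algorithm N: compare all $n$ mirrored pairs; Algorithm O: compare only the first $\lfloor n/2 \rfloor$ pairs):

```python
def is_palindrome_i(s):
    """Algorithm I: build the reverse, then compare for equality."""
    return s == s[::-1]   # the Java original: sb.reverse().toString().equals(s)

def is_palindrome_n(s):
    """Algorithm N: compare s[i] with s[n-1-i] for every i (n comparisons)."""
    n = len(s)
    return all(s[i] == s[n - 1 - i] for i in range(n))

def is_palindrome_o(s):
    """Algorithm O: only floor(n/2) comparisons are needed."""
    n = len(s)
    return all(s[i] == s[n - 1 - i] for i in range(n // 2))

for w in ["evitative", "palindrome", "", "ab", "aba"]:
    assert is_palindrome_i(w) == is_palindrome_n(w) == is_palindrome_o(w)
print(is_palindrome_o("evitative"))  # True
```

All three agree on every input; they differ only in the number of comparisons and in whether they allocate a second string.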
But besides the complexity analysis, we have to consider other issues that affect the efficiency of Algorithm I. Each component used in Algorithm I comes with certain performance penalties that go unnoticed. Let us analyze them in detail. Construction of StringBuilder Dynamic memory allocation of the StringBuilder object (heap, free store, or whatever you call it). Zero-initialization of the members of StringBuilder. [4] Dynamic memory allocation of the StringBuilder's internal array. According to the documentation, the size of the internal array is equal to 16 characters plus the size of the original String. [5] Zero-initialization of the array's members: the length and the array itself. [4] Copy of the bytes of the original String to the StringBuilder's internal array. reverse() (StringBuilder) Mentioned above. This function does not use additional memory and it is efficient at runtime. toString() (StringBuilder) Dynamic memory allocation of the String object (heap, free store, or whatever you call it). Zero-initialization of the members of String. [4] Dynamic memory allocation of the String's internal array. Zero-initialization of the array's members: the length and the array itself. [4] Copy of the bytes from the StringBuilder's internal array to the String's internal array. equals() (String) Mentioned above. This function does not use additional memory and it is efficient at runtime. Garbage Collection The GC must release any additional (unnecessary) memory that was used, and obviously this operation is not "free." Data Cache Misses Another drawback associated with unnecessary memory consumption is the probability that our objects are too large to fit into the cache, causing cache misses that impact runtime performance. Another factor that increases the probability of cache misses is the indirections (references, pointers) to distant memory locations. Memory footprint To analyze the memory consumption of Algorithm I, we will use a concrete example.
We will use a 9-char palindrome: the word is "evitative". While the memory consumption of Algorithm I depends on the Virtual Machine and the Runtime Environment that we are using, in this case we will use a specific platform that is detailed here. Basically, in our example two objects are created: one StringBuilder and one String. StringBuilder: The StringBuilder objects have the following memory representation. A StringBuilder object consists of two parts (not necessarily contiguous in memory): First part: the usage size of the array (length() of StringBuilder) and a reference to the array where the data resides. Second part: the array size (capacity() of StringBuilder) and the array. The array size is 16 characters plus the number of characters of the original String ("evitative") [5]. In the picture, those 16 extra characters are shown in red, to emphasize that it is wasted memory. In Java, characters have a size of 2 bytes. [6] So in our example, the array will have a size of \( 2 \cdot (9 + 16) = 50 \) bytes. In Java all objects have a header (if you know of any implementation that does not have one, let me know); on our platform the header is 12 bytes. On other popular platforms it can be 8 or 16 bytes, see here [7]. Another thing to consider is the padding, which is basically memory space that is added to meet the alignment requirement. In our case, objects must be placed at memory addresses that are multiples of 8. In summary, our StringBuilder object has the following memory size (in bytes): First part: \( 24 \) Second part: \( 16 + 2n + 32 + padding \) [8] Total: \( 8(\left\lceil\dfrac{n}{4}\right\rceil + 9) \) bytes String: The String objects have the following memory representation. A String object consists of two parts (not necessarily contiguous in memory): First part: a reference to the array (where the data resides) and a hash. Second part: the size of the array (length() of String) and the array. The String object described here belongs to Java 8.
In older versions of Java, the String class had more fields, and therefore its memory size was bigger. [7] In summary, our String object has the following memory size (in bytes): First part: \( 24 \) Second part: \( 16 + 2n + padding \) [8] Total: \( 8(\left\lceil\dfrac{n}{4}\right\rceil + 5) \) bytes Total memory footprint: The total memory used by our StringBuilder and String objects is \( 16(\left\lceil\dfrac{n}{4}\right\rceil + 7) \) bytes. In our example \( n = 9 \), so the memory used is \( 16(\left\lceil\dfrac{9}{4}\right\rceil + 7) \) bytes, which represents 160 bytes of extra memory, only to determine whether "evitative" is a palindrome or not. Remember, these 160 bytes are totally unnecessary memory consumption. Benchmarks I have run some benchmarks where it can be seen that Algorithm I is about 8.5x slower than Algorithms O and N. I'm leaving out some other benchmarks that show a \( \approx 500x \) difference against Algorithm I. You can see the source code of the benchmarks in my GitHub account. The explanation of the benchmarks and the code will be left for a future article. Algorithm N versus Algorithm O While we saw previously that Algorithm O performs half the operations of Algorithm N in the worst case, the runtime of both algorithms is affected by several factors, including: the length of the word, whether the word is a palindrome or not, and other factors specific to the platform. In many cases Algorithm N is faster than Algorithm O. We will discuss this in a future article. Ultimate solution? I believe that none of the three algorithms presented in this article represents an ultimate solution, and none of them is a Component. A component should be something that can be reused, and in many cases the algorithms described in this article are not suitable for re-use. In addition, the three algorithms only accept a String as input. A palindrome is not just a sequence of characters that reads the same forwards and backwards.
A palindrome can be found in music, in numbers and also in nature. I am no expert in genetics, but I know that palindromes can be found in DNA strands. Palindromes in DNA are so important that some experts consider them to be responsible for preventing the extinction of the human race (and other species too). Without palindromes in DNA, incorrigible and irreversible genetic mutations would occur, causing the extinction of the species, with the passing of generations. In a future article we’ll talk about this topic. Pending issues Replicate the Algorithm I in C# and analyze its efficiency in execution time and memory consumption. Conclusions The Algorithm I is an excellent example of brevity and readability, it makes good use of abstractions available to achieve this goal. The problem with Algorithm I is that it is a terrible example of efficiency. Abstractions facilitate our work, let us concentrate on the problem to be solved without having to think about the context, in this case the computer. While abstractions are good, they have a big disadvantage. They make us forget how the machine works. Modern programmers often abuse of abstractions and they do not have knowledge about very important things that affect the behavior of our programs, such as memory, cache, load/store buffers, branch prediction, pipelines, memory models, vector instructions (SIMD), etc … As programmers, we must know in detail the computer, the programming language and the complexity of the components we use. Unfortunately modern programmers are just focused on things like testing, agile, metaprogramming and frameworks/libraries whose lifetime is longer than two years. I do not want to forget to mention that the best of all abstractions was discovered by Leibniz in 1679. That abstraction is what allows us to model the real world in a computer. That abstraction is the Bit. 
Finally, it is noteworthy that Algorithm I could be useful as a postcondition. Acknowledgements I want to thank Mario dal Lago and Javier Velilla for reviewing the article and suggesting corrections. And finally, since we can say that we owe our lives to palindromes, a special thanks to them. :) Notes Byte = 8 bits. There are architectures where 1 byte is not necessarily equivalent to 8 bits; these architectures are unusual today. There is no standard that specifies the size of a byte, but we can say that the de facto standard is that 1 byte = 8 bits, as is most common in modern computer architectures. Platform used for the analysis in this article: CPU RAM: 8192 MBytes, DDR3 Operating System: Windows 10 Home 64-bit Java For analysis on other platforms, please refer to [7]. References [1] Zahlen means "numbers" in German. Hence the set of integers is identified with the letter \( \mathbb{Z} \). [2] The Art of Computer Programming, Volume 1, by Donald E. Knuth [3rd Edition, page 107]. [3] Why do I call String.equals() a "function" and not a "method"? Here is the answer. [4] In Java the integral data types and arrays of integral data types are initialized to 0, guaranteed by the language specification: 4.12.5 Initial Values of Variables. [5] Java String class. [6] Java Primitive Data Types. [7] Analysis of memory consumption on other platforms. [8] The formulas for calculating the padding of the internal arrays of StringBuilder and String (in bytes): StringBuilder internal array padding = \( 8\left\lceil\dfrac{2n + 48}{8}\right\rceil - (2n + 48) \) String internal array padding = \( 8\left\lceil\dfrac{2n + 16}{8}\right\rceil - (2n + 16) \) (These formulas are specific to the platform described in the article.) The general formula for the padding of an object of \( s \) bytes follows the same pattern: \( 8\left\lceil\dfrac{s}{8}\right\rceil - s \).
Here's the work I've done to find the Maclaurin series. However, I'm having a very hard time finding a representation for the series using a sum, with $n$ for the $n$th term and $x$ from $g(x)$. One way to get the "${+}{+}{-}{-}$" pattern is with $(-1)^{n(n-1)/2}$. To alternate between $\sin(2)$ and $\cos(2)$: $$\sin(2)\frac{(-1)^n+1}{2}+\cos(2)\frac{(-1)^{n-1}+1}{2}$$ So you could write $$\sum_{n=0}^{\infty}(-1)^{n(n-1)/2}\left(\sin(2)\frac{(-1)^n+1}{2}+\cos(2)\frac{(-1)^{n-1}+1}{2}\right)\frac{x^n}{n!}$$
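These coefficients are exactly the derivatives of $\sin(x+2)$ at $x=0$, so a quick numerical sanity check of the closed form is possible (my own check, assuming $g(x) = \sin(x+2)$, which matches the $\sin(2), \cos(2), -\sin(2), -\cos(2), \dots$ pattern):

```python
import math

def term(n, x):
    """nth term of the series written above."""
    sign = (-1) ** (n * (n - 1) // 2)
    coeff = (math.sin(2) * ((-1) ** n + 1) / 2
             + math.cos(2) * ((-1) ** (n - 1) + 1) / 2)
    return sign * coeff * x ** n / math.factorial(n)

x = 0.5
partial = sum(term(n, x) for n in range(25))
print(partial, math.sin(x + 2))  # the two values agree to machine precision
```

Even terms pick up $\pm\sin(2)$, odd terms $\pm\cos(2)$, and the $(-1)^{n(n-1)/2}$ factor supplies the $+,+,-,-$ sign cycle.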
I am trying to understand Rømer's determination of the speed of light ($c$). The geometry of the situation is shown in the image below. The determination involves measuring apparent fluctuations in the orbital period of Io (Jupiter's moon). The Earth starts from point A. $r(t)$ is the distance between the Earth and Jupiter. $r_E$ is the radius of the (assumed) circular orbit of the Earth around the Sun, while $r_0$ is the same for Jupiter. $T$ is the period of the Earth's orbit. Under the assumption that the Jupiter-Io system is stationary, $r(t)$ can be expressed as $$r(t) = \sqrt{r_E^2 + r_0^2 -2r_0 r_E \cos \left(\frac{2\pi t}{T}\right)}$$ If we further assume that the period of Io's orbit around Jupiter, $\Delta t$, is much smaller than $T$, then it can be shown that the distance the Earth moves away, $\Delta r$, while Io completes one orbit is: $$\Delta r = \frac{2\pi r_E \Delta t}{T} \sin\left( \frac{2\pi t}{T} \right)$$ The point where I am stuck is: why is there an apparent fluctuation in Io's period as observed from the Earth? And how can we derive the observed delay using these expressions?
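For the derivation step, differentiating $r(t)$ and using $r_0 \gg r_E$ (so $r \approx r_0$) gives the quoted $\Delta r$ (my own sketch of the standard argument):

```latex
\frac{dr}{dt}
  = \frac{r_0 r_E \,\frac{2\pi}{T}\sin\!\left(\frac{2\pi t}{T}\right)}{r(t)}
  \approx \frac{2\pi r_E}{T}\sin\!\left(\frac{2\pi t}{T}\right)
  \quad (r \approx r_0),
\qquad
\Delta r \approx \frac{dr}{dt}\,\Delta t
  = \frac{2\pi r_E \Delta t}{T}\sin\!\left(\frac{2\pi t}{T}\right).
```

The apparent fluctuation then comes from light-travel time: the observed period is $\Delta t_{\text{obs}} = \Delta t + \Delta r/c$. While the Earth recedes from Jupiter ($\sin > 0$), each eclipse of Io arrives slightly late; while it approaches, slightly early. Accumulated over half an Earth orbit the delays add up to $2 r_E / c$ (the light time across the Earth's orbital diameter), which is the quantity Rømer effectively measured.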
Discussion on Bangladesh Mathematical Olympiad (BdMO) National Problem 10: Consider a function $f: \mathbb{N}_0\to \mathbb{N}_0$ satisfying the relations: $f(0)=0$ $f(np)=f(n)$ $f(n)=n+f\left ( \left \lfloor \dfrac{n}{p} \right \rfloor \right)$ when $n$ is not divisible by $p$. Here $p > 1$ is a positive integer, $\mathbb{N}_0$ is the set of all nonnegative integers and $\lfloor x \rfloor$ is the largest integer smaller than or equal to $x$. Let $a_k$ be the maximum value of $f(n)$ for $0\leq n \leq p^k$. Find $a_k$.
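A brute-force check of the recursion is easy to write; it suggests (this is my own experiment, not part of the thread) that the maximum is attained at $n = p^k - 1$, whose base-$p$ digits are all $p-1$, giving $a_k = \sum_{i=1}^{k}(p^i - 1) = \frac{p^{k+1}-p}{p-1} - k$:

```python
from functools import lru_cache

def make_f(p):
    @lru_cache(maxsize=None)
    def f(n):
        if n == 0:
            return 0
        if n % p == 0:            # f(np) = f(n)
            return f(n // p)
        return n + f(n // p)      # f(n) = n + f(floor(n/p))
    return f

def a(k, p):
    f = make_f(p)
    return max(f(n) for n in range(p ** k + 1))

for p in (2, 3, 5):
    for k in range(1, 5):
        conjecture = (p ** (k + 1) - p) // (p - 1) - k
        assert a(k, p) == conjecture
print(a(3, 2))  # 11, attained at n = 7 = 111 in base 2
```

The intuition: $f(n)$ sums the quotients $\lfloor n/p^i \rfloor$ over positions with a nonzero digit, and $n = p^k - 1$ makes every quotient as large as possible with no zero digits to skip.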
Given that $f\colon [-1,1] \to \mathbb{R}$ is a continuous function such that $ \int_{-1}^{1} f(t) \ dt =1$, how do I evaluate the limit of this integral: $$\lim_{n \to \infty} \int_{-1}^{1} f(t) \cos^{2}{nt} \,dt$$ What I did was to write $\cos^{2}{nt} = \frac{1+\cos{2nt}}{2}$ and substitute it in the integral so that I can make use of the given hypothesis $\int_{-1}^{1} f(t) \ dt =1$. So the integral becomes \begin{align*} \int_{-1}^{1} f(t)\cos^{2}{nt} \ dt &= \int_{-1}^{1} f(t) \biggl[\frac{1+\cos{2nt}}{2}\biggr] \ dt \\ &= \frac{1}{2}\int_{-1}^{1}f(t) \ dt + \frac{1}{2}\int_{-1}^{1} f(t)\cos{2nt} \ dt \end{align*} But I don't really know how to evaluate the second integral, and I also don't see why the integral condition on $f$ has been assumed. Moreover, without assuming that condition on $f$, is it possible to evaluate this limit? If yes, then what would the answer be?
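For intuition, here is a numerical sketch of mine, with one particular $f$ chosen so that $\int_{-1}^{1} f = 1$: as $n$ grows, $\cos 2nt$ oscillates rapidly and the second integral dies out (this is the Riemann-Lebesgue phenomenon), so the value settles at $1/2$.

```python
import numpy as np

# Numerical sketch: f(t) = 1.5 t^2 is one continuous function with
# integral 1 over [-1, 1]. The integral of f(t) cos^2(nt) should
# approach 1/2 as n grows, since the cos(2nt) term averages out.
t = np.linspace(-1.0, 1.0, 400001)
dt = t[1] - t[0]
f = 1.5 * t**2

def I(n):
    # simple Riemann sum of f(t) cos^2(nt) over [-1, 1]
    return np.sum(f * np.cos(n * t)**2) * dt

print(I(5), I(500))  # the second value is much closer to 0.5
```

The experiment also answers the last question: without the normalization, the limit is $\tfrac12\int_{-1}^{1} f(t)\,dt$, whatever that integral happens to be.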
Don't be intimidated: semiclassical quantization is very simple, and it can be straightforwardly understood from a few examples which lead to the general case.

Consider a particle in a box. The classical motions are reflections off the walls. These make a box in phase space, as the particle goes left, hits the wall, goes right, and hits the other wall. If the particle has momentum $p$ and the length of the box is $L$, the area enclosed by this motion in phase space (the momentum runs between $+p$ and $-p$) is $$ 2 p L $$ and the condition is that this is an integer multiple of $h=2\pi\hbar$. This gives the momentum quantization condition from quantum mechanics. For a 1-dimensional system, the rule is that $$ \int p dx = n h$$ with a possible offset, so that the right-hand side might be $(n+1/2)h$, or $(n+3/4)h$, as appropriate, but the spacing between levels is given by this rule to leading order in $h$.

This rule can be understood from de Broglie's relation--- the momentum at any $x$ is the wavenumber, or the rate of change of the phase of the wavefunction. The condition (in natural units where $h=2\pi$) is saying that the phase change as you follow a classical orbit should be an integer multiple of $2\pi$, i.e. that the wave should form a standing wave. This formula is not exact, because the quantum wave doesn't follow the classical trajectory, but the WKB approximation takes this as a starting point and makes a wave whose phase is given by the value of this integral, and whose amplitude is the reciprocal of the square root of the classical velocity.

The reason this works was known already before quantum theory was fully formulated, but to understand it requires familiarity with action-angle variables.

Action-angle variables

Consider an orbit of a particle in one dimension, with position $x$ and momentum $p$. You call the area in phase space enclosed by the orbit $J$, and this is the action. $J$ is only a function of $H$ and it is constant in time (by definition).
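As a numerical check (my own sketch; the numbers are illustrative), the box rule $2pL = nh$ reproduces the exact infinite-well spectrum $E_n = n^2 h^2 / (8 m L^2)$:

```python
import math

# Sketch (my own numbers): Bohr-Sommerfeld for a particle in a box of
# length L. The closed orbit encloses phase-space area 2*p*L, so
# 2*p*L = n*h gives p_n = n*h/(2L) and E_n = n^2 h^2 / (8 m L^2),
# matching the exact quantum result for the infinite well.
h = 6.626e-34          # Planck's constant, J*s
m = 9.109e-31          # electron mass, kg (just as an example)
L = 1e-9               # a 1 nm box

def E(n):
    p = n * h / (2 * L)    # quantized momentum from 2*p*L = n*h
    return p**2 / (2 * m)  # kinetic energy of the level

print(E(1) / 1.602e-19)    # ground-state energy in eV, ~0.38 eV
```

Note the exact quantum levels scale as $n^2$, which the sketch reproduces automatically since $E \propto p^2$.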
The conjugate variable to $J$ is a variable which distinguishes the points of the orbit, and this is called $\theta$. Now you notice that the area in phase space is invariant under canonical transformations (for infinitesimal canonical transformations this is Liouville's theorem), so that the area between the orbits at $J$ and $J+dJ$ is the same as the area in $x$-$p$ coordinates between $J$ and $J+dJ$, which is just $dJ$, because that's the definition of $J$. But this area in $J,\theta$ coordinates is $dJ$ times the period of $\theta$, so $\theta$ has the same period for all $J$, which I will take to be $2\pi$. The rate at which $\theta$ increases with time is given by Hamilton's equations $$ \dot{\theta} = {\partial H\over \partial J} = H'(J) $$ and this is constant over the entire orbit, because $H$ is constant, and so is $J$. So you learn that $\theta$ increases monotonically at a constant rate at each $J$, and the time period of $\theta$ is: $$ T = {2\pi\over H'(J)} $$

Semiclassical quantization

Suppose you weakly couple this one-dimensional system to electromagnetism. The classical orbital frequency is going to be the frequency of the emitted photons (and double this frequency, and three times this frequency), so that if you want to have discrete photon-emission transitions, you must ensure that emitting a photon of frequency $f={1\over T}$, and taking away energy $hf$, leaves you with a quantum state to fall to. So if there is a quantum state corresponding to a classical motion with one value of $J$, at energy $H(J)$, there must be another quantum state with energy $$ H(J) - {2\pi h\over T} = H(J) - H'(J)h \approx H(J-h) $$ In other words, the quantum states must be spaced evenly in $J$. To this order, this means that there are states at $J-h$, $J-2h$, $J-3h$ and so on, and transitions to these states have to reproduce the classical radiation harmonics produced when you weakly couple the thing to electromagnetism. So the quantization rule is $J=nh$, up to a possible offset.
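A quick worked example (mine, not in the original answer): for the harmonic oscillator $H = p^2/2m + \tfrac12 m\omega^2 x^2$, the orbit of energy $E$ is an ellipse with semi-axes $\sqrt{2mE}$ in $p$ and $\sqrt{2E/m\omega^2}$ in $x$, and taking $J$ to be the enclosed area as above,

```latex
J=\oint p\,dx
 =\pi\sqrt{2mE}\,\sqrt{\frac{2E}{m\omega^{2}}}
 =\frac{2\pi E}{\omega},
\qquad
H(J)=\frac{\omega J}{2\pi},
\qquad
J=nh \;\Longrightarrow\; E_{n}=n\hbar\omega,
```

which matches the exact quantum spectrum up to the constant $\tfrac12\hbar\omega$ offset.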
The derivation makes it clear that it is only true to leading order in $h$. This was Bohr's correspondence argument for the quantization condition.

When you have more than one degree of freedom, and the system is integrable, you have action variables $J_1,J_2,\dots,J_n$ and conjugate angle variables, each periodic with period $2\pi$. You can couple any of the degrees of freedom to electromagnetism weakly, and the classical period of each $\theta$ variable in time is $$T_k = {2\pi \over \partial H / \partial J_k}$$ so the statement is that for each orbit, each $J$ variable is quantized according to the Bohr rule $$ J_k = nh $$ The $J_k$ variable is the area enclosed in the one-dimensional projection of the motion in those coordinates where the motion separates into multiperiodic motion (this is the torus of Bar Moshe's answer). This is Sommerfeld's extension of Bohr quantization. So the integral $\int p dq$ is taken with $p$ and $q$ any conjugate variables which execute a periodic motion. In 1d, there is nothing to do; in multiple dimensions, you just choose variables which separately execute a 1d motion, and in general, you have to find the $J$ variables. This procedure doesn't work for classically chaotic systems.

This post imported from StackExchange Physics at 2017-03-13 12:20 (UTC), posted by SE-user Ron Maimon
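As a numerical sanity check of the 1d rule (my own sketch), one can compute $\oint p\,dx$ for a harmonic-oscillator orbit and verify that it equals $2\pi E/\omega$, so that setting the enclosed area to $nh$ gives evenly spaced levels:

```python
import math

# Numerical sketch (mine): compute the phase-space area enclosed by a
# harmonic-oscillator orbit of energy E and check it against 2*pi*E/omega,
# the value used in the quantization rule (area = n*h).
m, omega = 1.0, 1.0

def enclosed_area(E, steps=200000):
    """oint p dx = 2 * integral of p(x) from -x_max to +x_max (midpoint rule)."""
    x_max = math.sqrt(2 * E / (m * omega**2))
    dx = 2 * x_max / steps
    area = 0.0
    for i in range(steps):
        x = -x_max + (i + 0.5) * dx
        p2 = 2 * m * (E - 0.5 * m * omega**2 * x**2)
        area += math.sqrt(max(p2, 0.0)) * dx
    return 2 * area

E = 1.0
print(enclosed_area(E), 2 * math.pi * E / omega)  # the two should agree
```

Since the area grows linearly in $E$, equal steps of $h$ in area correspond to equal steps in energy, exactly the evenly spaced oscillator spectrum.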
Sandbox/Plane Proficiency (revision as of 06:35, 13 September 2019)

Introduction

Planes have the ability to earn experience and gain ranks. Experience is gained from combat, be it normal sorties or LBAS attacks and defense. The only planes unable to earn experience are autogyros, the Type 3 Spotter/Liaison (ASW), and land-based reconnaissance planes. Certain elite planes, such as the Tenzan Model 12 (Tomonaga Squadron), will already have some ranks when they are obtained.

Overview

As planes earn experience, they will progress through the ranks as follows:

Rank  Displayed Insignia  Experience  Fighter Power Bonus  Innate Bonus  Notes
0     (blank)             0-9         0                    0             0
1     |                   10-24       0                    0             +1
2     ||                  25-39       +2                   +1            +1
3     |||                 40-54       +5                   +1            +2
4     \                   55-69       +9                   +1            +2
5     \\                  70-84       +14                  +3            +2
6     \\\                 85-99       +14                  +3            +2-3
7     >>                  100-120     +22                  +6            +3

Important Notes

Planes will gain ranks at different speeds. Exactly what determines this is unknown. Please see below for more details.
Planes getting shot down will lose ranks.
A slot getting emptied has a high chance of resetting plane ranks.
Abyssal planes do not have proficiency. Opposing PvP fleets do have proficiency.
It is important to secure air superiority where possible to preserve plane proficiency.

The innate bonus is calculated as [math]\sqrt{\frac{\text{Exp}_\text{plane}}{10}}[/math], where [math]\text{Exp}_\text{plane}[/math] is the current experience level of the plane.
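The innate-bonus formula can be sketched directly (truncating the result to an integer for display is my assumption; the wiki only gives the square-root formula):

```python
import math

# Innate bonus from the formula above: sqrt(Exp_plane / 10).
# Truncating to an integer display value is my assumption.
def innate_bonus(exp):
    return math.sqrt(exp / 10)

for exp in (9, 30, 50, 120):
    print(exp, int(innate_bonus(exp)))
```

Truncated this way, the values reproduce the Innate Bonus column of the rank table (0 at experience 0-9, +1 at 25-39, +2 at 40-54, +3 at 100-120).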
Rank Effects

Fighter Power Bonus

The biggest bonus from plane proficiency is a fighter power bonus that helps during the air control phase. The innate bonus is added onto the fighter power bonus of the relevant plane to get the full fighter power bonus.

Plane Type  Fighter Power Bonus
+25  +9  +3

Important Notes

Fighter-bombers are considered dive bombers and only get the +9 bonus instead of the +25.
This is a flat bonus applied after the fighter power of the slot is calculated. This means it is not affected by rounding.
Even if the plane has 0 anti-air, it will still get the relevant bonus.

Critical Bonus

There are two separate critical modifier bonuses for plane proficiency. One affects normal carrier attacks and carrier night cut-ins; the other affects carrier cut-in attacks.

Carrier Attack Modifier

[math]\text{Mod}_\text{CVAtk} = 1 + \text{Count}_\text{equipped} \times 0.1 + 0.1\text{ (if a max proficiency plane is in the first slot)}[/math]

[math]\text{Count}_\text{equipped}[/math] is the number of max proficiency planes equipped. This means a carrier with two max proficiency planes equipped in the top two slots will have a 1.3x bonus, making her total critical strike modifier 1.95x (1.5 x 1.3). This also applies to carrier night attacks. Unlike other carrier night attack formulas, non-night-capable planes are taken into account when calculating the proficiency critical bonus.

Carrier Cut-In Attacks

[math]\text{Mod}_\text{CVCI} = 1.106 + 0.15\text{ (if the plane in the first slot participates in the attack)}[/math]

The average skill level of all planes is taken into account when calculating the modifier. The 1.106 value assumes all planes equipped are of max proficiency. If you have multiple aircraft of the same type equipped, the plane in the first slot ("the Captain") will always take priority. This bonus is multiplicative with the base critical modifier.
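The carrier attack modifier can be sketched directly from the formula above (the function name and argument names are mine):

```python
# Sketch of the carrier-attack critical modifier quoted above.
# num_max_prof: number of max-proficiency planes equipped.
# captain_max_prof: whether a max-proficiency plane sits in the first slot.
def cv_attack_mod(num_max_prof, captain_max_prof):
    return 1 + num_max_prof * 0.1 + (0.1 if captain_max_prof else 0.0)

# Two max-proficiency planes with one in the first slot:
mod = cv_attack_mod(2, True)
print(mod, 1.5 * mod)  # the 1.3x bonus and the 1.95x total from the text
```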
This means if the Captain participates in the CVCI, the attack gets a critical strike modifier of 1.875x (1.5 x 1.25).

Accuracy Bonus

The average proficiency of all torpedo bombers, dive bombers, seaplane bombers, and large flying boats on the carrier is taken into account when calculating the accuracy bonus.

[math]\text{Acc Bonus} = \sqrt{0.1 \times \text{Exp}_\text{Avg}} + \text{Mod}_\text{proficiency}[/math]

[math]\text{Exp}_\text{Avg}[/math] is the average experience level of all relevant planes on the carrier. [math]\text{Mod}_\text{proficiency}[/math] is the extra bonus depending on the proficiency level.

Experience Level  Proficiency Bonus  Accuracy Bonus
0-9               0                  0
10-24             0                  +1
25-39             +1                 +2
40-54             +2                 +4
55-69             +3                 +5
70-79             +4                 +6
80-99             +6                 +8-9
100-120           +9                 +12

Important Notes

Remember that fighters are not counted for the accuracy bonus.

Ranking Planes

Carrier-Based Planes

Carrier-based planes can be ranked anywhere. The most resource-efficient way is to send them on sorties to surface nodes with little to no air power. It is possible to rank them through LBAS, but it gets expensive. The following are recommended spots:

1-1
2-2: 3CV+ off route

Land-Based Planes

Land-based planes can only be ranked through LBAS. This makes ranking them quite expensive. Outside of events, only 6-4 and 6-5 have LBAS available to use. Of these, 6-5 is the best place because it allows the use of two land bases and has a Range 1 submarine node at B. This allows any land-based plane to be ranked, and it only requires a fleet of at least 1 light cruiser and a mix of destroyers and destroyer escorts.

Leveling Speed

Leveling speed is measured in the estimated number of battles it will take to get a plane from no rank to >>.
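The accuracy bonus combines the square-root term with the per-bracket table value. A small sketch (names mine; rounding to the nearest integer for comparison with the table is my assumption):

```python
import math

# Sketch of the accuracy-bonus formula quoted above.
# avg_exp: average experience of the relevant bombers on the carrier.
# prof_mod: the per-bracket proficiency bonus from the table.
def accuracy_bonus(avg_exp, prof_mod):
    return math.sqrt(0.1 * avg_exp) + prof_mod

print(round(accuracy_bonus(100, 9)))  # the 100-120 bracket: +12, as tabulated
print(round(accuracy_bonus(40, 2)))   # the 40-54 bracket: +4, as tabulated
```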
Calculate a value for the standard enthalpy of formation of liquid ethanethiol, $\ce{C2H5SH}$. Use the equation given below and enthalpy of combustion data from the following table. Just assume values and comment on the methods to solve the problem; I'll try solving it. Edit: I'm not sure whether you're supposed to use $$\Delta H^\circ = \sum{\Delta H_f(\text{product})} - \sum{\Delta H_f(\text{reactant})}$$ because the given data are enthalpies of combustion, not heats of formation. I'm not quite sure what an enthalpy of combustion is.
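One standard way to attack this is Hess's law applied to the combustion reaction. Below is a sketch under loudly assumed values: the formation enthalpies of CO2, H2O, and SO2 are typical table numbers, and the combustion enthalpy of ethanethiol is a placeholder, since the problem's actual table is not reproduced here.

```python
# Hess's-law sketch. All numerical values are illustrative assumptions
# (typical table values in kJ/mol), NOT the ones from the problem's table.
dHf_CO2  = -393.5   # standard enthalpy of formation, CO2(g)
dHf_H2O  = -285.8   # H2O(l)
dHf_SO2  = -296.8   # SO2(g)
dHc_EtSH = -1868.0  # assumed enthalpy of combustion of C2H5SH(l)

# Combustion: C2H5SH(l) + 9/2 O2(g) -> 2 CO2(g) + 3 H2O(l) + SO2(g)
# dHc = [2 dHf(CO2) + 3 dHf(H2O) + dHf(SO2)] - dHf(C2H5SH), so:
dHf_EtSH = 2*dHf_CO2 + 3*dHf_H2O + dHf_SO2 - dHc_EtSH
print(dHf_EtSH)  # kJ/mol
```

The point of the rearrangement is that an enthalpy of combustion is just the reaction enthalpy of burning one mole of the substance in oxygen, so the same product-minus-reactant equation applies once you write the combustion reaction out and solve for the one unknown formation enthalpy.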