The following paper gives you really all of the missing steps in a very detailed form: A Complete Solution to the Black-Scholes Option Pricing Formula by Ravi Shukla and Michael Tomas. From the paper: "This presentation is purely for pedagogical purposes. In the course of doing work on option pricing, we found no complete solution for the Black-Scholes ...
First, my notation. $K$ is the strike price, $S$ is the stock price, $r$ is the continuously compounded risk-free rate, $T$ is time at expiration, $t$ is time at issue, $\sigma$ is volatility, $\delta$ is the continuously compounded dividend rate. The Black-Scholes formula for a European call is $C = Se^{-\delta (T-t)} N(d_1) - Ke^{-r(T-t)} N(d_2)$, $d_1 = \...
It is important to note that he says: "In the risk-neutral world, $\frac{C(t,S_t)}{B_t}$ is a martingale." That is true by definition of what the risk-neutral measure is, also called the martingale measure for exactly that reason. A risk-neutral measure is defined such that asset prices deflated by the numeraire (unit with which prices are measured) are ...
$C= S_0 N(d_1) - K e^{-rT} N(d_2)$. $C$, $S_0$ and $K$ have units of currency (e.g. USD). $N(d_1)$ and $N(d_2)$ are unit-less (dimensionless), so the formula is dimensionally correct. Considering $d_1 = \frac {\ln{\frac {S_0} K} + r T + \frac {\sigma^2} {2} T} {\sigma \sqrt T }$, $r$ and $\sigma^2$ have units of "per year", as they are stated on an annualized ...
What you need is to identify the distribution of the asset price $S_T$, conditional on the information set $\mathcal{F}_{t}$ at time $t$, for $0\leq t < T$. Note that\begin{align*}S_T &= S_t \exp\bigg(\int_{t}^T \Big(r_s-\frac{\sigma_s^2}{2}\Big)ds + \int_t^T\sigma_s dW_s \bigg).\end{align*}Let\begin{align*}P(t, T) = \exp\bigg(-\int_t^T r_s ds ...
The time-$T$ boundary condition is correct: $u(T,x)=(x-K_1)^+-(x-K_2)^+$. The $x\to 0$ boundary condition is known and is equal to $0$. The $x\to\infty$ boundary condition is also known and is correct: $\lim_{x\to\infty}u(t,x)=(K_2-K_1)e^{-r(T-t)}.$ You need to be precise about whether you want your boundary to be "absorbing" or "reflecting".
The PDE is defined for $x \in ]-\infty, +\infty[$ but the finite difference scheme requires a truncated domain $[x_{\min}, x_{\max}]$, and the choice of $x_{\min}$ and $x_{\max}$ will affect the quality of the result, regardless of the scheme being explicit, implicit, or mixed.A good rule of thumb is to choose the truncation $[x_{\min}, x_{\max}]$ such ...
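A minimal sketch of one common truncation choice (the specific rule and all parameter values here are my assumptions for illustration, not quoted from the answer): keep $L$ standard deviations of $\log S_T$ around the forward log-price.

```python
import numpy as np

# Hedged sketch: choose [x_min, x_max] for a finite-difference grid in
# log-price x = log(S), keeping L standard deviations of the terminal
# distribution around the drifted starting point. All values illustrative.
x0, r, sigma, T, L = np.log(100.0), 0.05, 0.2, 1.0, 5.0

drift = (r - 0.5 * sigma**2) * T      # mean shift of log S_T under GBM
half_width = L * sigma * np.sqrt(T)   # L standard deviations

x_min = x0 + drift - half_width
x_max = x0 + drift + half_width
grid = np.linspace(x_min, x_max, 401)  # uniform grid on truncated domain
print(x_min, x_max)
```

With a wider band (larger $L$) the truncation error shrinks, at the cost of more grid points for the same spatial resolution.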
(1) No, the stochastic differential equation for Heston model does not have an explicit solution. What does exist is an explicit formula for the Fourier transform of a call option price. See e.g. http://www.zeliade.com/whitepapers/zwp-0004.pdf for a decent survey.(2) Yes, implied vol always exists. You can check that the Black-Scholes price of an option ...
According to wikipedia one chooses $\pi_t=-V_t+\frac{\partial V}{\partial S}S_t$. This means you are short $V$ and long $\frac{\partial V}{\partial S}$ shares of $S$. The general theory of self-financing strategies assumes that your market consists of an $\mathbb{R}^{d+1}$-valued process $S$, with $d$ risky assets and one risk free (bank account). A trading ...
The following paper gives a simple derivation of the BSM (via a simple integration approach instead of the classical PDE approach) and the Greeks, plus some intuition for each: Derivation and Comparative Statics of the Black-Scholes Call and Put Option Pricing Formulas by Garven, J. You find the derivation of the Greeks in chapter 4 (called "comparative ...
When a pay-off is piecewise linear plus jumps, it is the same as a portfolio of calls and digital calls. Its price must agree with that of the portfolio by no arbitrage. Every time there is a jump we add in a digital call, and every time there is a change in gradient we add in calls equal to the gradient change. Here we have a call struck at $K$. Just below $...
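The decomposition rule above can be checked pointwise on a small example (the payoff, strikes and jump size below are illustrative assumptions, not taken from the answer):

```python
import numpy as np

# Hedged sketch of static replication: a piecewise-linear payoff with a
# jump equals a portfolio of calls and digital calls.
call    = lambda s, k: np.maximum(s - k, 0.0)   # vanilla call payoff
digital = lambda s, k: (s > k).astype(float)    # digital (binary) call payoff

K1, K2, J = 90.0, 110.0, 5.0
S = np.linspace(0.0, 200.0, 2001)

# target payoff: zero below K1, slope 1 above K1, downward jump J at K2
target = np.where(S > K2, S - K1 - J, np.where(S > K1, S - K1, 0.0))

# read off the rules: gradient change +1 at K1 -> one call struck K1;
# jump of -J at K2 -> short J digital calls struck K2
portfolio = 1.0 * call(S, K1) - J * digital(S, K2)

assert np.allclose(target, portfolio)
```

By linearity of pricing, the option's price must then equal one call price minus $J$ digital-call prices.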
In short: yes, the backward PDE solution with $v(t,L)=0$ and the expectation coincide under the Black-Scholes market. In the one dimensional case, this topic is mathematically treated in the theory of the scale function and the speed measure. See Revuz-Yor 3rd ed., Ch. VII.3 for details. I don't know whether there are some rigorous theories on the ...
You are missing the cross-derivative term in the Ito formula you use to express $d\left ( \frac {C_t}{S_t} \right)$. More specifically (see [Remark] below),$$d\left ( \frac {C_t}{S_t} \right) = \frac {1}{S_t} dC_t - \frac {C_t}{S_t^2} dS_t + \frac {C_t}{S_t^3} d\langle S_t, S_t \rangle{\color{green}{- \frac {1}{S_t^2} d\langle C_t, S_t \rangle}}$$This last ...
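For reference, each term follows from the two-dimensional Itô formula applied to $f(c,s)=c/s$:

```latex
% Two-dimensional Ito formula for f(c,s) = c/s:
% df = f_c\,dC + f_s\,dS + \tfrac12 f_{cc}\,d\langle C\rangle
%      + f_{cs}\,d\langle C,S\rangle + \tfrac12 f_{ss}\,d\langle S\rangle,
% with the partial derivatives
f_c = \frac{1}{s}, \quad
f_s = -\frac{c}{s^2}, \quad
f_{cc} = 0, \quad
f_{cs} = -\frac{1}{s^2}, \quad
f_{ss} = \frac{2c}{s^3},
```

which reproduces the displayed expansion term by term, including the cross term $-\frac{1}{S_t^2}\, d\langle C_t, S_t\rangle$.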
The B/S PDE for a contingent claim $V(S, t)$ is\begin{equation}\frac{\partial V}{\partial t} + r S \frac{\partial V}{\partial S} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} - r V = 0\end{equation}subject to the terminal condition $V(S, T) = \ln(S / K)$. According to the hint, the solution to $V(S, t)$ takes the form\begin{equation}V(...
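A hedged check of one standard candidate solution (the closed form below is my assumption of where the hint leads, not quoted from the question): $V = e^{-r(T-t)}\big[\ln(S/K) + (r - \sigma^2/2)(T-t)\big]$ satisfies both the PDE and the terminal condition, which sympy can verify symbolically:

```python
import sympy as sp

# Candidate solution (an assumption for illustration, not from the question):
# V = exp(-r(T-t)) * ( ln(S/K) + (r - sigma^2/2)(T-t) )
S, t, T, K, r, sigma = sp.symbols('S t T K r sigma', positive=True)
V = sp.exp(-r*(T - t)) * (sp.log(S/K) + (r - sigma**2/2)*(T - t))

# plug V into the Black-Scholes PDE
pde = (sp.diff(V, t) + r*S*sp.diff(V, S)
       + sp.Rational(1, 2)*sigma**2*S**2*sp.diff(V, S, 2) - r*V)

print(sp.simplify(pde))   # 0: the PDE is satisfied
print(V.subs(t, T))       # log(S/K): the terminal condition holds
```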
Ikonen and Toivanen don't say that the LCP is solved exactly; they simply say that the modified back-substitution is a valid algorithm to solve the LCP. A numerical error may arise around the location of optimal exercise, since it does not fall directly on the finite difference grid. I think, however, that the error is of the same order as the discretization ...
Under the standard assumptions, generally speaking, any contract whose value depends on the current values of $t$ and $S$, and which is paid for at the start, satisfies this PDE. In the financial context, the boundary conditions would be different for the different contracts, so the solutions of the PDE would be different for the different contracts. Thus different ...
These are well known trivial solutions to the Black-Scholes PDE. The first one is just the price of the underlying stock and the second is interest bearing money in a bank. These are trivially true because there is no optionality involved (which is expressed in the boundary and terminal condition of the respective contract to price).
The option pricing formula must satisfy the PDE you have derived for all values of $K$. The only way this can be the case is if the two parts that you separate are both equal to zero. Suppose the joint PDE (before you separate it into two) is satisfied for some value of $K$. But suppose that the $Q$-part in parentheses on the second line is not zero. ...
Something is off in your plot. The value of a call should be very near zero with a strike price $10$ for the stock prices and times you have plotted. At first I thought you may have plotted "moneyness" defined as $S/K$ instead of $S$, but then your values are too low for that. May want to check your implementation.Besides that, the plot is telling you ...
Who gave you that idea? You absolutely can use Finite Differences for other PDEs. They are routinely used to solve hyperbolic PDEs (wave equation, both first and second order) and elliptic PDEs (steady-state diffusion/heat equation). You can even mix and match the equation types and create PDEs that have characteristics of both hyperbolic and parabolic ...
I believe the setup of the first part you presented is inaccurate. The whole point of the hedging argument is that you can set up a self-financing portfolio that only holds a certain amount of stock and invests/borrows at a specific financing rate. It can be shown that such a portfolio almost surely has the same payoff as the option at maturity. The option ...
The above equation is the price of a call option. It has nothing stochastic inside it. It only depends on the current price and the time. So no Ito is needed.You should just compute the derivatives of your solution v (like you do for any deterministic multivariable function), plug them into the PDE and verify that it's satisfied.
The key point here is that the portfolio must be self-financing, namely the initial option premium $V_0$ should be enough to allow you to hedge it throughout its life. If not, the option price $V_0$ is either too low or too high.Because the option is written on the asset $S$, buying or selling $S$ is how you neutralize the changes in value of the option: ...
You are correct that showing the self-financing condition for the BS-portfolio is not as straightforward as one may think:A portfolio $V_t(\alpha_t,\beta_t)$ (for stock $S_t$ and zerobond $B_t$) is self-financing iff:$$V_t=\alpha_tS_t+\beta_t B_t$$It further implies$$dV_t=\alpha_tdS_t+\beta_tdB_t$$To replicate a derivative $C(S_t,t)$ by a self-...
Your question lacks a bit of detail. However: since you are referring to a PDE, it appears as if the Black-Scholes formula is proved by considering a discrete model (1 standard-deviation move per time-step), then taking the limit "time-step size to zero". For example, in a tree, you are essentially approximating the normally distributed increment with a ...
The self-financing model that leads to the Black-Scholes formula generally only makes distributional assumptions, not assumptions about the absolute variability of the underlying assets. Was such an assumption part of the discretization approach? Because then infinitesimal asset value changes without changes in positions in the assets would form the definition ...
American option pricing (a swaption is just a kind of option) is a bit tricky due to the early exercise. Here is a page listing possible approaches, including some numerical methods and some closed-form approximation formulas. As I understand it, lattice methods (trees, PDE discretizations such as forward shooting) are fine for pricing American options. There are ...
|
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
|
Let $f_{n}$ be some sequence of functions.
If $f_{n}$ uniformly converges to $f$ then is it true that $f^{\prime}_{n}\rightarrow f^{\prime}$?
Is there an example that proves/disproves this?
Counterexample: $f_n = \frac1n\sin(n^2 x)$.
The additional condition required is that a function $g$ exists to which the $f'_n$ uniformly converge (in which case of course $g=f'$). In particular, this PDF proves on p. 9 that this condition suffices, but on p. 8 it shows that $f_n=\frac{x}{1+nx^2}$ is a uniformly convergent sequence whose derivatives form a convergent but not uniformly convergent counterexample.
As mentioned above, this is not true in general. But one might still ask what a sufficient condition for "$f_n' \to f'$ uniformly" might be. If you already know that $f_n$ converges uniformly to $f$ and $f_n'$ converges uniformly to some function, say $g$, then by the fundamental theorem of calculus one indeed obtains $g=f'$, hence $f_n' \to f'$ uniformly.
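The counterexample $f_n(x) = \frac{1}{n}\sin(n^2 x)$ can be illustrated numerically (the grid and the values of $n$ are arbitrary choices): the sup-norm of $f_n$ shrinks like $1/n$ while the sup-norm of $f_n'(x) = n\cos(n^2 x)$ grows like $n$.

```python
import numpy as np

# f_n(x) = sin(n^2 x)/n converges uniformly to 0 (sup|f_n| = 1/n), but
# f_n'(x) = n cos(n^2 x) has sup|f_n'| = n, which diverges.
x = np.linspace(0, 2 * np.pi, 100001)
for n in (1, 10, 100):
    f  = np.sin(n**2 * x) / n   # the function
    fp = n * np.cos(n**2 * x)   # its derivative
    print(n, np.abs(f).max(), np.abs(fp).max())
```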
|
I was reviewing past exam questions for a math competition known as UIL Calculator Applications when I stumbled across the following question:
A large amount of dough is rolled out and as many circular cookies as possible are cut from the rolled-out dough. The remaining dough is piled together, rerolled and more circular cookies are similarly cut. What percent of the original amount of dough is left over?
Here's my incorrect attempt, if you'd like:
First, I imagined that I was cutting one circle out of a square with side length $x$, with 4 points tangent to the circle. Therefore, the remaining area would be $$x^2-\pi(\frac{x}2)^2$$
Then, I imagined that I was cutting nine circles out of a square with side length $x$, with each circle tangent to the square and/or to adjacent circles. Therefore, the remaining area would be $$x^2-9\pi(\frac{x}6)^2$$ which is the same result as the first scenario. So the percent of original amount of dough left over after 1 "cut" for any number of circles is $$\frac{x^2(1-\frac{\pi}4)}{x^2}\approx21.46\%$$ and after 2 "cuts" the percent of original amount of dough left over is $$(21.46\%)^2\approx4.61\%$$
According to the answer key, however, the answer rounded to 3 significant digits is $.867\%$ and I'm not sure as to how.
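One observation, offered as a guess at the intended model rather than a verified solution: the key's number is reproduced if the circles are packed hexagonally rather than in a square grid. Hexagonal packing covers a fraction $\pi/(2\sqrt{3}) \approx 90.69\%$ of the plane, so each cut leaves $\approx 9.31\%$, and two cuts leave $\approx 0.867\%$:

```python
import math

# Leftover fraction per cut under the poster's square packing vs the
# (assumed) hexagonal packing; two cuts in each case.
square_left = 1 - math.pi / 4                    # ~0.2146 left per cut
hex_left    = 1 - math.pi / (2 * math.sqrt(3))   # ~0.0931 left per cut

print(round(100 * square_left**2, 2))   # poster's two-cut answer, ~4.61
print(round(100 * hex_left**2, 3))      # ~0.867, matching the answer key
```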
|
If $A+B+C=\pi$ :
$$ \sin A + \sin B + \sin C \le \frac{3\sqrt{3}}{2} \\ \cos A + \cos B + \cos C \le \frac{3}{2} \\ \tan A + \tan B + \tan C \ge 3\sqrt{3} \quad \text{(for an acute triangle)} $$
with the equalities holding in the case of an equilateral triangle ($A=B=C=\frac{\pi}{3}$). I've also found out that of all the triangles inscribed in a circle, an equilateral triangle has the largest area.
Why is the maximum of the things I've described attained in the case of an equilateral triangle? Is it just so, or is there a reason for this fact? Whenever I encounter a question which asks me to maximize something over triangles, I've taken to simply assuming it's an equilateral triangle. Is this safe? And what are the other situations in which the maximum of something is obtained in the case of an equilateral triangle?
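For the sine inequality at least, there is a one-line reason: symmetry plus concavity. Since $\sin$ is concave on $(0,\pi)$, Jensen's inequality gives

```latex
% Jensen's inequality for the concave sine on (0, \pi), with A + B + C = \pi:
\sin A + \sin B + \sin C
  \;\le\; 3 \sin\!\Big(\frac{A+B+C}{3}\Big)
  \;=\; 3 \sin\frac{\pi}{3}
  \;=\; \frac{3\sqrt{3}}{2},
% with equality if and only if A = B = C = \pi/3.
```

and strict concavity forces equality exactly at $A=B=C=\pi/3$. Similar convexity/symmetry arguments (or Lagrange multipliers) explain why so many symmetric triangle problems are extremized by the equilateral triangle, but it is not automatic: a convex objective (like $\tan$ on acute angles) is minimized, not maximized, at the symmetric point, so blindly assuming "equilateral is the maximizer" is not always safe.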
|
Difference between revisions of "Colloquia/Fall18"
Revision as of 17:30, 8 April 2018

Contents
1 Mathematics Colloquium
1.1 Spring 2018
1.2 Spring Abstracts
1.2.1 January 29 Li Chao (Columbia)
1.2.2 February 2 Thomas Fai (Harvard)
1.2.3 February 5 Alex Lubotzky (Hebrew University)
1.2.4 February 6 Alex Lubotzky (Hebrew University)
1.2.5 February 9 Wes Pegden (CMU)
1.2.6 March 2 Aaron Bertram (Utah)
1.2.7 March 16 Anne Gelb (Dartmouth)
1.2.8 April 5 John Baez (UC Riverside)
1.2.9 April 6 Edray Goins (Purdue)
1.2.10 April 13, Jill Pipher, Brown University
1.2.11 April 16 Christine Berkesch Zamaere (Minnesota)
1.3 Past Colloquia

Mathematics Colloquium
All colloquia are on Fridays at 4:00 pm in Van Vleck B239,
unless otherwise indicated.

Spring 2018

date | speaker | title | host(s)
January 29 (Monday) | Li Chao (Columbia) | Elliptic curves and Goldfeld's conjecture | Jordan Ellenberg
February 2 (Room: 911) | Thomas Fai (Harvard) | The Lubricated Immersed Boundary Method | Spagnolie, Smith
February 5 (Monday, Room: 911) | Alex Lubotzky (Hebrew University) | High dimensional expanders: From Ramanujan graphs to Ramanujan complexes | Ellenberg, Gurevitch
February 6 (Tuesday 2 pm, Room 911) | Alex Lubotzky (Hebrew University) | Groups' approximation, stability and high dimensional expanders | Ellenberg, Gurevitch
February 9 | Wes Pegden (CMU) | The fractal nature of the Abelian Sandpile | Roch
March 2 | Aaron Bertram (University of Utah) | Stability in Algebraic Geometry | Caldararu
March 16 (Room: 911) | Anne Gelb (Dartmouth) | Reducing the effects of bad data measurements using variance based weighted joint sparsity | WIMAW
April 5 (Thursday, Room: 911) | John Baez (UC Riverside) | Monoidal categories of networks | Craciun
April 6 | Edray Goins (Purdue) | Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups | Melanie
April 13 | Jill Pipher (Brown) | Mathematical ideas in cryptography | WIMAW
April 16 (Monday) | Christine Berkesch Zamaere (University of Minnesota) | Free complexes on smooth toric varieties | Erman, Sam
April 25 (Wednesday) | Hitoshi Ishii (Waseda University) | Wasow lecture (TBA) | Tran
May 4 | Henry Cohn (Microsoft Research and MIT) | TBA | Ellenberg

Spring Abstracts

January 29 Li Chao (Columbia)
Title: Elliptic curves and Goldfeld's conjecture
Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture, and illustrate the key ideas and ingredients behind this new progress.
February 2 Thomas Fai (Harvard)
Title: The Lubricated Immersed Boundary Method
Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.
February 5 Alex Lubotzky (Hebrew University)
Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes
Abstract:
Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last 5 decades and more recently also in pure math. The first explicit construction of bounded degree expanding graphs was given by Margulis in the early 70s. In the mid 80s, Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders.
In recent years a high dimensional theory of expanders is emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of the existence of such bounded degree complexes of dimension d>1.
This question was answered recently affirmatively (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2, and by S. Evra and T. Kaufman for general d) by showing that the d-skeleton of (d+1)-dimensional Ramanujan complexes provides such topological expanders. We will describe these developments and the general area of high dimensional expanders.
February 6 Alex Lubotzky (Hebrew University)
Title: Groups' approximation, stability and high dimensional expanders
Abstract:
Several well-known open questions, such as: are all groups sofic or hyperlinear?, have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms. We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm.
The strategy is via the notion of "stability": a certain higher dimensional cohomology vanishing phenomenon is proven to imply stability, and using high dimensional expanders, it is shown that some non-residually finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated.
All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom.
February 9 Wes Pegden (CMU)
Title: The fractal nature of the Abelian Sandpile
Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor.
Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.
March 2 Aaron Bertram (Utah)
Title: Stability in Algebraic Geometry
Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles. In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area.
March 16 Anne Gelb (Dartmouth)
Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity
Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.
April 5 John Baez (UC Riverside)
Title: Monoidal categories of networks
Abstract: Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, chemical reaction networks, signal-flow graphs, Bayesian networks, food webs, Feynman diagrams and the like. Far from mere informal tools, many of these diagrammatic languages fit into a rigorous framework: category theory. I will explain a bit of how this works and discuss some applications.
April 6 Edray Goins (Purdue)
Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups
Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). [/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math]
This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus.
This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel with assistance by Edray Goins and Abhishek Parab.
April 13, Jill Pipher, Brown University
Title: Mathematical ideas in cryptography
Abstract: This talk does not assume prior knowledge of public key crypto (PKC). I'll talk about the history of the subject and some current areas of research, including homomorphic encryption.
April 16 Christine Berkesch Zamaere (Minnesota)
Title: Free complexes on smooth toric varieties
Abstract: Free resolutions have been a key part of using homological algebra to compute and characterize geometric invariants over projective space. Over more general smooth toric varieties, this is not the case. We will discuss another family of complexes, called virtual resolutions, which appear to play the role of free resolutions in this setting. This is joint work with Daniel Erman and Gregory G. Smith.
|
This post discusses how to introduce finite energy resources (ex. oil, natural gas) into the real business-cycle (RBC) model. The title comes from the fact that I will model the stock of energy as a cake eating problem.
Simple Cake Eating Problem
Imagine you have a cake (that never goes bad) and you need to decide how to optimally spread its consumption over time. The catch is, the amount of cake you have is fixed at $k_0$. Set up the problem recursively:\begin{equation}V(k)=\max_{c,k'} \frac{c^{1-\frac{1}{\gamma}}}{1-\frac{1}{\gamma}}+\beta V(k')\end{equation}such that $k'=k-c$ (Budget Constraint or BC). In this simple model, $\beta$ measures how much you care about your consumption of cake tomorrow, and $\gamma$ measures your desire to smooth cake consumption over time. $V$ is the value function. Basically, it says how much having $k$ units of cake is worth to you today, given optimal consumption for the rest of time. Letting $\lambda$ be the multiplier on the BC, taking first order conditions we get:
\begin{equation} \text{[c]} \quad c^{-\frac{1}{\gamma}}=\lambda \end{equation} \begin{equation} \text{[k’]} \quad \beta v’(k’) = \lambda \end{equation} \begin{equation} \text{[Envelope]} \quad v’(k)= \lambda \end{equation}
Combining these, we get that:\begin{equation}c'=\beta^{\gamma} c\end{equation}Now remember, the total amount of cake is fixed at $k_0$, so it must be that total lifetime consumption of cake is equal to the size of the cake.\begin{equation}\sum\limits_{t=0}^{\infty} c_t = k_0\end{equation}Using our formula for optimal consumption we get:
\begin{equation} c_0=(1-\beta^{\gamma}) k_0 \end{equation} \begin{equation} c_t=\beta^\gamma c_{t-1} \text{ for all }t>0 \end{equation} Below I plot the optimal consumption of a cake of size 100 for different values of $\beta$ and $\gamma$ (b is $\beta$ and g is $\gamma$). We see that the agent with a higher $\beta$ consumes an almost constant amount of cake every day, while the agent with a high $\gamma$ prefers to eat a large share of the cake the day he gets it.
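The closed-form path above is easy to sketch numerically (the values of $\beta$, $\gamma$ and $k_0$ below are illustrative assumptions):

```python
import numpy as np

# Closed-form cake-eating path: c_0 = (1 - beta^gamma) k_0 and
# c_t = beta^gamma * c_{t-1}. Parameter values are illustrative.
beta, gamma, k0, T = 0.95, 2.0, 100.0, 200

c0 = (1 - beta**gamma) * k0
c = c0 * beta**(gamma * np.arange(T))  # geometric consumption path

# the path should (approximately) exhaust the cake
print(c[0], c.sum())
```

Because consumption decays geometrically at rate $\beta^\gamma$, the truncated sum is already within rounding error of $k_0$ after a couple hundred periods.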
RBC Model
First, let’s go over how to solve the RBC model (without labor) by hand. This will be the framework upon which we introduce energy consumption. The discussion here is based on class notes from Luigi Bocola’s Macroeconomics 411-3 lectures.
Equilibrium Conditions
Let uppercase letters denote quantities in levels:
Production Function: $Y_t = Z_t^{\alpha} K_t^{1-\alpha}$. Law of Motion for Capital (LOK): $K_{t+1}=(1-\delta)K_t+Y_t-C_t$. Preferences are $E_0 \sum_t \beta^t \frac{C_t^{1-\frac{1}{\sigma}}}{1-\frac{1}{\sigma}}$, so the Euler Equation is: $C_t^{-\frac{1}{\sigma}} = \beta E_t\big[(1+r_{t+1}) C_{t+1}^{-\frac{1}{\sigma}}\big]$. Interest rate comes from firm FOCs (assuming firms pay for depreciation): $r_t = (1-\alpha)\frac{Y_t}{K_t}-\delta$. Finally, technology is stochastic: $\log Z_{t+1} = \phi \log Z_t + \epsilon_{t+1}$, where $|\phi|<1$ and $\epsilon_{t+1}$ is N(0,1).
Log-Linearize about the non-stochastic steady-state
Let lowercase letters denote log-deviations from the steady state (except for $z_t$, which is just $\log Z_t$ itself). Let uppercase letters with no subscript denote equilibrium quantities in levels.
Production Function: $y_t = \alpha z_t + (1-\alpha)k_t$. LOK: $K k_{t+1} = (1-\delta)K k_t + Y y_t - C c_t$. Use the equation for $r_t$ in the steady state to get $\frac{Y}{K}=\frac{r+\delta}{1-\alpha}$ and use the expression for LOK in steady state to get $\frac{C}{K}=\frac{Y}{K}-\delta$. Now, put this back into LOK, along with our expression for $y_t$: \begin{equation} k_{t+1}=(1+r)k_t + \alpha \frac{r+\delta}{1-\alpha} z_t - \Big( \frac{r+\delta}{1-\alpha}-\delta \Big) c_t \end{equation} we can rewrite this as $k_{t+1}=\lambda_1 k_t + \lambda_2 z_t + (1-\lambda_1-\lambda_2)c_t$. Using the fact that in the steady state $\beta(1+r)=1$, the Euler Equation becomes: \begin{equation} \sigma E_t [r_{t+1}]=E_t[c_{t+1}-c_t] \end{equation} Log-linearizing our expression for $r_t$, using $R=1+r$ and our expression for $Y/K$ from above, we get $r_{t+1}=\alpha \frac{r+\delta}{1+r}(z_{t+1}-k_{t+1})$, putting this back into the Euler Equation we get: \begin{equation} \sigma \alpha \frac{r+\delta}{1+r}E_t[z_{t+1}-k_{t+1}]=E_t[\Delta c_{t+1}] \end{equation} where $\Delta c_{t+1}=c_{t+1}-c_t$. We rewrite this as $\sigma \lambda_3 E_t[z_{t+1}-k_{t+1}]=E_t[\Delta c_{t+1}]$, where $\lambda_3 = \alpha\frac{r+\delta}{1+r}$.
Guess a Policy Function
We want to eliminate consumption from all the equations above, so we conjecture a linear policy rule:\begin{equation}c_t=\eta_{ck} k_t + \eta_{cz} z_t\end{equation}where $\eta_{ck}$ and $\eta_{cz}$ capture how consumption responds to the current capital stock and technology, respectively.
Putting this into our LOK to eliminate $c_t$: \begin{equation} k_{t+1}=(\lambda_1 + (1-\lambda_1 -\lambda_2) \eta_{ck}) k_t + (\lambda_2 + (1-\lambda_1 - \lambda_2) \eta_{cz}) z_t \end{equation} In addition, substitute this into our Euler Equation, and take expectations: \begin{equation} \eta_{ck} \Delta{k_{t+1}} + \eta_{cz}(\phi-1) z_t = \sigma \lambda_3 (\phi z_t - k_{t+1}) \end{equation} where we used the fact that $k_{t+1}$ is totally determined at time $t$ so we don't need expectations, and $E_t[z_{t+1}]=\phi z_t$ from our technology process.
Solving the Model
The final step is to substitute our LOK into the Euler Equation to eliminate $k_{t+1}$:\begin{equation}\eta_{ck}(\eta_{kk}-1)k_t+[\eta_{ck} \eta_{kz} + \eta_{cz}(\phi-1)] z_t= \sigma \lambda_3 [ \phi-\eta_{kz}]z_t - \sigma \lambda_3 \eta_{kk} k_t\end{equation}where we used the following facts:
$c_t=\eta_{ck}k_t+\eta_{cz}z_t$ from our conjectured policy function, $E_t[z_{t+1}]=\phi z_t$ from our technology process, and $k_{t+1}=\eta_{kk}k_t+\eta_{kz}z_t$ from our LOK, where $\eta_{kk}=\lambda_1 + (1-\lambda_1-\lambda_2)\eta_{ck}$ and $\eta_{kz}=\lambda_2 + (1-\lambda_1-\lambda_2)\eta_{cz}$. To solve the model, we match coefficients: set the $k_t$ terms equal and solve for $\eta_{ck}$, then set the $z_t$ terms equal to solve for $\eta_{cz}$. We will have two solutions for $\eta_{ck}$ and we will choose the positive one to get a stable solution. The benefit of this formulation is that we can solve for the time series dynamics of all the variables of interest, such as consumption, capital and output.
Future Work
With the basics established, you can add the cake-eating problem to the RBC model by modifying the production function: include energy usage $E_t$ as an additional input, and let the stock of energy follow the cake-eating budget constraint from the first section.
|
I am currently trying to understand a little bit of axiomatic set theory, following Enderton's fun book "Elements of Set Theory" and am a little unclear about the set of natural numbers as defined in Chapter 4.
Firstly, let me note that the axioms in the book preceding the study of the natural numbers are the axiom of extensionality, the empty set axiom, pairing, unions, power sets and the axiom for constructing subsets.
At the beginning of Chapter 4 is given the definition of the successor $a^{+}=a \cup \{a\}$ of a set $a$. Enderton then defines $0=\emptyset$, $1=0^{+}$, $2=1^{+}$ and so on.
A set $A$ is called
inductive if $\emptyset \in A$ and $a \in A \implies a^{+} \in A$. The axiom of infinity then asserts the existence of an inductive set, and the set of natural numbers $\omega$ is defined to be the intersection of all inductive sets. The existence of this set follows from the axioms mentioned.
Now clearly each set $0,1,2 \ldots$ belongs to the set $\omega$ since it contains $0=\emptyset$ and is closed under successors.
My question: is the converse true? Is every element of $\omega$ obtained from $0=\emptyset$ by applying the successor operation to $0$ finitely many times?
I presume that this can be deduced, but as far as I can tell it is not addressed in the book.
Note: If I had a way of constructing the "set" $X=\{0,1,2 ,\ldots n,n^{+},\ldots \}$ then it would be inductive, by construction, and therefore contain $\omega$ but I do not know how to construct the aforementioned set from the axioms given. So an equivalent question is: how can I construct $X$? (Perhaps the later axiom of replacement might help somehow?)
Grateful for any help!
|
Are there efficient algorithms for calculating determinants of matrices?
closed as off-topic by Shailesh, Namaste, Delta-u, José Carlos Santos, Strants Jul 20 '18 at 14:18
This question appears to be off-topic. The users who voted to close gave this specific reason:
" This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Shailesh, Namaste, Delta-u, Strants
You can perform a slight variation of Gaussian elimination on the matrix to bring it into row echelon form, without changing the value of the determinant except for its sign.
Adding a scalar multiple of a row to another row will not change the value of the determinant, because the determinant is linear in its arguments (here: row vectors) and because it is alternating: two equal arguments make it vanish (here: the added row). Switching rows will flip the sign of the value of the determinant, because it is an alternating form.
Laplace expansion along columns then has only contributions at the diagonal positions.
This lowers the complexity for a $n\times n$ matrix from $O(n!)$ for the Leibniz formula or Laplace expansion to $O(n^3)$ for Gauss elimination.
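As a concrete sketch of the idea (a minimal implementation, not an optimized one), elimination only has to track the row swaps and multiply the pivots at the end:

```python
# A minimal determinant via Gaussian elimination with partial pivoting, O(n^3).
# Row swaps flip the sign; adding a multiple of one row to another leaves the
# determinant unchanged, so det(A) is the signed product of the pivots.

def det(a):
    a = [list(map(float, row)) for row in a]   # work on a copy
    n = len(a)
    sign = 1.0
    for k in range(n):
        # partial pivoting: pick the largest pivot in column k
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0.0:
            return 0.0                         # singular matrix
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign                       # a row swap flips the sign
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    prod = sign
    for k in range(n):
        prod *= a[k][k]
    return prod

print(det([[1, 2], [3, 4]]))                   # -2.0
```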
Suppose I have a square matrix $A \in \mathbb{C}^{n \times n} $
now
$$A = LU $$
in general, computing the determinant from the Leibniz formula takes $\mathcal{O}(n!)$ operations;
instead, using $$det(A) = det(LU) = det(L)det(U) $$ we can calculate it as
$$ det(A) = \prod_{i=1}^{n} l_{ii} \prod_{i=1}^{n} u_{ii} $$
however, if numerical stability is a concern, you could use the QR decomp
$$ A = QR$$ $$ det(A) = det(QR) = det(Q)det(R) $$ the determinant of $Q$ is $\pm 1$ as it is orthogonal, so
$$ det(A) = \pm \prod_{i=1}^{n} r_{ii}$$
now if the matrix is Hermitian positive definite we can use the Cholesky decomp
$$ A = R^{*}R$$ $$ det(A) = det(R^{*}R) = det(R^{*})det(R)$$ $$ det(A) = \prod_{i=1}^{n} r_{ii}^{*} \prod_{i=1}^{n} r_{ii}$$
There is also an algorithm called the Bareiss algorithm which has $\mathcal{O}(n^{2})$ time complexity for Toeplitz matrices.
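The Cholesky route above can be sketched in a few lines of pure Python (the factor is computed in lower-triangular form, which has the same diagonal as \(R\); the SPD test matrix is my own example):

```python
import math

# Sketch: determinant of a real symmetric positive definite matrix via the
# Cholesky factorization A = R* R, so det(A) = (prod of diagonal entries)^2.
# Cholesky-Banachiewicz, computing the lower-triangular factor.

def cholesky_det(a):
    n = len(a)
    r = [[0.0] * n for _ in range(n)]       # lower-triangular factor
    for i in range(n):
        for j in range(i + 1):
            s = sum(r[i][k] * r[j][k] for k in range(j))
            if i == j:
                r[i][j] = math.sqrt(a[i][i] - s)
            else:
                r[i][j] = (a[i][j] - s) / r[j][j]
    prod = 1.0
    for i in range(n):
        prod *= r[i][i]
    return prod * prod                      # det(R*) det(R) = (prod r_ii)^2

A = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
print(cholesky_det(A))                      # 64.0
```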
|
Please help me make equivalent transformations of this formula: (A∨C→B)(A→C)(¬B→¬A∧C)(¬A→(C→B))(B→¬C→¬A). Thanks.
You cannot "mix" in this way different "conventions" regarding symbols.
If you want to use
propositional connectives (like : $\lor, \land$) instead of boolean operators (like : $\cdot, +$) you have to rewrite your formula with $\lor$ in place of $+$ and $\land$ in place of $\cdot$ (juxtaposition).
If so, I think that your formula must be :
$((A∨C)→B) \land (A→C) \land (¬B→(¬A \land C)) \land (¬A→(C→B)) \land (B→(¬C→¬A))$
having restored some missing parentheses.
Then we eliminate $\rightarrow$, through the equivalence between $P \rightarrow Q$ and $\lnot P \lor Q$.
In this way, splitting the problem, we have five
conjuncts to consider: (i) $((A∨C)→B)$ is : $\lnot (A∨C) \lor B$ which, by De Morgan, is : $(\lnot A \land \lnot C) \lor B$, which in turn is equivalent to : $(\lnot A \lor B) \land (B \lor \lnot C)$, by distributivity.
In
boolean form is : $(\bar {A} + B)(B + \bar {C})$. (ii) $(A→C)$ is simply : $(\lnot A \lor C)$. In boolean : $(\bar {A} + C)$. (iii) $(¬B→(¬A \land C))$ is : $B \lor (\lnot A \land C)$, using double negation, which in turn is equivalent to : $(\lnot A \lor B) \land (B \lor C)$, by distributivity.
In
boolean form is : $(\bar {A} + B)(B + C)$. (iv) $(¬A→(C→B))$ is : $A \lor (\lnot C \lor B)$. In boolean : $(A + B + \bar {C})$. (v) $(B→(¬C→¬A))$ is : $\lnot B \lor (C \lor \lnot A)$. In boolean : $(\bar {A} + \bar {B} + C)$.
Now, we can "reassemble" the
conjuncts (i) to (v) without redundant terms :
$(\bar {A} + B)(B + \bar {C})(\bar {A} + C)(B + C)(A + B + \bar {C})(\bar {A} + \bar {B} + C)$.
Now we start with boolean simplification, "inserting" the missing terms [i.e. : $(\bar {A} + B)$ is rewritten as : $(\bar {A} + B + C\bar {C})$ i.e. $(\bar {A} + B + C)(\bar {A} + B + \bar {C})$ ] :
$(\bar {A} + B + C)(\bar {A} + B + \bar {C})(A + B + \bar {C})(\bar{A} + B + \bar {C})(\bar {A} + B + C)(\bar {A} + \bar {B} + C)(A + B + C)(\bar {A} + B + C)(A + B + \bar {C})(\bar {A} + \bar {B} + C)$
and cancel the redundant terms :
$(\bar {A} + B + C)(\bar {A} + B + \bar {C})(A + B + \bar {C})(\bar {A} + \bar {B} + C)(A + B + C)$.
This is the standard Product-of-Sums (POS) Form.
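As an assumption-free way to validate the simplification, a brute-force truth table over the 8 assignments confirms that the original five conjuncts and the final POS form agree everywhere:

```python
from itertools import product

# Truth-table check: the original formula and the final product-of-sums
# form should agree on all 8 assignments of (A, B, C).

def implies(p, q):
    return (not p) or q

def original(A, B, C):
    return (implies(A or C, B) and implies(A, C)
            and implies(not B, (not A) and C)
            and implies(not A, implies(C, B))
            and implies(B, implies(not C, not A)))

def pos_form(A, B, C):
    return ((not A or B or C) and (not A or B or not C)
            and (A or B or not C) and (not A or not B or C)
            and (A or B or C))

agree = all(original(*v) == pos_form(*v)
            for v in product([False, True], repeat=3))
print(agree)   # True
```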
|
The formal notion you're looking for is that of an affine space. You have already linked a question (2) to which I've given a fairly thorough technical answer, so I'll avoid the formalism here.
Yes, you are essentially correct. Displacement vectors can be identified with the "difference" between two points, and are fundamentally baked-in to the structure of an affine space. If you then choose an origin point, you can define the
position vector corresponding to any individual point as the displacement vector between that point and your chosen origin.
Therefore, displacement vectors are ever-so-slightly more fundamental in the sense that position vectors require you to make an additional choice of origin from the set of points in your affine space, while the displacements themselves require no such additional structure.
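This point-versus-vector bookkeeping can be made concrete in code; a toy sketch (all class and variable names are my own, not standard):

```python
from dataclasses import dataclass

# Toy illustration of affine-space bookkeeping: points can be subtracted to
# give displacement vectors, and a position vector only appears once an
# origin has been chosen.

@dataclass(frozen=True)
class Vector:
    x: float
    y: float
    def __add__(self, other):            # vectors form a vector space
        return Vector(self.x + other.x, self.y + other.y)

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    def __sub__(self, other):            # point - point = displacement vector
        return Vector(self.x - other.x, self.y - other.y)
    def __add__(self, v):                # point + vector = translated point
        return Point(self.x + v.x, self.y + v.y)

p, q = Point(1.0, 2.0), Point(4.0, 6.0)
d = q - p                                # displacement: no origin needed
origin = Point(0.0, 0.0)                 # an extra, arbitrary choice...
pos_q = q - origin                       # ...defines q's position vector
print(d, pos_q)
```

Note that the types forbid adding two points, exactly the operation an affine space does not supply.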
To answer your follow up question, the term "vector" shows up in a wide variety of distinct but related contexts. If you want a formal definition, then I would say that that a
displacement vector is an element of a vector space which is incorporated into an affine space.
More generally, a
vector is simply an element of a vector space. This is perhaps among the least satisfying answers imaginable, but there's really no other way to say it. The vectors which arise in differential geometry (i.e. tangent vectors to a manifold) are completely different beasts to the vectors which we've discussed here.
Lastly, in physics we often define a vector by how its components transform under rotations (or Lorentz transformations, in the case of 4-vectors). You can take that as the defining characteristic, but this can be overly broad$^\dagger$ and requires a careful discussion of symmetry transformations to treat properly.
$^\dagger$For instance, in SR the gamma matrices $\gamma^\mu$ transform like 4-vectors despite not actually being 4-vectors.
|
Assume you want to create a security which replicates the implied volatility of the market, that is, when $\sigma$ goes up, the value of the security $X$ goes up as well.
The method you could use is to buy call options on that market for an amount $C$.
We know that call options have a positive vega $\nu = \frac{\partial C}{\partial \sigma}= S \varphi(d_1)\sqrt{\tau} > 0$ (where $\varphi$ is the standard normal density), so if the portfolio were made of the call, $X=C$, then the effect of $\sigma$ on the security is as we desired.
However, there is of course a major issue: the security $X$ would also have embedded underlying-price risk, time risk and interest rate risk. You can use the greeks $\Delta$, $\Theta$ and $\rho$ (the derivatives of the call price with respect to each source of risk) to hedge against them.
In practice, I think you definitely need $X$ to be $\Theta$-neutral and $\Delta$-neutral, but would you also hedge against $\rho$ or other greeks? Have the effects of these variables been important enough on option prices to make a significant impact, or would the cost of hedging be too high for the potential benefit?
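For concreteness, here is a small sketch of the greeks involved (standard no-dividend Black-Scholes formulas; the parameter values are illustrative, not from any particular market):

```python
import math

# Black-Scholes call greeks (no dividends), to compare the relative sizes of
# vega, delta, theta and rho for an at-the-money option.

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_greeks(S, K, r, sigma, tau):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    delta = norm_cdf(d1)
    vega = S * norm_pdf(d1) * math.sqrt(tau)     # the density, not the CDF
    theta = (-S * norm_pdf(d1) * sigma / (2.0 * math.sqrt(tau))
             - r * K * math.exp(-r * tau) * norm_cdf(d2))
    rho = K * tau * math.exp(-r * tau) * norm_cdf(d2)
    return delta, vega, theta, rho

delta, vega, theta, rho = call_greeks(S=100, K=100, r=0.02, sigma=0.2, tau=1.0)
print(delta, vega, theta, rho)
```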
|
PURPOSE
This app can be used to identify the relation between independent variables and specified quantiles of a dependent variable. Independent variables can be continuous or categorical, and the model supports interaction effects. Multiple bandwidth methods are supported, including a bootstrap method. For given predictors, the app can also predict the response from the fitted model.
INSTALLATION
Download QuantileReg.opx file, and then drag-and-drop onto the Origin workspace. An icon will appear in the Apps Gallery window.
NOTE: This tool requires OriginPro.
OPERATION
Make a worksheet for input data active. Click the
Quantile Regression icon in the Apps Gallery window. In the opened dialog, select a column from the worksheet as
Dependent Variable in the Input tab. Choose Continuous or Categorical Independent Variables. If you want to predict the response for test data, check the Predict Response box, and select columns for Continuous or Categorical Independent Variables. Note that the number of columns of Continuous or Categorical test data must be the same as for the input data. In the
Model tab, choose Main Effects or Custom Model from the Model Type drop-down list. For the latter, you can customize interaction terms in the pop-up Custom Model dialog. The Include Intercept option determines whether to include an intercept parameter. If categorical independent variables are chosen, choose (1, 0) or (-1, 1) from the Coding Type drop-down list. In the
Settings tab, type quantile values in Quantile(s) to Estimate (0-1) edit box. Quantile values must be between 0 and 1, and each quantile value must be separated by space. In Standard Error Settings group, choose Method to Estimate Covariance, four methods are available: IID, Kernel, HKS and Bootstrap XY. For first three methods, choose Bandwidth Method to determine the bandwidth: Hall-Sheather or Bofinger. For the last method, set an integer number to Bootstrap Replications. Specify Max Iter to define the maximum number of iterations. In the
Quantities tab, choose which quantities to compute. Quantities options include Standard Error, Confidence Limits, t-Value and Prob>|t| in the Fit Parameters branch, Raw Sum of Deviations, Min Sum of Deviations and Pseudo R-Squared in the Fit Statistics branch, and Covariance Matrix, Fitted Y, Residuals and Predicted Y in the Fitted Result branch. Confidence Level (%) in the Fit Parameters branch must be between 0 and 100. Predicted Y in the Fitted Result branch is available only if Predict Response is checked in the Input tab. Click
OK button, a report sheet and a report data sheet will be created.
ALGORITHM
The specified quantile of the dependent variable can be found by minimizing the sum of the loss function for residuals.
\(\min \displaystyle { \sum_{i} \rho_{\tau} \left( y_i-x_i^T \beta \right ) }\)
where \(\rho_{\tau}\) is the piecewise loss function, and \(\beta\) is the vector for fitted parameters.
\(\rho_{\tau}(z)= \begin {cases} (\tau-1)z, & \text{if } z<0 \\ \tau z, & \text{if } z \geqslant 0 \end {cases}\).
If \(\tau = 0.5\), the loss function becomes \(0.5 \displaystyle \sum_i \left \vert y_i-x_i^T \beta \right \vert\) , and the regression becomes the method of least absolute deviations (LAD).
Pseudo R-squared \(R_p^2\) is similar to R-squared in the least squares method, but it uses the quantile instead of the mean, and the loss function instead of the square of residuals:
\(R_p^2 = 1 - \frac {\displaystyle \sum_i \rho_{\tau} \left( y_i - x_i^T \beta \right )}{\displaystyle \sum_i \rho_{\tau} \left( y_i - Q(\tau) \right)}\) , where \(Q(\tau)\) is the quantile of \(\tau\) for input data \(y_i\).
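A tiny numeric illustration of the loss function (the example data are my own): minimizing the summed loss over candidate values of \(q\) picks out the \(\tau\)-th sample quantile; for \(\tau = 0.5\) this is the median, i.e. least absolute deviations.

```python
# Minimizing the sum of pinball losses rho_tau over a sample recovers the
# tau-th sample quantile; for tau = 0.5 the minimizer is the median.

def pinball(z, tau):
    return (tau - 1.0) * z if z < 0 else tau * z

def total_loss(q, ys, tau):
    return sum(pinball(y - q, tau) for y in ys)

ys = [1.0, 2.0, 3.0, 5.0, 9.0]
# brute-force minimization over a grid of candidate quantile values
candidates = [i / 100.0 for i in range(0, 1001)]
best_median = min(candidates, key=lambda q: total_loss(q, ys, 0.5))
print(best_median)   # 3.0, the sample median
```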
Sample OPJU File
This app provides a sample OPJU file. Right click on the Quantile Regression icon in the Apps Gallery window, and choose Show Samples Folder from the short-cut menu. A folder will open. Drag-and-drop the project file QRSample.opju from the folder onto Origin. The Notes window in the project shows detailed steps.
Note: If you wish to save the OPJU after changes, it is recommended that you save to a different folder location (e.g. User Files Folder).
NOTES
If there is only one continuous independent variable, fitted curve will be shown in the report.
|
I updated my work to show the steps of how I got my expansion.
I just want to know if what I have worked on so far is correct or if I messed something up along the way.
I have the function
$$f(x) = \int_0^x \frac {\log(1+t)}{t}dt$$
I would like a confirmation for my Taylor expansion of $\log(1+t)$ about $x_0 = 0$. I got the following,
$$\log(1+t) = \sum_{k=1}^{n} (-1)^{k+1} \;\frac {t^k}{k}+\; \frac {(-1)^n \; t^{n+1}}{(n+1)(1+\xi(t))^{n+1}}$$
Where $\xi$ is between 0 and t. (Below are my steps of how I got this)
First we have that the $n^{th}$ derivative of $\log(1+t)$ is
$$\frac{(-1)^{n+1} \; (n-1)!}{(1+t)^n}$$
and therefore for the $(n+1)^{st}$ derivative we have
$$\frac{(-1)^{n} \; n!}{(1+t)^{n+1}}$$
Taylor's Theorem with Remainder states $f(x) = p_n(x) + R_n(x)$ for
$$p_n(x)=\sum_{k=0}^{n} \frac {(x-x_0)^k}{k!} \; f^{(k)}(x_0)$$
and
$$R_n(x) = \frac {(x-x_0)^{n+1}}{(n+1)!} \; f^{(n+1)}(\xi_x)$$
for $\xi_x$ between $x_0$ and $x$.
This is what I used to derive the remainder term. (the pointwise version not the integral version).
So I have
$$R_n(t) = \frac {t^{n+1}}{(n+1)!} \; \frac{(-1)^{n} \; n!}{(1+\xi(t))^{n+1}} = \frac {(-1)^n \; t^{n+1}}{(n+1)(1+\xi(t))^{n+1}}$$
If this is correct is it ok to just divide by $t$ to get a Taylor expansion for the integrand, and if so what do I have to do to ensure I am handling $t=0$ correctly?
Thank you!!!
|
Given a general ellipse $$Ax^2+Bxy+Cy^2+Dx+Ey+F=0$$ where $B^2<4AC$, what are the major and minor axes of symmetry in the form $ax+by+c=0$?
It is possible of course to first work out the angle of rotation such that $xy,x,y$ terms disappear, in order to get an upright ellipse of the form $x^2/p^2+y^2/q^2=1$ and proceed from there. This may involve some messy trigonometric manipulations.
Could there be another approach perhaps, considering only quadratic/diophantine and linear equations?
Addendum
Here's a graphical implementation based on the answer by Ng Chung Tak.
Addendum 2
Based on the answers by amd and by Ng Chung Tak, the equations for the axes are
$$\color{red}{\left(y-\frac {2AE-BD}{B^2-4AC}\right)=\frac {C-A\pm \sqrt{(A-C)^2+B^2}}B\left(x-\frac {2CD-BE}{B^2-4AC}\right)}$$
Note that $$\frac{C-A\pm \sqrt{(A-C)^2+B^2}}B\cdot \color{lightgrey}{\frac {C-A\mp\sqrt{(A-C)^2+B^2}}{C-A\mp\sqrt{(A-C)^2+B^2}}}=-\frac B{C-A\mp\sqrt{(A-C)^2+B^2}}$$ i.e. it is equal to the negative of its own reciprocal. Hence the equations for the axes can also be written as
$$\color{red}{\left(x-\frac {2CD-BE}{B^2-4AC}\right)=-\frac {C-A\mp \sqrt{(A-C)^2+B^2}}B\left(y-\frac {2AE-BD}{B^2-4AC}\right)}$$
hence the two equivalent forms for the axes. Here's the graphical implementation.
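The boxed formulas can be sanity-checked numerically; a short sketch (the coefficients are an arbitrary example with \(B^2<4AC\)):

```python
import math

# Numeric check of the axis formulas on a sample ellipse: the computed center
# must zero the gradient of the conic, and the two axis slopes must be
# perpendicular (their product is -1).

A, B, C, D, E, F = 2.0, 1.0, 3.0, 4.0, 5.0, -20.0

den = B * B - 4.0 * A * C
xc = (2.0 * C * D - B * E) / den        # center x
yc = (2.0 * A * E - B * D) / den        # center y

root = math.sqrt((A - C) ** 2 + B * B)
m1 = (C - A + root) / B                 # slope of one axis
m2 = (C - A - root) / B                 # slope of the other axis

# gradient of Ax^2 + Bxy + Cy^2 + Dx + Ey + F at the center
gx = 2.0 * A * xc + B * yc + D
gy = B * xc + 2.0 * C * yc + E
print(xc, yc, m1, m2)
```

The product of the slopes is \((C-A)^2 - \big((A-C)^2 + B^2\big)\) over \(B^2\), i.e. exactly \(-1\), confirming the two axes are perpendicular.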
|
I'm having trouble understanding on which side the induced representation functor is adjoint to the restriction functor.
For simplicity I'm assuming $H$ is a subgroup of $G$ (we could also see it as a morphism $H\to G$). Let me denote $\mathbf{Rep}_H$ the category of representations of $H$ over a fixed field $K$ (similarly for $G$), let $\mathbf{Res}_H^G: \mathbf{Rep}_G \to \mathbf{Rep}_H$ be the restriction functor.
One usual description of the induced representation is the following: given $(\rho,V)$ a representation of $H$, let $\mathbf{Ind}_H^G\rho = \{f: G\to V \mid \forall h\in H, \forall x\in G, f(hx)=h\cdot f(x)\}$ with $g\cdot f(x) =f(xg)$.
With this definition it's easy to prove $Hom_G(\pi, \mathbf{Ind}_H^G\rho)\simeq Hom_H(\mathbf{Res}_H^G\pi, \rho)$, which makes $\mathbf{Ind}_H^G$ right-adjoint to $\mathbf{Res}_H^G$. If I'm not mistaken, the explicit arrows are $\lambda\mapsto (v\mapsto \lambda(v)(1))$ and in the reverse direction $f\mapsto (v\mapsto (g\mapsto f(g\cdot v)))$.
First of all this is a bit surprising, as $\mathbf{Res}_H^G$ is more of a forgetful-type functor so we'd expect the "obvious functor in the other direction" to be its left adjoint, but as the proof goes through so simply we can set this surprise aside.
However, Wiki states the following : "In the case of finite groups, they are actually both left- and right-adjoint to one another" (from the article on Frobenius Reciprocity)
Moreover in the same article they state that there is a natural isomorphism $Hom_{K[G]}(K[G]\otimes_{K[H]}V, W) \simeq Hom_{K[H]}(V,W)$ where a representation is simply seen as a $K[G]$(resp. $K[H]$-)module (and $K[G]$ is a $(K[G],K[H])$-bimodule to make sense of the tensor product and the $K[G]$-module structure on it). Once again, this isomorphism seems easy to establish : one direction is $\lambda\mapsto (v\mapsto \lambda(1\otimes v))$ and the other $f\mapsto (g\otimes v\mapsto g\cdot f(v))$.
This seems to work for arbitrary groups, not just finite ones. So I assume the sentence in the wikipedia article means that $K[G]\otimes_{K[H]}V \simeq \mathbf{Ind}_H^GV$ only if $G$ is finite (the "only" being in the sense "in general").
If that's not the case, what does this sentence mean ? If it is, is this isomorphism natural (in $V$ ? and in $(H,G)\in \mathbf{FinGrp}^\to$ ?) ? What is the isomorphism? If my interpretation is correct, can anything be said on the relationship between $K[G]\otimes_{K[H]}V$ and $\mathbf{Ind}_H^GV$ when $G$ is infinite ?
If my interpretation is not correct and the "$K[G]\otimes_{K[H]}V$" construction does not work for infinite $G$ (because of some mistake I made), then does $\mathbf{Res}_H^G$ have a left adjoint in general ?
It seems as though one can apply the general adjoint functor theorem to prove that it does, the only bit I'm not sure about being the fact that it preserves limits... But I think it preserves products and equalizers so it should suffice, right ?
If this is correct, and still working under the assumption that my interpretation isn't correct, can the aforementioned left adjoint be explicited? Does it have anything to do with $\mathbf{Ind}_H^G$ ? with $K[G]\otimes_{K[H]}-$ ?
Any correction of anything I said, besides the explicit questions, is very welcome as well as an answer to the questions !
|
Evaluate $$\lim_{n \rightarrow \infty~} \dfrac {[(n+1)(n+2)\cdots(n+n)]^{\dfrac {1}{n}}}{n}$$ using the Cesàro–Stolz theorem.
I know there are many questions like this, but I want to solve it using the Cesàro–Stolz method and no other.
I took the log and applied Cesàro–Stolz; I get $$\log{2}+n\log\cfrac{n}{n+1}$$
Which gives me the answer $\frac{2}{e}$. But the answer is $\frac{4}{e}$. Could someone help?
Edit: On taking log, $$\lim_{n \to \infty} \frac{-n\log n + \sum\limits_{k=1}^{n} \log \left(k+n\right)}{n} \\= \lim_{n \to \infty} \left(-(n+1)\log (n+1) + \sum\limits_{k=1}^{n+1} \log \left(k+n\right)\right) - \left(-n\log n + \sum\limits_{k=1}^{n} \log \left(k+n\right)\right) \\ = \lim_{n \to \infty} \log \frac{2n+1}{n+1} - n\log \left(1+\frac{1}{n}\right) = \log 2 - 1$$ Which gives $2/e$
|
Your expression for $p(x)$ is $$ \sum_{k_i\ge 0}\left(\frac{x^{\sum_{i=1}^\infty ik_i}}{\prod_{i=1}^\infty k_i!(i!)^{k_i}}\right)=\sum_{k_i\ge 0}\left(\prod_{i=1}^\infty\frac{x^{i k_i}}{k_i!(i!)^{k_i}}\right). $$ Now, when you expand $\displaystyle \prod_{i=1}^\infty \left(\sum_{k_i=0}^\infty \frac{x^{ik_i}}{k_i!(i!)^{k_i}}\right)$ what you do is, for each $i$, pick a $k_i$, then form the product of all the corresponding $\displaystyle \frac{x^{ik_i}}{k_i!(i!)^{k_i}}$, resulting in $\displaystyle \prod_{i=1}^\infty\frac{x^{i k_i}}{k_i!(i!)^{k_i}}$, and then add all these expressions, but that is precisely what the displayed sum is. (Of course, one picks $k_i=0$ for almost all $i$ in order for the expressions to be meaningful.)
As the comments with Brian indicate, the confusion is perhaps over the way the author is using notation. It is perhaps better to write the first expression as follows: Let ${\mathbb N}^{\mathbb N}_*$ be the set of all functions $f:\mathbb N^+\to\mathbb N$ (all
sequences) such that $f(n)=0$ for all but finitely many $n$. The first sum is then $$ \sum_{f\in\mathbb N^{\mathbb N}_*}\left(\frac{x^{\sum_{i=1}^\infty i f(i)}}{\prod_{i=1}^\infty (f(i))!(i!)^{f(i)}}\right). $$ On the other hand, the product is just $$ \prod_{i=1}^\infty \left(\sum_{j=0}^\infty \frac{x^{i j}}{j!(i!)^{j}}\right). $$When you expand, you pick for each $i$ a $j$ (which, naturally, depends on $i$, so we can call it $f(i)$), with the understanding that you pick $j=0$ almost all the time. Etc.
The way the book writes the expressions, essentially the same notation is used to mean two completely different things: First, $k_i\ge0$ means you are looking at an infinite sequences $(k_1,k_2,\dots)$ with almost all $k_i$ being $0$ (this is just an $f\in{\mathbb N}^{\mathbb N}_*$). The second time, in $\sum_{k_i=0}^\infty$, the author now just means $\sum_{n=0}^\infty$, but is using $k_i$ as the index, rather than $n$.
At the bottom of it, what the author is using is a generalized distributive law, a more general case of which would be that in a sufficiently complete Boolean algebra, $$ \bigwedge_{a\in X}\bigvee\{u_{a,i}\mid i\in I_a\}=\bigvee_{f\in\prod_{a\in X}I_a}\bigwedge\{u_{a,f(a)}\mid a\in X\} $$for $X$ a non-empty set, $I_a$ a non-empty index set (for each $a\in X$) and arbitrary elements $u_{a,i}$ of the Boolean algebra (for $a\in X, i\in I_a$), though I doubt that this more general presentation would actually clarify things. This family of generalized distributive laws, by the way, is just a reformulation of the axiom of choice.
One final remark is that it is not capriciousness that makes us look only at functions in $\mathbb N^{\mathbb N}_*$ rather than all functions $f:\mathbb N^+\to\mathbb N$: We are actually looking at all functions, but only the ones in ${\mathbb N}^{\mathbb N}_*$ "matter". There are two ways of interpreting the expansions we have. One is purely formal, and then the convention that almost all $k_i$ must be $0$ is essentially a matter of definitions, but this convention is adopted because of the second way: Namely, we can consider the expansions as defining analytic functions (for an appropriate interval where they converge, typically $|x|<1$).
Note that given a sequence $k_1,k_2,\dots$, we have that $\displaystyle \lim_{n\to\infty}\prod_{i=1}^n \frac{x^{i k_i}}{k_i!(i!)^{k_i}}=0$ if $k_i\ne 0$ infinitely often (more precisely, we say that the product
diverges to $0$), essentially because $\displaystyle\frac{x^i}{i!}\to0$ as $i\to\infty$. But then the only sequences $f:i\mapsto k_i$ that contribute to the sum are the ones in ${\mathbb N}^{\mathbb N}_*$ anyway.
|
Given curve $c$ in $\mathbb{R}^3$, there are $3$ 2-dimensional submanifolds $N_i$ containing $c$ s.t. they are ruled surfaces $$N_i(s,t)=c(t)+s V_i$$ where $\{V_i\}$ is an orthonormal set.
Clearly, $N_i$ has an intrinsic metric, so that $c$ has a normal curvature $k_i$ on it.
Hence show that $\sum_i\ k_i\geq k$ $(\ast)$
Proof: If there is a surface $S$ containing $c$, then there are two ruled surfaces $$S_N =(N,T:=c'),\ S_n=(n,T)$$ generated by the unit outward normal vector $N$ and a unit vector $n\in T_{c'}S$, which is orthogonal to $c'$.
Hence $k =\sqrt{k_N^2+k_n^2}$, where $k_N,\ k_n$ are the normal curvatures on these surfaces.
How can we finish the proof ?
[Add] I will enumerate some facts :
(1) Convex simple closed curve in $\mathbb{R}^2$ has total curvature $2\pi$
(2) Total curvature of $c$ is a limit of sum of external angles in polygonal line $c_n$ where $c_n\rightarrow c$.
(3) $\ast$ can be restated in terms of total curvature.
(4) If $u, \ v$ are unit vectors, then we fix $v$ and an orthonormal set $\{V_i\}$. Then we have three circles $C_i:=(V_i^\perp + u)\cap S^2(1)$. Hence we find a $u_i'\in C_i$ s.t. $|x-v|\leq |u_i'-v|$ for $x\in C_i$.
If $u_i',\ v$ have external angle $\theta_i$, then $\ast$ is equivalent to $\sum_i\ \theta_i\geq \pi-\angle\ (u,v)$.
(5) $f:=\cos^{-1} : [-1,1]\rightarrow [0,\pi]$ is strictly decreasing; note that $f| [-1,0]$ is convex and $f|[0,1]$ is concave.
Then $\ast$ is equivalent to $$ f (Ax -\sqrt{(1-A^2)(1-x^2)} ) + f(By-\sqrt{(1-B^2)(1-y^2)} ) + f(Cz-\sqrt{(1-C^2)(1-z^2)} ) \leq 2\pi + f( u\cdot v)$$ where $u=(A,B,C),\ v=(x,y,z)$.
|
Olga Taussky-Todd and John Francis are two of the most recognizable figures in matrix analysis. Both were directly involved with the study of flutter (dynamic instability) of aircraft wings, while working in London. Taussky worked at the National Physical Laboratory, during World War II, and Francis worked at the National Research Development Corporation, in the late 50s.
Studying flutter leads to delicate boundary problems in partial differential equations, but R.A. Frazer recognized its close connection with the problem of locating the eigenvalues of a matrix.
In abstract, a useful theoretical tool to attack this problem is Geršgorin's theorem, though its efficiency may vary wildly from matrix to matrix: Suppose $A$ is an $n\times n$ matrix with complex entries $a_{ij}$, $1\le i,j\le n$. If, for each $i$, we let $r_i=\sum_{j=1,j\ne i}^n|a_{ij}|$, then for each eigenvalue $\lambda$ of $A$ there is an $i$ such that $|\lambda-a_{ii}|\le r_i$. That is, the eigenvalues of $A$ are located in the union of the $n$ closed discs centered at the diagonal entries of $A$, $\{z\in\mathbb C\mid |z-a_{ii}|\le r_i\}$, $1\le i\le n$.
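The theorem is easy to illustrate numerically; a small sketch (the example matrix is my own, small enough that the eigenvalues come from the quadratic formula):

```python
import math

# Gersgorin check on a 2x2 example: each eigenvalue must lie in at least one
# of the row discs (center a_ii, radius = sum of |off-diagonal| in row i).

A = [[4.0, 1.0],
     [2.0, 3.0]]

# eigenvalues of a 2x2 matrix from trace and determinant
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
eigs = [(tr + disc) / 2.0, (tr - disc) / 2.0]      # 5.0 and 2.0

discs = [(A[i][i], sum(abs(A[i][j]) for j in range(2) if j != i))
         for i in range(2)]                        # [(4, 1), (3, 2)]

covered = all(any(abs(lam - c) <= r for (c, r) in discs) for lam in eigs)
print(eigs, discs, covered)
```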
Taussky writes:
A large group of young girls, drafted into war work, did the calculation on hand-operated machines, following the instructions of Frazer and his assistants.
As described in section 6 of
How I became a torchbearer for matrix theory, Geršgorin's theorem proved indeed to be a key tool in carrying out the relevant computations.
Once again, I didn't ask to be assigned to matrix problems. They found me.
By the time Francis came to work on the problem, the connection with computation of eigenvalues was well established, and Francis was assigned to write the relevant computer programs to carry out this task. Trying to accelerate the time the computations required, he created the
shifted $QR$-algorithm which, even nowadays, is one of the most efficient tools to compute eigenvalues (and, via companion matrices, roots of arbitrary polynomials).
As David Watkins writes in
Francis's algorithm:
[Francis's method] became and has continued to be the big workhorse of eigensystem computations. A version of Francis’s algorithm was used by MATLAB when I asked it to
compute the eigensystem of a $1000\times 1000$ matrix on my laptop.
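Francis's production algorithm uses shifts and an initial Hessenberg reduction; the following is only a bare-bones, unshifted QR iteration to illustrate the core idea (factor $A=QR$, form $RQ$, repeat), with a symmetric test matrix of my own choosing:

```python
import math

# Unshifted QR iteration: for this symmetric tridiagonal test matrix with
# distinct eigenvalue moduli, the iterates converge to a diagonal matrix
# holding the eigenvalues (3 - sqrt(3), 3, 3 + sqrt(3)).

def qr_gram_schmidt(A):
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for k in range(j):
            R[k][j] = sum(Q[k][i] * cols[j][i] for i in range(n))
            v = [v[i] - R[k][j] * Q[k][i] for i in range(n)]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        Q.append([x / R[j][j] for x in v])
    return Q, R                 # Q stored column-wise: Q[j][i] = entry (i, j)

def qr_iterate(A, steps=150):
    n = len(A)
    for _ in range(steps):
        Q, R = qr_gram_schmidt(A)
        # A <- R Q
        A = [[sum(R[i][k] * Q[j][k] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return A

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
eigs = sorted(qr_iterate(A)[i][i] for i in range(3))
print(eigs)
```

Shifts turn this linear convergence into the (typically cubic) convergence that makes the algorithm practical.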
|
Exact Near Instantaneous Frequency Formulas Best at Zero Crossings
Introduction
This article is the last in my digression from trying to give a better understanding of the Discrete Fourier Transform (DFT). It is along the lines of the last two.
In those articles, I presented exact formulas for calculating the frequency of a pure tone signal as instantaneously as possible in the time domain. Although the formulas work for both real and complex signals (something that does not happen with frequency domain formulas), for real signals they are only applicable near the peak values of the signal and perform poorly, or not at all, at zero crossings. This article fills that gap by providing formulas that work best at zero crossings and perform poorly, or not at all, at peaks.
You should first read my two previous articles on "Exact Near Instantaneous Frequency Formulas Best at Peaks"[1][2] to understand the context of this article. Just like the previous formulas, these also work with complex signals.
Pure Tone Signal Definition
This is the equation used in all my real tone blog articles to describe a discrete single pure tone: $$ S_n = M \cdot \cos( \alpha n + \phi ) \tag {1} $$ This is the corresponding equation for a pure complex tone. $$ S_n = M \cdot e^{i(\alpha n + \phi )} \tag {2} $$
Differences of Real Neighbor Pairs
Instead of summing neighbor pairs, this approach takes the difference. This is what makes them work best at zero crossings and perform poorly at peaks.
These are the definitions of the neighbor pair signal values for a real valued signal. $$ \begin{aligned} S_{n+m} &= M \cdot \cos[ \alpha (n+m) + \phi ] \\ &= M \cdot \cos[ (\alpha n + \phi) + (\alpha m) ] \\ &= M \cdot \left[ \cos( \alpha n + \phi ) \cos( \alpha m ) - \sin( \alpha n + \phi ) \sin( \alpha m ) \right] \end{aligned} \tag {3} $$ $$ S_{n-m} = M \cdot \left[ \cos( \alpha n + \phi ) \cos( \alpha m ) + \sin( \alpha n + \phi ) \sin( \alpha m ) \right] \tag {4} $$ Taking their difference simplifies the equation as two of the terms cancel. Unlike the sum version in the previous articles, the expression for the center signal value does not reappear. $$ \begin{aligned} D_{n,m} &= S_{n+m} - S_{n-m} \\ &= -2M \sin( \alpha n + \phi ) \sin( \alpha m ) \end{aligned} \tag {5} $$
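As a quick numerical sanity check, the difference identity (5) can be verified directly; the tone parameters below are arbitrary illustration values.

```python
import math

# Verify eq. (5): S[n+m] - S[n-m] = -2*M*sin(alpha*n + phi)*sin(alpha*m)
# The tone parameters are arbitrary illustration values.
M, alpha, phi = 1.7, 0.3, 0.5

def S(n):
    return M * math.cos(alpha * n + phi)

n, m = 10, 2
D = S(n + m) - S(n - m)                                   # left side of (5)
rhs = -2 * M * math.sin(alpha * n + phi) * math.sin(alpha * m)
print(abs(D - rhs))  # agreement to machine precision
```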
Ratios of Real Neighbor Differences
When cosines are raised to a power, as in my two previous articles, the result is a linear expression of cosines of multiples of the angle. This is not true for sines. When sines are raised to a power, the result is a mix of sines and cosines of multiples of the angle. Therefore the same approach that was used in the previous articles can't be used here in the same manner.
Instead, a different trick is employed by exploiting the properties of the double angle formula for sines. $$ \begin{aligned} D_{n,2m} &= -2M \sin( \alpha n + \phi ) \sin( \alpha 2m ) \\ &= -2M \sin( \alpha n + \phi ) 2 \sin( \alpha m ) \cos( \alpha m ) \end{aligned} \tag {6} $$ By applying the double angle formula on $ D_{n,2m} $, a copy of $ D_{n,m} $ emerges which can be factored out by taking the quotient of the two. $$ \begin{aligned} Q_{n,m} &= \frac{ D_{n,2m} }{ D_{n,m} } \\ &= \frac{ -2M \sin( \alpha n + \phi ) 2 \sin( \alpha m ) \cos( \alpha m ) }{ -2M \sin( \alpha n + \phi ) \sin( \alpha m ) } \\ &= 2 \cos( \alpha m ) \end{aligned} \tag {7} $$ This results in an expression which is a simple cosine.
Differences of Complex Neighbor Pairs
The same approach can be taken using the definition of a pure complex tone. $$ \begin{aligned} S_{n+m} &= M \cdot e^{ i ( \alpha (n+m) + \phi ) } \\ &= M \cdot e^{ i ( \alpha n + \phi ) } e^{ i \alpha m } \\ \end{aligned} \tag {8} $$ $$ S_{n-m} = M \cdot e^{ i ( \alpha n + \phi ) } e^{ -i \alpha m } \tag {9} $$ Taking the difference, and applying Euler's Equation[3], shows that the difference is an imaginary multiple of the signal value. $$ \begin{aligned} D_{n,m} &= S_{n+m} - S_{n-m} \\ &= M \cdot e^{ i ( \alpha n + \phi ) } \left( e^{ i \alpha m } - e^{ -i \alpha m } \right) \\ &= S_n \cdot 2i \cdot \sin( \alpha m ) \end{aligned} \tag {10} $$ The result is quite different from the real valued case. Because the signal value reappears in the formula, the value of $\alpha$ can be calculated directly from it. $$ \alpha_{n,m} = \frac{1}{m} \sin^{-1} \left( \frac{ S_{n+m} - S_{n-m} }{ 2 i S_{n} } \right) \tag {11} $$ This formula only works with complex tones, and only up to half the Nyquist frequency. It is analogous to the base case formula in my previous two blogs, which works in either case: $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1} \left( \frac{ S_{n+m} + S_{n-m} }{ 2 S_{n} } \right) \tag {12} $$ The problem with formula (12) in the complex case is that it does not distinguish between positive and negative values of $\alpha$: a tone with a negative frequency will get a positive frequency as the answer. This problem can be avoided by using (11) in conjunction with it, or instead of it.
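A minimal sketch of (11) in Python; the tone parameters are arbitrary illustration values, and $|\alpha m|$ must stay below $\pi/2$ for the inverse sine to invert cleanly.

```python
import cmath
import math

# Recover alpha from three samples of a pure complex tone via eq. (11).
M, alpha, phi = 2.0, 0.4, 1.1

def S(n):
    return M * cmath.exp(1j * (alpha * n + phi))

n, m = 25, 1
# (S[n+m] - S[n-m]) / (2i * S[n]) equals sin(alpha*m), real up to rounding.
ratio = (S(n + m) - S(n - m)) / (2j * S(n))
alpha_est = math.asin(ratio.real) / m
print(alpha_est)  # recovers 0.4
```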
Ratios of Complex Neighbor Differences
The same ratio approach as used in the real case can be used to find the quotient value in the complex case. $$ \begin{aligned} D_{n,2m} &= S_n \cdot 2i \cdot \sin( \alpha 2m ) \\ &= S_n \cdot 2i \cdot 2 \sin( \alpha m ) \cos( \alpha m ) \end{aligned} \tag {13} $$ $$ \begin{aligned} Q_{n,m} &= \frac{ D_{n,2m} }{ D_{n,m} } \\ &= \frac{ S_n \cdot 2i \cdot 2 \sin( \alpha m ) \cos( \alpha m ) }{ S_n \cdot 2i \cdot \sin( \alpha m ) } \\ &= 2 \cos( \alpha m ) \end{aligned} \tag {14} $$ Surprisingly, or perhaps not, the result is identical to the real case.
Simple Frequency Calculation
The value for the frequency term $ \alpha $ can now be solved for from both (7) for the real case and (14) for the complex case. $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1}\left( \frac{ Q_{n,m} }{ 2 } \right) \tag {15} $$ Plugging in the definition of $ Q_{n,m} $ from (5) and (7), or (10) and (14), gives the result in terms of signal values: $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1} \left[ \frac{ 1 }{ 2 } \left( \frac{ S_{n+2m} - S_{n-2m} }{ S_{n+m} - S_{n-m} } \right) \right] \tag {16} $$ Since the frequency term ($\alpha$) can be calculated at any point along the signal, and with any spacing, any weighted average, with the weights adding up to one, can be used to calculate $\alpha$ over an interval in order to mitigate the effects of noise.
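Formula (16) is a direct four-sample calculation; here is a sketch with arbitrary illustration parameters, sampling near a zero crossing where the formula is best behaved.

```python
import math

# Estimate alpha from four samples of a real tone via eq. (16).
M, alpha, phi = 1.0, 0.5, 0.2

def S(n):
    return M * math.cos(alpha * n + phi)

n, m = 3, 1  # n = 3 puts alpha*n + phi near pi/2, i.e. near a zero crossing
Q = (S(n + 2 * m) - S(n - 2 * m)) / (S(n + m) - S(n - m))
alpha_est = math.acos(Q / 2) / m
print(alpha_est)  # recovers 0.5
```

On a noiseless tone the formula is exact at any $n$ where the denominator is nonzero; the zero-crossing preference only matters once noise is present.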
Comparison to Turner's Formula
In Rick Lyons' article[4], which prompted my digression from the frequency domain to the time domain, he cites a formula by Turner for a four point calculation of frequency for a pure real tone. This is equation (2) from Lyons' article translated into my nomenclature and generalized with a sample spacing parameter ($m$) built in: $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1} \left[ \frac{ 1 }{ 2 } \left( \frac{ S_{n+2m} - S_{n-m} }{ S_{n+m} - S_{n} } - 1 \right) \right] \tag {17} $$ The similarity between (16) and (17) is striking. They are both four sample formulas. The difference is the width of the stance on the signal. The samples in (16) are centered around sample $n$ and are five samples wide. The samples in (17) are centered around sample $n+\frac{1}{2}$ and are four samples wide. Thus in terms of being more instantaneous, Turner's formula is superior.
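For comparison, a sketch of Turner's formula (17) on the same kind of tone, again with arbitrary illustration parameters:

```python
import math

# Turner's four-sample formula, eq. (17), applied to a pure real tone.
M, alpha, phi = 1.0, 0.5, 0.2

def S(n):
    return M * math.cos(alpha * n + phi)

n, m = 7, 1
q = (S(n + 2 * m) - S(n - m)) / (S(n + m) - S(n)) - 1  # equals 2*cos(alpha*m)
alpha_est = math.acos(q / 2) / m
print(alpha_est)  # recovers 0.5, matching eq. (16)
```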
Like the formulas derived in this article, Turner's formula will work best at zero crossings and perform poorly at peaks for real signals. Also, like the formulas in this article, Turner's 4 Sample formula will also work on complex tones, with the positive/negative caveat mentioned above. This is not mentioned in Turner's derivation or Lyons' article. The proof of this is shown in Appendix A.
Like (16), (17) can be calculated at various points along an interval with various spacings and used in a weighted average to get a noise mitigated result.
Turner's 3 Sample formula, equation (1) in Lyons' article, is the same as (12) when converted to arbitrary sample spacing. This is the base case for the formulas that are best at the peak.
Cosine Binomial Frequency Calculation
The quotient of the differences approach has the advantage that the amplitude term has already been cancelled out. Therefore the direct calculation approach as shown above is possible. However, since (7) and (14) can be used to find the cosine values of multiples of $\alpha$, the same approach as used in the previous two articles can be used as well.
The two families of formulas derived in the previous two articles can be represented by this formula with $x$ taking the value of zero or one. $$ q = \frac{\left[x + \cos(\alpha)\right]^{k}}{\left[x + \cos(\alpha)\right]^{k-1}} =x + \cos(\alpha) \tag {18} $$ When the binomial expressions are multiplied out the result is an expression of cosine values of multiples of $\alpha$. The resulting coefficients for $x=0$ and $x=1$ can be found in the previous articles. For other values of $x$, the coefficient values can be pre-computed for implementation. Once $q$ has been evaluated, the value of $\alpha$ can be found by this equation: $$ \alpha = \cos^{-1}(q-x) \tag {19} $$ Once again, since it is an inverse cosine being evaluated, negative $\alpha$ values in complex signal cases will seem to be positive.
Since the Turner formulas can also be used to find the cosine value of $\alpha$ multiples from signal values, they can also be used in the binomial expansion evaluations.
Conclusion
Although they work for complex signals, these formulas were derived to provide near instantaneous frequency readings for real pure tone signals. Two sets of formulas were derived in the previous articles for when the signal is near peaks, and the formulas in this article are for when the signal is near zero crossings. The application of these formulas is limited to use on relatively noiseless pure tones. They will be most useful in applications where low latency is critical. When latency is not critical, frequency domain solutions, such as evaluating two or three bins of a DFT and using the corresponding formulas, will yield superior results. If anybody actually ever uses these formulas, I would love to hear about it in the comments.
References
[1] Dawg, Cedron, Exact Near Instantaneous Frequency Formulas Best at Peaks (Part 1)
[2] Dawg, Cedron, Exact Near Instantaneous Frequency Formulas Best at Peaks (Part 2)
[3] Dawg, Cedron, The Exponential Nature of the Complex Unit Circle
[4] Lyons, Rick, Sinusoidal Frequency Estimation Based on Time-Domain Samples
Appendix A: Proof of Turner's Formula in Complex Pure Tones
Here is the proof that Turner's 4 Sample Formula works for a complex pure tone as well as a real one as long as the tone has a positive frequency.
Start by plugging in the signal definition (2) into the formula (17): $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1} \left[ \frac{ 1 }{ 2 } \left( \frac{ M \cdot e^{i[\alpha (n+2m) + \phi ]} - M \cdot e^{i[\alpha (n-m) + \phi ]} }{ M \cdot e^{i[\alpha (n+m) + \phi ]} - M \cdot e^{i[\alpha n + \phi ]} } - 1 \right) \right] \tag {20} $$ Every term in the quotient can have the signal formula factored out. $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1} \left[ \frac{ 1 }{ 2 } \left( \frac{ e^{i\alpha 2m } - e^{-i\alpha m } }{ e^{i\alpha m } - 1 } - 1 \right) \right] \tag {21} $$ The numerator and denominator can be divided by $ e^{i\alpha \frac{1}{2}m } $ to make the exponents symmetric about zero. $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1} \left[ \frac{ 1 }{ 2 } \left( \frac{ e^{i\alpha \frac{3}{2}m } - e^{-i\alpha \frac{3}{2}m } }{ e^{i\alpha \frac{1}{2}m } - e^{-i\alpha \frac{1}{2}m } } - 1 \right) \right] \tag {22} $$ The quotient can now be recognized as being the difference of two cubes divided by a difference. $$ \frac{ a^3 - b^3 }{ a - b } = a^2 + ab + b^2 \tag {23} $$ This simplifies the quotient considerably. $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1} \left[ \frac{ 1 }{ 2 } \left( e^{i\alpha m } + 1 + e^{-i\alpha m } - 1 \right) \right] \tag {24} $$ The ones cancel, and applying Euler's Equation[3] to the other terms leaves a cosine expression. $$ \alpha_{n,m} = \frac{1}{m} \cos^{-1} \left[ \cos( \alpha m ) \right] \tag {25} $$ For positive values of $\alpha$, less than $\pi$, the inverse cosine cancels the cosine. Then the $m$'s cancel and this remains: $$ \alpha_{n,m} = \alpha \tag {26} $$ For any sample point ($n$), for any spacing ($m$), the answer the formula will give is the frequency term of the signal.
Quite Easily Done.
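The appendix result is easy to confirm numerically; here is a sketch with arbitrary illustration parameters and a positive $\alpha$:

```python
import cmath
import math

# Numerical check of Appendix A: Turner's formula (17) on a complex tone.
M, alpha, phi = 1.3, 0.7, 0.4

def S(n):
    return M * cmath.exp(1j * (alpha * n + phi))

n, m = 5, 1
q = (S(n + 2 * m) - S(n - m)) / (S(n + m) - S(n)) - 1  # equals 2*cos(alpha*m)
alpha_est = math.acos(q.real / 2) / m  # imaginary part is rounding noise
print(alpha_est)  # recovers 0.7
```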
|
It's one of my real analysis professor's favourite sayings that "being obvious does not imply that it's true".
Now, I know a fair few examples of things that are obviously true and that can be proved to be true (like the Jordan curve theorem).
But what are some theorems (preferably short ones) which, when put into layman's terms, the average person would claim to be true, but, which, actually, are false (i.e. counter-intuitively-false theorems)?
The only ones that spring to my mind are the Monty Hall problem and the divergence of $\sum\limits_{n=1}^{\infty}\frac{1}{n}$ (counter-intuitive for me, at least, since $\frac{1}{n} \to 0$ ).
I suppose, also, that $$\lim\limits_{n \to \infty}\left(1+\frac{1}{n}\right)^n = e=\sum\limits_{n=0}^{\infty}\frac{1}{n!}$$ is not obvious, since one 'expects' that $\left(1+\frac{1}{n}\right)^n \to (1+0)^n=1$.
I'm looking just for theorems and not their (dis)proofs -- I'm happy to research those myself.
Thanks!
|
LEARNING OBJECTIVES
By the end of this lesson, you will be able to:
Graph an absolute value function. Solve an absolute value equation. Solve an absolute value inequality.
Until the 1920s, the so-called spiral nebulae were believed to be clouds of dust and gas in our own galaxy, some tens of thousands of light years away. Then, astronomer Edwin Hubble proved that these objects are galaxies in their own right, at distances of millions of light years. Today, astronomers can detect galaxies that are billions of light years away. Distances in the universe can be measured in all directions. As such, it is useful to consider distance as an absolute value function. In this section, we will investigate absolute value functions.
Understanding Absolute Value
Recall that in its basic form [latex]\displaystyle{f}\left({x}\right)={|x|}[/latex], the absolute value function is one of our toolkit functions. The absolute value function is commonly thought of as providing the distance the number is from zero on a number line. Algebraically, for whatever the input value is, the output is the value without regard to sign.
A General Note: Absolute Value Function
The absolute value function can be defined as a piecewise function
[latex]f(x) = \begin{cases} x, & x \geq 0 \\ -x, & x < 0 \end{cases}[/latex]
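The piecewise definition translates directly into code; a minimal sketch (the function name is just for illustration):

```python
# Absolute value via the piecewise definition: x when x >= 0, else -x.
def abs_value(x):
    if x >= 0:
        return x
    return -x

print(abs_value(-3.5), abs_value(0), abs_value(2))  # 3.5 0 2
```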
Example 1: Determine a Number within a Prescribed Distance
Describe all values [latex]x[/latex] within or including a distance of 4 from the number 5.
Solution
We want the distance between [latex]x[/latex] and 5 to be less than or equal to 4. We can draw a number line to represent the condition to be satisfied.
The distance from [latex]x[/latex] to 5 can be represented using the absolute value as [latex]|x - 5|[/latex]. We want the values of [latex]x[/latex] that satisfy the condition [latex]|x - 5|\le 4[/latex].
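The condition [latex]|x - 5|\le 4[/latex] can be checked numerically; a quick sketch over the integers:

```python
# Integers within (or at) distance 4 of the number 5: |x - 5| <= 4.
solutions = [x for x in range(-10, 21) if abs(x - 5) <= 4]
print(solutions)  # the integers 1 through 9
```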
Try It 1
Describe all values [latex]x[/latex] within a distance of 3 from the number 2.
Example 2: Resistance of a Resistor
Electrical parts, such as resistors and capacitors, come with specified values of their operating parameters: resistance, capacitance, etc. However, due to imprecision in manufacturing, the actual values of these parameters vary somewhat from piece to piece, even when they are supposed to be the same. The best that manufacturers can do is to try to guarantee that the variations will stay within a specified range, often [latex]\pm 1\%[/latex], [latex]\pm 5\%[/latex], or [latex]\pm 10\%[/latex].
Suppose we have a resistor rated at 680 ohms, [latex]\pm 5\%[/latex]. Use the absolute value function to express the range of possible values of the actual resistance.
Solution
5% of 680 ohms is 34 ohms. The absolute value of the difference between the actual and nominal resistance should not exceed the stated variability, so, with the resistance [latex]R[/latex] in ohms, [latex]|R - 680|\le 34[/latex].
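The condition [latex]|R - 680|\le 34[/latex] pins the actual resistance to a range; a quick sketch:

```python
# 5% tolerance on a 680-ohm resistor: |R - 680| <= 34.
nominal, tolerance = 680, 0.05
delta = nominal * tolerance            # 34 ohms of allowed variation
low, high = nominal - delta, nominal + delta
print(low, high)  # 646.0 714.0
```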
Try It 2
Students who score within 20 points of 80 will pass a test. Write this as a distance from 80 using absolute value notation.
|
By definition, any language (decision problem) $L$ is defined as a subset of $\{0,1\}^*$, where $\{0,1\}$ is the alphabet.
$L^c$ is said to be the complement of the language, and it seems to be defined as follows - $\forall w, w\in L \implies w \notin L^c$ and $w \notin L \implies w \in L^c$.
Equivalently, according to Arora and Barak's book "Computational Complexity: A Modern Approach", $L^c = \{0,1\}^* - L$.
I will demonstrate my question with an example - The definition of the complexity class $coNP$ is, as per Arora and Barak's book, $coNP = \{L : L^c \in NP \}$.
But if we take a specific case, let's say
$SAT = \{f(x_1, x_2,\cdots,x_n) : \exists x \in \{0,1\}^n f(x) = 1\}$,
then the $coNP$ counterpart is supposed to be
$\overline{SAT} = \{f(x_1,x_2,\cdots,x_n) : \forall x \in \{0,1\}^n f(x) = 0\}$
Which means, $\overline{SAT}$ contains all the boolean formulae that are false for all inputs.
According to the definitions above though $(L^c = \{0,1\}^* - L)$, lots of random words should be included in the language $\overline{SAT}$ that are not even "well formed" boolean formulae to begin with.
Should the complement be over all boolean formulae instead of $\{0,1\}^*$?
Is the complement of every language always defined over a subset of words that are "well formed" instead of having been defined over $\{0,1\}^*?$
|
To answer this question it is helpful to have a clear definition of the different types of limits involved (the limit of a sequence, the normal limit and the one-sided limits, which is probably what is meant by 'lateral limits' in your book):
As stated for example here the left- and right-hand limits are defined as
Right-hand limit: $\displaystyle \lim_{x \to a^+} f(x) = L$ if and only if
For every $\epsilon>0$ there exists a $\delta>0$ such that for all $x$ with $0 < x-a <\delta$: $|f(x)-L| < \epsilon$.
Left-hand limit: $\displaystyle \lim_{x \to a^-} f(x) = L$ if and only if
For every $\epsilon>0$ there exists a $\delta>0$ such that for all $x$ with $0 < a-x <\delta$: $|f(x)-L| < \epsilon$.
Limit $\displaystyle \lim_{x \to a} f(x) = L$ if and only if
For every $\epsilon>0$ there exists a $\delta>0$ such that for all $x$ with $0 < |a-x| <\delta$: $|f(x)-L| < \epsilon$.
Limit of a sequence: Let $a_n\in\mathbb{R}$ for all $n\in\mathbb{N}$ be a sequence of real numbers. Then $\lim_{n\rightarrow\infty} a_n=L$ if and only if
For every $\epsilon>0$ there exists $N\in\mathbb{N}$ such that for all $n>N$: $|a_n-L| < \epsilon$.
Given this definition and by noting that $(0<|a-x| < \delta)\Leftrightarrow(0<a-x < \delta\lor0<x-a < \delta)$ it is an immediate consequence that $\lim_{x \to a} f(x) = L$ if and only if $\lim_{x \to a^{+}} f(x)=L$ and $\lim_{x \to a^{-}} f(x)=L$.
Now to answer your questions:
This is possible because for all $n\in\mathbb{N}$ $u_n>1$ and $v_n<1$ so the sequence limits (cf. the definition above) only depend on the behaviour of f above respectively below 1. Example: Consider $f(x)=\begin{cases}0 &\text{if }x\geq 1\\2&\text{if }x<1\end{cases}$. Then for all $n\in\mathbb{N}$ $f(u_n)=0$ and $f(v_n)=2$.
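A quick illustration, assuming for concreteness the sequences $u_n = 1 + 1/n$ and $v_n = 1 - 1/n$ (both converge to 1, with $u_n>1$ and $v_n<1$):

```python
# f jumps at 1: each sequence only ever sees one side of the jump.
def f(x):
    return 0 if x >= 1 else 2

u = [1 + 1 / n for n in range(1, 6)]  # approaches 1 from above
v = [1 - 1 / n for n in range(1, 6)]  # approaches 1 from below
print([f(x) for x in u])  # all 0
print([f(x) for x in v])  # all 2
```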
I am not sure what you mean by 'define', but if you just want an example see my answer to the first question.
No - No assumptions are necessary. Looking at the definitions for the left- and the right-hand limits of $f$ at $1$ it is (hopefully) clear that (if these limits exist) $\lim_{x \to 1^+} f(x)=\lim_{n\rightarrow\infty} f(u_n)$ and $\lim_{x \to 1^-} f(x)=\lim_{n\rightarrow\infty} f(v_n)$. So they cannot be the same since $2\not=0$. If either $\lim_{x \to 1^+} f(x)$ or $\lim_{x \to 1^-} f(x)$ does not exist neither can the limit $\lim_{x \to 1} f(x)$ since both $0 < 1-x <\delta$ and $0 < x-1 <\delta$ imply that $0 < |1-x| <\delta$.
Taking into account the example I suspect that the left- and right-hand limits are what is meant by 'lateral limits'.
Yes, b) can be justified (taking my answer to 4. to be correct) as follows: (I'll only do the case of the left-hand limit, but the right-hand limit is very similar.)
Claim: If the limit $\lim_{x \to 1^-} f(x)$ exists, it holds that $\lim_{x \to 1^-} f(x)=2$
Proof (by contradiction) Assume that $\lim_{x \to 1^-} f(x)=L$ with $L\not=2$. Then for every $\epsilon>0$ there exists $\delta>0$ such that for all x with $0<1-x<\delta$: $|f(x)-L|<\epsilon$. Since $\forall n\in\mathbb{N}:v_n<1$ and $v_n\rightarrow 1$ (as $n\rightarrow\infty$) there exists $N_1\in\mathbb{N}$ such that for all $n>N_1$: $0<1-v_n=|v_n-1|<\delta$. Furthermore, since $\lim_{n \to \infty} f(v_n)=2$, there exists $N_2\in\mathbb{N}$ such that for all $n>N_2$: $|f(v_n)-2|<\epsilon$. Thus, letting $N:=\max\{N_1,N_2\}$, it holds that for all $n>N$: $|f(v_n)-2|<\epsilon$ and $|f(v_n)-L|<\epsilon$. But this implies (for any $n>N$) $|L-2|=|f(v_n)-2-(f(v_n)-L)|\leq|f(v_n)-2|+|-(f(v_n)-L)|<\epsilon+\epsilon=2\epsilon$. Since this holds for every $\epsilon>0$ it holds for $\epsilon=\frac{|L-2|}{4}>0$ (since by assumption $|L-2|>0$). But this implies $|L-2|<2\frac{|L-2|}{4}$ which is equivalent (since by assumption $|L-2|>0$) to $2<1$ which is a contradiction. So the assumption must have been false. qed
It is however important to note that
$f$ does not necessarily have lateral limits! A counterexample is given by $f(x)=\begin{cases}0&\text{if }x\in\{u_n|n\in\mathbb{N}\}\\2&\text{if }x\in\{v_n|n\in\mathbb{N}\}\\-1&\text{otherwise}\end{cases}$
Edit As requested, a (hopefully) simplified explanation of the definitions and the idea behind the proof in the answer to question 5). Note: While I'll try to stay formally accurate, the following is merely an explanation and should not be used as a substitute for the actual definitions / proof, but rather as a helper to understand them better.
On metric spaces, instead of the limit $\lim_{x\rightarrow a} f(x)$, one may look at any sequence $x_n\in\mathbb{R}$ such that $\lim_{n\rightarrow\infty}x_n=a$ and consider $\lim_{n\rightarrow\infty} f(x_n)$. If the limit $\lim_{x\rightarrow a} f(x)$ exists then so does $\lim_{n\rightarrow\infty} f(x_n)$. To arrive at the converse it is not enough to look at one such sequence $(x_n)_{n\in\mathbb{N}}$; instead one needs to consider
(1) $\lim_{n\rightarrow\infty}f(x_n)$
for all sequences $(x_n)_{n\in\mathbb{N}}$ with $\lim_{n\rightarrow\infty} x_n=a$. If (1) exists for all such sequences $(x_n)_{n\in\mathbb{N}}$ and has the same value, then $\lim_{x\rightarrow a} f(x)$ exists and has this value as well.
It is therefore helpful to look at the
Limit of a sequence first: This definition simply means that, ignoring a finite number of elements of the sequence, all other elements are arbitrarily close to the limit.
Left- and right-hand limit: The idea behind these limits is to try and define limits in points where the function is discontinuous (i.e. the 'normal' limit does not exist). As discussed above one may (when taking the limit of $f$ in $a$) look at the behaviour of $f(x_n)$ for sequences $x_n\rightarrow a$ (for $n\rightarrow\infty$). The left- and right-hand limits consider only sequences that satisfy the additional condition that they approach their limit either from below (in case of the left-hand limit) or above (in case of the right-hand limit) and are never above respectively below their limit. Thus only the behaviour of $f$ on one side (as opposed to both sided for the 'normal' limit) of the point in which the limit is taken matters. That is to calculate for example $\lim_{x\rightarrow a^{-}} f(x)$ the definition of f on $(-\infty,a]$ matters, but $f$ can be arbitrary on $(a,\infty)$ and no knowledge of the behaviour of $f$ on this interval is needed.
Limit: The condition for the limit simply says that if $x$ is very close to $a$ and $L=\lim_{x\rightarrow a} f(x)$ then $f(x)$ is very close to $L$.
Proof in the answer to Q5: The basic idea is to realise that $v_n$ is one of the sequences considered for the left-hand limit. The sequence $v_n$ gets arbitrarily close to $1$. Thus for every $\delta$, after ignoring finitely many (i.e. $N_1$ many) elements of the sequence $v_n$, $x=v_n$ will be close enough to $1$ such that (by definition of the left-hand limit) $f(x)$ is very close ($\epsilon$) to the left-hand limit. But this means that $f(v_n)$ gets arbitrarily close to the left-hand limit $L$ and thus (since we assumed this limit exists) $\lim_{n\rightarrow\infty} f(v_n)=L$.
|
Consider the system of 3 ordinary differential equations
$$\dot{x}=v$$
$$\dot{v}=a$$
$$\dot{a}=-Aa+v^{2}-x$$
which can also be written as a single 3rd order ODE
$$\dddot{x}=-A\ddot{x}+\dot{x}^{2}-x$$
$A$ is an arbitrary constant and the dot means derivative with respect to time, i.e. $\dot{x}=dx/dt,\ddot{x}=d^{2}x/dt^{2}$, etc. This system can be thought of as describing the time evolution of the position $x$, velocity $v$ and acceleration $a$ of a particle.
Are there any limits where we can solve analytically this system, i.e. find $x(t),v(t),a(t)$?
For example when $A=0$? A perturbative solution would also be good. Or maybe there is a way of reparametrizing time to make the system a known integrable one?
I know that the simpler system
$$\dddot{x}=-A\ddot{x}\iff \dot{a}=-Aa$$
has the solution
$$a(t)=c_{1}e^{-At}$$ which means that
$$x(t)=\frac{c_{1}}{A^{2}}e^{-At}+c_{2}t+c_{3}$$
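The stated solution of the linear special case can be checked numerically; a minimal sketch using classical RK4 on $\dot a = -Aa$ (the values for $A$, $c_1$ and the step size are arbitrary illustration choices):

```python
import math

# Integrate a' = -A*a with RK4 and compare to the exact a(t) = c1*exp(-A*t).
A, c1 = 1.5, 2.0
h, steps = 0.01, 200  # integrate out to t = 2

def f(y):
    return -A * y

a = c1
for _ in range(steps):
    k1 = f(a)
    k2 = f(a + h * k1 / 2)
    k3 = f(a + h * k2 / 2)
    k4 = f(a + h * k3)
    a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

exact = c1 * math.exp(-A * h * steps)
print(abs(a - exact))  # tiny: RK4 tracks the exponential closely
```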
|
81. Search for charged Higgs bosons in the H-+/- -> tb decay channel in pp collisions at root s=8 TeV using the ATLAS detector
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 03/2016, Issue 3
Charged Higgs bosons heavier than the top quark and decaying via H-+/- -> tb are searched for in proton-proton collisions measured with the ATLAS experiment at...
MIXINGS | Higgs physics | PARTON DISTRIBUTIONS | SUPERSYMMETRY | NEUTRINO MASSES | MONTE-CARLO | TOP-QUARK | PLUS PLUS | Hadron-Hadron scattering | PHYSICS | PAIR PRODUCTION | ASSOCIATION | PHYSICS, PARTICLES & FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
Nature, ISSN 0028-0836, 01/2019, Volume 565, Issue 7737, pp. 101 - 105
A defining feature of adaptive immunity is the development of long-lived memory T cells to curtail infection. Recent studies have identified a unique stem-like...
Journal Article
83. Search for the standard model Higgs boson in the decay channel H→ZZ→4l in pp collisions at √s=7TeV
Physical Review Letters, ISSN 0031-9007, 03/2012, Volume 108, Issue 11
Journal Article
84. Search for Minimal Supersymmetric Standard Model Higgs bosons $H/A$ and for a $Z^{\prime}$ boson in the $\tau \tau$ final state produced in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS Detector
European Physical Journal C: Particles and Fields, ISSN 1434-6044, 08/2016, Volume 76, Issue 11, p. 585
Journal Article
85. A homopolymeric adenosine tract in the promoter region of nspA influences factor H-mediated serum resistance in Neisseria meningitidis
SCIENTIFIC REPORTS, ISSN 2045-2322, 02/2019, Volume 9, Issue 1, pp. 2736 - 13
Although usually asymptomatically colonizing the human nasopharynx, the Gram-negative bacterium Neisseria meningitidis (meningococcus) can spread to the blood...
LIPOOLIGOSACCHARIDE SIALYLATION | COMPLEMENT | OUTER-MEMBRANE PROTEIN | BACTERICIDAL ACTIVITY | MULTIDISCIPLINARY SCIENCES | VACCINE | GENES | BINDING-PROTEIN | GROUP-B | MONOCLONAL-ANTIBODIES | EXPRESSION | Epidemics | Adenosine | Polysaccharides | Complement factor H | Clonal deletion | Nasopharynx | Membrane proteins | Meningococcal disease
Journal Article
Applied Physics Letters, ISSN 0003-6951, 04/2008, Volume 92, Issue 17, pp. 171906 - 171906-3
Atomic H exposure of a GaAs surface at 390 ° C is a relatively simple method for removing the native oxides without altering the surface stoichiometry. In-situ...
HYDROGEN | DESORPTION | PHYSICS, APPLIED | OXIDES | GAAS | TEMPERATURE RANGE 0400-1000 K | CARRIER MOBILITY | SUBSTRATES | STOICHIOMETRY | INDIUM ARSENIDES | ELECTRON DIFFRACTION | MATERIALS SCIENCE | TEMPERATURE DEPENDENCE | GALLIUM ARSENIDES | X-RAY PHOTOELECTRON SPECTROSCOPY | SURFACE CLEANING | SEMICONDUCTOR MATERIALS
Journal Article
87. Searches for a heavy scalar boson H decaying to a pair of 125 GeV Higgs bosons hh or for a heavy pseudoscalar boson A decaying to Zh, in the final states with h → ττ
Physics letters B, ISSN 0370-2693, 04/2018
Journal Article
88. Search for H→γγ produced in association with top quarks and constraints on the Yukawa coupling between the top quark and the Higgs boson using data taken at 7 TeV and 8 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 01/2015, Volume 740, pp. 222 - 242
Journal Article
89. Measurement of the Higgs boson mass in the $H\rightarrow ZZ^ \rightarrow 4\ell$ and $H \rightarrow \gamma\gamma$ channels with $\sqrt{s}=13$ TeV $pp$ collisions using the ATLAS detector
06/2018
Phys. Lett. B 784 (2018) 345 The mass of the Higgs boson is measured in the $H\rightarrow ZZ^* \rightarrow 4\ell$ and in the $H\rightarrow \gamma\gamma$ decay...
Physics - High Energy Physics - Experiment
Journal Article
90. Combined search for anomalous pseudoscalar HVV couplings in VH(H $\to b \bar b$) production and H $\to$ VV decay
ISSN 1873-2445, 2016
bottom: particle identification | experimental results | topology | CMS | Higgs particle: coupling | potential: pseudoscalar | CERN LHC Coll | jet: bottom | vector boson: associated production | 8000 GeV-cms | Higgs particle: hadronic decay | data analysis method | coupling constant: ratio | vector boson: pair production | Higgs particle: coupling constant | coupling: pseudoscalar | quark | Higgs particle: hadroproduction | p p: colliding beams | p p: scattering | vector boson: leptonic decay | electroweak interaction
Journal Article
91. Signaling via the kinase p38 alpha programs dendritic cells to drive T(H)17 differentiation and autoimmune inflammation
NATURE IMMUNOLOGY, ISSN 1529-2908, 02/2012, Volume 13, Issue 2, pp. 152 - 161
Dendritic cells (DCs) bridge innate and adaptive immunity, but how DC-derived signals regulate T cell lineage choices remains unclear. We report here that the...
EFFECTOR | ENCEPHALOMYELITIS | IMMUNITY | ACTIVATION | PATHWAY | RECEPTOR | CNS | IL-6 PRODUCTION | INDUCTION | IMMUNOLOGY | T-HELPER-CELLS
Journal Article
92. Search for charged Higgs bosons decaying via H-+/- -> tau nu in t(t)over-bar events using pp collision data at root s=7 TeV with the ATLAS detector
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 06/2012, Volume 6, Issue 6, p. 039
The results of a search for charged Higgs bosons are presented. The analysis is based on 4.6 fb(-1) of proton-proton collision data at root s = 7TeV collected...
PROTON-PROTON COLLISIONS | MONTE-CARLO | MIXINGS | SUPERSYMMETRY | MSSM | MODELS | PHENOMENOLOGY | PERFORMANCE | Hadron-Hadron Scattering | NEUTRINO MASSES | VIOLATION | PHYSICS, PARTICLES & FIELDS | Physics | High Energy Physics - Experiment | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
93. Search for the Standard Model Higgs boson in the H -> WW(()) -> lvlv decay mode with 4.7 fb(-1) of ATLAS data at root s=7 TeV
PHYSICS LETTERS B, ISSN 0370-2693, 09/2012, Volume 716, Issue 1, pp. 62 - 81
A search for the Standard Model Higgs boson in the H -> WW(*()) -> lvlv (l = e.mu) decay mode is presented. The search is performed using proton-proton...
ATLAS | PARTON DISTRIBUTIONS | MASSES | Higgs | LHC | ASTRONOMY & ASTROPHYSICS | PHYSICS, NUCLEAR | COLLIDERS | PHYSICS, PARTICLES & FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
94. Study of (W/Z)H production and Higgs boson couplings using H → WW* decays with the ATLAS detector
JHEP, ISSN 1029-8479, 08/2015
Journal Article
95. Search for $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$ production in the all-jet final state in proton-proton collisions at $\sqrt{s}=13$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 06/2018, Volume 2018, Issue 6
Journal Article
96. Human apoE targeted replacement mouse lines: h-apoE4 and h-apoE3 mice differ on spatial memory performance and avoidance behavior
Behavioural Brain Research, ISSN 0166-4328, 2005, Volume 159, Issue 1, pp. 1 - 14
Apolipoprotein E4 (apoE4), one of the three most common human apoE (h-apoE) isoforms, is a major genetic risk factor for Alzheimer's disease and for cognitive...
Knock-in mouse | Gender differences | Avoidance response | Gene-targeted mouse | Alzheimer's disease | ApoE-isoform | Spatial memory | avoidance response | gene-targeted mouse | ENVIRONMENT INTERACTION | COGNITIVE PERFORMANCE | REPEATED EXPOSURE | spatial memory | ALZHEIMERS-DISEASE | NMDA-ANTAGONIST | knock-in mouse | NEUROSCIENCES | gender differences | apoE-isoform | BEHAVIORAL SCIENCES | RECOGNITION MEMORY | APOLIPOPROTEIN-E-KNOCKOUT | GLUTAMYL-L-ASPARTATE | TRANSGENIC MICE | WORKING-MEMORY | Apolipoproteins E - deficiency | Humans | Male | Apolipoproteins E - metabolism | Exploratory Behavior - physiology | Female | Apolipoprotein E4 | Gene Targeting | Maze Learning - physiology | Mice, Inbred C57BL | Spatial Behavior - physiology | Genotype | Mice, Transgenic | Avoidance Learning - physiology | Conditioning, Classical - physiology | Apolipoprotein E3 | Protein Isoforms - physiology | Mice, Knockout | Animals | Analysis of Variance | Apolipoproteins E - genetics | Sex Factors | Apolipoproteins E - physiology | Mice | Protein Isoforms - genetics | Apolipoproteins | Index Medicus
Journal Article
|
Consider the graphs of a quadratic function and a linear function on the same set of axes. Cutting: 2 points of intersection, $b^2-4ac \gt 0$. Touching: 1 point of intersection, $b^2-4ac = 0$. Missing: no points of intersection, $b^2-4ac \lt 0$. If the graphs meet, the coordinates of the points of intersection of the graphs can be found [...]
Example 1 Find the equation of the quadratic graph. Show Solution \( \begin{align} \displaystyle y &= a(x+2)(x+1) \\ 2 &= a(0+2)(0+1) &\text{substitute }(0,2)\\ 2 &= 2a \\ a &= 1 \\ y &= 1(x+2)(x+1) \\ \therefore y &= x^2+3x+2 \\ \end{align} \) Example 2 Find the equation of the quadratic graph. Show Solution \( \begin{align} [...]
$$y=(x-a)^2+b$$ Example 1 Draw the graph of $y=(x-1)^2+2$. Show Solution The vertex is $(1,2)$ and the graph is concave up. Example 2 Draw the graph of $y=(x-1)^2-2$. Show Solution The vertex is $(1,-2)$ and the graph is concave up. Example 3 Draw the graph of $y=(x+1)^2+2$. Show Solution The vertex is $(-1,2)$ and the graph [...]
$$ y=(x+2)^2+1 $$ $$ y=-(x-2)^2-1$$ $$ y=-(x+2)^2-1 $$ $$ y=-(x-2)^2+1 $$ $$ y=-(x+2)^2+1 $$ $$ y=(x-2)^2-1 $$ $$ y=(x+2)^2-1 $$ $$ y=(x-2)^2+1 $$ Example 1 Find the vertex of $y=x^2 + 4x +2$. Show Solution \( \begin{align} \displaystyle y &= x^2 + 4x +2 \\ &= x^2 + 4x + 4 - 2\\ &= (x+2)^2-2 [...]
The equation of the axis of symmetry of $y=ax^2+bx+c$ is $x=-\dfrac{b}{2a}$. Example 1 Find the equation of the axis of symmetry of $y=x^2+4x-2$. Show Solution \( \begin{align} \displaystyle x &= -\dfrac{b}{2a} \\ &= -\dfrac{4}{2 \times 1} \\ \therefore x &= -2 \\ \end{align} \) Example 2 Find the equation of the axis of symmetry of [...]
$x$-intercepts when $y=0$ $y$-intercepts when $x=0$ Example 1 Sketch the graphs of $y=x^2+x-2$ by stating $x$- and $y$-intercepts. Show Solution $x$-intercepts when $y=0$ \( \begin{align} \displaystyle x^2+x-2 &= 0 \\ (x+2)(x-1) &= 0 \\ x &= -2, 1 \\ \end{align} \) $y$-intercepts when $x=0$ \( \begin{align} \displaystyle y &= 0^2+0-2 \\ &= -2 \\ \end{align} [...]
$$y=a(x-b)(x-c)$$ Concave up for $a \gt 0$ Concave down for $a \lt 0$ Example 1 Draw the graph of $y=(x-1)(x+2)$. Show Solution \( \begin{align} \displaystyle (x-1)(x+2) & = 0 \\ x-1 &= 0 \text{ or } x+2 = 0 \\ \therefore x &= 1 \text{ or } x = -2 \\ \end{align} \) Example 2 [...]
|
I apologize if the question is too simple, but to me it's a bit difficult to see. I have certain data, say masses in units of [kg], over a range of a few years. I get a mass measurement every 2 days over a period of 20 years. In theory the mass should be constant over time, but it has a small variation at each timestep. Thus, if I plot mass vs. time I get a very noisy plot. In order to analyse the results in a better way, I would like to take the average of the mass every 2 years, so my average will have units of [mass/year]. Basically I want to spread the data over time. I would also like to associate a standard deviation with this average, but I'm not sure how to compute it, because of the units I have. The equation for the mean in this case is basically (if I'm correct):
$\frac{\sum m_i}{\text{Period}}$ with $\text{Period} = 2\ \text{yr}$,
instead of the usual equation $\frac{\sum m_i}{N}$, where $m_i$ are the masses. For example using some data:
Time [day]   Mass [kg]
    2          3.5
    4          2.5
    6          3.7
    8          3.8
   10          3.7
   12          3.2
   14          3.7
   16          3.4
   18          3.7
   20          3.6
If I wanted to do the same but with a period of 10 days instead of two years, then my first two points would be (using the first equation):
Period 1: 1.72 kg/day
Period 2: 1.76 kg/day.
Now my question is how can I associate a standard deviation to each of the two past values?
I know that the standard deviation equation is: $\sqrt{\frac{1}{N} \sum (m_i - \bar{m})^2}$, where $\bar{m}$ is the mean.
But with my units I don't see how to compute this. Or am I doing this in a wrong way? I think I can also just compute first the mean of the masses, using the second equation ($\sum m_i/N$)i.e.
Mean:
Period 1: 3.44 kg
Period 2: 3.52 kg
and then also estimate the standard deviation
Std:
Period 1: 0.48 kg
Period 2: 0.19 kg
and in the end, from the last example, I know that the mean for period 1 is 3.44 kg per two days, so per single day it is half of that: Period 1: 1.72 kg/day, and likewise Period 2: 1.76 kg/day, and I estimate the standard deviation in the same way. Basically my question is how to use the equation for the standard deviation in this case, where I'm averaging over time with units rather than over a number of counts.
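For reference, both computations above can be reproduced with a few lines of Python (standard library only; this just mirrors the sample data in the question):

```python
import statistics

# one measurement every 2 days, in kg
masses = [3.5, 2.5, 3.7, 3.8, 3.7, 3.2, 3.7, 3.4, 3.7, 3.6]
period1, period2 = masses[:5], masses[5:]

# first equation: total mass divided by the 10-day window -> units of kg/day
rate1 = sum(period1) / 10
rate2 = sum(period2) / 10

# second equation: ordinary mean and (population) standard deviation -> units of kg
mean1, sd1 = statistics.mean(period1), statistics.pstdev(period1)
mean2, sd2 = statistics.mean(period2), statistics.pstdev(period2)
```

Note that the standard deviation keeps the units of the quantity being averaged (kg here); dividing the mean by the period length only rescales the units, and the same rescaling can be applied to the standard deviation.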
Thanks!
|
Exercise :
Prove that : $$\frac{|a+b|}{1+|a+b|} \leq \frac{|a|}{1+|a|} + \frac{|b|}{1+|b|}$$ for $a,b \in \mathbb R$.
Methods I have tried so far include: Using the triangle inequality on the numerator on the left side, but I got an expression which was sometimes too big, so it's impossible.
Using a similar method on the right side, but I got a pretty nasty expression so I don't think that's the way.
Going case by case for every pair of $a, b$, but this is also very long.
Can someone give a hint?
|
Let's assume that this is a problem in projectile motion--i.e. all the force is imparted to the rocket at the start, and afterwards the rocket has a constant downward acceleration of $32.2\ \mathrm{ft}/\mathrm{s}^2$. Let's also assume that the 20° is the angle of inclination from the horizontal. (This is not clear from your problem statement.)
Resolve the initial velocity vector into its horizontal and vertical components. Use the horizontal component and the fact that the horizontal velocity remains constant to find the time of impact. This does not require "much work"--just solve the equation
$$20000 = 1000\cos(20°)\cdot t$$
or
$$20000 = 1000\cos(20°)\cdot\Delta t$$
depending on your notation. You are correct that the time of impact is 21.3 seconds, to three significant digits, but it is not at all clear that this is the desired precision. Your given data does not seem to support that many significant digits. Are you sure that is the correct precision?
Then use the vertical component of the initial velocity and the equations of constant acceleration to find the final displacement of the rocket. You are given the time (you just found it), the initial velocity (the vertical component), and the acceleration ($-32.2\ \mathrm{ft}/\mathrm{s}^2$), so find the displacement, using the equation
$$s = ut + \frac 12at^2$$
or
$$\Delta x = v_i\Delta t + \frac 12a(\Delta t)^2$$
depending on your notation. Using the rounded time $t = 21.3$ s gives the result $-19.4$ feet (carrying full precision in $t$ gives about $-13.7$ feet), again with questionable precision. So the rocket ends up lower than it started, which is possible if it starts from an elevated platform or hill or some such.
But again, the precision in all this is questionable, so this answer is debatable but is near the starting level.
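A quick numerical sketch of the whole calculation (Python, plugging in the numbers from the problem). Carrying full precision in the time gives a displacement of about -13.7 ft, while substituting the rounded t = 21.3 s reproduces -19.4 ft; this is exactly the kind of precision issue noted above:

```python
import math

g = 32.2                                    # ft/s^2
v0 = 1000.0                                 # ft/s, initial speed
theta = math.radians(20)                    # inclination above horizontal
x_range = 20000.0                           # ft, horizontal distance to impact

# horizontal velocity is constant, so the time of impact is
t = x_range / (v0 * math.cos(theta))        # about 21.28 s

# vertical displacement from s = u*t + (1/2)*a*t^2 with a = -g
s = v0 * math.sin(theta) * t - 0.5 * g * t ** 2

# the same step with the rounded time reproduces the -19.4 ft figure
t_r = 21.3
s_r = v0 * math.sin(theta) * t_r - 0.5 * g * t_r ** 2
```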
|
I’ve been working on updates for the
simstudy package. In the past few weeks, a couple of folks independently reached out to me about generating correlated binary data. One user was not impressed by the copula algorithm that is already implemented. I’ve added an option to use an algorithm developed by Emrich and Piedmonte in 1991, and will be incorporating that option soon in the functions
genCorGen and
addCorGen. I’ll write about that change some point soon.
A second researcher was trying to generate data using parameters that could be recovered using GEE model estimation. I’ve always done this by using an underlying mixed effects model, but of course, the marginal model parameter estimates might be quite different from the conditional parameters. (I’ve written about this a number of times, most recently here.) As a result, the model and the data generation process don’t match, which may not be such a big deal, but is not so helpful when trying to illuminate the models.
One simple solution is using a
beta-binomial mixture data generating process. The beta distribution is a continuous probability distribution that is defined on the interval from 0 to 1, so it is not too unreasonable as a model for probabilities. If we assume that cluster-level probabilities have a beta distribution, and that within each cluster the individual outcomes have a binomial distribution defined by the cluster-specific probability, we will get the data generation process we are looking for.

Generating the clustered data
In these examples, I am using 500 clusters, each with cluster size of 40 individuals. There is a cluster-level covariate
x that takes on integer values between 1 and 3. The beta distribution is typically defined using two shape parameters usually referenced as \(\alpha\) and \(\beta\), where \(E(Y) = \alpha / (\alpha + \beta)\), and \(Var(Y) = (\alpha\beta)/[(\alpha + \beta)^2(\alpha + \beta + 1)]\). In
simstudy, the distribution is specified using the mean probability (\(p_m\)) and a
precision parameter (\(\phi_\beta > 0\)) (that is specified using the variance argument). Under this specification, \(Var(Y) = p_m(1 - p_m)/(1 + \phi_\beta)\). Precision is inversely related to variability: lower precision means higher variability.
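In terms of the usual shape parameters, this parameterization corresponds to \(\alpha = p_m \phi_\beta\) and \(\beta = (1 - p_m)\phi_\beta\), so that \(\phi_\beta = \alpha + \beta\). Here is a quick Monte Carlo sanity check of the mean/variance formulas, sketched in Python rather than R (the values \(p_m = 0.3\) and \(\phi_\beta = 3\) are purely illustrative):

```python
import random
import statistics

random.seed(2024)

p_m, phi_beta = 0.3, 3.0
a, b = p_m * phi_beta, (1 - p_m) * phi_beta    # shape parameters: 0.9 and 2.1

draws = [random.betavariate(a, b) for _ in range(200_000)]

emp_mean = statistics.mean(draws)              # should be close to p_m
emp_var = statistics.pvariance(draws)          # should be close to the formula below
theory_var = p_m * (1 - p_m) / (1 + phi_beta)  # 0.21 / 4 = 0.0525
```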
In this simple simulation, the cluster probabilities are a function of the cluster-level covariate and precision parameter \(\phi_\beta\). Specifically
\[\text{logodds}(p_{clust}) = -2.0 + 0.65x.\]
The binomial variable of interest \(b\) is a function of \(p_{clust}\) only, and represents a count of individuals in the cluster with a “success”:
library(simstudy)

set.seed(87387)

phi.beta <- 3   # precision
n <- 40         # cluster size

def <- defData(varname = "n", formula = n, dist = 'nonrandom', id = "cID")
def <- defData(def, varname = "x", formula = "1;3", dist = 'uniformInt')
def <- defData(def, varname = "p", formula = "-2.0 + 0.65 * x", variance = phi.beta, dist = "beta", link = "logit")
def <- defData(def, varname = "b", formula = "p", variance = n, dist = "binomial")

dc <- genData(500, def)
dc
##      cID  n x           p  b
##   1:   1 40 2 0.101696930  4
##   2:   2 40 2 0.713156596 32
##   3:   3 40 1 0.020676443  2
##   4:   4 40 2 0.091444678  4
##   5:   5 40 2 0.139946091  6
##  ---
## 496: 496 40 1 0.062513419  4
## 497: 497 40 1 0.223149651  5
## 498: 498 40 3 0.452904009 14
## 499: 499 40 2 0.005143594  1
## 500: 500 40 2 0.481283809 16
The generated data with \(\phi_\beta = 3\) is shown on the left below. Data sets with increasing precision (less variability) are shown to the right:
The relationship of \(\phi_\beta\) and variance is made clear by evaluating the variance of the cluster probabilities at each level of \(x\) and comparing these variance estimates with the theoretical values suggested by parameters specified in the data generation process:
p.clust = 1/(1 + exp(2 - 0.65*(1:3)))

cbind(dc[, .(obs = round(var(p), 3)), keyby = x],
      theory = round((p.clust*(1 - p.clust))/(1 + phi.beta), 3))
##    x   obs theory
## 1: 1 0.041  0.041
## 2: 2 0.054  0.055
## 3: 3 0.061  0.062
Beta and beta-binomial regression
Before getting to the GEE estimation, here are two less frequently used regression models: beta and beta-binomial regression. Beta regression may not be super-useful, because we would need to observe (and measure) the probabilities directly. In this case, we randomly generated the probabilities, so it is fair to estimate a regression model to recover the same parameters we used to generate the data! But, back in the real world, we might only observe \(\hat{p}\), which results from generating data based on the underlying true \(p\). This is where we will need the beta-binomial regression (and later, the GEE model).
First, here is the beta regression using package
betareg, which provides quite good estimates of the two coefficients and the precision parameter \(\phi_\beta\), which is not so surprising given the large number of clusters in our sample:
library(betareg)

model.beta <- betareg(p ~ x, data = dc, link = "logit")
summary(model.beta)
##
## Call:
## betareg(formula = p ~ x, data = dc, link = "logit")
##
## Standardized weighted residuals 2:
##     Min      1Q  Median      3Q     Max
## -3.7420 -0.6070  0.0306  0.6699  3.4952
##
## Coefficients (mean model with logit link):
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.09663    0.12643  -16.58   <2e-16 ***
## x            0.70080    0.05646   12.41   <2e-16 ***
##
## Phi coefficients (precision model with identity link):
##       Estimate Std. Error z value Pr(>|z|)
## (phi)   3.0805     0.1795   17.16   <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Type of estimator: ML (maximum likelihood)
## Log-likelihood: 155.2 on 3 Df
## Pseudo R-squared: 0.2388
## Number of iterations: 13 (BFGS) + 1 (Fisher scoring)
The beta-binomial regression model, which is estimated using package
aod, is a reasonable model to fit in this case where we have observed binomial outcomes and unobserved underlying probabilities:
library(aod)

model.betabinom <- betabin(cbind(b, n - b) ~ x, ~ 1, data = dc)
model.betabinom
## Beta-binomial model
## -------------------
## betabin(formula = cbind(b, n - b) ~ x, random = ~1, data = dc)
##
## Convergence was obtained after 100 iterations.
##
## Fixed-effect coefficients:
##               Estimate Std. Error    z value Pr(> |z|)
## (Intercept) -2.103e+00  1.361e-01 -1.546e+01     0e+00
## x            6.897e-01  6.024e-02  1.145e+01     0e+00
##
## Overdispersion coefficients:
##                  Estimate Std. Error   z value Pr(> z)
## phi.(Intercept) 2.412e-01  1.236e-02 1.951e+01   0e+00
##
## Log-likelihood statistics
##    Log-lik  nbpar  df res.   Deviance        AIC       AICc
## -1.711e+03      3      497  1.752e+03  3.428e+03  3.428e+03
A couple of interesting things to note here. First is that the coefficient estimates are pretty similar to the beta regression model. However, the standard errors are slightly higher, as they should be, since we are using only observed probabilities and not the true (albeit randomly selected or generated) probabilities. So, there is another level of uncertainty beyond sampling error.
Second, there is a new parameter: \(\phi_{overdisp}\). What is that, and how does that relate to \(\phi_\beta\)? The variance of a binomial random variable \(Y\) with a single underlying probability is \(Var(Y) = np(1-p)\). However, when the underlying probability varies across different subgroups (or clusters), the variance is augmented by \(\phi_{overdisp}\): \(Var(Y) = np(1-p)[1 + (n-1)\phi_{overdisp}]\). It turns out to be the case that \(\phi_{overdisp} = 1/(1+\phi_\beta)\):
round(model.betabinom@random.param, 3)   # from the beta-binomial model
## phi.(Intercept)
##           0.241
round(1/(1 + coef(model.beta)["(phi)"]), 3) # from the beta model
## (phi)
## 0.245
The observed variances of the binomial outcome \(b\) at each level of \(x\) come quite close to the theoretical variances based on \(\phi_\beta\):
phi.overdisp <- 1/(1 + phi.beta)

cbind(dc[, .(obs = round(var(b), 1)), keyby = x],
      theory = round(n*p.clust*(1 - p.clust)*(1 + (n - 1)*phi.overdisp), 1))
##    x   obs theory
## 1: 1  69.6   70.3
## 2: 2  90.4   95.3
## 3: 3 105.2  107.4
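The same augmented-variance relationship can also be checked outside R by simulating the two-stage process directly (a beta-distributed cluster probability, then a binomial count). A Python sketch with purely illustrative values, using the \(\alpha = p_m\phi_\beta\), \(\beta = (1-p_m)\phi_\beta\) shape parameterization:

```python
import random
import statistics

random.seed(7)

p_m, phi_beta, n = 0.3, 3.0, 40
a, b = p_m * phi_beta, (1 - p_m) * phi_beta

counts = []
for _ in range(50_000):                  # 50,000 simulated clusters
    p = random.betavariate(a, b)         # cluster-level probability
    counts.append(sum(random.random() < p for _ in range(n)))  # binomial count

phi_overdisp = 1 / (1 + phi_beta)        # 0.25
theory = n * p_m * (1 - p_m) * (1 + (n - 1) * phi_overdisp)   # 90.3
emp = statistics.pvariance(counts)       # should land near the theoretical value
```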
GEE and individual level data
With individual-level binary outcomes (as opposed to the count data we were working with before), GEE models are appropriate. The code below generates individual-level records for each cluster:
defI <- defDataAdd(varname = "y", formula = "p", dist = "binary")

di <- genCluster(dc, "cID", numIndsVar = "n", level1ID = "id")
di <- addColumns(defI, di)
di
##        cID  n x         p  b    id y
##     1:   1 40 2 0.1016969  4     1 0
##     2:   1 40 2 0.1016969  4     2 0
##     3:   1 40 2 0.1016969  4     3 0
##     4:   1 40 2 0.1016969  4     4 0
##     5:   1 40 2 0.1016969  4     5 1
##    ---
## 19996: 500 40 2 0.4812838 16 19996 0
## 19997: 500 40 2 0.4812838 16 19997 0
## 19998: 500 40 2 0.4812838 16 19998 1
## 19999: 500 40 2 0.4812838 16 19999 1
## 20000: 500 40 2 0.4812838 16 20000 0
The GEE model provides estimates of the coefficients as well as the working correlation. If we assume an “exchangeable” correlation matrix, in which each individual is correlated with all other individuals in the cluster but is not correlated with individuals in other clusters, we will get a single correlation estimate, which is labeled as
alpha in the GEE output:
library(geepack)

geefit <- geeglm(y ~ x, family = "binomial", data = di, id = cID, corstr = "exchangeable")
summary(geefit)
##
## Call:
## geeglm(formula = y ~ x, family = "binomial", data = di, id = cID,
##     corstr = "exchangeable")
##
## Coefficients:
##             Estimate Std.err  Wald Pr(>|W|)
## (Intercept) -2.07376 0.14980 191.6   <2e-16 ***
## x            0.68734 0.06566 109.6   <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Estimated Scale Parameters:
##             Estimate Std.err
## (Intercept)        1 0.03235
##
## Correlation: Structure = exchangeable  Link = identity
##
## Estimated Correlation Parameters:
##       Estimate Std.err
## alpha    0.256 0.01746
## Number of clusters:   500  Maximum cluster size: 40
In this case,
alpha (\(\alpha\)) is estimated at 0.25, which is quite close to the previous estimate of \(\phi_{overdisp}\), 0.24. So, it appears to be the case that if we have a target correlation \(\alpha\), we know the corresponding \(\phi_\beta\) to use in the beta-binomial data generation process. That is, \(\phi_\beta = (1 - \alpha)/\alpha\).
While this is certainly not a proof of anything, let’s give it a go with a target \(\alpha = 0.44\):
phi.beta.new <- (1 - 0.44)/0.44

def <- updateDef(def, "p", newvariance = phi.beta.new)

dc2 <- genData(500, def)
di2 <- genCluster(dc2, "cID", numIndsVar = "n", level1ID = "id")
di2 <- addColumns(defI, di2)

geefit <- geeglm(y ~ x, family = "binomial", data = di2, id = cID, corstr = "exchangeable")
summary(geefit)
##
## Call:
## geeglm(formula = y ~ x, family = "binomial", data = di2, id = cID,
##     corstr = "exchangeable")
##
## Coefficients:
##             Estimate Std.err Wald Pr(>|W|)
## (Intercept)  -1.7101  0.1800 90.3  < 2e-16 ***
## x             0.5685  0.0806 49.8  1.7e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Estimated Scale Parameters:
##             Estimate Std.err
## (Intercept)        1  0.0307
##
## Correlation: Structure = exchangeable  Link = identity
##
## Estimated Correlation Parameters:
##       Estimate Std.err
## alpha    0.444  0.0242
## Number of clusters:   500  Maximum cluster size: 40
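The correspondence between \(\alpha\) and \(\phi_\beta\) also follows from first principles: for two individuals sharing a cluster probability \(p\), \(Cov(y_1, y_2) = Var(p)\) and \(Var(y_1) = \bar{p}(1-\bar{p})\), so the within-cluster correlation is \(Var(p)/[\bar{p}(1-\bar{p})] = 1/(1+\phi_\beta)\). A small Python simulation sketch of this (illustrative values only, clusters of size 2):

```python
import random

random.seed(11)

p_m, phi_beta = 0.3, 3.0
a, b = p_m * phi_beta, (1 - p_m) * phi_beta

xs, ys = [], []
for _ in range(200_000):                 # each iteration is one cluster of size 2
    p = random.betavariate(a, b)
    xs.append(1 if random.random() < p else 0)
    ys.append(1 if random.random() < p else 0)

# Pearson correlation between the two cluster-mates, across clusters
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / m
vx = sum((x - mx) ** 2 for x in xs) / m
vy = sum((y - my) ** 2 for y in ys) / m
alpha = cov / (vx * vy) ** 0.5           # theory: 1 / (1 + phi_beta) = 0.25
```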
Addendum
Above, I suggested that the estimator of the effect of
x based on the beta model will have less variation than the estimator based on the beta-binomial model. I drew 5000 samples from the data generating process and estimated the models each time. Below is a density distribution of the estimates of each of the models from all 5000 iterations. As expected, the beta-binomial process has more variability, as do the related estimates; we can see this in the relative "peakedness" of the beta density:
Also based on these 5000 iterations, the GEE model estimation appears to be less efficient than the beta-binomial model. This is not surprising since the beta-binomial model was the actual process that generated the data (so it is truly the correct model). The GEE model is robust to mis-specification of the correlation structure, but the price we pay for that robustness is a slightly less precise estimate (even if we happen to get the correlation structure right):
|
L # 1
Show that
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
Last edited by krassi_holmz (2006-03-09 02:44:53)
IPBLE: Increasing Performance By Lowering Expectations.
Offline
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
L # 2
If
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
Let
log x = x', log y = y', log z = z'. Then:
x'+y'+z'=0.
Rewriting in terms of x' gives:
IPBLE: Increasing Performance By Lowering Expectations.
Offline
Well done, krassi_holmz!
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
L # 3
If x²y³=a and log (x/y)=b, then what is the value of (logx)/(logy)?
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b) / (log a - 2b).

Last edited by krassi_holmz (2006-03-10 20:06:29)
IPBLE: Increasing Performance By Lowering Expectations.
Offline
Very well done, krassi_holmz!
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
L # 4
Offline
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
You are not supposed to use a calculator or log tables for L # 4. Try again!
Last edited by JaneFairfax (2009-01-04 23:40:20)
Offline
No, I didn't
I remember
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again:
no calculators or log tables to be used (directly or indirectly) at all!!

Last edited by JaneFairfax (2009-01-06 00:30:04)
Offline
Offline
log a = 2log x + 3log y
b = log x - log y
log a + 3 b = 5log x
loga - 2b = 3logy + 2logy = 5logy
logx / logy = (loga+3b) / (loga-2b)
Offline
Hi ganesh
for L # 1: since log_b(a) = 1/log_a(b) and log_a(a) = 1, we have

1/log_a(abc) + 1/log_b(abc) + 1/log_c(abc) = log_abc(a) + log_abc(b) + log_abc(c) = log_abc(abc) = 1

Best Regards
Riad Zaidan
Offline
Hi ganesh
for L # 2, I think the following proof is easier:
Assume log(x)/(b-c) = log(y)/(c-a) = log(z)/(a-b) = t.
So log(x) = t(b-c), log(y) = t(c-a), log(z) = t(a-b).
So log(x) + log(y) + log(z) = tb - tc + tc - ta + ta - tb = 0.
So log(xyz) = 0, so xyz = 1. Q.E.D.
Best Regards
Riad Zaidan
Offline
Gentleman,
Thanks for the proofs.
Regards.
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
$$\log_2(16) = \log_2\left(\frac{64}{4}\right) = \log_2(64) - \log_2(4) = 6 - 2 = 4$$

$$\log_2\left(\sqrt[3]{4}\right) = \frac{1}{3}\log_2(4) = \frac{2}{3}$$
Offline
L # 4
I don't want a method that will rely on defining certain functions, taking derivatives,
noting concavity, etc.
Change of base:
Each side is positive, and multiplying by the positive denominator
keeps whatever direction of the alleged inequality the same direction:
On the right-hand side, the first factor is equal to a positive number less than 1,
while the second factor is equal to a positive number greater than 1. These facts are by inspection combined with the nature of exponents/logarithms.
Because of (log A)B = B(log A) = log(A^B), I may turn this into:
I need to show that
Then
Then 1 (on the left-hand side) will be greater than the value on the
right-hand side, and the truth of the original inequality will be established.
I want to show
Raise a base of 3 to each side:
Each side is positive, and I can square each side:
-----------------------------------------------------------------------------------
Then I want to show that when 2 is raised to a number equal to
(or less than) 1.5, then it is less than 3.
Each side is positive, and I can square each side:
Last edited by reconsideryouranswer (2011-05-27 20:05:01)
Signature line:
I wish a had a more interesting signature line.
Offline
Hi reconsideryouranswer,
This problem was posted by JaneFairfax. I think it would be appropriate she verify the solution.
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
Hi all,
I saw this post today and saw the probs on log. Well, they are not bad, they are good. But you can also try these problems here by me (Credit: to a book):
http://www.mathisfunforum.com/viewtopic … 93#p399193
Practice makes a man perfect.
There is no substitute to hard work All of us do not have equal talents but everybody has equal oppurtunities to build their talents.-APJ Abdul Kalam
Offline
JaneFairfax, here is a basic proof of L4:
For all real a > 1, y = a^x is a strictly increasing function.
log(base 2)3 versus log(base 3)5
2*log(base 2)3 versus 2*log(base 3)5
log(base 2)9 versus log(base 3)25
2^3 = 8 < 9
2^(> 3) = 9
3^3 = 27 > 25
3^(< 3) = 25
So, the left-hand side is greater than the right-hand side, because its logarithm is a larger number.
Offline
|
A discrete-time sinusoid can have frequency up to just shy of half the sample frequency. But if you try to plot the sinusoid, the result is not always recognizable. For example, if you plot a 9 Hz sinusoid sampled at 100 Hz, you get the result shown in the top of Figure 1, which looks like a sine. But if you plot a 35 Hz sinusoid sampled at 100 Hz, you get the bottom graph, which does not look like a sine when you connect the dots. We typically want the plot of a...
This article covers interpolation basics, and provides a numerical example of interpolation of a time signal. Figure 1 illustrates what we mean by interpolation. The top plot shows a continuous time signal, and the middle plot shows a sampled version with sample time Ts. The goal of interpolation is to increase the sample rate such that the new (interpolated) sample values are close to the values of the continuous signal at the sample times [1]. For example, if...
This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT) by showing an implementation of how the parameters of a real pure tone can be calculated from just two DFT bin values. The equations from previous articles are used in tandem to first calculate the frequency, and then calculate the amplitude and phase of the tone. The approach works best when the tone is between the two DFT bins in terms of frequency.The Coding...
In an earlier post [1], we implemented lowpass IIR filters using a cascade of second-order IIR filters, or biquads. This post provides a Matlab function to do the same for Butterworth bandpass IIR filters. Compared to conventional implementations, bandpass filters based on biquads are less sensitive to coefficient quantization [2]. This becomes important when designing narrowband filters.
A biquad section block diagram using the Direct Form II structure [3,4] is shown in...
There are many applications in which this technique is useful. I discovered a version of this method while analysing radar systems, but the same approach can be used in a very wide range of...
This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT), but only indirectly. The main intent is to get someone who is uncomfortable with complex numbers a little more used to them and relate them back to already known Trigonometric relationships done in Real values. It is essentially a followup to my first blog article "The Exponential Nature of the Complex Unit Circle".Polar Coordinates
The more common way of...
One of the basic DSP principles states that a sampled time signal has a periodic spectrum with period equal to the sample rate. The derivation of can be found in textbooks [1,2]. You can also demonstrate this principle numerically using the Discrete Fourier Transform (DFT).
The DFT of the sampled signal x(n) is defined as:
$$X(k)=\sum_{n=0}^{N-1}x(n)e^{-j2\pi kn/N} \qquad (1)$$
Where
X(k) = discrete frequency spectrum of time sequence x(n)
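Equation (1) can be evaluated exactly as written; here is a minimal Python sketch of the direct sum (fine for small N and for demonstrations, though in practice an FFT would be used):

```python
import cmath

def dft(x):
    # direct evaluation of Eq. (1): X(k) = sum over n of x(n) * e^(-j*2*pi*k*n/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# sanity check: a constant sequence puts all of its energy in bin k = 0
X = dft([1.0] * 8)
```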
Figure 1a shows the block diagram of a decimation-by-8 filter, consisting of a low-pass finite impulse response (FIR) filter followed by downsampling by 8 [1]. A more efficient version is shown in Figure 1b, which uses three cascaded decimate-by-two filters. This implementation has the advantages that only FIR 1 is sampled at the highest sample rate, and the total number of filter taps is lower.
The frequency response of the single-stage decimator before downsampling is just...
In my last post, we saw that finding the spectrum of a signal requires several steps beyond computing the discrete Fourier transform (DFT)[1]. These include windowing the signal, taking the magnitude-squared of the DFT, and computing the vector of frequencies. The Matlab function pwelch [2] performs all these steps, and it also has the option to use DFT averaging to compute the so-called Welch power spectral density estimate [3,4].
In this article, I’ll present some...
The Discrete Fourier Transform (DFT) operates on a finite length time sequence to compute its spectrum. For a continuous signal like a sinewave, you need to capture a segment of the signal in order to perform the DFT. Usually, you also need to apply a window function to the captured signal before taking the DFT [1 - 3]. There are many different window functions and each produces a different approximation of the spectrum. In this post, we’ll present Matlab code that...
Introduction Quadrature signals are based on the notion of complex numbers and perhaps no other topic causes more heartache for newcomers to DSP than these numbers and their strange terminology of j operator, complex, imaginary, real, and orthogonal. If you're a little unsure of the physical meaning of complex numbers and the j = √-1 operator, don't feel bad because you're in good company. Why, even Karl Gauss, one of the world's greatest mathematicians, called the j-operator the "shadow of...
This article covers interpolation basics, and provides a numerical example of interpolation of a time signal. Figure 1 illustrates what we mean by interpolation. The top plot shows a continuous time signal, and the middle plot shows a sampled version with sample time Ts. The goal of interpolation is to increase the sample rate such that the new (interpolated) sample values are close to the values of the continuous signal at the sample times [1]. For example, if...
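As a warm-up for the idea in Figure 1, here is a crude sketch of increasing the sample rate by an integer factor using plain linear interpolation (this illustrates the goal, not the filter-based method the article develops; the input samples are made up):

```python
def interp_linear(x, L):
    """Increase the sample rate by an integer factor L by drawing straight
    lines between existing samples (crude illustration only)."""
    y = []
    for i in range(len(x) - 1):
        for j in range(L):
            # j/L of the way from sample i to sample i+1
            y.append(x[i] + (x[i + 1] - x[i]) * j / L)
    y.append(x[-1])
    return y

y = interp_linear([0.0, 1.0, 0.0], 4)
```

The output has `(len(x)-1)*L + 1` samples, with the original samples preserved at every `L`-th position.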
The finite-word representation of fractional numbers is known as fixed-point. Fixed-point is an interpretation of a 2's complement number, usually signed but not limited to signed representation. It extends our finite word length from a finite set of integers to a finite set of rational real numbers [1]. A fixed-point representation of a number consists of integer and fractional components. The bit length is defined...
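The interpretation step can be shown in a few lines: a raw two's-complement integer becomes a fixed-point value once you decide how many of its bits are fractional. A minimal Python sketch (the 16-bit Q15 layout below is just a common example, not the only choice):

```python
def fixed_to_float(raw, frac_bits, word_bits=16):
    """Interpret a word_bits-wide two's-complement integer as a signed
    fixed-point number with frac_bits fractional bits."""
    if raw >= 1 << (word_bits - 1):   # sign bit set -> negative value
        raw -= 1 << word_bits         # undo the two's-complement wrap
    return raw / (1 << frac_bits)     # place the binary point

# Q15 examples: 1 sign bit, 15 fractional bits
half = fixed_to_float(0x4000, 15)     # +0.5
most_negative = fixed_to_float(0x8000, 15)  # -1.0
```

The representable range in Q15 is [-1, 1 - 2^-15], in steps of 2^-15.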
While there are plenty of canned functions to design Butterworth IIR filters [1], it’s instructive and not that complicated to design them from scratch. You can do it in 12 lines of Matlab code. In this article, we’ll create a Matlab function butter_synth.m to design lowpass Butterworth filters of any order. Here is an example function call for a 5th order filter:
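Before the full synthesis, the core first step — placing the analog lowpass prototype poles on the unit circle of the s-plane — is compact enough to sketch here. This is a Python illustration of the standard pole formula, not the butter_synth code itself:

```python
import cmath
from math import pi

def butterworth_poles(N):
    """Left-half-plane poles of an analog Butterworth lowpass prototype with
    unit cutoff: p_k = exp(j*pi*(2k + N - 1)/(2N)), k = 1..N."""
    return [cmath.exp(1j * pi * (2 * k + N - 1) / (2 * N)) for k in range(1, N + 1)]

poles = butterworth_poles(5)   # the 5th-order example from the text
```

All N poles lie on the unit circle and in the left half-plane, which is what makes the filter stable and maximally flat.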
Recently I've been thinking about the process of envelope detection. Tutorial information on this topic is readily available but that information is spread out over a number of DSP textbooks and many Internet web sites. The purpose of this blog is to summarize various digital envelope detection methods in one place.
Here I focus on envelope detection as it is applied to an amplitude-fluctuating sinusoidal signal where the positive-amplitude fluctuations (the sinusoid's envelope)...
This is an article to hopefully give an understanding to Euler's magnificent equation:
$$ e^{i\theta} = cos( \theta ) + i \cdot sin( \theta ) $$
This equation is usually proved using the Taylor series expansion for the given functions, but this approach fails to give an understanding of the equation and its ramifications for the behavior of complex numbers. Instead an intuitive approach is taken that culminates in a graphical understanding of the equation.
Complex...
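Whatever route the proof takes, the identity is easy to spot-check numerically at an arbitrary angle (the angle below is arbitrary):

```python
import cmath
import math

theta = 0.81                       # any angle in radians
lhs = cmath.exp(1j * theta)        # e^{i*theta}
rhs = complex(math.cos(theta), math.sin(theta))  # cos(theta) + i*sin(theta)
```

The two sides agree to machine precision for every `theta`.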
A discrete-time sinusoid can have frequency up to just shy of half the sample frequency. But if you try to plot the sinusoid, the result is not always recognizable. For example, if you plot a 9 Hz sinusoid sampled at 100 Hz, you get the result shown in the top of Figure 1, which looks like a sine. But if you plot a 35 Hz sinusoid sampled at 100 Hz, you get the bottom graph, which does not look like a sine when you connect the dots. We typically want the plot of a...
$$ atan(z) \approx \dfrac{z}{1.0 +...
Minimum Shift Keying (MSK) is one of the most spectrally efficient modulation schemes available. Due to its constant envelope, it is resilient to non-linear distortion and was therefore chosen as the modulation technique for the GSM cell phone standard.
MSK is a special case of Continuous-Phase Frequency Shift Keying (CPFSK) which is a special case of a general class of modulation schemes known as Continuous-Phase Modulation (CPM). It is worth noting that CPM (and hence CPFSK) is a...
Figure 1.1 is a block diagram of a digital PLL (DPLL). The purpose of the DPLL is to lock the phase of a numerically controlled oscillator (NCO) to a reference signal. The loop includes a phase detector to compute phase error and a loop filter to set loop dynamic performance. The output of the loop filter controls the frequency and phase of the NCO, driving the phase error to zero.
One application of the DPLL is to recover the timing in a digital...
In this post, I present a method to design Butterworth IIR bandpass filters. My previous post [1] covered lowpass IIR filter design, and provided a Matlab function to design them. Here, we’ll do the same thing for IIR bandpass filters, with a Matlab function bp_synth.m. Here is an example function call for a bandpass filter based on a 3rd order lowpass prototype:N= 3; % order of prototype LPF fcenter= 22.5; % Hz center frequency, Hz bw= 5; ...
The topic of estimating a noise-free real or complex sinusoid's frequency, based on fast Fourier transform (FFT) samples, has been presented in recent blogs here on dsprelated.com. For completeness, it's worth knowing that simple frequency estimation algorithms exist that do not require FFTs to be performed. Below I present three frequency estimation algorithms that use time-domain samples, and illustrate a very important principle regarding so-called "exact"...
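One classic time-domain estimator of this kind uses the three-sample identity x(n-1) + x(n+1) = 2·cos(ω)·x(n), which holds exactly for any noise-free sinusoid. The Python sketch below illustrates the principle; it is a generic textbook estimator, not necessarily one of the three algorithms the article presents, and the test tone parameters are made up:

```python
from math import acos, cos, pi

def tone_freq_estimate(x, fs):
    """Estimate the frequency of a noise-free sampled sinusoid from a single
    sample triple via x(n-1) + x(n+1) = 2*cos(w)*x(n)."""
    # pick the n with the largest |x(n)| so the division is well conditioned
    n = max(range(1, len(x) - 1), key=lambda i: abs(x[i]))
    w = acos((x[n - 1] + x[n + 1]) / (2 * x[n]))   # radians/sample
    return w * fs / (2 * pi)                       # Hz

fs, f0 = 1000, 123.4                               # made-up sample rate and tone
x = [cos(2 * pi * f0 * n / fs + 0.7) for n in range(50)]
f_est = tone_freq_estimate(x, fs)
```

With noise-free samples the estimate is essentially exact; with noise, averaging over many triples becomes necessary, which is where the differences between such "exact" algorithms show up.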
|
I have Kepler's first law of planetary motion:
$$\text{r}=\frac{\text{p}}{1+\epsilon\cos\left(\theta\right)}\tag{1}.$$
Now, for $\epsilon$ I have:
$$0<\epsilon=\sqrt{1+\frac{2\cdot\eta\cdot\text{h}^2}{\mu^2}}<1\tag{2},$$
because it is an ellipse.
Now, for $\mu$:
$$\mu=\text{G}\cdot\text{M}\tag{3},$$
and for $\eta$:
$$\eta=-\frac{\mu}{2\text{a}}\tag{4},$$
where $\text{a}=\frac{\text{r}_\text{min}+\text{r}_\text{max}}{2}$.
Questions:
For the Earth's orbit around the sun, which mass ($\text{M}$) should I pick, the mass of the sun or the mass of the earth?
What is $\text{h}$ in the equation for $\epsilon$ and how can I find its value?
What are the values of $\text{r}_\text{min}$ and $\text{r}_\text{max}$?
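If one takes $\text{M}$ to be the Sun's mass (the usual choice when the orbiting body's mass is negligible), $\text{h}$ as the specific angular momentum, and textbook perihelion/aphelion distances, the formulas above can be checked for self-consistency. All numerical values below are rounded reference values, assumed for illustration:

```python
from math import sqrt

# rounded reference values (assumptions, not given in the question)
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg; M taken as the Sun's mass (the Earth's is negligible)
r_min = 1.471e11     # m, Earth's perihelion distance
r_max = 1.521e11     # m, Earth's aphelion distance

mu = G * M_sun                                  # Eq. (3)
a = (r_min + r_max) / 2                         # semi-major axis
eta = -mu / (2 * a)                             # Eq. (4), specific orbital energy
eps_geom = (r_max - r_min) / (r_max + r_min)    # eccentricity from geometry
# h = specific angular momentum; for an ellipse h^2 = mu * a * (1 - eps^2)
h = sqrt(mu * a * (1 - eps_geom**2))
eps = sqrt(1 + 2 * eta * h**2 / mu**2)          # Eq. (2)
```

Both routes give the familiar eccentricity of Earth's orbit, about 0.0167, and eps indeed falls strictly between 0 and 1 as required for an ellipse.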
|
For $k = 1, 2,\dots,n-1$ let $V_k = V(\lambda_k)$ be the Weyl module for the special orthogonal group $G = \mathrm{SO}(2n+1,\mathbb{F})$ with respect to the $k$-th fundamental dominant weight $\lambda_k$ of the root system of type $B_n$ and put $V_n = V(2\lambda_n)$. It is well known that all of these modules are irreducible when $\mathrm{char}(\mathbb{F}) \neq 2$, while when $\mathrm{char}(\mathbb{F}) = 2$ they admit many proper submodules. In this paper, assuming that $\mathrm{char}(\mathbb{F}) = 2$, we prove that $V_k$ admits a chain of submodules $V_k = M_k \supset M_{k-1}\supset \dots \supset M_1\supset M_0 \supset M_{-1} = 0$ where $M_i \cong V_i$ for $i = 1,\dots, k-1$ and $M_0$ is the trivial 1-dimensional module. We also show that for $i = 1, 2,\dots, k$ the quotient $M_i/M_{i-2}$ is isomorphic to the so-called $i$-th Grassmann module for $G$. Resting on this fact we can give a geometric description of $M_{i-1}/M_{i-2}$ as a submodule of the $i$-th Grassmann module. When $\mathbb{F}$ is perfect, $G\cong \mathrm{Sp}(2n,\mathbb{F})$ and $M_i/M_{i-1}$ is isomorphic to the Weyl module for $\mathrm{Sp}(2n,\mathbb{F})$ relative to the $i$-th fundamental dominant weight of the root system of type $C_n$. All irreducible sections of the latter modules are known. Thus, when $\mathbb{F}$ is perfect, all irreducible sections of $V_k$ are known as well.
Title: On certain submodules of Weyl modules for SO(2n+1,F) with char(F) = 2. Citation: Cardinali, I., & Pasini, A. (2014). On certain submodules of Weyl modules for SO(2n+1,F) with char(F) = 2. Journal of Group Theory, 17(4), 559-588. Year: 2014. Type: 1.1 Journal article
http://hdl.handle.net/11365/45581
|
The equations point the way
The question states two use cases, projectiles in a vacuum and projectiles in an atmosphere. As the shape of a projectile in a vacuum is simple, i.e. whatever shape fits the barrel, the rest of the answer will concern itself with the more complicated atmospheric case.
The drag equation will have a lot to do with the shape of the projectile. It is...
$$D = C_d \cdot \frac{\rho \cdot V^2}{2} \cdot A$$
where $D$ is drag, $C_d$ is the drag coefficient, $\rho$ is the density of the air, $V$ is velocity, and $A$ is the frontal area.
The kinetic energy equation will matter a lot too. It is...
$$E_k = \frac{1}{2} mv^2$$
...where $E_k$ is the energy of the object, $m$ is mass and $v$ is velocity. Remember kids, it's kinetic energy that kills.
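To put numbers on that, here is a quick back-of-the-envelope sketch; the mass and velocity are made-up illustrative values, not figures for any real system:

```python
# hypothetical round: a 10 kg rod arriving at 3 km/s (made-up numbers)
m = 10.0                 # kg
v = 3000.0               # m/s
E_k = 0.5 * m * v**2     # joules
tnt_kg = E_k / 4.184e6   # 1 kg of TNT releases about 4.184 MJ
```

Even this modest rod delivers 45 MJ, the energy of roughly 10 kg of TNT, from kinetic energy alone.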
Natural Shapes for Projectiles
From these equations we can see that we want to get our projectile's velocity and mass as high as possible, while also keeping the frontal area and drag as low as possible. Both deorbiting objects and railgun projectiles provide spectacular initial velocity to work with. Hooray!
Let's go step by step then...
A plate has a huge frontal area so we want something more narrow. A sphere has the least surface area for the most volume of any geometric shape. However, spheres aren't especially aerodynamic and are notoriously difficult to aim. We observe that the longer an object is, the more likely it is to self-correct its trajectory. We want to hit what we aim for. (Aerodynamics, of course, plays no part in space battles, but we don't want to carry more ammo types than we have to.) A rod is long, so it's easier to aim, plus it has minimal frontal area which keeps the value of $A$ low. Rods also provide lots of volume to put all that lovely mass that we need to make our $E_k$ values really terrifying.
What the equations don't say
Hypervelocity projectile noses are not intuitively shaped. While the projectiles are generally rod shaped, the nose of the rod may not be pointy. The front of the projectile shot out of the US Navy's rail gun is blunt. There's a YouTube Channel run by a guy named Taofledermaus who does lots of experimental shotgun loads. So very often, he'll take a slug that looks aerodynamic but just tumbles on the way to the target. It's not easy.
Aerodynamics is also an extremely complicated field. Trans-sonic aerodynamics is notoriously difficult within an already difficult field. Above the speed of sound, air behaves like a solid. Below the speed of sound, it behaves like a fluid. Around the speed of sound, it behaves like something else.
Also, aiming from orbit is really difficult, as demonstrated by this WB answer. Without terminal guidance to track and adjust trajectory to hit a smaller target, these hypervelocity projectiles probably won't be accurate enough to be really dangerous. You'll be able to hit static targets without too much trouble but moving targets are too difficult, especially when you have to lead by a few minutes.
Even lead times of 30 seconds or so can be defeated with relative ease. WW2 bomber pilots received extensive training on how to avoid flak. Even a few seconds lead time is enough to make a shot miss, as this AC-130 gunship footage (WARNING: graphic) demonstrates. Even changing direction a little bit will make a shot miss by enough to be ineffective. Throw in trying to hit targets in 3D space, and it gets very difficult, very quickly.
|
Difference between revisions of "Multi-index notation"
Latest revision as of 11:12, 12 December 2013
$\def\a{\alpha}$ $\def\b{\beta}$
An abbreviated form of notation in analysis, imitating the vector notation by single letters rather than by listing all vector components.
Rules
A point with coordinates $(x_1,\dots,x_n)$ in the $n$-dimensional space (real, complex or over any other field $\Bbbk$) is denoted by $x$. For a multi-index $\a=(\a_1,\dots,\a_n)\in\Z_+^n$ the expression $x^\a$ denotes the product $x^\a=x_1^{\a_1}\cdots x_n^{\a_n}$. Other expressions related to multi-indices are expanded as follows:
$$\begin{aligned}|\a|&=\a_1+\cdots+\a_n\in\Z_+,\\\a!&=\a_1!\cdots\a_n!\qquad\text{(as usual, }0!=1!=1),\\x^\a&=x_1^{\a_1}\cdots x_n^{\a_n}\in \Bbbk[x]=\Bbbk[x_1,\dots,x_n],\\\a\pm\b&=(\a_1\pm\b_1,\dots,\a_n\pm\b_n)\in\Z^n.\end{aligned}$$
The convention extends to the binomial coefficients ($\a\geqslant\b$ means, quite naturally, that $\a_1\geqslant\b_1,\dots,\a_n\geqslant\b_n$):
$$\binom{\a}{\b}=\binom{\a_1}{\b_1}\cdots\binom{\a_n}{\b_n}=\frac{\a!}{\b!(\a-\b)!},\qquad \text{if}\quad \a\geqslant\b.$$
The partial derivative operators are also abbreviated:
$$\partial_x=\biggl(\frac{\partial}{\partial x_1},\dots,\frac{\partial}{\partial x_n}\biggr)=\partial\quad\text{if the choice of $x$ is clear from context.}$$
The notation for partial derivatives is also quite natural: for a differentiable function $f(x_1,\dots,x_n)$ of $n$ variables,
$$\partial^\a f=\frac{\partial^{|\a|} f}{\partial x^\a}=\frac{\partial^{\a_1}}{\partial x_1^{\a_1}}\cdots\frac{\partial^{\a_n}}{\partial x_n^{\a_n}}f=\frac{\partial^{|\a|}f}{\partial x_1^{\a_1}\cdots\partial x_n^{\a_n}}.$$
If $f$ is itself a vector-valued function of dimension $m$, the above partial derivatives are $m$-vectors. The notation
$$\partial f=\bigg(\frac{\partial f}{\partial x}\bigg)$$
is used to denote the Jacobian matrix of a function $f$ (in general, only rectangular).
Caveat
The notation $\a>0$ is ambiguous, especially in mathematical economics, as it may either mean that $\a_1>0,\dots,\a_n>0$, or $0\ne\a\geqslant0$.
Examples
Binomial formula
$$ (x+y)^\a=\sum_{0\leqslant\b\leqslant\a}\binom\a\b x^{\a-\b} y^\b. $$
Leibniz formula for higher derivatives of multivariate functions
$$ \partial^\a(fg)=\sum_{0\leqslant\b\leqslant\a}\binom\a\b \partial^{\a-\b}f\cdot \partial^\b g. $$ In particular, $$ \partial^\a x^\beta=\begin{cases} \frac{\b!}{(\b-\a)!}x^{\b-\a},\qquad&\text{if }\a\leqslant\b, \\ \quad 0,&\text{otherwise}. \end{cases} $$
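Since the Leibniz formula is a polynomial identity, it can be spot-checked mechanically. The Python sketch below represents bivariate polynomials as exponent-to-coefficient dictionaries and compares $\partial^\a(fg)$ computed directly against the right-hand sum; the two polynomials and the multi-index are arbitrary made-up examples:

```python
from math import comb

def dpow(poly, a):
    """Apply the multi-index derivative \\partial^a to a polynomial in two
    variables stored as {(e1, e2): coefficient}."""
    out = {}
    for (e1, e2), c in poly.items():
        if e1 < a[0] or e2 < a[1]:
            continue                     # the derivative annihilates this term
        for k in range(a[0]):
            c *= e1 - k                  # e1*(e1-1)*...*(e1-a1+1)
        for k in range(a[1]):
            c *= e2 - k
        key = (e1 - a[0], e2 - a[1])
        out[key] = out.get(key, 0) + c
    return {e: c for e, c in out.items() if c}

def mul(p, q):
    """Multiply two polynomials in the same representation."""
    out = {}
    for (e1, e2), c in p.items():
        for (f1, f2), b in q.items():
            key = (e1 + f1, e2 + f2)
            out[key] = out.get(key, 0) + c * b
    return {e: c for e, c in out.items() if c}

f = {(3, 1): 2, (0, 2): 5}    # 2 x1^3 x2 + 5 x2^2
g = {(1, 2): 1, (2, 0): -3}   # x1 x2^2 - 3 x1^2
alpha = (2, 1)

lhs = dpow(mul(f, g), alpha)  # \partial^alpha (fg) computed directly
rhs = {}
for b1 in range(alpha[0] + 1):
    for b2 in range(alpha[1] + 1):
        c_bin = comb(alpha[0], b1) * comb(alpha[1], b2)
        term = mul(dpow(f, (alpha[0] - b1, alpha[1] - b2)), dpow(g, (b1, b2)))
        for e, c in term.items():
            rhs[e] = rhs.get(e, 0) + c_bin * c
rhs = {e: c for e, c in rhs.items() if c}
```

Both sides come out as the same polynomial, term for term.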
Taylor series of a smooth function
If $f$ is infinitely smooth near the origin $x=0$, then its Taylor series (at the origin) has the form $$ \sum_{\a\in\Z_+^n}\frac1{\a!}\partial^\a f(0)\cdot x^\a. $$
Symbol of a differential operator
If $$D=\sum_{|\a|\le d}a_\a(x)\partial^\a$$ is a linear partial differential operator with variable coefficients $a_\a(x)$, then its
principal symbol is the function of $2n$ variables $S(x,p)=\sum_{|\a|=d}a_\a(x)p^\a$.
How to Cite This Entry:
Multi-index notation.
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Multi-index_notation&oldid=25755
|
Hypercube graph Set
context $ n\in\mathbb N, n\ge 1 $ range $ V\equiv \{0,1\}^n $
definiendum $Q_n\equiv \langle V,E\rangle$
range $ k\in\mathbb N,1\le k\le n $ for all $ v,w\in V $
postulate $ \{v,w\}\in E\leftrightarrow \exists!k.\ \pi_k(v)\neq\pi_k(w) $
Discussion
The Hypercube graph, also called the n-cube, has as vertices all n-tuples of 0's and 1's, and two such vertices are connected iff they differ in exactly one coordinate.
Since strings of 0's and 1's of length $n$ encode the subsets of an n-element set, we have $|V|=2^n$. And since for each n-tuple there are $n$ ways to differ from it in one digit, there are $\frac{1}{2}2^n\cdot n = n\,2^{n-1}$ edges.
It is also the Hasse diagram of a Boolean lattice, the powerset with elements connected iff they differ by one element.
Examples
E.g. $V(Q_2)=\{\langle 0,0\rangle,\langle 0,1\rangle,\langle 1,0\rangle,\langle 1,1\rangle\}$ and all pairs except the two diagonal ones are edges in the graph. Clearly $Q_2$ is just a square.
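The definition above translates directly into a few lines of Python, which also confirms the vertex and edge counts (a small sketch for illustration):

```python
from itertools import combinations, product

def hypercube_graph(n):
    """Build Q_n: vertices are n-tuples of 0/1, edges join tuples that
    differ in exactly one coordinate."""
    V = list(product((0, 1), repeat=n))
    E = [(v, w) for v, w in combinations(V, 2)
         if sum(a != b for a, b in zip(v, w)) == 1]
    return V, E

V, E = hypercube_graph(3)   # the ordinary cube: 8 vertices, 12 edges
```

The counts match $|V| = 2^n$ and $|E| = n\,2^{n-1}$ for every small $n$ one cares to try.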
|
Small challenge
06-06-2017, 08:05 AM (This post was last modified: 06-06-2017 09:44 AM by Pekis.)
Post: #1
Small challenge
Hello,
While making an Android app, I had to draw a small equilateral triangle pointing to a circle:
r: Outer Circle Radius: distance OAB
d: Side length of the equilateral triangle.
d=a*r, where a is a fraction of the outer circle radius
If we want the distance AB to be equal to 1/5 of r, what's the value of a ?
Have fun & Thanks for reading
06-06-2017, 01:26 PM
Post: #2
RE: Small challenge
Nice one! I also have in the bookmark your "brain teaser 2" to solve.
Wikis are great, Contribute :)
06-07-2017, 09:53 AM (This post was last modified: 06-07-2017 09:57 AM by Pekis.)
Post: #3
RE: Small challenge
Hello,
Here is the solution:
Distance AB = Distance AE + Distance EB
Distance AE:
d*cos(PI/6)
=a*r*sqrt(3)/2
Distance EB:
Arc height Formula r=h/2+c^2/(8*h),
Here h=Distance EB and c=d
=> Solved for Distance EB=r*(1-sqrt(4-a^2)/2)
Distance AB:
a*r*sqrt(3)/2+r*(1-sqrt(4-a^2)/2), which must be equal to c*r
(c=0.2 in the question)
=> Solve a*r*sqrt(3)/2+r*(1-sqrt(4-a^2)/2)=c*r
=> Solve a*sqrt(3)/2+1-sqrt(4-a^2)/2=c
=> Solved for a=(sqrt(3)*(c-1)+sqrt((c+1)*(3-c)))/2
If c=0.2 => a~=0.223694816
=> d will be approx. 0.22*r while the distance AB will be 0.2*r as required
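The closed form can be verified by substituting it back into the equation it came from. A small Python check (using the c=0.2 value from the question):

```python
from math import sqrt

c = 0.2  # fraction of r left for the triangle, as in the question
a = (sqrt(3) * (c - 1) + sqrt((c + 1) * (3 - c))) / 2
# substitute a back into a*sqrt(3)/2 + 1 - sqrt(4 - a^2)/2 = c
residual = a * sqrt(3) / 2 + 1 - sqrt(4 - a**2) / 2 - c
```

The residual vanishes to machine precision and `a` matches the quoted 0.223694816.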
If angle at B is t and (cx,cy) is the center of the outer circle with radius r:
Coordinates of A:
(Ax,Ay)=(cx+r*(1-c)*cos t, cy+r*(1-c)*sin t)
Coordinates of C:
(Cx,Cy)=(Ax+d*sin(PI/3-t),Ay+d*cos(PI/3-t))
Coordinates of D:
(Dx,Dy)=(Ax+d*cos(t-PI/6),Ay+d*sin(t-PI/6))
Thanks for reading
06-07-2017, 11:35 AM (This post was last modified: 06-08-2017 12:54 PM by PedroLeiva.)
Post: #4
RE: Small challenge
(06-07-2017 09:53 AM)Pekis Wrote: Hello, Can you provide a numerical example to test? TYVM, Pedro
06-07-2017, 03:49 PM (This post was last modified: 06-07-2017 04:11 PM by Pekis.)
Post: #5
RE: Small challenge
(06-07-2017 11:35 AM)PedroLeiva Wrote:
Hello,
Here is a numerical example:
cex=0 (Outer Circle centered at (0,0))
cey=0
r=5 (Outer circle Radius)
c=0.2 (Fraction of Outer circle radius left for the triangle)
a=(sqrt(3)*(c-1)+sqrt((c+1)*(3-c)))/2=0.223694816
d=a*r=1.11847408 (Side length of equilateral triangle)
t=40° (0.6981317008 rad) (Angle at B)
Ax=cex+r*(1-c)*cos(t)=3.064177772
Ay=cey+r*(1-c)*sin(t)=2.571150439
Cx=Ax+d*sin(PI/3-t)=3.446718437
Cy=Ay+d*cos(PI/3-t)=3.622172279
Dx=Ax+d*cos(t-PI/6)=4.165659718
Dy=Ay+d*sin(t-PI/6))=2.765371425
Bx=cex+r*cos(t)=3.830222216
By=cey+r*sin(t)=3.213938048
Ex=cex+(r*(1-c)+d*cos(PI/6))*cos(t)=3.806189078
Ey=cey+(r*(1-c)+d*cos(PI/6))*sin(t)=3.193771851
Distance OA=sqrt((Ax-cex)^2+(Ay-cey)^2)=4
OK Distance OA=4=r*(1-c)
Distance AE=sqrt((Ax-Ex)^2+(Ay-Ey)^2)=0.9686269669
Distance EB=sqrt((Ex-Bx)^2+(Ey-By)^2)=0.03137303338
OK Distance AB=Distance AE+Distance EB=1=r*c
Distance AC=sqrt((Ax-Cx)^2+(Ay-Cy)^2)=1.11847408
Distance CD=sqrt((Cx-Dx)^2+(Cy-Dy)^2)=1.11847408
Distance DA=sqrt((Dx-Ax)^2+(Dy-Ay)^2)=1.11847408
OK Distance AC=Distance CD=Distance DA=d => Equilateral
QED
06-07-2017, 04:25 PM
Post: #6
RE: Small challenge
Or, a=(sqrt(7)-2)*sqrt(3)/5
06-07-2017, 04:31 PM
Post: #7
RE: Small challenge
06-07-2017, 08:42 PM (This post was last modified: 06-08-2017 02:24 AM by SlideRule.)
Post: #8
RE: Small challenge
I arrived at slightly different results:
Δ ≈ 6.52° = interior angle formed by the line segments BOD
(½ the central angle of the sector formed by COD)
θ ≈ 23.58° = interior angle formed by line segments ADO
D ≈ (0.118474, 0.993725)
D ≈ (0.111847, 0.993725) typo correction
verified by WolframAlpha & AnalyzeMath
AD ≈ 0.22710 = d = a*R
AD ≈ 0.22369 = d = a*R correction for typo
[attachment=4914]
[attachment=4915]
The difference seems to be associated with the assumption guiding the calculation of the distance EB. I do not see how the Arc Segment S for the sector CAD with a central angle anchored to point A and the Arc Segment for the sector COD with a central angle anchored at point O can both describe the same arc segment CBD.
Although the difference of the sagitta calculated this way may be small, in a manner similar to the small difference between the SIN & the TAN of a small angle, I take exception to the equality of the results, not the equivalency.
Where am I going wrong?
BEST!
SlideRule
06-07-2017, 09:54 PM
Post: #9
RE: Small challenge
(06-07-2017 08:42 PM)SlideRule Wrote: I arrived at a slightly different results:
Hello,
Don't forget that the points C and D belong to the outer circle with radius r. So, although they also belong to the triangle ACD, the formula I used to calculate the arc height seems perfectly valid. You could imagine a OCD triangle to apply the formula. What do you think ? Thanks
06-07-2017, 10:53 PM (This post was last modified: 06-08-2017 02:26 AM by SlideRule.)
Post: #10
RE: Small challenge
Pekis
I used the Line-Circle intersect equations to calculate the coordinates of point D & since the other points can all be derived by rotating the line segment thru points O & B to align with the y axis, the rest is simple trigonometry / geometry using the distance formula, etc.
I've attached a Graphmatica screen capture to illustrate the deviation in the two arcs thru pts C-D created by the two segments C-A-D (with central angle @ A) & C-O-D (with central angle @ O). The approximate delta is 0.0235.
[attachment=4916]
BEST!
SlideRule
edit: found and corrected numeric typo - omitted a digit!
06-08-2017, 05:10 AM (This post was last modified: 06-08-2017 05:12 AM by Pekis.)
Post: #11
RE: Small challenge
(06-07-2017 10:53 PM)SlideRule Wrote: Pekis
But there is no arc with central angle at A ! The only existing arc is created by outer circle with radius r and points C and D, with central angle at O, and for which I used the arc height formula.
Another completely different thing is the equilateral triangle ACD ... But I understand it can be confusing
06-08-2017, 12:12 PM (This post was last modified: 06-08-2017 12:13 PM by SlideRule.)
Post: #12
RE: Small challenge
Pekis
Talk about one mistake leading to another - I shouldn't do these challenges after midnight. I was looking for the reason underlying the difference in the magnitude of my solution and made a hasty & incorrect assumption with respect to the calculation of the sagitta, very embarrassing senior moment!
I wonder, is there a third approach to the solution of the problem?
BEST!
SlideRule
06-08-2017, 12:59 PM
Post: #13
RE: Small challenge
06-09-2017, 07:08 AM
Post: #14
RE: Small challenge
Hello,
I also tried to complete the drawing of the equilateral triangle with the surface below the arc (transforming the equilateral triangle into a pie slice).
It's then good to know that:
Arc length=r*2*arcsin(a/2)
Arc height=r*(1-sqrt(4-a^2)/2)
Arc angle span=Arc lengh/r=2*arcsin(a/2)
Arc starting angle (at C)=t+arcsin(a/2)
Arc ending angle (at D)=t-arcsin(a/2)
Thanks to all !
06-09-2017, 01:14 PM (This post was last modified: 06-09-2017 01:35 PM by Vtile.)
Post: #15
RE: Small challenge
(06-08-2017 12:12 PM)SlideRule Wrote: Pekis At least it is nice to know that I'm not the only one writing gibberish at times while surfing at midnight on MoHPC!
I'll throw in another small and easy challenge (especially since I give a picture). You are making cones out of cardboard; what are the measures of the cut-out piece when it is bent into a cone and the dimensions for the cone are as follows?
angle Alpha: 30 deg.
R: 30
Hx: 5
The given values are random.
06-11-2017, 10:59 AM
Post: #16
RE: Small challenge
(06-09-2017 01:14 PM)Vtile Wrote: I'll throw in another small and easy challenge ( especially since I give a picture. ). You are making cones out of cardboard, what are the measures of the cut out piece when it is then bend to cone and the dimensions for the cone are as follows.
From these equations (\(\beta\) in radian):
\(R_2 · \beta=2 · r · \pi\), and
\((R_2+s) · \beta=2 · R · \pi\)
we get:
\(R_2=\frac{\frac{h_x}{\cos\alpha}}{\frac{R}{R-h_x}-1}\), and
\(\beta=\frac{2 · (R-h_x) · \pi}{R_2}\)
All the other simplification is your task
Csaba
06-11-2017, 09:58 PM (This post was last modified: 06-12-2017 08:12 AM by Pekis.)
Post: #17
RE: Small challenge
Hello,
I just don't fully understand the figure in your cone challenge ...
Anyway, I wanted to give an epilogue to my challenge with a generalized formula for an isosceles triangle (with one base and two equal sides) instead of an equilateral one:
let c=Fraction of Outer circle radius left for the isosceles triangle
let p=1-c=Fraction of Outer circle radius for the figure inside the inner circle
let b=Base length of the isosceles triangle
let e=Fraction of the outer circle radius for base length => b=e*r
let d=Side length of the isosceles triangle
let a=Fraction of the outer circle radius for side length => d=a*r
let k=a/e=Ratio between Side and Base of the isosceles triangle
=> a=(sqrt(4*k²-p²)-p*sqrt(4*k²-1))/(2*k)
It's good looking ...
Arc length: r*2*arcsin(a/(2*k))
Arc height: r*(1-sqrt(4-(a/k)²)/2)
Arc Angle span: 2*arcsin(a/(2*k))
Arc Start angle: t+arcsin(a/(2*k))
Arc End angle: t-arcsin(a/(2*k))
And instead of the PI/6 angle in ACE, we now have Angle ACE=arcsin(1/(2*k))
For an equilateral triangle, k=1 and it leads to
a=(sqrt(4-p²)-p*sqrt(3))/2
(same as previous formula a=(sqrt(3)*(c-1)+sqrt((c+1)*(3-c)))/2)
Thanks
User(s) browsing this thread: 1 Guest(s)
|
I have difficulties calculating the area and setting the right boundaries of the following polar coördinates:
$$r=2(1+\cos(\theta))$$
Thanks in advance
The function $$\theta\mapsto r(\theta):=2(1+\cos\theta)$$ does not define an area per se. Now this function is $2\pi$-periodic, and graphing the curve $$\gamma:\quad\theta\mapsto\bigl(x(\theta),y(\theta)\bigr)=r(\theta)\,(\cos\theta,\sin\theta)\qquad(-\pi\leq\theta\leq\pi)$$ we obtain a "loop with an indent" enclosing a certain shape $A$, whereby $A$ is star-shaped with respect to the origin. The area of $A$ then can be calculated with the formula $${\rm area}(A)={1\over2}\int_{-\pi}^{\pi}r^2(\theta)\>d\theta=2\int_{-\pi}^{\pi}\bigl(1+2\cos\theta+\cos^2\theta\bigr)\>d\theta=2\,(2\pi+0+\pi)=6\pi\ .$$
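The value $6\pi$ can also be confirmed numerically by evaluating the polar area integral with a simple quadrature rule (a quick check, not part of the derivation):

```python
from math import cos, pi

# midpoint-rule evaluation of (1/2) * integral of r(theta)^2 over one period
N = 4096
h = 2 * pi / N
area = 0.5 * h * sum((2 * (1 + cos(-pi + (i + 0.5) * h)))**2 for i in range(N))
```

For a smooth periodic integrand the midpoint rule converges extremely fast, so the result matches $6\pi \approx 18.8496$ to machine precision.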
|
I am trying to value an option on N assets, say $S^1, S^2,..., S^N$ that expires in $\Delta T$ years using Monte Carlo simulation. I have read many sources that state I should use the following formula for each asset:
$S_T^i = S_0^i \exp\left( (\mu_i - \sigma_i^2/2)\Delta T + \alpha_i\sigma_i\sqrt{\Delta T}\right)$
Where:
The $i$'s are used to differentiate the different assets. $S_t^i$ denotes the price of asset $S^i$ at time $t$. $(\alpha_1,...,\alpha_N)$ are derived by taking the Cholesky decomposition $LL^*$ of the "correlation matrix" and then applying $L$ to $N$ iid standard normal random variables $(\epsilon_1,...,\epsilon_N)$.
My questions are:
Does the "correlation matrix" represent the correlations between the Assets or of the Asset returns? Does the Cholesky method simply accomplish drawing from multivariate normal distribution with mean $(0,...,0)$ and variance-covariance matrix of the answer to my first question?
Thank you in advance.
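The scheme in the question can be sketched end to end for two assets, where the 2×2 Cholesky factor has a closed form. All parameter values below are made up for illustration, and the correlation is applied to the log-returns (as is standard in this setup):

```python
import random
from math import exp, log, sqrt

random.seed(0)

# hypothetical 2-asset example; all parameter values are made up
rho = 0.6                 # assumed correlation of the log-returns
S0 = [100.0, 50.0]
mu = [0.05, 0.03]
sigma = [0.20, 0.30]
dT = 1.0

# Cholesky factor L of the 2x2 correlation matrix [[1, rho], [rho, 1]]
L = [[1.0, 0.0], [rho, sqrt(1 - rho**2)]]

def terminal_prices():
    eps = [random.gauss(0, 1), random.gauss(0, 1)]   # iid standard normals
    alpha = [L[0][0] * eps[0],                       # alpha = L @ eps
             L[1][0] * eps[0] + L[1][1] * eps[1]]    # correlated normals
    return [S0[i] * exp((mu[i] - sigma[i]**2 / 2) * dT
                        + alpha[i] * sigma[i] * sqrt(dT))
            for i in range(2)]

# sanity check: the sampled log-prices should show correlation close to rho
xs, ys = zip(*((log(s[0]), log(s[1]))
               for s in (terminal_prices() for _ in range(20000))))
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
corr = cov / sqrt(sum((x - mx)**2 for x in xs) * sum((y - my)**2 for y in ys))
```

The empirical correlation of the simulated log-prices recovers the input `rho`, which illustrates that the Cholesky step is indeed just a way of drawing correlated normals.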
|
NTS Abstracts, Spring 2019
Jan 23
Yunqing Tang
Jan 24
Hassan-Mao-Smith--Zhu
The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$
Abstract: Assume a polynomial-time algorithm for factoring integers, a conjecture stated in the paper, $d\geq 3$, and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and } 4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.
March 28
Shamgar Gurevitch Harmonic Analysis on GLn over finite fields Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the "character ratio" $$\mathrm{trace}(\rho(g))/\dim(\rho),$$ for an irreducible representation $\rho$ of $G$ and an element $g$ of $G$. For example, Diaconis and Shahshahani stated a formula of this type for analyzing $G$-biinvariant random walks on $G$. It turns out that, for classical groups $G$ over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant "rank". This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).
|
I propose a small modification of the parametrization for the torus that addresses issues with conformality. Try
F[t_, u_, r_] := {Cos[t] (r + Cos[u + Sin[u]/r]),
Sin[t] (r + Cos[u + Sin[u]/r]),
Sin[u + Sin[u]/r]}
instead. Next, we wish to choose suitable values for $m, n$ for a given $r$ such that the mapping of the regular hexagonal tiling preserves angles as much as possible. We see that this requires us to choose $m, n$ such that $$\frac{\sqrt{3}}{2} \frac{n}{m} = r.$$ As we also require $n$ to be even (or else the tiling does not fit properly on the torus), we can let $n = 2k$ and this gives us $k \sqrt{3} = rm$; thus for a given $r$ we should try to choose $k, m$ as the nearest integers satisfying this equation. This gives us a very nearly angle-preserving tiling. For example, with $r = 2 \sqrt{3}$, we can choose $m = 11$, $n = 44$ to get something that looks like this:
Notice how much more regular the hexagons are throughout the torus: the "inner" ones are not squashed, and the outer ones are not stretched.
Addendum. So, the above seems to work reasonably well for large $r$, but when $r = 1 + \epsilon$ for small $\epsilon$, it doesn't work because the mapping I chose is not truly conformal. I found the relevant information here.
This suggests that the correct form of $f$ should be
F[t_, u_, r_] := {Cos[t], Sin[t], Sin[# u]/#} #^2/(r - Cos[# u]) &[Sqrt[r^2 - 1]]
And whereas $t$ is still plotted on the same interval, we need to plot $u$ on $\left(-\frac{\pi}{\sqrt{r^2-1}}, \frac{\pi}{\sqrt{r^2-1}}\right)$. So we modify the plotting command as well:
P[r_, m_, n_] := Graphics3D[Polygon /@
Table[F[4 Pi/(3 n) (Cos[Pi k/3] + i 3/2),
2 Pi/(Sqrt[3 (r^2 - 1)] m) (Sin[Pi k/3] + (j + i/2) Sqrt[3]),
r], {i, n}, {j, m}, {k, 6}], Boxed -> False]
And now the selection of $m, n$ based on $r$ is also more complicated. $n = 2m \sqrt{\frac{r^2 - 1}{3}}$ seems to give good results. Here is a picture for $r = 1.1$, $m = 30$, $n = 20$:
This solution calculates exact coordinates. However, for 3D printing, machine precision is usually enough, and affords a significant speedup. We can force machine arithmetic by adding dots after some of the constants (e.g. 2 Pi to 2. Pi). We can also achieve a 3× speed-up by only calculating the location of each vertex once, and using GraphicsComplex to share the locations with each hexagon. (This is how 3D formats like .stl work internally. If you need regular polygon objects to process further, just use Normal to eliminate the GraphicsComplex.)
Pfast[r_, m_, n_] :=
Graphics3D[
GraphicsComplex[
Flatten[Table[
F[2. Pi (i + k/3.)/n, Pi (1. + i + 2 j)/m/Sqrt[r^2 - 1.],
r // N], {j, m}, {i, n}, {k, {-1, +1}}], 2],
Polygon[Join @@
Table[Mod[(j - 1) (2 n) + {1, 2, 3 + If[i == n, n (n - 2), 0]}~
Join~({2, 1, If[i == 1, n (2 - n), 0]} + 2 n) + 2 (i - 1),
2 n m, 1], {i, n}, {j, m}]]], Boxed -> False]
The code is almost the same as before, except that we now only need to generate two new coordinates for each cell, so Cos[Pi k/3] only takes on two values and Sin[Pi k/3] only takes on one value, allowing the arithmetic to be simplified considerably. We don't need to change F; it's already extremely fast due to the two-stage calculation it does to avoid recomputing the expensive square root multiple times.
We can do a timing and memory usage comparison of the two versions:
ByteCount[Pfast[2, 50, 100]] // Timing
(* {0.343750, 1440448} *)
ByteCount[P[2, 50, 100]] // Timing
(* {5.921875, 60849648} *)
The numerical version is around 20 times faster and gives a result 40 times smaller. It's actually now fast enough to quickly make a nice table of tori with different parameters:
GraphicsGrid[
ParallelTable[
With[{n = 2 Round[m Sqrt[(r^2 - 1)/3]]},
Show[Pfast[r, m, n], PlotLabel -> {r, m, n}]], {r, {1.1, 1.5, 2, 3,
5}}, {m, {6, 10, 15, 20, 30, 50}}], ImageSize -> Full]
|
1) A $4\times4$ square matrix has distinct eigenvalues $\{0, 1, 2, 3\}$. What is its rank?
2) Let $a,b\in\mathbb{R}^n$ be two non-zero linearly independent vectors, and let $\alpha,\beta\in\mathbb{R}$ be two non-zero scalars.
i) What is the rank of the matrix $M = \begin{bmatrix}a&\alpha a&b&\beta b\end{bmatrix}$?
ii) Can you name two linearly independent non-zero vectors $x_1, x_2\in\mathbb{R}^4$ in the null space of $M$? (i.e., $Mx_1 = Mx_2 = 0$)
For question 1, is the answer $3$? It seems that the rank will correspond to the number of non-zero eigenvalues.
For question 2 i), is the answer $n$? Also, what exactly is the null space? I would be grateful if someone could help.
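For intuition, the setup in question 2 can be experimented with numerically; here is a sketch with hypothetical choices of $a$, $b$, $\alpha$, $\beta$ (in $\mathbb{R}^4$, so both the rank and the null-space vectors can be checked directly):

```python
import numpy as np

# Hypothetical choices for a, b, alpha, beta (n = 4)
a = np.array([1.0, 2.0, 0.0, 1.0])
b = np.array([0.0, 1.0, 3.0, -1.0])
alpha, beta = 2.0, -3.0

M = np.column_stack([a, alpha * a, b, beta * b])
print(np.linalg.matrix_rank(M))  # 2: columns 2 and 4 are multiples of columns 1 and 3

# Two independent null-space vectors, read off from the column relations:
x1 = np.array([alpha, -1.0, 0.0, 0.0])   # alpha*(col 1) - (col 2) = 0
x2 = np.array([0.0, 0.0, beta, -1.0])    # beta*(col 3) - (col 4) = 0
print(M @ x1, M @ x2)  # both are the zero vector
```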
|
Lebesgue integral
The most important generalization of the concept of an integral. Let $(X,\mu)$ be a space with a non-negative complete countably-additive measure $\mu$ (cf. Countably-additive set function; Measure space), where $\mu(X)<\infty$. A simple function is a measurable function $g:X\to\mathbb R$ that takes at most a countable set of values: $g(x)=y_n$, $y_n\ne y_k$ for $n\ne k$, if $x\in X_n$, $\bigcup\limits_{n=1}^{\infty}X_n=X$. A simple function $g$ is said to be summable if the series \begin{equation} \sum\limits_{n=1}^{\infty}y_n\mu(X_n) \end{equation} converges absolutely (cf. Absolutely convergent series); the sum of this series is the Lebesgue integral \begin{equation} \int\limits_Xg\,d\mu. \end{equation} A function $f:X\to\mathbb R$ is summable on $X$ ($f\in L_1(X,\mu)$) if there is a sequence of simple summable functions $g_n$ uniformly convergent (cf. Uniform convergence) to $f$ on a set of full measure, and if the limit
$$\lim_{n\to\infty}\int\limits_X g_n\,d\mu$$
is finite. This number is the Lebesgue integral
$$\int\limits_X f\,d\mu.$$
This is well-defined: the limit exists and does not depend on the choice of the sequence $g_n$. If $f\in L_1(X,\mu)$, then $f$ is a measurable almost-everywhere finite function on $X$. The Lebesgue integral is a linear non-negative functional on $L_1(X,\mu)$ with the following properties:
1) if $f\in L_1(X,\mu)$ and if
$$\mu\{x: f(x)\ne g(x)\}=0,$$
then $g\in L_1(X,\mu)$ and
$$\int\limits_X f\,d\mu=\int\limits_X g\,d\mu;$$
2) if $f\in L_1(X,\mu)$, then $|f|\in L_1(X,\mu)$ and
$$\bigg|\int\limits_X f\,d\mu\bigg|\le\int\limits_X|f|\,d\mu;$$
3) if $f\in L_1(X,\mu)$, $|g|\le f$, and $g$ is measurable, then $g\in L_1(X,\mu)$ and
$$\bigg|\int\limits_X g\,d\mu\bigg|\le\int\limits_X f\,d\mu;$$
4) if $m\le f\le M$ and $f$ is measurable, then $f\in L_1(X,\mu)$ and
$$m\mu(X)\le\int\limits_X f\,d\mu\le M\mu(X).$$
In the case when $\mu(X)=\infty$ and $X=\bigcup_{n=1}^\infty X_n$, $\mu(X_n)<\infty$, the Lebesgue integral is defined as
$$\int\limits_X f\,d\mu=\lim_{n\to\infty}\int\limits_{X_n}f\,d\mu,$$
under the condition that this limit exists and is finite for any sequence $\{X_n\}$ such that $X_n\subset X_{n+1}$, $\bigcup_{n=1}^\infty X_n=X$, $\mu(X_n)<\infty$. In this case the properties 1), 2), 3) are preserved, but condition 4) is violated.
For the transition to the limit under the Lebesgue integral sign see Lebesgue theorem.
If $E$ is a measurable set in $X$, then the Lebesgue integral
$$\int\limits_E f\,d\mu$$
is defined either as above, by replacing $X$ by $E$, or as
$$\int\limits_X f\chi_E\,d\mu,$$
where $\chi_E$ is the characteristic function of $E$; these definitions are equivalent. If $f\in L_1(X,\mu)$, then $f\in L_1(E,\mu)$ for any measurable $E\subset X$. If
$$X=\bigcup_{n=1}^\infty X_n,$$
if $X_n$ is measurable for every $n$, if $X_n\cap X_k=\emptyset$ for $n\ne k$, and if $f\in L_1(X,\mu)$, then
$$\int\limits_X f\,d\mu=\sum_{n=1}^\infty\int\limits_{X_n}f\,d\mu.$$
Conversely, if under these conditions on $X_n$ one has $f\in L_1(X_n,\mu)$ for every $n$ and if
$$\sum_{n=1}^\infty\int\limits_{X_n}|f|\,d\mu<\infty,$$
then $f\in L_1(X,\mu)$ and the previous equality is true ($\sigma$-additivity of the Lebesgue integral).
The function of sets given by
$$F(E)=\int\limits_E f\,d\mu$$
is absolutely continuous with respect to $\mu$ (cf. Absolute continuity); if $f\ge0$, then $F$ is a non-negative measure that is absolutely continuous with respect to $\mu$. The converse assertion is the Radon–Nikodým theorem.
For functions $f:[a,b]\to\mathbb R$ the name "Lebesgue integral" is applied to the corresponding functional if the measure $\mu$ is the Lebesgue measure; here, the set of summable functions is denoted simply by $L_1[a,b]$, and the integral by
$$\int\limits_a^b f(x)\,dx.$$
For other measures this functional is called a Lebesgue–Stieltjes integral.
If $f\in L_1[a,b]$, and if $F$ is a non-decreasing absolutely continuous function, then
$$\int\limits_a^b f\,dF=\int\limits_a^b f(x)F'(x)\,dx.$$
If $f\in L_1[a,b]$, and if $g$ is monotone on $[a,b]$, then $fg\in L_1[a,b]$ and there is a point $\xi\in[a,b]$ such that
$$\int\limits_a^b f(x)g(x)\,dx=g(a)\int\limits_a^\xi f(x)\,dx+g(b)\int\limits_\xi^b f(x)\,dx$$
(the second mean-value theorem).
In 1902 H. Lebesgue gave (see [Le]) a definition of the integral for $X=[a,b]$ and measure equal to the Lebesgue measure. He constructed simple functions that uniformly approximate almost-everywhere on a set of finite measure a measurable non-negative function $f$, and proved the existence of a common limit (finite or infinite) of the integrals of these simple functions as they tend to $f$. The Lebesgue integral is a basis for various generalizations of the concept of an integral. As N.N. Luzin remarked [Lu], property 2), called absolute integrability, distinguishes the Lebesgue integral for $f\in L_1[a,b]$ from all possible generalized integrals.
References
[Le] H. Lebesgue, "Leçons sur l'intégration et la recherche des fonctions primitives", Gauthier-Villars (1928)
[Lu] N.N. Luzin, "The integral and trigonometric series", Moscow-Leningrad (1915) (in Russian) (Thesis; also: Collected Works, Vol. 1, Moscow, 1953, pp. 48–212)
[KF] A.N. Kolmogorov, S.V. Fomin, "Elements of the theory of functions and functional analysis", 1–2, Graylock (1957–1961) (translated from Russian)

Comments
For other generalizations of the notion of an integral see $A$-integral; Bochner integral; Boks integral; Burkill integral; Daniell integral; Darboux sum; Denjoy integral; Kolmogorov integral; Perron integral; Perron–Stieltjes integral; Pettis integral; Radon integral; Stieltjes integral; Strong integral; Wiener integral. See also, of course, Riemann integral. See also Double integral; Improper integral; Fubini theorem (on changing the order of integration).
References
[H] P.R. Halmos, "Measure theory", Van Nostrand (1950)
[P] I.N. Pesin, "Classical and modern integration theories", Acad. Press (1970) (translated from Russian)
[S] S. Saks, "Theory of the integral", Hafner (1952) (translated from French)
[Ro] H.L. Royden, "Real analysis", Macmillan (1968)
[Ru] W. Rudin, "Real and complex analysis", McGraw-Hill (1978)
[HS] E. Hewitt, K.R. Stromberg, "Real and abstract analysis", Springer (1965)

How to Cite This Entry:
Lebesgue integral. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Lebesgue_integral&oldid=29351
|
Consider the series $\displaystyle{ \sum_{n=1}^{\infty} \left[ \frac{\sin \left( \frac{n^2+1}{n}x\right)}{\sqrt{n}}\left( 1+\frac{1}{n}\right)^n\right]}$ . Find all points at which the series is convergent. Find all intervals that the series is uniformly convergent.
I know that I need to use Dirichlet and/or Abel Criterion to show this.
My first attempt was to consider the argument of the sum as product of three sequences of functions, $f_n,g_n,h_n$ and perform Dirichlet/Abel twice. Towards that end I tried to break up $ \sin \left( \frac{n^2+1}{n}x\right)$ using $\sin(\alpha +\beta)=\sin \alpha \cos \beta \,+\, \sin \beta \cos \alpha $ to produce
$\sin \left(nx +\frac{x}{n} \right)=\sin(nx)\cos\left(\frac{x}{n}\right)+\sin\left(\frac{x}{n}\right)\cos (nx)$ and use the fact that the partial sums satisfy $\Big|\sum_{k=1}^{n}\sin(kx)\Big|\leq \frac{1}{\big|\sin\left(\frac{x}{2}\right)\big|}$, but then I don't know what to do with the $\sin\left(\frac{x}{n}\right)$, $\cos\left(\frac{x}{n}\right)$, and $\cos(nx)$ factors. Previously, when using this method, we showed uniform convergence on compact intervals like $[2k\pi+\varepsilon,2(k+1)\pi-\varepsilon]$.
Can I just say $ \frac{\sin \left( \frac{n^2+1}{n}x\right)}{\sqrt{n}}\leq \frac{1}{\sqrt{n}}\to 0$ as $n\to \infty$ ?
Also, I know that $\lim_{n\to \infty}\left( 1+\frac{1}{n}\right)^n=e$.
i.e. Help :) Thank you in advance
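As an aside, the partial-sum bound behind Dirichlet's test, $\big|\sum_{k=1}^n \sin(kx)\big| \le 1/|\sin(x/2)|$, is easy to sanity-check numerically; a quick sketch:

```python
import math

def partial_sums_bound_holds(x, N=200):
    """Check |sum_{k=1}^n sin(kx)| <= 1/|sin(x/2)| for all n <= N."""
    bound = 1.0 / abs(math.sin(x / 2))
    s, ok = 0.0, True
    for k in range(1, N + 1):
        s += math.sin(k * x)
        ok = ok and abs(s) <= bound + 1e-12  # small slack for round-off
    return ok

# The bound holds for any x that is not a multiple of 2*pi
print(all(partial_sums_bound_holds(x) for x in [0.3, 1.0, 2.5, 5.9]))  # True
```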
|
I have a proper rotation transformation between coordinate axes $\{X, Y, Z\}$ and $\{X^\prime, Y^\prime, Z^\prime\}$. What I am given are three angles, all of which have vertex at the origin:
Let the line of intersection between the $XY$ plane and the $X^\prime Y^\prime$ plane be OA; then I am given that the angle between the $X$ axis and OA is $\alpha$.
Let the line of intersection between the $YZ$ plane and the $Y^\prime Z^\prime$ plane be OB; then I am given that the angle between the $Y$ axis and OB is $\beta$.
Let the line of intersection between the $ZX$ plane and the $Z^\prime X^\prime$ plane be OC; then I am given that the angle between the $Z$ axis and OC is $\kappa$.
I need to find either the rotation matrix, or the expression of the rotation in terms of Euler angles or in terms of Tait-Bryan angles, as a function of $(\alpha, \beta, \kappa)$.
The three quantities should be sufficient to specify the rotation but I'm having a lot of trouble finding an expression that is not horribly ugly.
thanks in advance.
|
Definition
When a complex number $z=x+iy$ is thought of as a vector in two dimensions, the $X$ coordinate $x$ and the $Y$ coordinate $y$ can be expressed in terms of the length $r$ of the vector and the angle $\theta$ made by this vector with the positive $X$-axis. Since $x = r \cos \theta$ and $y = r \sin \theta$, $z$ can be expressed as
$$z = x + iy = r(\cos\theta + i\sin\theta), \tag{1}$$
where $\theta$ can be in degrees or radians (usually radians); recall that $2 \pi \mbox{ rad } = 360\,^\circ$. $r$ is called the magnitude of $z$, denoted by $|z|$, and $\theta$ is called the phase of the complex number $z$, denoted by $\arg z$ or $\angle z$.
Using Euler's identity $e^{i\theta}=\cos\theta+i\sin\theta$, $z$ can be written as
$$z = re^{i\theta}. \tag{2}$$
This is known as the polar form or exponential form, and it is very important to be able to convert a complex number from cartesian form to exponential form and vice versa. It is easy to see that $x$, $y$, $r$ and $\theta$ are related according to
$$r=\sqrt{x^2+y^2},\qquad \tan\theta=\frac{y}{x},\qquad x=r\cos\theta,\qquad y=r\sin\theta. \tag{3}$$
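These conversions map directly onto Python's standard cmath module; a small illustrative sketch:

```python
import cmath
import math

z = 3 + 4j
r, theta = abs(z), cmath.phase(z)    # magnitude |z| and phase arg(z), in radians
print(r)                              # 5.0

# polar -> cartesian, two equivalent ways
x, y = r * math.cos(theta), r * math.sin(theta)
back = cmath.rect(r, theta)           # r * e^{i*theta}
print(abs(back - z) < 1e-12)          # True: the round trip recovers z
```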
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
|
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T}\rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
|
Consider the problem of finding a minimal volume-covering ellipsoid:
$$\begin{array}{ll} \text{minimize} & \log \det X^{-1}\\ \text{subject to} & a_i^T X a_i \le 1\\ & X \succeq 0\end{array}$$
I have two questions:
Can the problem above be written as a semidefinite program (SDP)?
If the answer to question 1 is yes, how to write it to standard form of SDP as follows:
$$\begin{array}{ll} \text{minimize} & \operatorname{tr}(CX)\\ \text{subject to} & \operatorname{tr}(A_iX)=b_i, \quad i=1,2,\dots,p\\ & X \succeq 0\end{array}$$ In fact, I can confirm that it is a convex program, because $\log \det X^{-1}$ is a convex function and $a_i^TXa_i \le 1 \iff \operatorname{tr}(a_ia_i^TX) \le 1$ is a linear constraint for $X \in S^n_{++}$.
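The identity $a_i^T X a_i = \operatorname{tr}(a_i a_i^T X)$, which is what makes the constraint linear in $X$, is easy to verify numerically; a quick sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a = rng.standard_normal(n)
B = rng.standard_normal((n, n))
X = B @ B.T + n * np.eye(n)           # a symmetric positive-definite X

lhs = a @ X @ a                        # a^T X a  (a scalar)
rhs = np.trace(np.outer(a, a) @ X)     # tr(a a^T X)
print(abs(lhs - rhs) < 1e-9)           # True: the quadratic form is linear in X
```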
|
Let's back up a little bit and provide a comprehensive answer to these types of problems.
Suppose $u(x,t)$ solves \begin{align}u_t&=u_{xx}, \qquad 0 < x < \ell,\ t>0,\\u(0,t)&=f(t),\\u(\ell, t)&=g(t),\\u(x,0)&=h(x).\end{align}In the subsequent work, we will impose whatever smoothness conditions on the initial and boundary data we need to get convergence of the involved series.
First, standard separation of variables shows that the solution to the problem with homogeneous BCs is $$u(x,t)=\sum_{n=1}^\infty b_n\sin(\sqrt{\lambda_n}\,x)e^{-\lambda_n t},$$ where $\lambda_n=(n\pi/\ell)^2$, $n=1,2,\dots$ In other words, for each fixed $t>0$,$$u(x,t)=\sum_{n=1}^\infty u_n(t)\sin(\sqrt{\lambda_n}\,x)\quad\text{where}\quad u_n(t)={2\over \ell}\int_0^\ell u(x,t)\sin(\sqrt{\lambda_n}\,x)\,dx.$$ (This is a key observation. I hope the notation isn't confusing: $u_n$ represents the coefficients in the series for $u$,
not a partial derivative.)
Then, differentiating the series above, define $v_n(t)$ and $w_n(t)$ as the coefficients in the series for ${\partial u\over \partial t}$ and ${\partial^2 u\over \partial x^2}$, respectively:
\begin{align}{\partial u\over \partial t}&=\sum_{n=1}^\infty v_n(t)\sin(\sqrt{\lambda_n}\,x)\quad\text{where}\quad v_n={2\over \ell}\int_0^\ell {\partial u\over \partial t}\sin(\sqrt{\lambda_n}\,x)\,dx={du_n\over dt},\\{\partial^2 u\over \partial x^2}&=\sum_{n=1}^\infty w_n(t)\sin(\sqrt{\lambda_n}\,x)\quad\text{where}\quad w_n={2\over \ell}\int_0^\ell {\partial^2 u\over \partial x^2}\sin(\sqrt{\lambda_n}\,x)\,dx.\end{align}
Integrating the $w_n(t)$ by parts, simplifying the trig terms, and applying the BCs, we get\begin{align}w_n(t)&=-{2\over \ell}\int_0^{\ell} \lambda_n u(x,t)\sin(\sqrt{\lambda_n}\,x)\,dx\\&\qquad\qquad+{2\over \ell}\left[u_x(x,t)\sin(\sqrt{\lambda_n}\,x)-\sqrt{\lambda_n}\,u(x,t)\cos(\sqrt{\lambda_n}\,x)\right]\Bigg|_{x=0}^{x=\ell}\\&=-\lambda_nu_n(t)+\underbrace{{2\sqrt{\lambda_n}\over \ell}\left[f(t)+(-1)^{n+1}g(t)\right]}_{F(t)}.\end{align}
From the PDE, $$u_t=u_{xx}\implies v_n(t)=w_n(t)\implies {du_n\over dt}=-\lambda_n u_n(t)+F(t),$$ and thus the coefficients $u_n(t)$ we seek are found by solving the (ODE!) initial-value problem\begin{align}{du_n\over dt}+\lambda_nu_n(t)&=F(t),\\u_n(0)&={2\over \ell}\int_0^\ell h(x)\sin(\sqrt{\lambda_n}\,x)\,dx,\end{align}by the method of your choice.
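As a sanity check on this reduction (my own sketch, not part of the original derivation), take the hypothetical boundary data $f(t)=1$, $g(t)=0$ with $\ell=1$. Then $F$ is the constant $2\sqrt{\lambda_n}/\ell$, the ODE settles at $u_n \to F/\lambda_n = 2/(n\pi)$, and these limits should be exactly the Fourier sine coefficients of the known steady state $1-x/\ell$:

```python
import math

l = 1.0  # interval length (hypothetical test case)

# With f(t) = 1, g(t) = 0, the ODE du_n/dt + lambda_n*u_n = F has constant
# F = 2*sqrt(lambda_n)/l, hence the steady-state coefficient F/lambda_n:
def steady_coeff(n):
    lam = (n * math.pi / l) ** 2
    return (2 * math.sqrt(lam) / l) / lam

# Fourier sine coefficients of the steady state 1 - x/l, by midpoint quadrature
def fourier_coeff(n, steps=20_000):
    h = l / steps
    return (2 / l) * h * sum((1 - (i + 0.5) * h / l)
                             * math.sin(n * math.pi * (i + 0.5) * h / l)
                             for i in range(steps))

print([round(steady_coeff(n), 6) for n in (1, 2, 3)])
print([round(fourier_coeff(n), 6) for n in (1, 2, 3)])  # matches 2/(n*pi)
```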
This is called the method of eigenfunction expansions. Transform methods are also available, but that is a separate post.
Hope that helps.
|
The sequences which are Cauchy for every compatible metric on $X$ are exactly the (topologically) convergent sequences. Every convergent sequence is easily seen to be Cauchy for every compatible metric, so we're asking about the converse.
So let $(x_n)$ be a sequence in a metrizable space $X$ which is not convergent. Pick a metric $\delta$ on $X$. If $(x_n)$ is not $\delta$-Cauchy, we're done.
Otherwise, if $(x_n)$ is $\delta$-Cauchy, let $\overline{X}$ be the completion of $X$ with respect to $\delta$. Now $(x_n)$ converges to a unique limit point $x\in \overline{X}$, and $\overline{X}$ is completely metrizable. Let $Y = \overline{X}\setminus \{x\}$. We have $X\subseteq Y\subseteq \overline{X}$, and $Y$ is an open subset of $\overline{X}$. It is a theorem that a subspace of a completely metrizable space is completely metrizable if and only if it is $G_\delta$. In particular, there is a compatible complete metric $\delta'$ on $Y$. But $(x_n)$ is not convergent in $Y$, so it is not $\delta'$-Cauchy. The restriction of $\delta'$ to $X$ is a compatible metric in which $(x_n)$ is not Cauchy.
Being a logician, the first reference I can point to for the theorem about $G_\delta$ subspaces is Classical Descriptive Set Theory by Kechris, Theorem I(3.11). But in the case of just removing a single point $y$, it's not hard to write down an explicit $\delta'$ that works: $$\delta'(a,b) = \delta(a,b)+\left|\frac{1}{\delta(a,y)} - \frac{1}{\delta(b,y)}\right|.$$
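To see concretely how a metric of this form kills Cauchyness, here is a small numeric sketch with $X=(0,1]$, $\delta(a,b)=|a-b|$, and removed point $y=0$ (so $x_n=1/n$ is $\delta$-Cauchy but not convergent in $X$):

```python
# delta'(a, b) = |a - b| + |1/|a - y| - 1/|b - y||, with y = 0 removed from [0, 1]
def delta_prime(a, b, y=0.0):
    d = abs(a - b)
    return d + abs(1.0 / abs(a - y) - 1.0 / abs(b - y))

print(abs(1 / 100 - 1 / 101))          # tiny: (1/n) is Cauchy for the usual metric
print(delta_prime(1 / 100, 1 / 101))   # > 1: consecutive terms stay far apart in delta'
```

The correction term records how fast a point approaches the deleted point $y$, so any sequence sneaking up on $y$ has consecutive $\delta'$-distances bounded away from $0$.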
|
This answer focuses on identifying families of solutions to the problem described in the question.
I've made two provisional conjectures in order to make progress with the problem:
The result can be stated for three $2n$-gons rather than two $n$-gons and one $2n$-gon.
Solutions have mirror symmetry. Or equivalently, in any solution there are two pairs of $2n$-gons which have the same degree of overlap. [This turns out to be false - see 'Solution family 5' below. However, this condition is assumed in Solution families 1-4.]
[Continuation 6: in an overhaul of the notation I've halved $\phi$ and doubled $m$ so that $m$ is always an integer.]
If we define the degree of overlap, $j$, between two $2n$-gons $(n>3)$ as the number of edges of one that lie wholly inside the other, then $1 < j < n$.
If $$\phi = \frac{\pi}{2n}$$is half the angle subtended at the centre of the $2n$-gon by one of its edges, then the distance between the centres of two overlapping $2n$-gons is $$D_{jn} = 2\cos{j\phi}$$Consider a $2n$-gon P which overlaps a $2n$-gon O with degree $j$. Now bring in a third $2n$-gon, Q, which also overlaps O with degree $j$ but is rotated about the centre of O by an angle $m\phi$ with respect to P, where $m$ is an integer.
The distance between the centres of P and Q, which I'll denote by $D_{kn}$ for a reason that will become apparent, is$$D_{kn} = 2D_{jn}\sin{\tfrac{m}{2}\phi} = 4\cos{j\phi} \, \sin{\tfrac{m}{2}\phi}$$
We now demand that P and Q should overlap by an integer degree, $k$, so that$$D_{kn} = 2\cos{k\phi}$$This will ensure that all points of intersection coincide with vertices of the intersecting polygons, and thus provide a configuration satisfying the requirements of the question (with the proviso that the condition does not guarantee that there is a common area of overlap shared by all three polygons).
We have omitted mention of the orientation of the polygons, but it is easily shown that this is always such as to achieve the desired overlap.
Combining the two expressions for $D_{kn}$ gives the condition
$$2\cos{j\phi}\, \sin{\tfrac{m}{2}\phi} = \cos{k\phi}$$or (since $n\phi=\pi/2$)$$2\cos{j\phi}\, \cos{(n-\tfrac{m}{2})\phi} = \cos{k\phi} \tag{1}$$
The configurations we seek are solutions of this equation for integer $n$, $j$, $k$ and $m$.
In the first example in the question $n = 12, j = 8, k = 6, m = 12$.
In the second example $n = 15, j = 6, k = 10, m = 6$.
[Continuation 6: for solutions under the constraint of conjecture 2, $m$ is always even, but in the more general case $m$ may be odd.]
I'll now throw this open to see if anyone can provide a general solution. It seems likely that $j$, $k$ and $m/2$ must be divisors of $2n$ [this turns out to be incorrect], and I have a hunch that the solution will involve cyclotomic polynomials [this turns out to be correct].
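Before the continuations below, equation (1) can also be explored by brute force; the following sketch (my own, not part of the original analysis) enumerates integer solutions for a given $n$ and recovers the two examples quoted above:

```python
import math

def solutions(n, tol=1e-9):
    """Integer triples (j, k, m) with 1 < j, k < n and 1 <= m < 2n satisfying
       2*cos(j*phi)*cos((n - m/2)*phi) = cos(k*phi), where phi = pi/(2n)."""
    phi = math.pi / (2 * n)
    found = []
    for j in range(2, n):
        for k in range(2, n):
            for m in range(1, 2 * n):
                lhs = 2 * math.cos(j * phi) * math.cos((n - m / 2) * phi)
                if abs(lhs - math.cos(k * phi)) < tol:
                    found.append((j, k, m))
    return found

print((8, 6, 12) in solutions(12))   # True: first example, n = 12
print((6, 10, 6) in solutions(15))   # True: second example, n = 15
```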
Continuation (1)
I've now identified 3 families of solutions consistent with conjecture 2 (mirror symmetry), all involving angles of 60 degrees. There may be others.
Solution family 1
This family is defined by setting $j=2n/3$. This means that half the angle subtended at the centre of O by its overlapping edges is $\tfrac{\pi}{3}$ radians or 60 degrees. Since $\cos{\tfrac{\pi}{3}} = \tfrac{1}{2}$ it reduces equation 1 to$$\cos{(n-\tfrac{m}{2})\phi} = \cos{k\phi}$$so there are solutions with$$n-\tfrac{m}{2} = k$$(where $\tfrac{m}{2}$ is an integer) subject to $2 \le k \le n-1\,\,$, $1 \le \tfrac{m}{2} \le n-2\,\,$ and $3|n$.
The first example in the question belongs to this family. The complete set of solutions for $n=12$ combine to make this pleasing diagram:
Solution family 2
This family has $m=2n/3$. This makes $\cos{(n-\tfrac{m}{2})\phi}=\cos{(\pi/3)} = \tfrac{1}{2}$, which reduces equation 1 to$$\cos{j\phi} = \cos{k\phi}$$so (given that $j<n$ and $k<n$)$$j = k$$These solutions have threefold rotational symmetry. The only restriction is that $n$ must be divisible by 3. Example ($n=9, j=k=4, m=6$):
Solution family 3
This family is the most interesting of the three, but yields only one solution. It is defined by setting $k=2n/3$ so that $\cos{k\phi}=\cos{\tfrac{\pi}{3}} = \tfrac{1}{2}$. Equation 1 then becomes
$$2\cos{j\phi}\,\cos{(n-\tfrac{m}{2})\phi} = \tfrac{1}{2}$$which may be written in the following equivalent forms:$$\cos{(n+\tfrac{m}{2}-j)\phi} + \cos{(n+\tfrac{m}{2}+j)\phi} = -\tfrac{1}{2} \tag{2}$$$$\cos{(n-\tfrac{m}{2}-j)\phi} + \cos{(n-\tfrac{m}{2}+j)\phi} = \tfrac{1}{2} \tag{3}$$Solutions to these equations can be found using the following theorem relating the roots $z_i(N)$ of the $N$th cyclotomic polynomial to the Möbius function $\mu(N)$:
$$\sum_{i=1}^{\varphi(N)} {z_i(N)} = \mu(N)$$where $\varphi(N)$ is the Euler totient function (the number of positive integers less than $N$ that are relatively prime to $N$) and $z_i(N)$ are a subset of the $N$th roots of unity.Taking the real part of both sides and using symmetry this becomes:$$\sum_{i=1}^{\varphi(N)/2} { \cos{(p_i(N) \frac{2\pi}{N})} } = \tfrac{1}{2} \mu(N) \tag{4}$$where $p_i(N)$ is the $i$th integer which is coprime with $N$.
The Möbius function $\mu(N)$ takes values as follows:
$\mu(N) = 1$ if $N$ is a square-free positive integer with an even number of prime factors.
$\mu(N) = −1$ if $N$ is a square-free positive integer with an odd number of prime factors.
$\mu(N) = 0$ if $N$ has a squared prime factor.
Equation 4 thus provides solutions to equations 2 and 3 if $\varphi(N) = 4$, $\mu(N)$ has the appropriate sign and the cosine arguments are matched.
The first two conditions are true for only two integers:
$N=5$, with $\mu(5)=-1$, $p_1(5) = 1, p_2(5) = 2$
$N=10$, with $\mu(10)=1$, $p_1(10) = 1, p_2(10) = 3$.
We first set $N=5$ and look for solutions to equation 2.
Matching the cosine arguments requires firstly that$$2j \frac{\pi}{2n} = (p_2(5)-p_1(5))\frac{2\pi}{5}$$from which it follows that$$5j = 2n$$
$n$ must be divisible by 3 to satisfy $k=2n/3$, so the smallest value of $n$ for which solutions are possible is $n=15$, with $k=10$ and $j=6$. All other solutions will be multiples of this one.Matching the cosine arguments also requires that$$(n+\tfrac{m}{2}-j) \frac{\pi}{2n} = p_1(5) \frac{2\pi}{5}$$which implies $m=6$.
This is the solution illustrated by the second example in the question.
Setting $N=10$ and looking for solutions to equation 3 yields the same solution.
Continuation (2)
Solution family 4
A fourth family of solutions can be obtained by writing equation 1 as
$$\cos{(n+\tfrac{m}{2}-j)\phi} + \cos{(n+\tfrac{m}{2}+j)\phi} + \cos{k\phi} = 0 \tag{5}$$
and viewing this as an instance of equation 4 with $\varphi(N)/2 = 3$ and $\mu(N) = 0$. There are two values of N which satisfy these conditions, $N = 9$ and $N = 18$, which lead to three solutions:
For $N = 9$:$$n=9, j=6, k=8, m=2\\n=9, j=4, k=4, m=6$$
For $N=18$:$$n=9, j=2, k=2, m=6$$
However, these are not new solutions. The first is a member of family 1 and the last two are members of family 2.
Continuation (3)
Solution family 5
Rotating a $2n$-gon about a vertex by an angle $m\phi$ moves its centre by a distance $$2\sin{ \tfrac{m}{2}\phi} = 2\cos{(n-\tfrac{m}{2})\phi} = D_{n-m/2,n}.$$If $m$ is even the rotated $2n$-gon thus overlaps the original $2n$-gon with integer degree $n-\tfrac{m}{2}$, and a third $2n$-gon with a different $m$ may overlap both of these, providing another type of solution to the problem.
Solutions of this kind may be constructed for all $n \ge 3$. The diagram below includes the complete set of such solutions for $n=5$. A similar diagram with $n=12$ (but with a centrally placed $2n$-gon of the same size which can only be added when $3|n$) is shown above under Solution family 1.
This family of solutions provides exceptions to conjecture 2: not all groups of three $2n$-gons overlapping in this way show mirror symmetry.
Continuation (4)
If we relax the condition set by conjecture 2, allowing solutions without mirror symmetry, we need an additional parameter, $l$, to specify the degree of overlap between O and P (which is now no longer $j$).
The distances between the centres of the three $2n$-gons are now related by the cosine rule:
$$D_{kn}^2 = D_{jn}^2 + D_{ln}^2 - 2 D_{jn}D_{ln}\cos{m_k\phi},$$where a subscript $k$ has been added to $m$ to acknowledge the fact that $j$, $l$ and $k$ can be cycled to generate three equations of this form. These can be written$$\\ \cos^2{J} + \cos^2{L} - 2 \cos{J} \cos{L} \cos{M_k} = \cos^2{K} \\ \cos^2{K} + \cos^2{J} - 2 \cos{K} \cos{J} \cos{M_l} = \cos^2{L} \\ \cos^2{L} + \cos^2{K} - 2 \cos{L} \cos{K} \cos{M_j} = \cos^2{J} $$where$$J = j\phi,\, L = l\phi,\, K = k\phi,\\M_j = m_j\phi,\, M_l = m_l\phi,\, M_k = m_k\phi$$
The same result in a slightly different form is derived in the answer provided by @marco trevi.
$M_j$, $M_l$ and $M_k$ are the angles of the triangle formed by the centres of the three polygons. Since these sum to $\pi$ we have$$m_j + m_l + m_k = 2n$$
The sine rule gives another set of relations:$$\frac{\cos{J}}{\sin{M_j}} = \frac{\cos{L}} {\sin{M_l}} = \frac{\cos{K}}{\sin{M_k}} $$
In general the $m$ parameters are limited to integer values (as can be seen by considering the symmetry of the overlap between a $2n$-gon and each of its two neighbours). But they are now not necessarily even.
|
These symbols are like arrow, except the arrow heads are inverted. Some ASCII art: >-- or --<
Could you help?
I couldn't find any existing symbols. But one option would be to use tikz to draw them:
\documentclass{article}
\usepackage{amsmath}
\usepackage{tikz}
\newcommand{\toReversed}{\mathrel{\tikz [x=1.4ex,y=1.4ex,line width=.14ex, baseline]
  {\draw (0,0.5) -- (1,0.5); \draw [<-] (0.95,0.5) -- (1,0.5);}}}%
\newcommand{\fromReversed}{\mathrel{\tikz [x=1.4ex,y=1.4ex,line width=.14ex, baseline]
  {\draw (0,0.5) -- (1,0.5); \draw [<-] (0.05,0.5) -- (0,0.5);}}}%
\begin{document}
$A \toReversed B \fromReversed C$
\end{document}
They are \righttail and \lefttail with unicode-math, U+291A and U+2919. No math font for pdflatex provides them, as far as I know, except for the STIX fonts.
If you have them, you can import the symbols; not really compatible with the standard arrows, but not too distant either.
\documentclass{article}
%% from stix.sty (with slight changes)
\DeclareFontEncoding{LS1}{}{}
\DeclareFontSubstitution{LS1}{stix}{m}{n}
\DeclareSymbolFont{arrows2}{LS1}{stixsf}{m}{it}
\SetSymbolFont{arrows2}{bold}{LS1}{stixsf}{b}{it}
\DeclareMathSymbol{\lefttail}    {\mathrel}{arrows2}{"B2}
\DeclareMathSymbol{\righttail}   {\mathrel}{arrows2}{"B3}
\DeclareMathSymbol{\leftdbltail} {\mathrel}{arrows2}{"B4}
\DeclareMathSymbol{\rightdbltail}{\mathrel}{arrows2}{"B5}
\begin{document}
$A\lefttail B \righttail C \rightarrow D$
$A\leftdbltail B \rightdbltail C$
\end{document}
With
unicode-math
\documentclass{article}
\usepackage{unicode-math}
\setmathfont{Latin Modern Math}
\setmathfont[range={\lefttail,\righttail,\leftdbltail,\rightdbltail}]{XITS Math}
\begin{document}
$A\lefttail B \righttail C \rightarrow D$

$A\leftdbltail B \rightdbltail C$
\end{document}
Also Asana Math has the symbols. Unfortunately they are missing from Latin Modern Math and the other TeX Gyre math fonts, that is, Termes and Pagella.
This approach builds it from existing characters, in a manner that tries to mimic the style of the default arrowheads.
\documentclass{article}
\usepackage{graphicx}
\def\righttail{\mathrel{%
  \makebox[.2pt][l]{$\righttailhelper$}\righttailhelper\mkern-4mu-}}
\def\righttailhelper{\scalebox{.5}[1]{$\succ$}}
\def\lefttail{\mathrel{%
  -\mkern-4mu\makebox[.2pt][l]{$\lefttailhelper$}\lefttailhelper}}
\def\lefttailhelper{\scalebox{.5}[1]{$\prec$}}
\begin{document}
$A \righttail C \rightarrow D$

$A \leftarrow C \lefttail B$
\end{document}
|
Fermat's little theorem
For a number $a$ not divisible by a prime number $p$, the congruence $a^{p-1}\equiv1\pmod p$ holds. This theorem was established by P. Fermat (1640). It asserts that the order of every element of the multiplicative group of residue classes modulo $p$ divides the order of the group. Fermat's little theorem was generalized by L. Euler to the case modulo an arbitrary $m$. Namely, he proved that for every number $a$ relatively prime to the given number $m>1$ there is the congruence
$$a^{\phi(m)}\equiv1\pmod m,$$
where $\phi(m)$ is the Euler function. Another generalization of Fermat's little theorem is the equation $x^q=x$, which is valid for all elements of the finite field $k_q$ consisting of $q$ elements.
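Both the theorem and its generalizations are easy to check numerically; a small Python sketch (the totient implementation here is a naive illustration, fine for small moduli):

```python
from math import gcd

def phi(m):
    """Euler's totient by direct count (naive, fine for small m)."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

# Fermat: 7^12 = 1 (mod 13), since 13 is prime and does not divide 7
assert pow(7, 12, 13) == 1

# Euler's generalization: a^phi(m) = 1 (mod m) whenever gcd(a, m) = 1
for m in range(2, 50):
    for a in range(1, m):
        if gcd(a, m) == 1:
            assert pow(a, phi(m), m) == 1

# The converse fails: 341 = 11 * 31 is composite, yet 2^340 = 1 (mod 341),
# so 341 is a pseudoprime to base 2 (see the comment below)
assert pow(2, 340, 341) == 1
```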
References

[1] I.M. Vinogradov, "Elements of number theory", Dover, reprint (1954) (Translated from Russian)
[a1] G.H. Hardy, E.M. Wright, "An introduction to the theory of numbers", Oxford Univ. Press (1979)

Comments

The converse of Fermat's little theorem does not hold: for any fixed $a$ there are infinitely many composite $n$ such that $a^{n-1} \equiv 1 \pmod n$. Such $n$ are known as pseudoprimes.

[b1] C. Pomerance, J.L. Selfridge, S.S. Wagstaff, Jr., "The pseudoprimes to $25\cdot10^9$", Math. Comp., 35 (1980) pp. 1003–1026. Zbl 0444.10007. DOI 10.2307/2006210

How to Cite This Entry: Fermat's little theorem. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Fermat%27s_little_theorem&oldid=34358
|
I have various questions regarding probability measures on Polish spaces in general, so I am trying to divide them into small subquestions. Hence, this is my first question on this issue.
Notation: $\Omega$ is a Polish space; $\mathcal{B}(\Omega)$ is the Borel $\sigma$-algebra on $\Omega$; $\Delta (\Omega)$ is the set of probability measures on $\mathcal{B}(\Omega)$.
Quite often I read that $\Delta(\Omega)$ is endowed with the topology of weak convergence, in which a sequence $(\mu_n)$ converges to $\mu$ if and only if, for every bounded continuous real function $f$ on $\Omega$, $$\lim_{n \to \infty} \int f \, d\mu_n = \int f \, d\mu.$$
Here are my basic questions:
How does the topology of weak convergence create open sets in $\Delta (\Omega)$? What do those open sets look like?
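For what it's worth, here is the standard description (a well-known fact, not taken from the question itself): the weak topology on $\Delta(\Omega)$ is generated by the sub-basic open sets $$U_{f,\varepsilon}(\mu)=\Big\{\nu\in\Delta(\Omega) : \Big|\int f\,d\nu-\int f\,d\mu\Big|<\varepsilon\Big\},$$ for bounded continuous $f$ and $\varepsilon>0$; finite intersections of these, over finitely many test functions $f_1,\dots,f_k$, form a neighbourhood base at $\mu$.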
As always, any feedback regarding anything will be more than welcome.
Thank you for your time.
PS: Maybe the question looks naive, but I am trying to see why, if $\Omega$ is Polish, then $\Delta (\Omega)$ endowed with the topology of weak convergence is Polish as well (of course, any feedback regarding this issue is welcome as well!). Thus I would like to see how the topology of weak convergence actually works.
|
Excuse the long title (the 'good title' page said not to be afraid to make it too long).
$\{23,114,187,473,2792,5624,19640,75884,187211,479797,1452795,5102858,14872865,72392867,146262888\}$
I'm trying to figure out the nth term for these numbers. To help:
I'm generating these numbers on a python program:
import math

n = 23
digit = 0
while True:
    pi = n * math.sin(math.radians(((90 * n) - 180) / n)) * math.sin(math.radians(180 / n))
    strpi = str(pi)
    count1 = 2
    while count1 < len(strpi) - 1:
        if strpi[count1] == str(math.pi)[count1]:
            if digit < count1:
                print(n, "\n\n", pi, "\n")
                digit = count1
        else:
            break
        count1 += 1
    n += 1
For the non-programmers the program uses this formula:
$ π=n \cdot \sin(\frac{90n-180}{n}) \cdot \sin(\frac{180}{n})$
What this does is calculate $\pi$ more accurately the higher the value of $n$. The program recorded the value of $n$ each time the next decimal digit of $\pi$ was found (increasing $n$ by $1$ each time). So if $n$ is $23$ the output is $3.1$, and if it is $114$ the value is $3.14$. Note that these values have further decimal places after them that are not digits of $\pi$. And this is in degrees, not radians.
This formula is derived from this one (Done with the sine rule):
$ A=nr^2 \cdot \sin(\frac{90n-180}{n}) \cdot \sin(\frac{180}{n})$
$A=$ the area of a regular polygon, $r=$ length from centre to corner, $n=$ number of sides
I substituted $A$ with $πr^2$ and simplified
It took me a while to realise that the difference between the first two values is larger than the difference between the second and third ($91$ then $73$)
If my program gets a new value I'll be adding it in. And if you want me to create an algorithm to find different data relating to this to help you solve it, or have an improved program do not hesitate to ask/suggest.
Technically speaking, the first element could be labelled as $12$, which gives the first digit as $3$.
Below are the pi values given when n is increased
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Let $X$ be some (possibly?) arbitrary compact, complex manifold of dimension $d$, and let $\overline{M}_{g,n}(X, \beta)$ be the compactified moduli space of
stable holomorphic maps $f: C \to X$, where $C$ is a connected genus $g$ curve with $n$ marked points, and the image lands in the homology class $\beta$ of $X$.
The (complex) virtual dimension of this moduli space is famously
$$(1-g)(d-3)+n+\int_{C}f^{*}(c_{1}(X)).$$
Now, I'm trying to get my hands on this moduli space, even just in the simplest case of $\beta=0$, which corresponds to $f$ taking the entire curve $C$ to a point in $X$. Intuitively, I would think that the moduli space would simply decompose into a product of $\overline{M}_{g,n}$ and $X$,
$$\overline{M}_{g,n}(X,0) = \overline{M}_{g,n} \times X \,\,\,? $$
After all, it seems like the only moduli in this case, is the choice of a complex structure on the curve, and the choice of a point in $X$ to map the whole curve to! It seems like these bits of data don't interact. Thus, I would expect the dimension to be
$$3g-3 + n + d.$$
But the actual virtual dimension given earlier, seems to indicate that the dimension would be
$$3g-3+d(1-g)+n.$$
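Expanding the virtual dimension formula at $\beta = 0$ makes the gap explicit:
$$(1-g)(d-3)+n = \bigl(3g-3+n+d\bigr) - gd,$$
so the virtual dimension falls short of the naive count $3g-3+n+d$ by exactly $gd$, which (in the standard interpretation) is the rank of the obstruction space $H^1(C, f^{*}TX) \cong H^1(C,\mathcal{O}_C)\otimes T_pX$ for a constant map to $p \in X$.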
Where in my reasoning is this discrepancy arising? Maybe the dimension formula above doesn't hold when $\beta=0$? I can't imagine how my reasoning is flawed in the degree zero case. I understand one also needs to think about
stability but I feel like the problem isn't arising from that.
|
I need help with a spherical geometry problem that I need for my project.
I am trying to calculate the coordinates of $T_1$ and $T_2$ on the small circle centred at $B$ on the sphere, where the circular arcs $AT_1$ and $AT_2$ are tangent to that small circle.
The given Earth angle coordinates (longitude $(-180,+180)$, latitude $(-90,90)$) are $A=(x_1, y_1)$ and $B=(x_2,y_2)$. And the perimeter of the circle ($r$) on the Earth's surface is also given.
I want to calculate the angle coordinates of $T_1$ and $T_2$. (Please assume that the Earth is a perfect sphere.)
My Strategy to solve the problem:
Suppose that point $A$ is my new North Pole point $(0,90)$. Then $$B'=(0,90-\alpha)$$
where $\alpha$ is the angle between point $A$ and point $B$:
$\alpha=|$Shortest distance between $A$ and $B$ on sphere$|/R$
$\alpha$ on the unit sphere can be calculated easily via the spherical law of cosines.
I need to calculate angle $a$
Thus $$\sin(\alpha)=\frac{x}{R}$$ $$x=R.\sin(\alpha)$$ $$r=a.x$$ $$a=\frac{r}{x}=\frac{r}{R.\sin(\alpha)}$$
The relative coordinates of $T'_1$ and $T'_2$ are then:
$$T'_1=(-a, 90-\alpha) $$ $$T'_2=(a, 90-\alpha)$$
I am not sure how to convert these relative coordinates back to real coordinates, because the great circle through $A$ and $B$ may not pass through the pole $N$, as shown in the first picture above.
Could you please help me solve this problem? Thanks a lot for your help.
|
I am asked to find the integrating factor and solve.
$$ y\sin(y)dx + x(\sin(y) - y\cos(y))dy = 0.$$
I'm not sure on how to put this in the form of
$$y' + p(x)y = f(x)$$
to solve the equation. Or is there another method to use?
You can separate the variables here:
$$\frac{-(\sin y - y\cos y)}{y\sin y}\,dy=\frac{dx}{x}$$
Now solve the separated equation. Can you go on from here?
For an equation of the form $$M\,dx + N\,dy=0$$ where $M$ and $N$ are functions of $x$ and $y$, if $(M_y - N_x)/N$ depends on $x$ alone, an integrating factor is $$\mu = e^{\displaystyle\int (M_y - N_x)/N \,dx}.$$ (In this problem $(M_y - N_x)/N$ depends on $y$, but $(N_x - M_y)/M = -2\cos y/\sin y$ depends on $y$ alone, so one instead uses $\mu = e^{\int (N_x - M_y)/M \,dy} = 1/\sin^2 y$.)
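Either route (separation, or an integrating factor in $y$) leads to the implicit solution $xy = C\sin y$, i.e. $xy/\sin y$ constant. A numerical sanity check using only the standard library: integrate the ODE with a hand-rolled RK4 step and watch the candidate first integral (the starting point and interval are arbitrary, chosen away from singularities):

```python
import math

def dydx(x, y):
    """Slope implied by  y*sin(y) dx + x*(sin(y) - y*cos(y)) dy = 0."""
    return -y * math.sin(y) / (x * (math.sin(y) - y * math.cos(y)))

def invariant(x, y):
    """Candidate first integral from separating variables: x*y/sin(y)."""
    return x * y / math.sin(y)

# Integrate from (x, y) = (1, 1) down to x = 0.5 with RK4.
x, y = 1.0, 1.0
c0 = invariant(x, y)
h = -1e-4
for _ in range(5000):
    k1 = dydx(x, y)
    k2 = dydx(x + h / 2, y + h * k1 / 2)
    k3 = dydx(x + h / 2, y + h * k2 / 2)
    k4 = dydx(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h

# x*y/sin(y) is (numerically) conserved along the solution curve.
assert abs(invariant(x, y) - c0) < 1e-6
```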
|
If $x_1 = \alpha, x_{n+1} = 3x_n^2$ then first you form the geometric picture in your mind, which you did correctly. Now, to see what exactly is happening, you must first inspect $x_{n+1} = 3x_n^2$ as a growth pattern, and then ram home the argument formally.
This lemma is our key tool.
If $0 < \alpha < \frac 13$, then $x_{n+1} < x_n$ for all $n$.
Proof : By induction. $x_1 = \alpha$, and $x_2= 3\alpha^2$, so $x_1 - x_2 = \alpha(1- 3\alpha) > 0$ since both the terms are greater than $0$.
For general $n$, start with $x_{n+1} = 3x_n^2$. By induction, $x_n < x_{n-1} ... < x_1 < \frac 13$. So, $x_n - x_{n+1} $ $= x_n(1-3x_n) > 0$ since $x_n > 0$ (since it's three times the square of something positive) and the other term is positive as $x_n < \frac 13$. Therefore, $x_n > x_{n+1}$ , completing the proof.
A bounded decreasing sequence converges.
This is easy to see. The candidate for the limit is the infimum of the sequence (as a set). Use an $\epsilon$-$N$ argument and the definition of infimum.
Now, you showed that if there is a limit, it is $0$ or $\frac 13$. But the limit cannot be $\frac 13$ here: we saw that $x_n \le \alpha < \frac13$ for all $n$, so $|x_n - \frac 13| \ge |\alpha - \frac 13| > 0$ for all $n$.
So, the sequence must go to $0$.
If $\alpha > \frac 13$, then a similar argument shows that $x_{n+1} > x_n$ for all $n$. However, here something different happens : if $x_n$ did converge, then it would go to either zero or $\frac 13$, but here neither is possible. Therefore, by my second lemma's contrapositive, $x_n$ must be divergent.
If $\alpha = \frac 13$ then the sequence is constant.
I leave you to see the nature of the sequence for $\alpha \leq 0$. In particular, you will notice that sign doesn't matter, since the square appears in $x_2$ itself, so this is only a question of the absolute value of $\alpha$.
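A quick numerical illustration of all three regimes (using exact rational arithmetic so the borderline case $\alpha = \frac13$ does not drift from floating-point error):

```python
from fractions import Fraction

def iterate(alpha, n_steps=15):
    """Iterate x_{n+1} = 3*x_n^2 from x_1 = alpha, exactly."""
    x = Fraction(alpha)
    for _ in range(n_steps):
        x = 3 * x * x
        if x > 10**12:        # clearly diverging; stop early
            return None
    return x

assert iterate(Fraction(1, 5)) < Fraction(1, 10**9)   # alpha < 1/3: tends to 0
assert iterate(Fraction(1, 3)) == Fraction(1, 3)      # alpha = 1/3: constant
assert iterate(Fraction(2, 5)) is None                # alpha > 1/3: diverges
```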
|
What are the Laws of Indices?
\[\begin{array}{l}{x^0} = 1\\{x^m} \times {x^n} = {x^{m + n}}\\\frac{{{x^m}}}{{{x^n}}} = {x^{m - n}}\\{\left( {{x^m}} \right)^n} = {x^{mn}}\\{x^{ - m}} = \frac{1}{{{x^m}}}\\{x^{\frac{m}{n}}} = \sqrt[n]{{{x^m}}} = {\left( {\sqrt[n]{x}} \right)^m}\end{array}\]
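These laws are easy to spot-check numerically; a small Python sketch (the particular numbers are arbitrary choices):

```python
# Numerical spot-check of the index laws above, with x = 2, m = 5, n = 3
x, m, n = 2, 5, 3

assert x ** 0 == 1
assert x ** m * x ** n == x ** (m + n)
assert x ** m / x ** n == x ** (m - n)
assert (x ** m) ** n == x ** (m * n)
assert x ** -m == 1 / x ** m
# Fractional index: x^(m/n) is the nth root of x^m (compared with a tolerance,
# since the two floating-point routes can differ in the last digits)
assert abs(x ** (m / n) - (x ** m) ** (1 / n)) < 1e-9
```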
GCSE Maths - Rules of Indices (1) (Multiplication and Division)
GCSE Maths - Rules of Indices (2) (Raising to a Power and Zero Power)
GCSE Maths - Rules of Indices (3) (Negative and Fractional Powers)
Index Laws - Ultimate revision guide for Further maths GCSE
|
${{\boldsymbol \Sigma}{(1560)}}$ Bumps $I(J^P)$ = $1(?^{?})$
This entry lists peaks reported in mass spectra around 1560 MeV without implying that they are necessarily related. DIONISI 1978B
observes a 6 standard-deviation enhancement at 1553 MeV in the charged ${{\mathit \Lambda}}$ /${{\mathit \Sigma}}{{\mathit \pi}}$ mass spectra from ${{\mathit K}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ( ${{\mathit \Lambda}}$ /${{\mathit \Sigma}}$) ${{\mathit \pi}}{{\mathit K}}{{\overline{\mathit K}}}$ at 4.2 ${\mathrm {GeV/}}\mathit c$. In a CERN ISR experiment, LOCKMAN 1978
reports a narrow 6 standard-deviation enhancement at 1572 MeV in ${{\mathit \Lambda}}{{\mathit \pi}^{\pm}}$ from the reaction ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit \Lambda}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}{{\mathit X}}$ . These enhancements are unlikely to be associated with the ${{\mathit \Sigma}{(1580)}}$ (which has not been confirmed by several recent experiments -- see the next entry in the Listings). CARROLL 1976
observes a bump at 1550 MeV (as well as one at 1580 MeV) in the isospin-$1$ ${{\overline{\mathit K}}}{{\mathit N}}$ total cross section, but uncertainties in cross section measurements outside the mass range of the experiment preclude estimating its significance. See also MEADOWS 1980
for a review of this state.
|
How to show that the Riemann hypothesis, random walks and the Möbius function are related or even equivalent?
I was reading the paper Randomness and Pseudorandomness by Avi Wigderson, but to me the most interesting thing wasn't discussed there.
The fact that the Riemann Hypothesis and the Mobius function are related has its origin in the formula $$\frac{1}{\zeta(s)} = \sum_{n=1}^\infty \mu(n)n^{-s}.$$
The relation to random walks is that the Riemann Hypothesis implies that the cumulative sum of the Mobius function $$ M(x) =\sum_{n\le x}\mu(n)$$ has random walk-like asymptotics $M(x) = O(x^{1/2+\epsilon}).$ The naive guess is that if the Mobius function were a random walk, the order of growth would be $\sqrt{x\log\log(x)}$ (the law of iterated logarithm) so the Riemann hypothesis gets us fairly close to this.
The identity relating the zeta function and the Mobius function is straightforward and can be understood by a person without much background with little work. It has to do with Euler's identity $$ \zeta(s) = \prod_p\frac{1}{1-p^{-s}}.$$ You can find derivations of this identity in a lot of places. Basically you expand the factor in a geometric series and multiply out. The identity to the sum form $ \zeta(s) = \sum_{n=1}^\infty n^{-s}$ then follows from the fundamental theorem of arithmetic. Then if you flip over Euler's identity and multiply it out, thinking about which terms are present and whether they have plus and minus signs will lead you to the identity $$ \frac{1}{\zeta(s)} = \prod_p (1-p^{-s}) = \sum_{n=1}^\infty \mu(n) n^{-s}$$ A good for-the-layman treatment of Euler's identity as well as more advanced stuff and connections to randomness can be found in Prime Obsession by (the regrettably somewhat disgraced) Derbyshire.
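The identity is easy to probe numerically at a point where $\zeta$ is known, e.g. $s=2$ with $\zeta(2)=\pi^2/6$; a quick sketch (the truncation point is arbitrary, and the tail is bounded by $\sum_{n>N} n^{-2} \approx 1/N$):

```python
import math

def mobius(n):
    """Mobius function by trial factorization (fine for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0              # n has a squared prime factor
            result = -result
        p += 1
    if n > 1:                          # leftover prime factor
        result = -result
    return result

# Partial sum of sum_n mu(n)/n^2 should approach 1/zeta(2) = 6/pi^2
partial = sum(mobius(n) / n**2 for n in range(1, 20001))
assert abs(partial - 6 / math.pi**2) < 1e-3
```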
But the latter fact relating the Mobius function to a random walk requires some more heavy lifting.
In brief, the $1/2$ power and the fact that the zeros have $s=1/2$ are intimately related. The origin is that cumulative functions $F(x) = \sum_{n\le x}f(n)$ are related to Dirichlet series $\tilde f (s)=\sum f(n)n^{-s}$ by a Mellin transform $$ \frac{\tilde f(s)}{s} = \int_1^\infty x^{-(s+1)}F(x)dx$$ which can then be inverted to $$ F(x) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty}\frac{\tilde f(s)}{s}x^sds.$$
If you use Cauchy's theorem to do the transform integral, you will see that the poles of $\tilde f(s)$ with largest real part contribute the leading order terms in the asymptotic series for $F(x)$. Since here we have $\tilde f = \frac{1}{\zeta(s)}$ we see that, assuming the Riemann Hypothesis, the largest poles of the integrand (i.e. the zeros of the zeta function) have real part $s=1/2$ so contribute a leading term $\sim x^{1/2}$ with oscillations from the imaginary parts.
This of course sweeps a lot of details under the rug, but gives a basic outline of the relationship.
To see demonstrations of these facts one can consult an introductory treatment of analytic number theory. (Notes by Hildebrand are available free online and are quite good.)
These three concepts are not equivalent as such, but there is an interesting connection.
We can think of a simple symmetric random walk on $\mathbb{Z}$ as a sequence of random variables $Z_{1},Z_{2},...$ where each $Z_{k}$ takes the value $+1$ or $-1$, each with probability $1/2$. We can then define $S_{n}=\sum_{k=1}^{n}Z_{k}$ to be the position after $n$ steps of the random walk. One can then ask about what can be said about $E(|S_{n}|)$, the expected distance of the walk from the origin after $n$ steps. As is noted in the article, $E(|S_{n}|)$ is "about $\sqrt{n}$". More precisely $$\lim_{n\rightarrow\infty}\frac{E(|S_{n}|)}{\sqrt{n}}=\sqrt{\frac{2}{\pi}}$$
In particular, we can see that $E(|S_{n}|)=O(n^{\epsilon+\frac{1}{2}})$ for any $\epsilon>0$.
How does this relate to the Riemann Hypothesis? Well there is a formulation of the Riemann Hypothesis which bares a striking resemblance to the above statement. The idea is outlined in the article in the following way. Instead of using random variables $Z_{k}$ to determine each step, use $\mu(k)$. Here $\mu$ is the Mobius function, which is defined by setting $\mu(k)=+1$ if $k$ is a product of an even number of distinct primes, $\mu(k)=-1$ if $k$ is a product of an odd number of distinct primes, and $\mu(k)=0$ otherwise. Again we can think of using $\mu$ to tell us how to walk along $\mathbb{Z}$.
Now define the Mertens function $M(n)=\sum_{k=1}^{n}\mu(k)$. This function tells us where we are after $n$ steps using this method. Note that, unlike with the random walk, this walk is deterministic so we are looking at $M(n)$ directly and not some expectation. So, as with the random walk, what can we say about $M(n)$? Well, as is hinted at in the article, it turns out that the Riemann Hypothesis is equivalent to the conjecture that $$M(n)=O(n^{\epsilon+\frac{1}{2}})$$ holds for all $\epsilon>0$. Compare this with what was said about the random walk.
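This comparison can be made concrete empirically (which is of course no proof in either direction): compute $M(n)$ up to some bound and track $|M(n)|/\sqrt{n}$.

```python
def mobius_sieve(N):
    """mu(n) for n = 0..N via a linear sieve."""
    mu = [1] * (N + 1)
    mu[0] = 0
    primes, is_comp = [], [False] * (N + 1)
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

N = 100_000
mu = mobius_sieve(N)

# Track the Mertens function M(n) against the random-walk scale sqrt(n).
M, max_ratio = 0, 0.0
for n in range(1, N + 1):
    M += mu[n]
    if n >= 10:
        max_ratio = max(max_ratio, abs(M) / n ** 0.5)

# In this range |M(n)| stays well below sqrt(n); note this is only empirical
# (even the stronger Mertens conjecture |M(n)| < sqrt(n) fails for huge n).
assert max_ratio < 1.0
```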
|
For any rigid pendulum the angular frequency is given by:
$$ \omega = \sqrt{\frac{mgx}{I}} $$
where $m$ is the mass of the rigid object and $x$ is the distance of the pivot from the centre of mass. Suppose we start with the pivot passing through the centre of mass, i.e. $x=0$; then obviously the angular frequency is zero (the period is infinite).
As we move the pivot away from the centre of mass two things happen:
the value of $mgx$ increases because $x$ increases
the moment of inertia $I$ increases as described by the parallel axis theorem
Since the frequency is proportional to the ratio of these two, $\sqrt{mgx/I}$, how the frequency changes depends on how the two quantities change.
If the moment of inertia about an axis through the centre of mass is $I_0$ then the parallel axis theorem tells us the the moment of inertia about an axis a distance $x$ from the centre of mass is:
$$ I(x) = I_0 + mx^2 $$
and we can substitute this in our equation for the angular frequency to get:
$$ \omega(x) = \sqrt{\frac{mgx}{I_0 + mx^2}} $$
Now consider the limits where $x$ is very small and $x$ is very large. For small $x$ we have $I_0 \gg mx^2$ so the frequency is approximately:
$$ \omega(x) \approx \sqrt{\frac{mgx}{I_0}} \propto \sqrt{x} $$
So as we increase $x$ away from zero the frequency
increases like $\sqrt{x}$. However for large $x$ we have $I_0 \ll mx^2$ so the frequency is approximately:
$$ \omega(x) \approx \sqrt{\frac{mgx}{mx^2}} = \sqrt{\frac{g}{x}} \propto \frac{1}{\sqrt{x}} $$
So as $x$ gets large the frequency
decreases with increasing $x$.
The end result is that as we move the axis away from the centre of mass the angular frequency increases at first, but then reaches a maximum and for large $x$ it starts decreasing again. And this is what you are seeing in your calculation. If you put in the value of $I_0$ for a rigid rod of length $2\ell$:
$$ I_0 = \frac{m(2\ell)^2}{12} $$
Then you'll find that $\omega(x)$ rises to a maximum at $x = \sqrt{I_0/m} = \ell/\sqrt{3}$ and falls off again, and the value of $\omega(x)$ is indeed the same one third of the way along the rod as it is at the end of the rod.
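This coincidence is easy to verify numerically (the mass, length and $g$ values below are arbitrary; the comparison is independent of them):

```python
import math

g, m, ell = 9.81, 1.0, 1.0

def omega(x, I0):
    """Angular frequency of a physical pendulum pivoted a distance x from the CoM."""
    return math.sqrt(m * g * x / (I0 + m * x ** 2))

I0 = m * (2 * ell) ** 2 / 12   # rod of length 2*ell about its centre

# Either side of the maximum at x = ell/sqrt(3), the same frequency recurs:
# here at x = ell/3 (one third of the way along) and at the end x = ell.
assert abs(omega(ell / 3, I0) - omega(ell, I0)) < 1e-12
```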
|
Normal forms for non-uniform contractions
Department of Mathematics, The Pennsylvania State University, University Park, PA 16802, USA
Let $f$ be a measure-preserving transformation of a Lebesgue space $(X,\mu)$ and let ${\mathscr{F}}$ be its extension to a bundle $\mathscr{E} = X \times {\mathbb{R}}^m$ by smooth fiber maps ${\mathscr{F}}_x : {\mathscr{E}}_x \to {\mathscr{E}}_{fx}$ so that the derivative of ${\mathscr{F}}$ at the zero section has negative Lyapunov exponents. We construct a measurable system of smooth coordinate changes ${\mathscr{H}}_x$ on ${\mathscr{E}}_x$ for $\mu$-a.e. $x$ so that the maps ${\mathscr{P}}_x ={\mathscr{H}}_{fx} \circ {\mathscr{F}}_x \circ {\mathscr{H}}_x^{-1}$ are sub-resonance polynomials in a finite dimensional Lie group. Our construction shows that such ${\mathscr{H}}_x$ and ${\mathscr{P}}_x$ are unique up to a sub-resonance polynomial. As a consequence, we obtain the centralizer theorem that the coordinate change $\mathscr{H}$ also conjugates any commuting extension to a polynomial extension of the same type. We apply our results to a measure-preserving diffeomorphism $f$ with a non-uniformly contracting invariant foliation $W$. We construct a measurable system of smooth coordinate changes ${\mathscr{H}}_x: W_x \to T_xW$ such that the maps ${\mathscr{H}}_{fx} \circ f \circ {\mathscr{H}}_x^{-1}$ are polynomials of sub-resonance type. Moreover, we show that for almost every leaf the coordinate changes exist at each point on the leaf and give a coherent atlas with transition maps in a finite dimensional Lie group.
Keywords: Normal form, contracting foliation, non-uniform hyperbolicity, Lyapunov exponents, polynomial map, homogeneous structure. Mathematics Subject Classification: Primary: 37D10, 37D25; Secondary: 37D30, 34C20. Citation: Boris Kalinin, Victoria Sadovskaya. Normal forms for non-uniform contractions. Journal of Modern Dynamics, 2017, 11: 341-368. doi: 10.3934/jmd.2017014
|
In a previous question, I mistakenly attempted to subtract one cardinal number from another. Anyway, this got me to thinking, suppose I have two sets $X$ and $Y$, with $Y\subseteq X$. Suppose also that $|X|=|Y|=|X-Y|=\kappa$ for some cardinal number $\kappa$. Does there exist some bijection $f\colon X\to X$ such that $Y$ consists of all fixed points of $f$?
I thought the easiest example comes from working with countable sets. If $X=\mathbb{Z}$ and $Y$ is the set of even integers, then we could take $f$ to be $$ f(x)=\begin{cases} x, &\text{if }x\text{ is even} \\ x+2, &\text{if }x\text{ is odd} \\ \end{cases} $$
Of course, we could do the same thing with the set of odds. I wasn't able to think of any more examples.
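A quick sanity check of the even/odd example on a finite window of $\mathbb{Z}$ (the map itself is a bijection on all of $\mathbb{Z}$; the window is just for testing):

```python
def f(x):
    """Fix the evens, shift each odd up by 2."""
    return x if x % 2 == 0 else x + 2

window = range(-100, 100)
images = [f(x) for x in window]
assert len(set(images)) == len(images)                  # injective on the window
assert all(f(x) == x for x in window if x % 2 == 0)     # every even is fixed
assert all(f(x) != x for x in window if x % 2 != 0)     # every odd is moved
```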
But I'm curious, is it possible to do this in the more general sense, for any sets where $|X|=|Y|=|X-Y|=\kappa$, and $Y\subseteq X$? That is, can we prove there exists a bijection on $X$ such that $Y=\{x\in X\ |\ f(x)=x\}$? After thinking about it, I suppose it suffices to find a derangement $g$ (I hope I'm using that term correctly, I've only seen it used in combinatorics on finite sets) on $X-Y$, and then we could let $f=g\cup id|_Y$. Thanks!
|
Last night I was playing as my level 20 Barbarian, when our party encountered an enemy who was immune to all damage, unless it was of a specific elemental type.
This was particularly unfortunate for me, since my Barbarian deals exclusively in piercing and slashing damage. This resulted in me effectively standing around acting as a damage sponge while the party’s spell-casters dealt all the damage.
While not a disastrous situation, it got me thinking:
Are there any ways to effectively change the type of damage I am dealing?
To be more specific, I am curious if there are any spells, enchantments, magics, magical weapons, etc. which can change the type of damage a weapon’s normal attack would have dealt. For example, my Barbarian’s weapon deals slashing damage; I am wondering if there are ways to change the damage type into something else (e.g. dealing fire or poison damage instead, or even piercing or bludgeoning).
I’ve read this similar question, but am playing 5e, not 3.5.
The rules on “Falling” state:
[…] At the end of a fall, a creature takes 1d6 bludgeoning damage for every 10 feet it fell, to a maximum of 20d6…
When looking at the
reverse gravity spell I realized it states:
[…] If some solid object (such as a ceiling) is encountered in this fall, falling objects and creatures strike it just as they would during a normal downward fall…
And then in this related question “Do any damage resistances apply to Reverse Gravity?” it is shown that falling damage, even from this spell, is still just bludgeoning damage. Is there any way to change the type of damage that a fall inflicts?
Maybe some object explicitly has this property or there is a way to change
all damage a creature takes to another type, which would thus include fall damage?
If this is possible, then certain creatures, like the treant, would take more damage from falling so pushing them off a cliff or using
reverse gravity on them would be more effective.
If a method is available to
both PC’s and Monsters, that would be ideal. But if there is a method only available to PC’s and another method only available to Monsters that would work as well.
I’m performing some correlation assessment à la NIST Recommendation for the Entropy Sources Used for Random Bit Generation, § 5.1.
You take a test sequence and compress it with a standard compression algorithm. You then shuffle that sequence randomly using a PRNG and re-compress. We expect the randomly shuffled sequence to be harder to compress, as any and all redundancy and correlations will have been destroyed. Its entropy will have increased.
So if there is any autocorrelation, $ \frac{\text{size compressed shuffled}} {\text{size compressed original}} > 1$ .
This works using NIST’s recommended bz2 algorithm, and on my data samples, the ratio is ~1.03. This indicates a slight correlation within the data. When I switch to LZMA, the ratio is ~0.99 which is < 1. And this holds over hundreds of runs so it’s not just a stochastic fluke.
What would cause the LZMA algorithm to repetitively compress a randomly shuffled sequence (slightly) better than a non shuffled one?
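For reference, the test is reproducible with the standard library; the data below is a hypothetical stand-in (random bytes with short runs, i.e. mild autocorrelation that shuffling destroys), not the actual samples from the question:

```python
import bz2
import lzma
import random

def ratio(data, compress, seed=0):
    """size(compress(shuffled data)) / size(compress(original data))."""
    shuffled = bytearray(data)
    random.Random(seed).shuffle(shuffled)
    return len(compress(bytes(shuffled))) / len(compress(data))

# Run-structured test sequence: each random byte repeated 1-3 times.
rng = random.Random(42)
data = bytes(b for _ in range(20000) for b in [rng.randrange(256)] * rng.randrange(1, 4))

r_bz2 = ratio(data, bz2.compress)    # expected > 1: shuffling hurts compression
r_lzma = ratio(data, lzma.compress)
```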
I’m planning on starting a campaign of
Waterdeep: Dragon Heist tomorrow, but it’s looking like we may only have two players (plus me as DM) for it. The other published adventures I’ve used ( Lost Mine of Phandelver and Storm King’s Thunder) each say a recommended party size for them, which we haven’t always had but helped me figure out how much I needed to adjust the included encounters when we had fewer players. But I can’t find within Dragon Heist the party size that it’s designed for. Due to it being more of an intrigue-based campaign than a hack-and-slash style, does it work just as well if you have two PCs or six?
Let’s say that a player creates a 6th-level character with the maximum possible ranks in Craft (armorsmithing). They want to start with self-made full plate armor. Should they still pay the full price of the armor, or just the materials, which are one third of the price? I believe that self-made starting gear should be less pricey than gear bought on the market. On the other hand, I am afraid it could be heavily exploited, like starting with +5 adamantine battle plate armor. Are there any books, including third-party ones, covering how to deal with this problem?
I came across the following problem: determine whether the language below is decidable, semi-decidable, or not even semi-decidable.
$ L = \{\langle M\rangle : M \text{ is a TM and } |L(M)| \ge 3\}$
Thinking intuitively, I conjectured that this language is semi-decidable. We can say yes when the input belongs to $ L$ , but we cannot say no when the input does not belong to $ L$ .
Now, I formulated the following reduction from the complement of the halting problem, $ \overline{HP}$ , which is not semi-decidable (non-$ RE$ ).
$ \overline{HP} = \{\langle M, w\rangle : M \text{ is a TM and it does not halt on string } w\}$
$ \tau(\langle M,x\rangle) = \langle M'\rangle$ .
$ M'$ on input $ w$ works as follows: it erases $ w$ , puts $ M$ and $ x$ on its tape, runs $ M$ on $ x$ , and accepts if $ M$ doesn’t halt on $ x$ . Otherwise it rejects.
Proof of validity of reduction:
$ \langle M,x\rangle \in \overline{HP} \implies M \text{ does not halt on } x \implies M' \text{ accepts all inputs} \implies |L(M')| \ge 3 \implies M' \in L$
$ \langle M,x\rangle \notin \overline{HP} \implies M \text{ halts on } x \implies M' \text{ rejects all inputs} \implies |L(M')| < 3 \implies M' \notin L$
According to the above reduction, $ \overline{HP}$ should be recursively enumerable ($ RE$ ), which it is not. So $ L$ should not be $ RE$ , but it indeed is. So my reduction must be flawed.
Please point out where I messed up.
I have read that minimizing regular expressions is, in general, a PSPACE problem. Is it known whether minimizing regular expressions without the Kleene closure (star, asterisk) is in P?
The language of any such regular expression would be guaranteed to be finite. I suppose an equivalent question is whether the problem of constructing a minimal regular expression from a known finite language is any easier than minimizing an arbitrary regular expression. It seems like this should be the case.
(If the answer is that it is easier and there’s an obvious proof, I’m happy to go attempt it, I just haven’t thought about the problem deeply yet and wanted to see what I’d be getting myself into first.)
I’m wondering if there is a way to replace a specific color with another color across the whole website. For example: I want everything written in white to become black. Is there a way to do that, or do I have to do it manually?
Thanks a lot!
While working with modern SharePoint, I found no option to change the header mega-menu color.
When creating a SharePoint theme with the theme generator, is there a specific color palette slot responsible for changing it?
The base movement speed of a gnoll flesh gnawer is 30 feet, and if it activates rampage by bringing a creature to 0 hit points, it can move an additional 15 feet for a total of 45 feet during “normal” combat.
But is there a way that rampage could be triggered for the flesh gnawer after it gains a movement speed of 90 feet from its Sudden Rush action? This should allow the flesh gnawer to traverse 135 feet in one turn.
Ideally this would require only actions/abilities possessed by the various gnoll types, but I am open to other first-party/non-homebrew solutions for Dungeons and Dragons 5e that may make this possible.
Potentially relevant text from some of the gnoll actions is listed below:
Sudden Rush
Until the end of the turn, the gnoll’s speed increases by 60 feet and doesn’t provoke opportunity attacks.
Incite Rampage (Possessed by the Gnoll Pack Lord)
One creature the gnoll can see within 30 feet of it can use its reaction to make a melee attack if it can hear the gnoll and has the Rampage trait.
Rampage
When the gnoll reduces a creature to 0 hit points with a melee attack on its turn, the gnoll can take a bonus action to move up to half its speed and make a bite attack.
|
The main reason for reactive power compensation is to regulate the voltage magnitude. Note that the compensation might be both positive and negative (reactive power in, or reactive power out). In a transmission system, there is a strong correlation between reactive power and voltage magnitude, whereas the active power is mainly dependent on the voltage angle. Have a look here for a bit more information.
In the transmission system, a branch might have impedance $Z = R + jX$, where the reactance $X$ is about 10 times the resistance $R$.
I'm assuming you're familiar with the per unit system. Let me know if you're not and I'll explain it closer.
Let's just review a few basic relationships first:
\begin{align*}S &= V \cdot I^* \implies I = (S/ V)^*\\\Delta V &= I\cdot Z\\Z &= R + jX\end{align*}
Let's assume we have a very simple power system that looks like this:
G ---|------------------|------------------|----->
3 Z = R + jX 2 Z = R + jX 1 Load
G is the generator. The vertical lines are buses, labeled 1-3. The load is at the end of the radial. The voltage at bus 1 is assumed to be 1 pu with angle 0 degrees. The load is (1 + j0.2) pu (if S_base = 100 MVA, this would equal 100 MW + 20 MVAr). Z = 0.01 + j0.1.
The current necessary to supply the load is given by:
\begin{align*}I = (S/ V)^* =((1 + j0.2) / 1)^* = 1 - j0.2\\\end{align*}
No compensation:
The voltage at bus 2 is given by the voltage at bus 1 plus the voltage rise over the cable (seen from bus 1 to bus 2):
\begin{align*}V_2 = V_1 + I \cdot Z = 1 + (1-j0.2)\cdot(0.01 + j0.1)= 1.054\angle 5.01 ^{\circ}\;\text{pu}\end{align*}
This means the power injection into the cable between 1 and 2 is:
\begin{align*}S_2 = V_2 \cdot I^* = (1.031 + j0.302) \;\text{pu}\end{align*}
The voltage at bus 3 is:
\begin{align*}V_3 = V_2 + I \cdot Z = 1.11\angle 9.50^{\circ}\;\text{pu}\end{align*}
Now we can find the power output from the generator by using the first equation:
\begin{align*}S_{Gen} = V_3 \cdot I^* = (1.062 + j0.404) \;\text{pu}\end{align*}
With compensation:
Let us add a capacitor that injects 0.3pu reactive power at bus 2.
The voltage at bus 2 is still given by the voltage at bus 1 and the voltage rise over the cable, so it's still \$\underline{1.054 \angle5.01^{\circ}\;\text{pu}}\$.
Now, the reactive power injection of 0.3pu will give a current injection of:
\begin{align*}I_{inj} = (jQ / V_2)^* = 0.285 \angle{-85.0}^{\circ}\;\text{pu}\end{align*}
The current through cable 1-2 is equal to the current through cable 2-3 plus the current injection, so:
\begin{align*}I_3 = I_2 - I_{inj} = 0.979\angle 4.90^{\circ}\;\text{pu}\end{align*}
You see that the current magnitude is lower than it was without compensation. So, let's have a look at the voltage at bus 3:
\begin{align*}V_3 = V_2 + I_3 \cdot Z = 1.06\angle10.22^{\circ}\;\text{pu}\end{align*}
Now we can find the power output from the generator by using the first equation:
\begin{align*}S_{Gen} = V_3 \cdot I_3^* = (1.033 + j0.096)\;\text{pu}\end{align*}
So, to summarize:
        W/O comp:          W comp:
|V1|    1.000              1.000
|V2|    1.054              1.054
|V3|    1.115              1.060
Gen     1.062 + j0.404     1.033 + j0.096
As you can see from the above results, the voltage is much more stable with compensation. The current gets lower through the cable resulting in lower active losses.
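The example can be checked with plain complex arithmetic. Below is a sketch using the stated inputs; the decimals come out slightly different from the rounded values quoted above, but the qualitative result is the same: compensation lowers both the segment current and |V3|.

```python
Z = 0.01 + 0.1j        # cable impedance per segment, pu
V1 = 1 + 0j            # bus 1 voltage, pu
S_load = 1 + 0.2j      # load at bus 1, pu

I_load = (S_load / V1).conjugate()     # I = (S/V)*

# Without compensation: the full load current flows in both segments.
V2 = V1 + I_load * Z
V3_no_comp = V2 + I_load * Z

# With compensation: a capacitor injects jQ = j0.3 pu at bus 2,
# reducing the current that segment 2-3 has to carry.
I_inj = (0.3j / V2).conjugate()
I_23 = I_load - I_inj                  # current in segment 2-3
V3_comp = V2 + I_23 * Z

print(abs(V3_no_comp), abs(V3_comp))   # compensation lowers |V3|
```

Working with complex phasors directly like this avoids magnitude/angle bookkeeping errors when chaining voltage rises along a radial.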
The reason why the reactive power is needed in the first place is that it accounts for the magnetization of the equipment. Without reactive power, transformers, generator rotors/stators, machines, etc. have no magnetic field. With no magnetic field there is no torque, no magnetic coupling in the transformer, and so on. So a lot of equipment has to consume reactive power in order to work. If too little reactive power is available, the equipment will try to draw more current to compensate. This will lead to higher voltage drops, which in the end might cause voltage collapse.
As Andy points out, it can also be used as power factor correction for large industrial loads. However when we're talking about reactive power compensation it's most often because of what I've described above.
In a meshed grid it can also be used to control power flow. This works because the active power flow through a cable is mainly given by the voltage angle difference over it. If you inject reactive power, the voltage and current angles will change, and thus the power flow will be affected. If you inject the right amount at the right place you can redistribute the power flow the way you want (but only to a small extent).
Hope this answers your question!
|
So here is an initial attempt to find connections between them. I know this is incomplete and I hope someone else adds to/edits this:
The Ricci flow equation
$$\frac{dg}{dt} = - 2 Ric(g(t))$$
Both sides are the same type of object: at each point $p \in M$, a bilinear form on $T_pM$.
In terms of local coordinates this becomes
$$\frac{\partial g_{ij}}{\partial t}= - 2 R_{ij} $$
(Hamilton, 1982).
The heat equation in 3-D is
$$\frac{\partial f}{\partial t} = \nabla^2 f$$
The basic differences are
the heat flow evolves an initial function $f_0$ towards a constant function, whereas the Ricci flow evolves a Riemannian metric.
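The first point can be illustrated numerically (my own sketch, not from the text): evolving a 1-D periodic profile under the heat equation flattens it toward a constant.

```python
import numpy as np

# Forward-Euler heat flow on a periodic 1-D grid (spacing 1; dt < 1/2 for stability).
N, dt, steps = 64, 0.1, 5000
x = np.arange(N)
f = np.sin(2 * np.pi * x / N) + 0.5 * np.cos(6 * np.pi * x / N)
initial_spread = f.max() - f.min()

for _ in range(steps):
    laplacian = np.roll(f, 1) - 2 * f + np.roll(f, -1)
    f = f + dt * laplacian            # df/dt = laplacian(f)

print(initial_spread, f.max() - f.min())  # the spread decays toward 0
```

High-frequency components die off fastest (the discrete Laplacian damps mode $k$ at rate $2(1-\cos(2\pi k/N))$), which mirrors the smoothing character of heat-type equations mentioned below.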
More on the Ricci flow by Bennett Chow
Here is a similar intuition behind the Ricci flow
heat-type equations. The full curvature tensor $\operatorname{Rm}$ satisfies an equation of the form $\frac{\partial}{\partial t}\operatorname{Rm}=\Delta\operatorname{Rm}+q(\operatorname{Rm})$, where $q$ is a quadratic polynomial. Since $\operatorname{Rm}$ is a symmetric bilinear form on the vector space $\wedge^{2}T_{x}^{\ast}M$ at each point $x$, we have the notion of nonnegativity of $\operatorname{Rm}$. Since $q(\operatorname{Rm})$ satisfies a property sufficient for the maximum principle for systems to be applied, $\operatorname{Rm}\geq0$ is preserved under the Ricci flow. Generally, we can analyze the behavior of $\operatorname{Rm}$ by the maximum principle under various hypotheses.
Geometric application. In particular, when $n=3$ and $\operatorname{Ric}_{g_{0}}>0$, we have $\pi_{1}(M)=0$ and hence the universal cover $\tilde{M}$ is a homotopy $3$-sphere. Encouraged by this, Hamilton proved that the solution to the normalized Ricci flow exists for all time and converges to a constant positive sectional curvature metric; thus $M$ is diffeomorphic to a spherical space form. The main estimate is $\frac{|\operatorname{Ric}-\frac{R}{3}g|^{2}}{R^{2}}\leq CR^{-\delta}$ for some $C$ and $\delta>0$. Intuitively, we expect $R\rightarrow\infty$ and hence $\operatorname{Ric}-\frac{R}{3}g\rightarrow0$.
|
Given $L=\{a^ib^jc^k \mid i\neq j \text{ and } j=k\}$. Is this a CFL? How do I write a CFG for it, or prove that it is not context-free with the pumping lemma? Thanks.
Suppose that $L$ were context-free. According to Ogden's lemma, there is a constant $p$ such that each word in $L$ with at least $p$ marked positions satisfies the constraints of the lemma. Consider the word $s = \underline{a^p}b^{p+p!}c^{p+p!}$, in which the underlined part is marked. According to Ogden's lemma, there is a decomposition $s = uvwxy$ in which $vx$ contains at least one $a$, and $uv^iwx^iy \in L$ for all $i \geq 0$. We now consider several cases:
$x$ contains $b$s but not $c$s, or $c$s but not $b$s. Choosing $i = 0$, we obtain a word in which the number of $b$s differs from the number of $c$s, and so does not belong to $L$.
$x$ contains both $b$s and $c$s. Choosing $i = 2$, we obtain a word not belonging to $a^*b^*c^*$, and so not belonging to $L$.
$x$ contains neither $b$s nor $c$s, and $v \notin a^*$. In this case $x = \epsilon$, and so $v$ must contain at least two different characters. Choosing $i = 2$, we again obtain a word not belonging to $a^*b^*c^*$, and so not belonging to $L$.
$vx \in a^+$, say $vx = a^q$. Let $i = p!/q+1$. Then $uv^iwx^iy = a^{p+p!}b^{p+p!}c^{p+p!} \notin L$.
We have obtained a contradiction, and so $L$ is not context-free.
|
Suppose that the weight of a person selected at random from some population is normally distributed with parameters $\mu$ (mean) and $\sigma$ (standard deviation). Suppose also that $P(X \le 160) = 1/2$ and $P(X \le 140) = 1/4.$ Find $\mu$ and $\sigma,$ and find $P(X \ge 200).$ Of all the people in the population weighing at least 200 pounds, what percentage will weigh over 220 pounds?
HINTS If $X \sim \mathcal{N}(0, 1)$ what are the values of $a,b$ such that $\mathbb{P}[X \le a] = 1/2$ and $\mathbb{P}[X\le b] = 1/4$? How do you convert $Y \sim \mathcal{N}(\mu, \sigma)$ to $X \sim \mathcal{N}(0, 1)$?
From $P(X \le 160) = 1/2,$ you can infer that $\mu = 160.$ Then from $$P(X \le 140) = P\left(\frac{X-\mu}{\sigma}\le \frac{140 - 160}{\sigma} \right) = P\left(Z \le \frac{-20}{\sigma} \right) = 1/4,$$ you can find $-20/\sigma$ and thus $\sigma.$
Then, knowing both $\mu$ and $\sigma,$ you are ready to find $P(X \ge 200).$
Note: In principle, if you are given values of any two probabilitiesof the type $P(X \le a)$ and $P(X \le b),$ where $a \ne b$ are bothknown, then you can solve two equations in two unknowns to get $\mu$ and$\sigma.$
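The computation above can be sketched with Python's standard library, assuming (per the hints) $\mu = 160$:

```python
from statistics import NormalDist

mu = 160.0
z = NormalDist().inv_cdf(0.25)      # z with P(Z <= z) = 1/4, about -0.6745
sigma = (140 - mu) / z              # -20 / z, about 29.65

X = NormalDist(mu, sigma)
p_ge_200 = 1 - X.cdf(200)                         # P(X >= 200)
p_gt_220_given = (1 - X.cdf(220)) / p_ge_200      # P(X > 220 | X >= 200)
print(round(sigma, 2), round(p_ge_200, 4), round(p_gt_220_given, 4))
```

The last line is the conditional probability asked for in the final part: of those weighing at least 200 pounds, roughly a quarter weigh over 220.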
|
This blog discusses a problematic situation that can arise when we try to implement certain digital filters. Occasionally in the literature of DSP we encounter impractical digital IIR filter block diagrams, and by impractical I mean block diagrams that cannot be implemented. This blog gives examples of impractical digital IIR filters and what can be done to make them practical.
Implementing an Impractical Filter: Example 1
Reference [1] presented the digital IIR bandpass filter...
I've recently encountered a digital filter design application that astonished me with its design flexibility, capability, and ease of use. The software is called the "ASN Filter Designer." After experimenting with a demo version of this filter design software I was so impressed that I simply had to publicize it to the subscribers here on dsprelated.com.
What I Liked About the ASN Filter Designer
With typical filter design software packages the user enters numerical values for the...
This blog describes a general discrete-signal network that appears, in various forms, inside so many DSP applications.
Figure 1 shows how the network's structure has the distinct look of a digital filter—a comb filter followed by a 2nd-order recursive network. However, I do not call this useful network a filter because its capabilities extend far beyond simple filtering. Through a series of examples I've illustrated the fundamental strength of this Swiss Army Knife of digital networks...
Recently I've been thinking about the process of envelope detection. Tutorial information on this topic is readily available but that information is spread out over a number of DSP textbooks and many Internet web sites. The purpose of this blog is to summarize various digital envelope detection methods in one place.
Here I focus on envelope detection as it is applied to an amplitude-fluctuating sinusoidal signal where the positive-amplitude fluctuations (the sinusoid's envelope)...
I just discovered a useful web-based source of signal processing information that was new to me. I thought I'd share what I learned with the subscribers here on DSPRelated.com.
The Home page of the web site that I found doesn't look at all like it would be useful to us DSP fanatics. But if you enter some signal processing topic of interest, say, "FM demodulation" (without the quotation marks) into the 'Search' box at the top of the web page
and click the red 'SEARCH...
This blog discusses a not so well-known rule regarding the filtering in multistage decimation and interpolation by an integer power of two. I'm referring to sample rate change systems using half-band lowpass filters (LPFs) as shown in Figure 1. Here's the story.
Figure 1: Multistage decimation and interpolation using half-band filters.
Multistage Decimation – A Very Brief Review
Figure 2(a) depicts the process of decimation by an integer factor D. That...
Recently I've been thinking about digital differentiator and Hilbert transformer implementations and I've developed a processing scheme that may be of interest to the readers here on dsprelated.com.
This blog presents a novel method for simultaneously implementing a digital differentiator (DD), a Hilbert transformer (HT), and a half-band lowpass filter (HBF) using a single tapped-delay line and a single set of coefficients. The method is based on the similarities of the three N =...
This blog proposes a novel differentiator worth your consideration. Although simple, the differentiator provides a fairly wide 'frequency range of linear operation' and can be implemented, if need be, without performing numerical multiplications.
Background
In reference [1] I presented a computationally-efficient tapped-delay line digital differentiator whose $h_{ref}(k)$ impulse response is:$$ h_{ref}(k) = {-1/16}, \ 0, \ 1, \ 0, \ {-1}, \ 0, \ 1/16 \tag{1} $$
and...
This blog discusses a little-known filter characteristic that enables real- and complex-coefficient tapped-delay line FIR filters to exhibit linear phase behavior. That is, this blog answers the question:
What is the constraint on real- and complex-valued FIR filters that guarantees linear phase behavior in the frequency domain?
I'll declare two things to convince you to continue reading.
Declaration #1: "That the coefficients must be symmetrical" is not a correct
If you need to compute inverse fast Fourier transforms (inverse FFTs) but you only have forward FFT software (or forward FFT FPGA cores) available to you, below are four ways to solve your problem.
Preliminaries
To define what we're thinking about here, an N-point forward FFT and an N-point inverse FFT are described by:$$ Forward \ FFT \rightarrow X(m) = \sum_{n=0}^{N-1} x(n)e^{-j2\pi nm/N} \tag{1} $$ $$ Inverse \ FFT \rightarrow x(n) = {1 \over N} \sum_{m=0}^{N-1}...
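One classic trick of this kind (shown here as an illustration; presumably among the four the blog describes) is conjugating both the input and the output of a forward FFT:

```python
import numpy as np

# Inverse FFT via a forward FFT: x(n) = conj( FFT{ conj(X) } ) / N.
def inverse_fft_via_forward(X):
    N = len(X)
    return np.conj(np.fft.fft(np.conj(X))) / N

x = np.array([1.0, 2.0, -1.0, 0.5])
X = np.fft.fft(x)
x_rec = inverse_fft_via_forward(X)
print(np.allclose(x_rec, x))  # True: the round trip recovers x
```

The identity follows directly from conjugating both sides of Eq. (1): conjugation flips the sign of the exponent, turning a forward kernel into an inverse one, and the 1/N scaling is applied afterward.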
Some time ago I was studying various digital differentiating networks, i.e., networks that approximate the process of taking the derivative of a discrete time-domain sequence. By "studying" I mean that I was experimenting with various differentiating filter coefficients, and I discovered a computationally-efficient digital differentiator. A differentiator that, for low frequency signals, has the power of George Foreman's right hand! Before I describe this differentiator, let's review a few...
This blog presents two very easy ways to test the performance of multistage cascaded integrator-comb (CIC) decimation filters [1]. Anyone implementing CIC filters should take note of the following proposed CIC filter test methods.
Introduction
Figure 1 presents a multistage decimate by D CIC filter where the number of stages is S = 3. The '↓D' operation represents downsampling by integer D (discard all but every Dth sample), and n is the time index.
If the Figure 3 filter's...
This blog discusses an accurate method of estimating time-domain sinewave peak amplitudes based on fast Fourier transform (FFT) data. Such an operation sounds simple, but the scalloping loss characteristic of FFTs complicates the process. We eliminate that complication by...
There are so many different time- and frequency-domain methods for generating complex baseband and analytic bandpass signals that I had trouble keeping those techniques straight in my mind. Thus, for my own benefit, I created a kind of reference table showing those methods. I present that table for your viewing pleasure in this blog.
For clarity, I define a complex baseband signal as follows: derived from an input analog x_bp(t) bandpass signal whose spectrum is shown in Figure 1(a), or...
Recently I was on the Signal Processing Stack Exchange web site (a question and answer site for DSP people) and I read a posted question regarding Goertzel filters [1]. One of the subscribers posted a reply to the question by pointing interested readers to a Wikipedia web page discussing Goertzel filters [2]. I noticed the Wiki web site stated that a Goertzel filter: "...is marginally stable and vulnerable to numerical error accumulation when computed using low-precision arithmetic and...
I just learned a new method (new to me at least) for computing the group delay of digital filters. In the event this process turns out to be interesting to my readers, this blog describes the method. Let's start with a bit of algebra so that you'll know I'm not making all of this up.
Assume we have the N-sample h(n) impulse response of a digital filter, with n being our time-domain index, and that we represent the filter's discrete-time Fourier transform (DTFT), H(ω), in polar form...
Most of us are familiar with the process of flipping the spectrum (spectral inversion) of a real signal by multiplying that signal's time samples by (-1)^n. In that process the center of spectral rotation is f_s/4, where f_s is the signal's sample rate in Hz. In this blog we discuss a different kind of spectral flipping process.
Consider the situation where we need to flip the X(f) spectrum in Figure 1(a) to obtain the desired Y(f) spectrum shown in Figure 1(b). Notice that the center of...
Earlier this year, for the Linear Audio magazine, published in the Netherlands whose subscribers are technically-skilled hi-fi audio enthusiasts, I wrote an article on the fundamentals of interpolation as it's used to improve the performance of analog-to-digital conversion. Perhaps that article will be of some value to the subscribers of dsprelated.com. Here's what I wrote:
We encounter the process of digital-to-analog...
This blog explains why, in the process of time-domain interpolation (sample rate increase), zero stuffing a time sequence with zero-valued samples produces an increased-length time sequence whose spectrum contains replications of the original time sequence's spectrum.
Background
The traditional way to interpolate (sample rate increase) an x(n) time domain sequence is shown in Figure 1.
Figure 1
The '↑ L' operation in Figure 1 means to...
This blog discusses two ways to determine an exponential averager's weighting factor so that the averager has a given 3-dB cutoff frequency. Here we assume the reader is familiar with exponential averaging lowpass filters, also called "leaky integrators", used to reduce noise fluctuations that contaminate constant-amplitude signal measurements. Exponential averagers are useful because they allow us to implement lowpass filtering at a low computational workload per output sample.
Figure 1 shows...
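As a sketch (assuming the standard single-pole form $y[n] = \alpha x[n] + (1-\alpha)\,y[n-1]$, whose frequency response is $H(\omega) = \alpha/(1-(1-\alpha)e^{-j\omega})$ — an assumption of mine, not a quote from the blog), the 3-dB cutoff for a given weighting factor can also be located numerically:

```python
import numpy as np

alpha = 0.1
w = np.linspace(1e-4, np.pi, 100_000)            # radians/sample
H = alpha / (1 - (1 - alpha) * np.exp(-1j * w))  # single-pole averager response
mag_db = 20 * np.log10(np.abs(H))                # 0 dB at DC since H(0) = 1

# |H| decreases monotonically, so the point nearest -3.01 dB is the cutoff.
w_3db = w[np.argmin(np.abs(mag_db + 3.01))]
print(w_3db / (2 * np.pi))   # cutoff as a fraction of the sample rate
```

For alpha = 0.1 the crossing lands near 0.0168 of the sample rate; a closed-form weighting-factor formula such as the blog derives should reproduce the same point.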
If you've read about the Goertzel algorithm, you know it's typically presented as an efficient way to compute an individual kth bin result of an N-point discrete Fourier transform (DFT). The integer-valued frequency index k is in the range of zero to N-1 and the standard block diagram for the Goertzel algorithm is shown in Figure 1. For example, if you want to efficiently compute just the 17th DFT bin result (output sample X(17)) of a 64-point DFT you set integer frequency index k = 17 and N =...
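As an illustration of that use case, here is a minimal Goertzel sketch (the standard recursion written from memory, assumed here rather than taken from the blog) computing bin k = 17 of a 64-point DFT:

```python
import cmath
import math

def goertzel(x, k):
    """Return the k-th N-point DFT bin of x, up to a unit-magnitude phase factor."""
    N = len(x)
    w = 2 * math.pi * k / N
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for sample in x:                  # s[n] = x[n] + 2cos(w) s[n-1] - s[n-2]
        s_prev2, s_prev = s_prev, sample + coeff * s_prev - s_prev2
    return s_prev - cmath.exp(-1j * w) * s_prev2

x = [math.sin(2 * math.pi * 17 * n / 64) for n in range(64)]
print(abs(goertzel(x, 17)))   # a unit-amplitude sine at bin 17 gives |X(17)| = N/2 = 32
```

The returned value differs from the textbook X(k) only by a unit-magnitude phase factor, so magnitudes — the usual quantity of interest in tone detection — are exact.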
This blog presents several interesting things I recently learned regarding the estimation of a spectral value located at a frequency lying between previously computed FFT spectral samples. My curiosity about this FFT interpolation process was triggered by reading a spectrum analysis paper written by three astronomers [1].
My fixation on one equation in that paper led to the creation of this blog.
Background
The notion of FFT interpolation is straightforward to describe. That is, for example,...
This blog is not about signal processing. Rather, it discusses an interesting topic in number theory, the magic of the number 9. As such, this blog is for people who are charmed by the behavior and properties of numbers.
For decades I've thought the number 9 had tricky, almost magical, qualities. Many people feel the same way. I have a book on number theory, whose chapter 8 is titled "Digits — and the Magic of 9", that discusses all sorts of interesting mathematical characteristics of the...
It works like this: say we have a real x_R(n) input bandpass...
There have been times when I wanted to determine the z-domain transfer function of some discrete network, but my algebra skills failed me. Some time ago I learned Mason's Rule, which helped me solve my problems. If you're willing to learn the steps in using Mason's Rule, it has the power of George Foreman's right hand in solving network analysis problems.
This blog discusses a valuable analysis method (well known to our analog control system engineering brethren) to obtain the z-domain...
I just encountered what I think is an interesting technique for multiplying two integer numbers. Perhaps some of the readers here will also find it interesting.
Here's the technique: assume we want to multiply 18 times 17. We start by writing 18 and 17, side-by-side in column A and column B, as shown at the top of Figure 1. Next we divide the 18 at the top of column A by two, retaining only the integer part of the division, and double the 17 at the top of column B. The results of those two...
I have read, in some of the literature of DSP, that when the discrete Fourier transform (DFT) is used as a filter the process of performing a DFT causes an input signal's spectrum to be frequency translated down to zero Hz (DC). I can understand why someone might say that, but I challenge that statement as being incorrect. Here are my thoughts.
Using the DFT as a Filter
It may seem strange to think of the DFT as being used as a filter, but there are a number of applications where this is...
|
In my experience, there are two primary methods of alpha generation. In both cases, assume we know what price is.
Method 1: Inference on what the price/payoff will be.
Method 2: Inference on what the underlying ("intrinsic") value is (i.e., what the price/payoff should be).
Generally, method 1 regresses variables (e.g., factors and/or anomalies) to infer what the price/payoff will be (and/or what the returns will be) within a given time frame. Under method 1, the speed and likelihood of convergence is implied.
Method 2 may be broadly referred to as "valuation". Canonically, it is underpinned by a conviction that price and value will at some point converge (read: margin of safety). As such, value investors are primarily concerned with real-world probabilities (since, in the risk-neutral world, $P_t \equiv \mathbb{E}\left[V_T\right]$ for an underlying asset). For example, in the equities world, analysts utilize various measures of net present value to infer what the price should be. Such methods include discounted cash flow analyses, precedent transactions, comps (i.e., peer group benchmarking), etcetera. However, valuation only tells us what the price should be, and nothing about the likelihood or rate of the price-value convergence. Even if we knew with absolute certainty that price would converge to value, this says nothing about when or how it will converge.
For example, let's say we have an instrument which continuously pays $X_\tau$ over the interval $(t,\infty)$. The NPV can then be expressed as:
$$\mathbb{E}\left[V_t \right] =\int_{t}^\infty m(\tau)X_\tau \,d\tau$$
where: $m(\tau)$ is the discount factor (i.e., "deflator").
This will give us an expected net present value, which tells us whether the instrument is under or overvalued versus its price, $P_t$. Canonically, we would interpret a large enough discrepancy between $V_t$ and $P_t$ as an opportunity to go long or short to capture the difference. But it appears that we do not have enough information to assess the rate of return (let alone the likelihood).
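As a toy numerical sketch of the NPV integral above (with my own assumed inputs: a constant payment stream $X_\tau = c$ and exponential discounting $m(\tau) = e^{-r\tau}$, for which the closed form is the perpetuity value $c/r$):

```python
import numpy as np

r, c = 0.05, 1.0            # assumed discount rate and payment rate
dtau = 0.001
tau = np.arange(0.0, 400.0, dtau)      # truncate the infinite horizon
npv = np.sum(np.exp(-r * tau) * c) * dtau   # Riemann sum of m(tau) * X_tau

print(npv)   # approaches the perpetuity value c / r = 20
```

Note the sketch exhibits exactly the gap described above: it prices the stream, but carries no information about when or how the market price would converge to that value.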
Are there any theories of value or valuation methods which indicate both the likelihood and rate of value-price convergence?
References are always appreciated.
|
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
|
How to evaluate the limit $$\lim_{n \to +\infty}(-1)^n\frac{n^n + \ln n}{\cos(n\pi)(n + \pi)^n}$$?
I was suspecting that the sequence does not converge, but the ratio test, which is the only one that I know, turned out to be useless. The limit of the ratio $\frac{a_{n+1}}{a_n}$ is $1$, so the test is inconclusive.
It turns out the result is $\boxed{\displaystyle e^{-\pi}}$, but I fail to see how one would reach it.
Any suggestions?
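Not an answer, but a numerical sanity check (my own sketch): note that $(-1)^n = \cos(n\pi)$ for every integer $n$, so the prefactor is exactly $1$, and only the magnitude $(n^n+\ln n)/(n+\pi)^n$ matters. Evaluating in log space avoids overflow of $n^n$:

```python
import math

def a_n(n):
    # (-1)^n / cos(n*pi) = 1 for every integer n, so only the magnitude matters;
    # the ln(n) term in the numerator is negligible next to n^n, so
    # log(n^n + ln n) ~ n*log(n); evaluate the ratio in log space
    return math.exp(n * (math.log(n) - math.log(n + math.pi)))

print(a_n(10**6), math.exp(-math.pi))
```

For $n=10^6$ the two printed values agree to about five decimal places, consistent with the claimed limit $e^{-\pi}$.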
|
The critical thing is to compute the probability of $A$ winning, the rest is standard and I'll omit it.
We can proceed recursively. Let $P_A(m,n)$ denote the probability that $A$ wins given that we start with $m$ non-diamonds and $n$ diamonds. Of course we want $P_A(39,13)$.
I'll assume we are working without replacement (the computation with replacement is similar but simpler). It is easy to deduce $P_B, P_C$ from $P_A$ as $$P_B(m,n)=\frac m{n+m}\times P_A(m-1,n)\quad \& \quad P_C(m,n)=\frac m{n+m}\times \frac {m-1}{m+n-1}\times P_A(m-2,n)$$
Recursively, consider the effect of the first draw. $A$ wins on the first draw with probability $\frac n{n+m}$. If $A$ misses on the first draw, $A$ then counts on $B,C$ also missing, at which point we're back at the start, only with $m$ replaced by $m-3$. The probability of all three missing in the first round is $\frac m{n+m}\times \frac {m-1}{n+m-1}\times \frac {m-2}{n+m-2}$. It follows that $$P_A(m,n)=\frac n{n+m}+\frac m{n+m}\times \frac {m-1}{n+m-1}\times \frac {m-2}{n+m-2}\times P_A(m-3,n)$$
For initial conditions, we have $$P_A(0,n)=1\quad P_A(1,n)=\frac n{n+1}\quad P_A(2,n)=\frac n{n+2}$$
Fixing $n=13$ we can use a spreadsheet or a program to compute $$P_A(39,13)\approx \boxed {.4313}$$
Hard to see a useful sanity check on this...of course for small $m$ we expect $P_A(m,13)$ to be very high, as $A$ has an excellent chance of winning on the first round. As $m$ gets extremely large, we expect $P_A(m,13)\to \frac 13$, since the first few draws are all expected to fail and have minimal impact on the distribution (so all three players start out very close to equal). But that really does require large $m$...trusting my (hastily written) code I see $P_A(300,13)\approx .3475$ which is at least getting there. And $P_A(1000,13)\approx .3376$ which, to me, strongly suggests that the code is working.
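A possible version of that program (my own sketch, not the answerer's code), using exact rational arithmetic via `fractions.Fraction`:

```python
from fractions import Fraction

def p_a(m, n):
    # P_A(m, n): probability that A wins, starting from m non-diamonds and n diamonds
    if m == 0:
        return Fraction(1)
    if m == 1:
        return Fraction(n, n + 1)
    if m == 2:
        return Fraction(n, n + 2)
    win_now = Fraction(n, n + m)
    all_miss = (Fraction(m, n + m) * Fraction(m - 1, n + m - 1)
                * Fraction(m - 2, n + m - 2))
    return win_now + all_miss * p_a(m - 3, n)

print(float(p_a(39, 13)))  # ≈ 0.4313
```

Running this reproduces the quoted value $P_A(39,13)\approx 0.4313$.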
|
Just wanted to provide a little color to @assylias' answer, which is a correct one.
The S&P 500 index is constructed according to an adjusted float weighted methodology, in which a change in the index level is defined by a
Laspeyres index:
$\frac{I + \Delta I}{I} = \frac{\sum_i P_{i,t+1}*Q_{i,t}}{\sum P_{i,t}*Q_{i,t}} \,; \forall i \in I$
where: $I$ is the index level;$P_i$ is the price of asset $i$; and,$Q_i$ is the float adjusted share count of asset $i$.
Please reference this following S&P document for a more robust definition: http://us.spindices.com/documents/methodologies/methodology-index-math.pdf
Therefore, the S&P earnings yield can be restated as follows:
$\frac{E}{I}= \frac{\sum_i e_{i,t}*Q_{i,t}}{\sum P_{i,t}*Q_{i,t}} \,; \forall i \in I$
Where now, $e_{i,t}$ is the earnings per share of constituent, $i$.
Also, the simple aggregation of Earnings over Market Cap will provide the yield of a cap-weighted index... it serves as a very good approximation of the modified float-weighted Laspeyres index. The sum-product of corporate earnings by the ratio of float to market cap (i.e., index weightings) would be an equivalent workthrough:
$E = \sum_i ( E_{i,t}*\frac{Q_{i,t} P_{i,t}}{I}) \,; \forall i \in I$
I cannot speak exactly to Multpl's approach, but my estimates using the aforementioned methodology converge with Multpl's using S&P Capital IQ data. As an aside, I've never been able to perfectly replicate the S&P 500 total return index over time, but my average absolute daily tracking error using the above methodology is less than $1 \times 10^{-6}\%$. I presume there are some adjustments missing or lags in my dataset. It stands to reason that results will vary somewhat depending on the selection of which float and which earnings to use. For example, expect different results if using GAAP earnings (i.e., net income) versus net income before non-controlling interests and extraordinary items. Furthermore, the calculation of float can be very nuanced.
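To make the restated yield formula concrete, here is a toy sketch (entirely hypothetical constituent data; the field names `eps`, `price`, and `float_shares` are mine, not S&P's):

```python
def earnings_yield(constituents):
    # E / I = sum(e_i * Q_i) / sum(P_i * Q_i) over index constituents,
    # with Q_i the float-adjusted share count
    total_earnings = sum(c["eps"] * c["float_shares"] for c in constituents)
    float_cap = sum(c["price"] * c["float_shares"] for c in constituents)
    return total_earnings / float_cap

toy = [
    {"eps": 5.0, "price": 100.0, "float_shares": 10e6},
    {"eps": 2.0, "price": 50.0, "float_shares": 20e6},
]
print(earnings_yield(toy))  # 0.045 on this made-up data
```

The same two sums, lagged one period in $Q_i$, give the Laspeyres index change described above.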
|
Suppose $N$ and $H$ are groups and $\phi: H \rightarrow \operatorname{Aut}(N)$ is a homomorphism. We know that $N \rtimes_{\phi} H = N \times H$ if and only if $\phi$ is trivial, but this question is a bit different.
Does $N \rtimes_{\phi} H \cong N \times H$ imply that $\phi$ is trivial?
My first idea is that there should be a counterexample, but I haven't been able to figure out anything yet.
Since nontrivial semidirect products are always nonabelian, we definitely need at least one of $N$ or $H$ nonabelian. I think finding a counterexample to the statement would also be equivalent to finding $G$ such that $G = NH = N'H'$ where
$N \cap H = N' \cap H' = 1$
$N \cong N'$ and $H \cong H'$
$N, N', H' \trianglelefteq G$ but $H$ is not normal in $G$
|
Date: 27.09.17 Times: 10:00 to 11:00 Place: IMUB-Universitat de Barcelona Speaker: David Martí-Pete University: Kyoto University
Abstract:
We study the parameter space of the complex standard family
$F_{\alpha,\beta}(z)=z+\alpha+\beta \sin z,$
where the parameter $0<\beta\ll 1$ is considered to be fixed and the bifurcation is
studied with respect to the parameter $\alpha\in\mathbb{C}$. In the real axis of that parameter plane one can observe the so-called Arnold tongues, and from them arise some finger-like structures which were observed for the first time by Fagella in her PhD thesis. Similar structures can also be observed in the parameter spaces of families of Blaschke products or Henon maps in higher dimension. We study the qualitative and quantitative aspects of the fingers via parabolic bifurcation.
This is a work in progress joint with Mitsuhiro Shishikura (Kyoto University).
|
Modal Logic Framework

Modal operators:
$\Box_R$ … necessarily
$\Diamond_R=\neg\Box_R\neg$ … possibly

Semantics: $\langle W,R\rangle$ … "Kripke frame", where
$W$ … (type of) worlds
$R\subseteq W\times W$ … binary accessibility relation
$wRv$ … "$v$ is accessible from $w$"
$P(w)$ … predicate
$(\Box_R P)(w)$ … $\forall v.\, wRv\rightarrow P(v)$
$(\Diamond_R P)(w)$ … $\exists v.\, wRv\land P(v)$

The modal operators are similar to $\forall$ and $\exists$, except instead of mapping predicates $P(w)$ to propositions (as in $\forall w.\,P(w)$), they map predicates to predicates (as in $(\Box P)(w)$).
An example I cooked up:
$W$ := legal board configurations in chess
$wRv$ := there is a game development from $w$ to $v$
call $w_0$ the initial position, for which $\forall v.\,w_0Rv$
We use modal operators on predicates of the current state to form predicates of what might happen:
$\Box$ = will hold till the end of the game
$\Diamond$ = might occur till the end of the game
(this makes for a sort of temporal logic, trivial in that it has no time steps)
$P(w)$ = in the current state $w$, you have a pawn on the board
$K(w)$ = in the current state $w$, you have a king on the board
$B(w)$ = in the current state $w$, you have a dark-squared bishop
We can formulate:
$B(w_0)$ … at the start, you have a dark-squared bishop
$\forall v.\,(\Box K)(v)$ … in all possible states, you do have a king on the board
$\neg(\Box P)(w_0)$ … it's not a given that some of your pawns will stay on the board till the end
$(\Diamond \neg P)(w_0)$ … equivalently, it might happen that at one point you have no pawns on the board
$\forall v.\,(\neg P(v)\land\neg B(v))\rightarrow (\Box\neg B)(v)$ … if you have no pawn on the board and no dark-squared bishop, then, till the end of the game, you'll have no dark-squared bishop
From the last line, using classical predicate logic on the body, we can make the following derivation:
$(\neg P(v)\land\neg B(v))\rightarrow (\Box\neg B)(v)$ … postulate
$\neg P(v)\rightarrow\neg B(v)\rightarrow (\Box \neg B)(v)$ … currying
$\neg(\neg B(v)\rightarrow (\Box\neg B)(v))\rightarrow \neg \neg P(v)$ … $A\rightarrow B \vdash \neg B\rightarrow\neg A$
$\neg(\neg B(v)\rightarrow (\Box\neg B)(v))\rightarrow P(v)$ … $\neg\neg A \vdash A$
$\neg(\neg\neg B(v)\lor (\Box\neg B)(v))\rightarrow P(v)$ … $A\rightarrow B \vdash \neg A\lor B$
$\neg(B(v)\lor (\Box\neg B)(v))\rightarrow P(v)$ … $\neg\neg A \vdash A$
$(\neg B(v)\land \neg(\Box\neg B)(v))\rightarrow P(v)$ … $\neg(A\lor B) \vdash \neg A\land\neg B$
$(\neg B(v)\land (\Diamond B)(v))\rightarrow P(v)$ … $\neg\Box\neg A \vdash \Diamond A$
… if you have no dark-squared bishop but it's still possible that it might happen, then you have a pawn on the board.
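The semantics above can be checked mechanically on a finite frame. A sketch I wrote (predicates are Python functions on worlds; the tiny frame is my own example, not from the notes):

```python
def box(R, P):
    # (box P)(w) := for all v, wRv implies P(v)
    return lambda w: all(P(v) for (u, v) in R if u == w)

def diamond(R, P):
    # (diamond P)(w) := there exists v with wRv and P(v)
    return lambda w: any(P(v) for (u, v) in R if u == w)

# tiny frame: 0 -> 1, 0 -> 2, 1 -> 2
W = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 2)}
P = lambda w: w == 2

print(diamond(R, P)(0))  # True: some successor of 0 satisfies P
print(box(R, P)(1))      # True: every successor of 1 satisfies P
# the duality diamond P = not box (not P) holds at every world
print(all(diamond(R, P)(w) == (not box(R, lambda v: not P(v))(w)) for w in W))
```

Note that at world 2, which has no successors, $\Box P$ holds vacuously while $\Diamond P$ fails, exactly as the quantifier definitions require.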
|
Many thermal boundary conditions are available in OpenFOAM. I will upload some basic cases that explain the usage of these boundary conditions.
Source Code: src/TurbulenceModels/compressible/turbulentFluidThermoModels/derivedFvPatchFields/convectiveHeatTransfer
It calculates the heat transfer coefficients from the following empirical correlations for forced convection heat transfer:
\begin{eqnarray} \left\{ \begin{array}{l} Nu = 0.664 Re^{\frac{1}{2}} Pr^{\frac{1}{3}} \left( Re \lt 5 \times 10^5 \right) \\ Nu = 0.037 Re^{\frac{4}{5}} Pr^{\frac{1}{3}} \left( Re \ge 5 \times 10^5 \right) \tag{1} \label{eq:NuPlate} \end{array} \right. \end{eqnarray} where \(Nu\) is the Nusselt number, \(Re\) is the Reynolds number and \(Pr\) is the Prandtl number.
externalCoupledTemperature
externalWallHeatFluxTemperature
This boundary condition can operate in the following two modes:
Mode #1: Specify the heat flux \(q\)
\begin{equation} -k \frac{T_p - T_b}{\vert \boldsymbol{d} \vert} = q + q_r \tag{2} \label{eq:fixedHeatFlux} \end{equation}
* \(k\): thermal conductivity
* \(q_r\): radiative heat flux
* \(T_b\): temperature on the boundary
Mode #2: Specify the heat transfer coefficient \(h\) and the ambient temperature \(T_a\) (Fig. 1)
\begin{equation} -k \frac{T_p - T_b}{\vert \boldsymbol{d} \vert} = \frac{T_a - T_b}{R_{th}} + q_r \tag{3} \label{eq:fixedHeatTransferCoeff} \end{equation}
* \(R_{th}\): total thermal resistance of convective and conductive heat transfer
\begin{equation} R_{th} = \frac{1}{h} + \sum_{i=1}^{n} \frac{l_i}{k_i} \tag{4} \label{eq:Rth} \end{equation}
fixedIncidentRadiation
lumpedMassWallTemperature
There is a dimensionless quantity called the
Biot number, which is defined as \begin{equation} Bi = \frac{l/k}{1/h} = \frac{hl}{k}, \tag{5} \label{eq:Biot} \end{equation} where \(h\) is the heat transfer coefficient, \(k\) is the thermal conductivity of a solid and \(l\) is the characteristic length of the solid. As the definition in Eq. \eqref{eq:Biot} indicates, it represents the ratio of the internal conduction resistance \(l/k\) and the external convection resistance \(1/h\). If the Biot number is small (\(Bi \ll 1\)), the solid may be treated as a simple lumped mass system of a uniform temperature. This boundary condition calculates the uniform temperature variation \(\Delta T\) on the boundary from the following equation: \begin{equation} m c_p \Delta T = Q \Delta t. \tag{6} \label{eq:lumpedmass} \end{equation}
* \(m\): total mass [kg]
* \(c_p\): specific heat capacity [J/(kg.K)]
* \(Q\): net heat flux on the boundary [W]
* \(\Delta t\): time step [s]
outletMappedUniformInletHeatAddition
totalFlowRateAdvectiveDiffusive
wallHeatTransfer
compressible::thermalBaffle1D
compressible::turbulentHeatFluxTemperature
compressible::turbulentTemperatureCoupledBaffleMixed
compressible::turbulentTemperatureRadCoupledMixed
compressible::alphatJayatillekeWallFunction
compressible::alphatPhaseChangeWallFunction
compressible::alphatWallFunction
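Several of the formulas above reduce to one-liners. The following is my own illustration of Eq. (1) and Eqs. (3) to (6), not OpenFOAM source code; the function names and sample numbers are mine:

```python
def nusselt_flat_plate(Re, Pr):
    # forced-convection flat-plate correlations, Eq. (1)
    if Re < 5e5:
        return 0.664 * Re ** 0.5 * Pr ** (1 / 3)   # laminar branch
    return 0.037 * Re ** 0.8 * Pr ** (1 / 3)       # turbulent branch

def total_thermal_resistance(h, layers):
    # R_th = 1/h + sum(l_i/k_i), Eq. (4); layers = [(thickness, conductivity), ...]
    return 1.0 / h + sum(l / k for l, k in layers)

def boundary_temperature(T_p, d, k, T_a, R_th, q_r=0.0):
    # solve Eq. (3), -k*(T_p - T_b)/d = (T_a - T_b)/R_th + q_r, for T_b
    a = k / d
    return (a * T_p + T_a / R_th + q_r) / (a + 1.0 / R_th)

def biot(h, l, k):
    # Bi = h*l/k, Eq. (5); Bi << 1 justifies the lumped-mass treatment
    return h * l / k

def lumped_dT(Q, dt, m, cp):
    # uniform temperature change per time step, Eq. (6): m*cp*dT = Q*dt
    return Q * dt / (m * cp)

print(boundary_temperature(T_p=300.0, d=1.0, k=1.0, T_a=400.0, R_th=1.0))  # 350.0
print(biot(h=10.0, l=0.01, k=400.0))  # 0.00025 (small: lumped treatment is fine)
```

In the symmetric test case the wall temperature lands halfway between cell and ambient values, which is a quick sanity check on the algebra behind Eq. (3).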
|
I will implement some calculators to estimate the proper settings for the boundary prism layer meshing.
Simple mathematical calculations, such as the four arithmetic operations and power, can be done using HTML and JavaScript.
Boundary Layer Mesh
\begin{align}
l_{n} = l_{1} r^{n-1} \end{align} \begin{align} l_{tot} = \sum_{k=1}^{n} l_{1} r^{k-1} = \frac{l_{1} \left( r^n - 1 \right)}{r - 1} \left( r \neq 1 \right) \end{align}
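The two formulas above translate directly into a small calculator. Here is a Python sketch of what the HTML/JavaScript version would compute (function and parameter names are mine):

```python
def prism_layers(l1, r, n):
    # thicknesses l_k = l1 * r^(k-1) and total height of the layer stack
    layers = [l1 * r ** (k - 1) for k in range(1, n + 1)]
    total = n * l1 if r == 1 else l1 * (r ** n - 1) / (r - 1)
    return layers, total

layers, total = prism_layers(l1=0.001, r=1.2, n=5)
print(layers, total)
```

For example, a first-layer height of 1 mm, growth ratio 1.2, and 5 layers gives a stack about 7.4 mm tall; the closed form always matches the direct sum.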
|
Hello I'm taking the course Algorithm and I have this problem:
Let $G=(V,E)$ be a directed graph. Let $U \subset V$ be a subset of $V$, and let $s,t \in V$ be two vertices such that $s \neq t$ and $s, t \not\in U$.
I need to write an algorithm that finds the shortest path from $s$ to $t$ which visits only two vertices in $U$. In this question I am allowed to visit a vertex more than once (the path does not have to be simple). My book says that I need to solve this with a reduction. Any suggestions?
I think I need to solve this with BFS...
|
June 28th, 2019, 12:53 PM
# 1
Newbie
Joined: Jun 2019
From: Italy
Posts: 7
Thanks: 0
Sum of number's number
Hey, I have a number theory problem: determine the sum of the digits of a natural number. I have been thinking about it for a long time these past days, but I still can't find a solution. Can someone give me an idea, or recommend a book/website with something useful?
Thanks a lot.
June 28th, 2019, 05:52 PM
# 4
Member
Joined: Oct 2018
From: USA
Posts: 93
Thanks: 66
Math Focus: Algebraic Geometry
This seems pretty brute force, wouldn't surprise me at all if there was something more elegant.
Let each digit $d_i \in \{0,1,2 \dots , 9\}$ be such that
$\displaystyle d_{1}d_{2}d_{3} \dots d_{N} = a \in \mathbb{N}$
(This is concatenation, but also $\displaystyle a= \sum_{i=1}^{N} d_{i}10^{(N-i)}$ )
Then we know $d_1 = floor \left(10^{-(N-1)}a \right)$
Now, we know $a-10^{(N-1)}d_{1} = d_{2}d_{3} \dots d_{N}$
thus $d_{2} = floor \left(10^{-(N-2)} \left(a-10^{(N-1)}d_{1} \right) \right)$
and $d_{3} = floor \left(10^{-(N-3)} \left(a-10^{(N-1)}d_{1} - 10^{(N-2)}d_{2} \right) \right)$
Continuing this strategy leaves
$\displaystyle d_{n} = floor \left( 10^{-(N-n)} \left( a - \sum_{i=1}^{n-1} d_{i}10^{(N-i)} \right) \right) $
So, the sum of digits should be
$\displaystyle \sum_{i=1}^{N}d_{i} = floor \left(10^{-(N-1)}a \right) + \sum_{n=2}^{N} floor \left( 10^{-(N-n)} \left( a - \sum_{i=1}^{n-1} d_{i}10^{(N-i)} \right) \right)$
For $N \geq 2$
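The floor-based scheme above is just repeated integer division. A direct transcription (my own sketch; I use `len(str(a))` for the digit count $N = \lfloor \log_{10} a \rfloor + 1$ to avoid floating-point issues at powers of ten):

```python
def digit_sum_floor(a):
    # peel digits left to right using d_n = floor(10^-(N-n) * remainder)
    N = len(str(a))  # = floor(log10(a)) + 1 for a >= 1
    total, remainder = 0, a
    for n in range(1, N + 1):
        d = remainder // 10 ** (N - n)   # the floor in the formula
        total += d
        remainder -= d * 10 ** (N - n)   # subtract d_n * 10^(N-n)
    return total

print(digit_sum_floor(9876))  # 30
```

This agrees with the obvious string-based digit sum for every input I tried.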
June 28th, 2019, 11:34 PM
# 5
Newbie
Joined: Jun 2019
From: Italy
Posts: 7
Thanks: 0
Thanks for your answer. I tried a different way:
-Let $\displaystyle d_i \in \{0,1,2 \dots , 9\}$ and $\displaystyle \displaystyle d_{1}d_{2}d_{3} \dots d_{N} = a \in \mathbb{N}$
-Now I know that the sum of the digits is generated by: $\displaystyle \sum_{i=1}^{\left \lfloor \log_{10}(A)+1 \right \rfloor}\frac{A\,\, mod\,\, 10^{i}-A\,\, mod\,\, 10^{i-1}}{10^{i-1}}$
-I expand the sum and I obtain: $\displaystyle \frac{A\, mod\,\, 10-A\,\, mod\,\, 1}{1}+\frac{A\, mod\,\, 10^{2}-A\,\, mod\,\, 10}{10}+\cdots +\frac{A\, mod\,\, 10^{i}-A\,\, mod\,\, 10^{i-1}}{10^{i-1}}$
-I have to simplify the denominator so: $\displaystyle \frac{1\cdot (A\, mod\,\, 10-A\,\, mod\,\, 1)}{1}+\frac{10\cdot (\frac{A}{10}\, mod\,\, 10-\frac{A}{10}\,\, mod\,\, 1)}{10}+\cdots +\frac{10^{i-1}\cdot (\frac{A}{10^{i-1}}\, mod\,\, 10-\frac{A}{10^{i-1}}\,\, mod\,\, 1)}{10^{i-1}}$
-I split the term: $\displaystyle A\, \, mod\, \, 10+\frac{A}{10}\, \, mod\, \, 10+\cdots+\frac{A}{10^{i-1}}\, \, mod\, \, 10-(A\, \, mod\, \, 1+\frac{A}{10}\, \, mod\; 1+\cdots +\frac{A}{10^{i-1}}\, \, mod\, \, 1)$
-Now I would have picked up $\displaystyle mod\,\,10$ and $\displaystyle mod\,\,1$: $\displaystyle \left (\sum_{i=1}^{\left \lfloor log_{10}(A)+1 \right \rfloor}\frac{A}{10^{i-1}}\right)\, \, mod\, \, 10-\left (\sum_{i=1}^{\left \lfloor log_{10}(A)+1 \right \rfloor}\frac{A}{10^{i-1}}\right)\, \, mod\, \, 1$; but I can't; some ideas to move forward?
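For what it's worth, the mod-based formula in the first step does compute the digit sum once the divisions are read as integer divisions. A quick check I wrote (again using `len(str(A))` for $\lfloor \log_{10} A \rfloor + 1$):

```python
def digit_sum_mod(A):
    # each term (A mod 10^i - A mod 10^(i-1)) / 10^(i-1) is exactly the digit d_i
    N = len(str(A))  # = floor(log10(A)) + 1 for A >= 1
    return sum((A % 10 ** i - A % 10 ** (i - 1)) // 10 ** (i - 1)
               for i in range(1, N + 1))

print(digit_sum_mod(9081))  # 18
```

The later algebraic steps break down because $\frac{A}{10^{i-1}}$ is no longer an integer, so $\bmod$ cannot be factored out of the sum; numerically, though, the original formula is sound.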
|
Here in this account I just want to make sure that I've grasped the concept of natural selection as it is usually spoken of by evolutionary biologists. Admittedly, the wording here is nonstandard and in some sense sloppy, yet I want to make sure that I've got this concept right.
My account on what constitutes natural selection and what's not: We say that there is a causal relationship from [A] to [B], to mean that there is a process in which A is a part of, that results in changes in B.
Also we can paraphrase the above by saying that there is a causal relationship that [A] has on [B]; or by saying that [B] is the consequence of a causal relationship from [A].
Now Natural selection seems to require those as prerequisites:
There is a causal relationship from some [hereditary material g] to the [survival and reproductive status of individuals harboring g].
More precisely and incorporating some standard terminology this is:
There is a causal relationship from a [genotype g] to [absolute fitness of g].
We call such hereditary material as fitness related hereditary material.
A variation exists between some fitness related hereditary material as regards their final causal effect on the degree of their fitness in a common environment $E$. Mathematically speaking, this is:
$\exists E, g_1, g_2( E \text { is an environment } \wedge g_1,g_2 \text { are fitness related hereditary material } \wedge g_1 \neq g_2 \wedge fit_E(g_1) \neq fit_E(g_2))$
In English: There exists an environment $E$ and $g_1, g_2$ where both are fitness related hereditary material such that $g_1$ is different from $g_2$ and fitness of $g_1$ in enviornment $E$ is different from fitness of $g_2$ in environment $E$.
Two fitness related hereditary material that have the same final effect on the degree of their fitness (even if through different causal mechanisms) in a common environment $E$, are said to be isofit$_E$, while those that differ are said to be anisofit$_E$. Formally:
$g_1 \ isofit_E \ g_2 \iff fit_E(g_1)=fit_E(g_2)$
$g_1 \ anisofit_E \ g_2 \iff fit_E(g_1) \neq fit_E(g_2)$
Of course anisofit fitness related hereditary material might even have a difference in the direction of their effect on fitness, so a positive direction means that "the causal relationship from the hereditary material to its fitness, is towards increasing its fitness"; while the opposite is for negative direction.
Now for every hereditary material $g$ the population of all individuals in environment $E$ that harbour $g$, is to be called the "$g$ population in $E$".
There is an environment $E$ that has two anisofit$_E$ fitness related hereditary material populations living in $E$.
If we have 1 and 2 and 3, then we can have Natural selection!
The reason is because
Natural selection is the differential (i.e., non-equal) size of populations of anisofit fitness related hereditary material, living under a common environment, that is caused by the differential causal relationship those hereditary materials have on survival and reproduction of individuals in their populations under that common environment.
The above serve as a definition of
natural selection, albeit a long one, but I think it captures the intended meaning given to that term by evolutionary biologists.
Now from the above definition we get to infer two properties that natural selection has:
Natural selection is never neutral. Note the "non-equal" part in the definition. The reason is that we have two anisofit fitness related hereditary material populations living under a common environment, and the difference in their size is attributable to the effect of those hereditary materials on their fitness in that environment; since those are anisofit, the sizes of their populations would clearly be different.
For the same reason outlined in 1, we expect natural selection to be "co-directional", i.e. the difference in sizes of the populations of those anisofit hereditary materials, must parallel the difference in the effect of those hereditary materials on their fitness in that common environment, so the population with the hereditary material causing "higher" fitness would have the "bigger" size! In other words natural selection increase the size of the population of the hereditary material that causes higher fitness on the expense of the size of the population of the hereditary material that causes lower fitness.
On the other hand, suppose we have some environment in which there are two anisofit fitness related hereditary material populations. Now suppose some environmental or recombinative genetic change afflicts those two populations, and that it works either in a neutral manner, i.e. it causes equal population sizes of those anisofit fitness related hereditary materials, or in an opposite-directional manner, i.e. in a direction opposite to the expected direction mentioned above (better termed "contra-directional"). In this situation, even if the size of the populations of those hereditary materials is different (imparting the appearance of a selection), that difference is not explained by the effect of those anisofit fitness related hereditary materials on their fitness in that environment! So this would not be an example of natural selection! It would be an example of an environmental factor that caused a "genetic drift", or of a genetic recombination process that likewise caused a "genetic drift".
So we in effect have a struggle between "natural selection" which works in the direction of increasing adaptation with the environment, on one hand, and "random selection" (or sometimes called neutral selection or non-selection) which doesn't necessarily work in the direction of increasing adaptation with the environment.
So in some sense "evolution" is determined by the struggle of those two kinds of mechanism of change.
If random change prevails, then evolution would not necessarily move in the direction of increasing adaptation of living organisms with their environment. While if "natural selection" prevails, then evolution would proceed in the direction of increasing adaptation to the environment.
|
Suppose $\left \{ x_{n} \right \}$ and $\left \{ y_{n} \right \}$ converge to the limits $x$ and $y$, respectively. Also, suppose that $y_{n}$'s are nonzero. I want to show that the sequence $\left \{ \frac{x_{n}}{y_{n}} \right \} \rightarrow \frac{x}{y}$.
Let $\epsilon >0$. We can pick an $N$ such that $n \geq N$ $\Rightarrow \left | \frac{x}{y} - \frac{x_{n}}{y_{n}}\right | < \epsilon$.
I start from $$ \left | \frac{x}{y} - \frac{x_{n}}{y_{n}}\right |$$
$$=\left | \frac{x}{y} - \frac{x}{y_{n}} + \frac{x}{y_{n}} - \frac{x_{n}}{y_{n}}\right | $$
$$= \left | \frac{x\left (y_{n}- y \right )}{y y_{n}} + \frac{\left ( x-x_{n} \right )}{y_{n}}\right | $$
$$\leq \left | \frac{x}{y} \right | \frac{1}{\left | y_{n} \right |} \left | y-y_{n} \right | + \frac{1}{|y_{n}|}\left | x-x_{n} \right | $$ by triangle inequality.
I know that I want to make the two terms $\frac{\epsilon}{2}$ + $\frac{\epsilon}{2}$. Since $y_{n}$ converges to $y$, we can make $\left | y - y_{n} \right |< \frac{\epsilon \left | y \right |}{\left ( \left | x \right | + 1\right )}$ for some $N_{1}$.
But the $y_{n}$ term in the denominators is giving me a problem, which I can't get rid of.
Any suggestions on how to proceed from here (or approaching the problem from a different angle) would be greatly appreciated.
|
I'm reading Intro to Topology by Mendelson.
The problem at hand is,
Show that $\text{Bdry}(A)=\emptyset$ if and only if $A$ is closed and open.
This was all the problem statement had, but I'm in the chapter covering closure, interior and boundary with respect to topological spaces.
Here is my attempt at the proof,
Suppose that Bdry$(A)=\emptyset$. Then $\bar{A}=A$ $\cup$ Bdry$(A)=A\cup\emptyset=A$. Hence, $\bar{A}=A$, which implies that $A$ is closed. To show that $A$ is also open we will show that $\overline{C(A)}=C(A)$, that is, $C(A)$ is closed and hence $A$ open. We already know that $C(A)\subset\overline{C(A)}$, thus it suffices to only show that $\overline{C(A)}\subset C(A)$. This is the case since we know that Bdry$(A)=\bar{A}\cap\overline{C(A)}=\emptyset$, which implies that $\overline{C(A)}\subset C(\bar{A})=$ Int$(C(A))\subset C(A)$. Thus, $A$ is open. Suppose now that $A$ is both open and closed. Since $A$ is open we know that Int$(A)=A$. Also, since $A$ is closed, $\bar{A}=A=$ Int$(A)$. Now, using $\overline{C(A)}=C($Int$(A))$, we know that Bdry$(A)=\bar{A}\cap\overline{C(A)}=$ Int$(A)$ $\cap$ $C($Int$(A))=\emptyset.$
I used the identity $\bar{A}=A\cup\text{Bdry}(A)$, which was asked later in the problem set. I'm wondering if I should use this, since I was able to prove it, or attempt the proof assuming I'm unaware of the identity. I have also been trying to write more concise proofs and this one is definitely not one of them. Would removing some of the words help or is there a cleaner approach that can be pointed out?
Thanks for any feedback!
|
I am having problems with part 2 but provided part 1 for context - I believe I have part 1 correct but could be mistaken.
Let a Young-slits apparatus have slits separated by $d$. Let the incoming light contain frequencies in range $\Delta\nu$ centred on $\nu_0$. We wish to be able to see clearly ten fringes, five each side of the pattern's centre.
1) Suppose that we set the following criterion for "seeing clearly": in the vicinity of the fifth fringe, where $d\sin\theta=p\lambda_0=pc/\nu_0$ with $p=5$, the order of interference $p$ shall be permitted to cover a range $\frac 12$. Show that this permits $\Delta \nu/\nu_0 = 1/10$
Ans: We are told the path difference can vary by half an order $p$, or half a wavelength in terms of $\lambda_0$. The way I thought about it was: the actual path length from the slits to the fringe for $p=5$ is defined by $\theta$ ($PD=5\lambda_0=5c/\nu_0$), but after a change in frequency/wavelength the wave travels an extra (or lesser) distance of half a wavelength of the original $\lambda_0$ relative to the original light. So the change in frequency gives rise to an accumulation of $\pi$ radians of phase over the original path to the $p=5$, $\lambda_0$ fringe. In terms of wavenumbers:
$$\Delta k \cdot PD=\pi$$ $$2\pi\frac{\Delta\nu}{c} \cdot \frac{5c}{\nu_0} = \pi$$ $$\frac{\Delta\nu}{\nu_0}=\frac{1}{10}$$
2) The condition set in part (1) is unduly pessimistic. Show that the fringe contrast in the vicinity of the fifth bright fringe is zero if the range permitted to $p$ is 1. Show that this permits $\Delta\nu/\nu_0=1/5$ -- the hint given here asks one to "remember the elementary way for locating the first zero in the diffraction pattern due to a single slit"
-- I do not understand how to show the contrast is zero near the vicinity of the fifth bright fringe - nor really what this means, I imagine it is something to do with the hint, I have seen this elementary example of rays cancelling across either side of the slit but don't see how to apply it... Please do not show me how to plug in p=1 and obtain the 1/5 ratio.
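Not a full solution, but here is the standard band-averaging identity the hint points at, as a numeric illustration of what "contrast" means (my own sketch): for a flat spectral band of width $\Delta\nu$, averaging the fringe term $\cos(2\pi\nu\,PD/c)$ over the band multiplies it by $\operatorname{sinc}(\pi\,\Delta\nu\,PD/c)$, so near the order-$p$ fringe the visibility is $|\operatorname{sinc}(\pi p\,\Delta\nu/\nu_0)|$. The frequency across the band plays the same role as position across a single slit in the elementary diffraction argument.

```python
import math

def visibility(dnu_over_nu0, p=5):
    # fringe visibility at interference order p for a flat band of width dnu,
    # assuming the sinc-averaging identity described above
    x = math.pi * p * dnu_over_nu0
    return 1.0 if x == 0 else abs(math.sin(x) / x)

print(visibility(1 / 10))  # part-(1) criterion: fringes still clearly visible
print(visibility(1 / 5))   # contrast vanishes at the fifth fringe
```

The first zero of the sinc is where the contrast disappears, which is what "fringe contrast is zero in the vicinity of the fifth fringe" refers to.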
|
Many people (in different texts) use the following famous definition of the determinant of a matrix $A$: \begin{align*} \det(A) = \sum_{\tau \in S_n}\operatorname{sgn}(\tau)\,a_{1,\tau(1)}a_{2,\tau(2)} \ldots a_{n,\tau(n)}, \end{align*} where the sum is over all permutations of $n$ elements over the symmetric group. None of them actually explains how one
interprets this definition, so this makes me suspicious and think they don't know it either.
This is what I understand so far:
Definition: A permutation $\tau$ of $n$ elements is a bijective function having the set $ \left\{1, 2, ..., n\right\}$ both as its domain and codomain. The number of permutations of $n$ elements, and hence the cardinality of the set $S_n$ is $n!$
So for example, for every integer $i \in \left\{1, 2, ..., n\right\}$ there exists exactly one integer $j \in \left\{1, 2, ..., n\right\}$ for which $\tau(j) = i$.
Permutations can also be represented in matrices, for example if $\tau(1) = 3, \tau(2) = 1, \tau(3) =4, \tau(4) =5, \tau(5) =2$, then \begin{align*} \tau = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 3 & 1 & 4 & 5 & 2 \end{pmatrix}. \end{align*}
Definition: Let $\tau \in S_n$ be a permutation. Then an inversion pair $(i,j)$ of $\tau$ is a pair of positive integers $i, j \in \left\{1, 2, ..., n\right\}$ for which $i < j$ but $\tau(i) > \tau(j)$.
This determines how many elements are 'out of order'. For example if $\tau = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 3 & 2 \end{pmatrix}$, then $\tau$ has one single inversion pair $(2,3)$, since $\tau(2) = 3 > \tau(3) = 2$.
Definition: A transposition, called $t_{ij}$, is the permutation that interchanges $i$ and $j$ while leaving all other integers fixed in place. The number of inversions in a transposition is always odd, because one can compute that the number of inversion pairs in $t_{ij}$ (with $i<j$) is exactly $2(j-i)-1$. Definition: Let $\tau \in S_n$ be a permutation. Then the sign of $\tau$, denoted by sign$(\tau)$, is defined by \begin{align*} sign(\tau) = (-1)^{\text{# of inversion pairs in}\ \tau} \end{align*} This is $+1$ if the number of inversions is even, and $-1$ if the number is odd. Every transposition is an odd permutation.
This is all clear to me, but can someone explain to me, in an understandable fashion, how one interprets the definition of the determinant on the basis of all this information? That would be greatly appreciated (not only by me, but I think by many others aswell).
For example: what do I make of the $a_{1,\tau(1)}$ etc. in the definition of the determinant, all the way up to $n$? What do they represent?
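One way to make the formula concrete is to run it on small matrices. Each factor $a_{i,\tau(i)}$ picks exactly one entry from row $i$, namely the one in column $\tau(i)$, so every term of the sum is a product of $n$ entries with one entry from each row and each column. A sketch I wrote (with 0-based indices, so $a_{1,\tau(1)}$ becomes `A[0][t[0]]`):

```python
from itertools import permutations
from math import prod

def sgn(perm):
    # (-1)^(number of inversion pairs), exactly as in the definitions above
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(A):
    # sum over all permutations t of sgn(t) * a[0][t[0]] * ... * a[n-1][t[n-1]]
    n = len(A)
    return sum(sgn(t) * prod(A[i][t[i]] for i in range(n))
               for t in permutations(range(n)))

print(det([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```

For $n=2$ the two permutations give the familiar $a_{11}a_{22}-a_{12}a_{21}$, which is what the $a_{1,\tau(1)}a_{2,\tau(2)}\ldots$ notation encodes in general.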
|
I want an intuition of how this set is constructed more than a formal proof. At first I thought that the set was simply defined axiomatically, but further reading showed me that there had been attempts to construct the set explicitly, like the construction from Cauchy sequences. So what are the real numbers? An axiomatically defined set? The completion of the rational numbers? Both? Something else? Thanks a lot in advance!
In mathematics, we often don't really care what something "is" in some fundamental sense, but what its properties are. In this way, we may view the real numbers as any complete, ordered field $\mathbb{R}$ which contains the rational numbers as an ordered subfield.
At this point, you may have two questions: does any set of real numbers $\mathbb{R}$ exist and, if so, are there more than one which are "different" from each other. As you alluded to, there exist several constructions, most famously in terms of Dedekind cuts and Cauchy sequences of rational numbers, which show the existence of a set of real numbers. It is also true that the real numbers are unique. That is, if I have two complete, ordered fields $\mathbb{R}, \mathbb{F}$ containing the rationals as an ordered subfield, then there exists an order-preserving isomorphism between them. In other words, $\mathbb{R}$ and $\mathbb{F}$ are the "same".
Note that the Cauchy sequence construction shows that the real numbers are the metric completion of $\mathbb{Q}$ under the metric $d(x,y) = |x-y|$, so we may additionally view $\mathbb{R}$ as the metric completion of $\mathbb{Q}$.
What is worth noting is that, for any practical matters, it doesn't matter what model of the real numbers you use. Whether real numbers are equivalence classes of Cauchy sequences, Dedekind cuts, or something else doesn't affect the actual properties of the real numbers that you care about for analysis.
There are many answers to this question on the site but I'll give a fairly intuitive (to me) one here, starting from natural numbers (go to paragraph 4 for just rationals to reals):
Start with natural numbers, that is, whole numbers or counting numbers starting from 0. For every natural number $n$ that isn't zero, define $-n$ to be a new number so that $n + (-n) = 0$. The set of all these numbers together makes the integers. Note that if we add or multiply any two numbers in this set, we get a third number that's also already in the set.
Then, for any integer $z$ that isn't zero, define $z^{-1}$ to be a new number so that $z\cdot z^{-1} = 1$. If you add or multiply numbers from this enlarged set, you might get something not yet in it, so for any integer $z$ and natural number $n \neq 0$, define a new number $z\cdot n^{-1}$. The set of these numbers is called the rational numbers, and any sum or product of these numbers is another rational number.
Now, you might be tempted to believe that these are all the numbers there are. In between any two rational numbers you can find infinitely many rational numbers, and you can pick two rational numbers to be as close as you want. However, you want a number system in which you can find a number $r$ with $r^2 = 2$ (or $r^n = m$ for natural numbers $m,n$), and it can be shown that no rational number does this. In fact, we can show that a number solving this equation has a non-repeating decimal expansion, while all rational numbers have repeating expansions (with terminating decimals ending in repeating $0$'s). So how can we get all the numbers with infinite decimal expansions?
Start with a non-repeating sequence of natural numbers from $0$ to $9$, called $d_n$. Then, define a new sequence $a_n = \sum_{i=0}^n d_i\times10^{-i} = d_0.d_1d_2d_3 \dots d_n$. Then, as $n$ gets bigger, $a_n$ gets closer and closer to a particular number $a$. While each $a_n$ is rational, $a$ is not rational since it requires an infinite number of digits, so we define $a$ to be a new number. The set of numbers we have now is the real numbers, and it can be shown to satisfy all the "axioms" of real numbers you've learned. Every continuous function $f$ which has positive and negative values for some real numbers has a solution to $f(a)=0$, so most of our "intuitively solvable" kinds of equations can be solved with these numbers.
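The digit-by-digit construction can be made concrete with exact rational arithmetic. Here is a small Python sketch (the target $\sqrt{2}$ and the helper names are my own choices) that greedily picks each decimal digit $d_k$ so the rational partial sums $a_n$ increase toward the irrational limit:

```python
from fractions import Fraction

def sqrt2_digits(n):
    """First n decimal digits d_0.d_1 d_2 ... of sqrt(2), chosen greedily so
    that each rational partial sum a_k satisfies a_k^2 <= 2."""
    digits = []
    a = Fraction(0)
    for k in range(n + 1):
        step = Fraction(1, 10 ** k)
        d = 0
        # largest digit d keeping (a + d * step)^2 <= 2
        while (a + (d + 1) * step) ** 2 <= 2:
            d += 1
        a += d * step
        digits.append(d)
    return digits, a

digits, a = sqrt2_digits(6)
print(digits)    # the digit sequence d_0, d_1, ..., d_6
print(float(a))  # the rational partial sum a_6, close to sqrt(2)
```

Each `a` is a genuine rational number, yet the sequence converges to a number that is not rational; the real numbers are exactly what you get by adjoining all such limits.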
However, these numbers don't do everything, as something like $a^2 = -1$ doesn't have a solution ($f(a) = a^2+1 > a^2 \geq 0$, so $f(a) \neq 0$ for any real $a$). We use this to define complex numbers, which themselves can't solve every equation, such as $(ix-xi)^2+1=0$.
The real numbers $\mathbb{R}$ are the unique complete ordered field up to isomorphism; $\mathbb{R}$ has cardinality $2^{\mid \mathbb{N} \mid}$.
You can construct the natural numbers $\mathbb{N}$ recursively, from which we construct $\mathbb{Z}$, the integers; next we construct $\mathbb{Q}$, the rationals; and finally from $\mathbb{Q}$ we construct $\mathbb{R}$, the real numbers, via Cauchy sequences, Dedekind cuts, or another method.
There are multiple ways to construct the real numbers. As you have mentioned, they can be viewed as a complete ordered field. Complete here means that every nonempty set that is bounded above has a least upper bound.
There is my favorite way, which is to view $\mathbb{R}$ as the completion of $\mathbb{Q}$ with respect to the metric $|x-y|$. Completion here means that every Cauchy sequence converges, with the limit living in your space. This procedure is nice since it generalizes to metric spaces: every metric space has a completion. Depending on where you are at, you should prove this! There are books that walk you through this in the exercises; Royden is one of them.
You can also construct the reals by Dedekind cuts. I personally do not know much about this but I am sure a quick google search will produce many resources.
The simplest and most intuitive approach to the construction of the real numbers (involving the least technical machinery) is the one given by Dedekind in his famous pamphlet Stetigkeit und irrationale Zahlen. An English translation, under the name Continuity and Irrational Numbers, is a very good read. Unfortunately, modern textbooks of analysis present this construction in a very boring manner, full of symbolism/formalism, and you should seriously avoid such presentation/treatment.
Further it has been the trend of modern textbooks to disregard the importance of construction of real numbers so much so that most textbooks would rather convince you that it is a totally pointless exercise.
It is not really important to study a formal proof of the construction of the real numbers (a large part of it is very boring and a waste of time). But there are two things which every student of analysis must understand:
1) The relation of real numbers to rationals. Just like we can define rationals as ratios of two integers and get all properties of rationals from this simple fact, it should be possible to define (and make sense of) real numbers in terms of rational numbers.
2) One must know how the real numbers satisfy the completeness property, based on whatever way we define them in terms of rational numbers. Axiomatizing this part is more akin to memorizing theorems without proofs (and learning to apply the theorems without their proofs).
In other words, we should not be so worried about proving that the real numbers form an ordered field, but we must be concerned with how they form a complete ordered field. The boring parts of the construction of the real numbers are those proving that they form an ordered field. The part about completeness is never boring if presented in the right fashion (i.e. first showing the inadequacy of the rationals in terms of completeness and then showing how to fix this problem), and it is the key to all the significant theorems in elementary analysis/calculus.
Apart from Dedekind's original pamphlet, the only good (meaning not boring) treatment of the construction of the real numbers which I have found is in Chapter $1$ of G. H. Hardy's classic A Course of Pure Mathematics.
|
X-ray Reflectivity Measurements and Landau Theory of Smectic Wetting in Liquid Crystal–Benzyl Alcohol Mixtures

Authors: Kellogg, G. J.; Pershan, Peter S.; Kawamoto, E. H.; Foster, W.; Deutsch, Moshe; Ocko, B. M.

Published Version: https://doi.org/10.1103/PhysRevE.51.4709

Citation: Kellogg, G. J., Peter S. Pershan, E. H. Kawamoto, W. Foster, Moshe Deutsch, and B. M. Ocko. 1995. X-ray reflectivity measurements and Landau theory of smectic wetting in liquid crystal–benzyl alcohol mixtures. Physical Review E 51(5): 4709–4726.

Abstract: Smectic layering has been observed at the free surface of decylcyanobiphenyl (10CB) and dodecylcyanobiphenyl (12CB) and of mixtures of 10CB and 12CB with benzyl alcohol (BA). The effect of BA is to suppress the bulk isotropic–smectic transition temperature T\(_{IA}\) and the surface layer "transition" temperatures T\(_j\), as well as to sharpen these surface transitions by reducing the temperature range \(\Delta\)T over which layers grow. The observed sharpening appears to saturate for concentrations x\(\geq\)0.118. A Landau theory for the growth of a single layer j has been developed, in which the free energy contains a term coupling the concentration x to the local smectic order parameter \(\psi_j\) such that F\(_j \sim\) x\(\psi^{2}_{j}\). Applying this theory to pure 12CB and eight mixtures of 12CB–BA, we find that the j=1\(\rightarrow\)j=2 transition is continuous in pure 12CB, but that the addition of small amounts of impurity drives the transition first order.

Other Sources: https://alumnus.alumni.caltech.edu/~bill/foster_pre.pdf

Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA

Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:10357484
|
Assume $\mathcal{S} := \{0, 1, \cdots \}$, $p(0,1)=1$ and $p(n,0)=p(n,n+1)=\frac{1}{2}$ for $n=1,2, \cdots$. Is $0$ recurrent or transient?
So, basically this is an irreducible, closed, but infinite Markov chain. Hence, we know that either all states are recurrent or all are transient.
If I define $\rho_{n,0} := P(T_0 < \infty \; | \; X_0 = n) $ where $T_0 := \inf \{ n \ge 1 : X_n =0 \}$, we will have the following recursive relation
\begin{align} \rho_{n,0} = \frac{1}{2}(\rho_{n+1,0} + 1) \end{align}
so it seems that there is always a positive probability of not coming back to $0$ (simply keep moving to the right). Can we then argue that $0$ is transient?
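One way to probe this numerically is to simulate excursions from $0$ and estimate $P(T_0 < \infty \mid X_0 = 0)$. A small Python sketch (the step cap is an artificial truncation of my choosing; from $0$ the chain moves to $1$ with probability $1$, and thereafter each step returns to $0$ with probability $1/2$):

```python
import random

def returns_to_zero(max_steps=10_000):
    """Simulate one excursion of the chain started at 0.

    From 0 the chain moves to 1; from any n >= 1 it jumps back to 0 with
    probability 1/2 and moves right to n+1 with probability 1/2.
    Returns True if 0 is revisited within max_steps.
    """
    for _ in range(max_steps):
        if random.random() < 0.5:
            return True  # jumped back to 0
    return False         # kept moving right the whole time

random.seed(0)
trials = 10_000
hits = sum(returns_to_zero() for _ in range(trials))
print(hits / trials)  # empirical estimate of P(T_0 < infinity | X_0 = 0)
```

Note that not returning within $k$ steps requires $k$ consecutive right-moves, which has probability $2^{-k}$, so the simulated estimate comes out at (essentially) $1$.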
|
$ST\text{-}CONN = \{(G,s,t) \mid G \text{ is a directed graph and there is a path from } s \text{ to } t\}$
I've learned the following deterministic algorithm, which solves the problem in $\mathcal{O}(\log^2 n)$ space:

$\psi(G, s, t, k):$
$\hspace{1cm}$if $k == 1$: return 1 if there is an edge from $s$ to $t$, else return 0
$\hspace{1cm}$foreach $v \in V(G)$:
$\hspace{2cm}L = \psi(G, s, v, \lceil\frac{k}{2}\rceil)$
$\hspace{2cm}R = \psi(G, v, t, \lfloor\frac{k}{2}\rfloor)$
$\hspace{2cm}$if $L == 1$ and $R == 1$: return 1
$\hspace{1cm}$return 0

The professor said in class that the algorithm takes at most $\mathcal{O}(\log^2 n)$ space, where $n$ is the number of vertices. But don't we pass the entire graph on each call? I'd expect to see a $|V|+|E|$ factor in the big-O, or maybe I'm missing something here.
You don't need to pass $G$ every time, since it doesn't change across calls. You can think of it as a global variable, which is stored only on the (read-only) input tape. The other parameters take only $O(\log n)$ bits each, and since the depth of the call stack is also $O(\log n)$, in total this uses $O(\log^2 n)$ space on the work tape.
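The point is easier to see when only $(s, t, k)$ vary across recursive calls while the graph stays fixed. Here is a sketch in Python (the adjacency-dict representation and the $s = t$ base case are my additions; the base case is needed so a midpoint can coincide with an endpoint):

```python
def reach(adj, s, t, k):
    """Savitch-style recursion: is there a path from s to t of length <= k?

    adj plays the role of the fixed input (a dict mapping each vertex to its
    out-neighbours); only s, t, k live on the recursion stack, mirroring the
    O(log n) frames of O(log n) bits each in the space-bounded model.
    """
    if k <= 1:
        return s == t or t in adj.get(s, set())
    return any(
        reach(adj, s, v, (k + 1) // 2) and reach(adj, v, t, k // 2)
        for v in adj  # try every vertex as the path's midpoint
    )

# Tiny example graph (my own, for illustration): a directed path 0 -> 1 -> 2 -> 3.
adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(reach(adj, 0, 3, len(adj)))  # True
print(reach(adj, 3, 0, len(adj)))  # False
```

In a RAM-model simulation like this, the recursion depth is $O(\log k)$ and each frame stores a constant number of vertex names, which is exactly the $O(\log^2 n)$ accounting from the answer.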
|
The wording in this article is a little ambiguous. I thought of two interpretations, the first of which is incorrect. The second is correct but it doesn't explain the bit about "preservation of the positive cone".
It looks like it may be a case of mistakenly using that a mapping fixes a subset when it only preserves a subset. (I.e., $f|_X = \text{id}_X$ vs. $\text{im}(f) \subset X$.)
Interpretation 1. Maybe the statement below is being claimed:
(*) Let $C$ be the positive cone. If $P^tv = v$ then for all $v^+,v^- \in C$ such that $v = v^+ - v^-$ we have $P^tv^+ = v^+$.
This is false unless $P$ is the identity matrix. Let $x \in C$; then according to (*), $\bar v^\pm = v^\pm + x$ must satisfy $P^t\bar v^+ = \bar v^+$ as well, and linearity then gives $P^tx = x$ too. So $P^t$ fixes every vector in the positive cone. Since the span of the positive cone is the whole space, this forces $P$ to be the identity matrix.
Interpretation 2. Maybe instead it means to set $v^+$ to be the vector of positive entries of $v$ with 0s in place of negatives. E.g. if $v = (1,0,2,-7)$ then $v^+ = (1,0,2,0)$ and $v^- = (0,0,0,7)$. Then the claim would be:
If $P^tv = v$ then we have $P^tv^+ = v^+$ where $v^+$ is the vector of positive entries described above.
This is true, but I don't think the cited article offers any explanation as to why, and I don't know how to prove it without using Frobenius-Perron, which is maybe a harder theorem than the one we are trying to prove.
It is a trivial consequence of Frobenius-Perron in the case of an irreducible stochastic matrix, because one has either $v = v^+$ or $v = v^-$. This is because there is a stationary state $v$ (by F-P) and the eigenspace for $\lambda = 1$ is simple (also F-P). So any invariant vector is a scalar multiple of it and also has this property.
For reducible matrices the eigenspace for $\lambda = 1$ is no longer simple, so we can do things like $v = v_1 - v_2$ where $v_i$ is the stationary state for the $i$th block. Then $v^+ = v_1, v^- = v_2$. Following the suggestion in the article, one would then find a stationary distribution by normalizing just the positive part $v^+ = v_1$.
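This block-diagonal picture is easy to verify concretely. A small pure-Python sketch (the specific $4\times 4$ matrix is my own example) with a reducible stochastic matrix whose invariant vector $v = v_1 - v_2$ has an invariant positive part:

```python
def mat_T_vec(P, v):
    """Compute P^T v, i.e. (P^T v)_j = sum_i P[i][j] * v[i]."""
    n = len(v)
    return [sum(P[i][j] * v[i] for i in range(n)) for j in range(n)]

# A reducible (block-diagonal) row-stochastic matrix with two closed classes.
P = [[0.5, 0.5, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.5, 0.5]]

v1 = [0.5, 0.5, 0.0, 0.0]            # stationary state of the first block
v2 = [0.0, 0.0, 0.5, 0.5]            # stationary state of the second block
v = [a - b for a, b in zip(v1, v2)]  # invariant under P^T, but sign-mixed

v_plus = [max(x, 0.0) for x in v]    # entrywise positive part, here equal to v1
print(mat_T_vec(P, v) == v)          # True: v is invariant
print(mat_T_vec(P, v_plus) == v_plus)  # True: so is its positive part
```

Normalizing `v_plus` then recovers the stationary distribution supported on the first block, which is exactly the procedure suggested in the article.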
|
Rotations of second-rank tensors are common in quantum mechanics and nuclear magnetic resonance (NMR) theory. The sums can often be reduced to compact real expressions, but when many sums exist, the simplification can be challenging to do by hand. For example,
$V_{20}' = \sum_m V_{2m} D_{m0}^{(2)} (\phi, \theta, 0)$
has 5 terms. The simplified result should be:
$V_{20}' = V_{20} \cdot \tfrac{1}{2} (3\cos^2\theta - 1) + \sqrt{\tfrac{3}{8}}\, (V_{22} + V_{2-2}) \sin^2\theta \cos 2\phi$
The tensor is symmetric and $V_{2\pm1}=0$.
I've used Mathematica's built-in Wigner matrices as well as my own, and I can't simplify the equation to yield a compact expression. For example,
Sum[Subscript[V, 2, m]* WignerD[{2, m, 0}, \[Phi], \[Theta], 0], {m, -2, 2, 2}] // FullSimplify
yields
$\frac{1}{4} \left(V_{2,0} (3 \cos (2 \theta )+1)+\sqrt{6} e^{-2 i \phi } \sin ^2(\theta ) \left(V_{2,-2}+e^{4 i \phi } V_{2,2}\right)\right)$
This expression is correct, but it is not compact in terms of trigonometric functions. The expression contains complex exponentials whose imaginary components cancel, and the second Legendre polynomial is obscured in the $m=0$ term. This example can be solved by hand, but rotations with 25 or 125 components are not uncommon, and it would be immensely helpful to simplify these expressions automatically. Are there techniques or tools in Mathematica to more easily work with sums of Wigner matrices?
Thank you,
Justin
|
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
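For what it's worth, the conversion is a one-liner; a minimal Python sketch (the function name is mine, and the units of the path difference and wavelength just have to match):

```python
import math

def phase_difference(path_difference, wavelength):
    """Phase difference = k * path difference, with k = 2*pi / wavelength."""
    k = 2 * math.pi / wavelength
    return k * path_difference

# A path difference of half a wavelength gives a phase difference of pi.
print(math.isclose(phase_difference(0.5, 1.0), math.pi))  # True
```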
|