In the field of linear algebra there is a variety of matrix types, each with its own definition and relevance. I had trouble finding a good overview online and thought I'd compile a list myself. This article lists a selection of matrix types together with their definitions, mostly based on the corresponding Wikipedia articles. Generally, I recommend The Matrix Cookbook for concise facts about matrices and this figure on $n\times n$ matrices.

A matrix is diagonal (wiki) if all entries outside the main diagonal are zero: $$A_{i,j}=0\Leftarrow i\ne j$$

A square matrix (wiki) is a matrix with the same number of rows and columns, e.g. $n\times n$.

An identity matrix $\boldsymbol{I}_n$ (wiki) is a diagonal square matrix whose entries on the main diagonal are one: $$I_{i,j}=\begin{cases}1&\text{if }i=j\\0&\text{otherwise}\end{cases}$$

A zero matrix $\boldsymbol{0}_{n,m}$ (wiki) is an $n\times m$ matrix whose entries are all zero (analogously, a ones matrix $\boldsymbol{1}_{n,m}$ has only one entries): $$0_{i,j}=0$$

A normal matrix (wiki) commutes with its conjugate transpose (unitary, Hermitian, and skew-Hermitian matrices are all normal): $$\boldsymbol{A}^*\boldsymbol{A}=\boldsymbol{A}\boldsymbol{A}^*$$

An upper triangular matrix (wiki) has only zero entries below its main diagonal: $$A_{i,j}=0\Leftarrow i\gt j$$

A lower triangular matrix (wiki) has only zero entries above its main diagonal: $$A_{i,j}=0\Leftarrow i\lt j$$

A symmetric matrix (wiki) is equal to its transpose: $$\boldsymbol{A}=\boldsymbol{A}^\mathsf{T}$$

A skew-symmetric matrix (wiki) is equal to the negative of its transpose: $$\boldsymbol{A}=-\boldsymbol{A}^\mathsf{T}$$

A Hermitian (or self-adjoint) matrix (wiki) is a complex square matrix that is equal to its own conjugate transpose: $$\boldsymbol{H}=\boldsymbol{H}^*$$

A skew-Hermitian (or antihermitian) matrix (wiki) is a complex square matrix whose conjugate transpose is the negative of the original matrix: $$\boldsymbol{H}=-\boldsymbol{H}^*$$

For an invertible (also
nonsingular or nondegenerate) square matrix $\boldsymbol{A}$ (wiki) there exists a matrix $\boldsymbol{B}$ which is inverse to $\boldsymbol{A}$: $$\boldsymbol{A}\boldsymbol{B}=\boldsymbol{B}\boldsymbol{A}=\boldsymbol{I}_n$$

A singular (or degenerate) matrix (wiki) is not invertible.

A cofactor matrix $\boldsymbol{C}$ (wiki; also matrix of cofactors or comatrix) of a square matrix $\boldsymbol{A}$ is defined such that the inverse of $\boldsymbol{A}$ is the transpose of the cofactor matrix times the reciprocal of the determinant of $\boldsymbol{A}$: $$\boldsymbol{A}^{-1} = \frac{1}{\operatorname{det}(\boldsymbol{A})} \boldsymbol{C}^\mathsf{T}$$

The transpose of an orthogonal matrix (wiki) is equal to its inverse: $$\boldsymbol{A}^\mathsf{T}=\boldsymbol{A}^{-1}\iff\boldsymbol{A}^\mathsf{T}\boldsymbol{A}=\boldsymbol{A}\boldsymbol{A}^\mathsf{T}=\boldsymbol{I}$$

A matrix is unitary (wiki) if its conjugate transpose $\boldsymbol{U}^*$ is also its inverse: $$\boldsymbol{U}^*\boldsymbol{U}=\boldsymbol{U}\boldsymbol{U}^*=\boldsymbol{I}$$

A symmetric real matrix $\boldsymbol{A}$ is positive-definite (wiki) if for every non-zero column vector $\boldsymbol{z}$, $$\boldsymbol{z}^\intercal\boldsymbol{A}\boldsymbol{z}\gt0$$ holds. For negative-definite matrices, $\boldsymbol{z}^\intercal\boldsymbol{A}\boldsymbol{z}\lt0$. In the complex case, the Hermitian matrix $\boldsymbol{H}$ satisfies $\boldsymbol{z}^*\boldsymbol{H}\boldsymbol{z}\gt0$ (or $\boldsymbol{z}^*\boldsymbol{H}\boldsymbol{z}\lt0$, respectively). A positive-semidefinite (wiki; or negative-semidefinite) matrix is defined similarly to positive-definite and negative-definite matrices, with the difference that the strict inequalities are relaxed to $\ge$ and $\le$, so zero is allowed as well.
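As a quick numerical illustration (a NumPy sketch; for symmetric real matrices the eigenvalue criterion used here is equivalent to the $\boldsymbol{z}^\intercal\boldsymbol{A}\boldsymbol{z}\gt0$ definition, and the example matrices are chosen for illustration only):

```python
import numpy as np

# A symmetric real matrix is positive-definite iff all of its
# eigenvalues are positive; this is equivalent to z^T A z > 0 for
# every non-zero z.
def is_positive_definite(A, tol=0.0):
    A = np.asarray(A)
    if not np.allclose(A, A.T):
        return False  # definiteness is defined here for symmetric matrices
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])  # eigenvalues 1 and 3 -> definite
B = np.array([[1.0, 2.0], [2.0, 1.0]])    # eigenvalues -1 and 3 -> not
print(is_positive_definite(A), is_positive_definite(B))  # True False
```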
An idempotent matrix (wiki) is a square matrix which, when multiplied by itself, yields itself: $$\boldsymbol{A}\boldsymbol{A}=\boldsymbol{A}$$

A square matrix $\boldsymbol{A}$ is diagonalizable (or nondefective; wiki) if there exists a matrix $\boldsymbol{P}$ with inverse $\boldsymbol{P}^{-1}$ such that $$\boldsymbol{P}^{-1}\boldsymbol{A}\boldsymbol{P}$$ is a diagonal matrix.

A permutation matrix (wiki) is a square binary matrix that has exactly one entry of one in each row and each column and zeros elsewhere. It is orthogonal.

A submatrix (wiki) of another matrix is obtained by deleting any collection of rows and/or columns from it.

A Frobenius matrix (wiki) is a square matrix with the properties that (1) all entries on the main diagonal are one, (2) the entries below the main diagonal of at most one column $j'$ are arbitrary, and (3) every other entry is zero: $$A_{i,j}=\begin{cases} 1&\text{if }i=j\\ A_{i,j}&\text{if }i>j\land j=j'\\ 0&\text{otherwise} \end{cases}$$
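A few of these definitions are easy to verify numerically. The following NumPy sketch (with matrices chosen purely for illustration) checks the permutation, idempotent, and skew-symmetric properties:

```python
import numpy as np

P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])            # a permutation matrix

# Permutation matrices are orthogonal: P^T P = I
assert np.array_equal(P.T @ P, np.eye(3, dtype=int))

# An idempotent matrix satisfies A A = A (here, a projection onto the x-axis)
A = np.array([[1, 0], [0, 0]])
assert np.array_equal(A @ A, A)

# A skew-symmetric matrix equals the negative of its transpose
S = np.array([[0, 2], [-2, 0]])
assert np.array_equal(S, -S.T)
```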
TersoffBrennerCorrectionPotential

class TersoffBrennerCorrectionPotential(particleType1, particleType2, activeTypes, L, U, x, z, f)

Constructor of the potential. To construct this potential it is necessary to specify a three-dimensional function. This is done by passing the values of this function at some grid points. For all other points, tricubic interpolation is used. At the border of the grid, all derivatives are assumed to be zero.

Parameters:
- particleType1 (ParticleType or ParticleIdentifier) – Identifier of the first particle type.
- particleType2 (ParticleType or ParticleIdentifier) – Identifier of the second particle type.
- activeTypes (sequence of ParticleType or ParticleIdentifier) – List of particle types that are involved in the calculation of the so-called conjugated part of the potential.
- L (float) – The lower cutoff in the tapering function \(T_{ij}\).
- U (float) – The upper cutoff in the tapering function \(T_{ij}\).
- x (sequence of float) – The x-coordinates of the grid. It must have uniform spacing. The same coordinates are also used for the y-coordinates.
- z (sequence of float) – The z-coordinates of the grid. It must have uniform spacing.
- f (3D numpy.array) – The function values at the given grid points. f should be a three-dimensional array of size (len(x), len(y), len(z)). f[i, j, k] should be the function value at (x[i], y[j], z[k]), and f must be symmetric with respect to its first two components, i.e. f[i, j, k] = f[j, i, k].

getAllParameterNames()
Return the names of all used parameters as a list.

getAllParameters()
Return all parameters of this potential and their current values as a <parameterName / parameterValue> dictionary.

static getDefaults()
Get the default parameters of this potential and return them in form of a dictionary of <parameter name, default value> key-value pairs.

getParameter(parameterName)
Get the current value of the parameter parameterName.

setCutoff(r_cut)
Set the cutoff radius for this potential.

Parameters:
- r_cut (PhysicalQuantity of type length) – The cutoff radius of this potential.
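The symmetry requirement on f can be checked with a short standalone NumPy sketch (the grid size and values here are random placeholders for illustration, not physical data):

```python
import numpy as np

# Toy 4x4x4 grid of function values, symmetrized in its first two
# indices as the constructor requires (f[i, j, k] == f[j, i, k]).
rng = np.random.default_rng(0)
f = rng.standard_normal((4, 4, 4))
f = 0.5 * (f + f.transpose(1, 0, 2))   # enforce the required symmetry

assert np.allclose(f, f.transpose(1, 0, 2))
```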
setParameter(parameterName, value)
Set the parameter parameterName to the given value.

Parameters:
- parameterName (str) – The name of the parameter that will be modified.
- value – The new value that will be assigned to the parameter parameterName.

Usage Examples

Define Tersoff-Brenner correction potentials for carbon.

potentialSet = TremoloXPotentialSet(name = 'TersoffBrenner_CSiF_1999')
_potential = TersoffBrennerCorrectionPotential(
    particleType1 = ParticleIdentifier('C', []),
    particleType2 = ParticleIdentifier('C', []),
    activeTypes = [ParticleIdentifier('C', []), ],
    L = 2.0,
    U = 3.0,
    x = numpy.array([0., 1., 2., 3.]),
    z = numpy.array([0., 1., 2., 3.]),
    f = numpy.array([[[ 0.     ,  0.     ,  0.     ,  0.     ],
                      [ 0.     , -0.02882,  0.     ,  0.     ],
                      [ 0.     ,  0.     ,  0.     ,  0.     ],
                      [ 0.     ,  0.     ,  0.     ,  0.     ]],
                     [[ 0.     , -0.02882,  0.     ,  0.     ],
                      [ 0.     , -0.0288 ,  0.     ,  0.     ],
                      [ 0.     , -0.09   , -0.0243 , -0.0243 ],
                      [ 0.     ,  0.     ,  0.     ,  0.     ]],
                     [[ 0.     ,  0.     ,  0.     ,  0.     ],
                      [ 0.     , -0.09   , -0.0243 , -0.0243 ],
                      [ 0.     ,  0.0415 ,  0.     ,  0.     ],
                      [ 0.     , -0.0363 , -0.0363 , -0.0363 ]],
                     [[ 0.     ,  0.     ,  0.     ,  0.     ],
                      [ 0.     ,  0.     ,  0.     ,  0.     ],
                      [ 0.     , -0.0363 , -0.0363 , -0.0363 ],
                      [ 0.     ,  0.     ,  0.     ,  0.     ]]]),
)
potentialSet.addPotential(_potential)

Notes

Several modifications of the original Tersoff potential have been proposed. One of them was introduced in [AG99] and is a combination of the Tersoff, Brenner and Tanaka potentials. For the sake of brevity we call it the Tersoff-Brenner potential. The potential energy of the Tersoff-Brenner potential is given as the sum of attractive and repulsive potentials. The repulsive interactions are the same as in the Tersoff potential, but a different tapering function \(f^{TB}_{ij}\) is used. The attractive potential \(U^{att}_{ij}\) contains the bond-order term \(b_{ij}\), referred to below as equation (2). The function \(g(\theta_{ijk})\), where \(\theta_{ijk}\) is the angle between the particles i-j and i-k, can take one of two forms, referred to below as equations (3) and (4). The potential is activated in several steps.
To begin with, the pair-dependent parameters must be set by adding a TersoffBrennerPairPotential for each particle pair. The parameters in the constructor of TersoffBrennerPairPotential map to the potential parameters as follows:

- a – the parameter \(A_{ij}\)
- b – the parameter \(B_{ij}\)
- lambda – the parameter \(\lambda_{ij}\)
- mu – the parameter \(\mu_{ij}\)
- re – the parameter \(R^{(e)}_{ij}\), which is only used if additional bond-order terms are activated
- r1 – the parameter \(R_{ij}\) in the taper function
- r2 – the parameter \(S_{ij}\) in the taper function

This sets all pairwise parameters except \(\eta_{ij}\) and \(\delta_{ij}\), which are set to 0, thereby deactivating all three-body interactions. These parameters can be set by the TersoffBrennerBOPairPotential. Note that these parameters are used in a non-symmetric way, which means a TersoffBrennerBOPairPotential should be specified for each ij and ji.

The three-body parameters are set in a similar way. To use \(g(\theta)\) from equation (3) you need to set up a TersoffBrennerTriplePotential. For the alternative form (4), the TersoffBrennerTriplePotential2 should be used. Note that particle_type1 denotes the type of the central particle during the angle calculation. The remaining parameters are the following:

- alpha – the parameter \(\alpha_{ijk}\)
- beta – the parameter \(\beta_{ijk}\)
- g_a – the parameter \(a_{ijk}\) (TersoffBrennerTriplePotential2 only)
- g_c – the parameter \(c_{ijk}\)
- g_d – the parameter \(d_{ijk}\)
- g_h – the parameter \(h_{ij}\)

Another correction modifies the term \(b_{ij}\) in equation (2). \(H_{ij}\) is an arbitrary 2D function that is given as a bicubic spline. \(P^{(1)}_{ij}\) and \(P^{(2)}_{ij}\) are two lists of particle types that specify which interactions are taken into account for the calculation of \(N^{(1)}_{ij}\) and \(N^{(2)}_{ij}\). To enable this correction, a TersoffBrennerSplinePotential can be used, where the parameters are defined as follows.
- particleType1 – the particle type referred to as i
- particleType2 – the particle type referred to as j
- activeTypes1 – the type list \(P^{(1)}_{ij}\)
- activeTypes2 – the type list \(P^{(2)}_{ij}\)
- x – the x-coordinates of the grid of the spline function
- y – the y-coordinates of the grid of the spline function
- f – the spline values on the x-y grid

Note that this potential acts in a non-symmetric way, meaning that it will only act on ij type pairs, but not on ji type pairs.

Finally, the term \(\bar{b}_{ij}\) in equation (2) can be modified by the correction function \(F_{corr}\), an arbitrary 3D function that is given as a tricubic spline. \(N^{(t)}_{ij}\) is the coordination number of particle \(i\), excluding \(j\). \(T_{ij}\) is a new type of tapering function with a lower and an upper cutoff, and \(P^{(conj)}_{ij}\) is a list of particle types that specify which interactions are taken into account for the calculation of \(N^{(conj)}_{ij}\). For this correction the TersoffBrennerCorrectionPotential can be used:

- particleType1 – the particle type referred to as i
- particleType2 – the particle type referred to as j
- activeTypes – the type list among which neighbors of atoms i and j are searched for
- L – the lower cutoff in \(T_{ij}\)
- U – the upper cutoff in \(T_{ij}\)
- x – the x-coordinates of the grid of the spline function; the same values are used for the y-coordinates
- z – the z-coordinates of the grid of the spline function
- f – the spline values on the x-y-z grid

In contrast to the TersoffBrennerSplinePotential, this correction acts in a symmetric way between the types ij.

[AG99] Cameron F. Abrams and David B. Graves. Molecular dynamics simulations of Si etching by energetic CF3+. Journal of Applied Physics, 86(11):5938–5948, 1999. URL: https://doi.org/10.1063/1.371637, doi:10.1063/1.371637.
Definition: Riemann Zeta Function

Definition
$\displaystyle \map \zeta s = \sum_{n \mathop = 1}^\infty \frac 1 {n^s}$

Analytic Continuation
This analytic continuation is still called the Riemann zeta function and still denoted $\zeta$.

Also see
Results about the Riemann $\zeta$ function can be found here.

Special values
- Basel Problem, for $\zeta(2)$
- Riemann Zeta Function at Even Integers
- Riemann Zeta Function at Non-Positive Integers
- Harmonic Series is Divergent: $\map \zeta s \to +\infty$ as $s \to 1$
- Trivial Zeroes of Riemann Zeta Function

Generalizations

Source of Name
This entry was named for Georg Friedrich Bernhard Riemann. The Riemann zeta function was discussed by Bernhard Riemann in his $1859$ article Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse. In that paper he made several statements about that function, all of which have now been proved except for one: the last remaining unresolved statement is the Riemann Hypothesis.

Sources
- 1986: David Wells: Curious and Interesting Numbers: $0 \cdotp 5$
- 1992: Larry C. Andrews: Special Functions of Mathematics for Engineers: $\S 1.2.2$: Summary of convergence tests (footnote)
- 1992: George F. Simmons: Calculus Gems: Chapter $\text {A}.32$: Riemann ($1826$ – $1866$)
- 1992: George F. Simmons: Calculus Gems: Chapter $\text {B}.19$: The Series $\sum 1/p_n$ of the Reciprocals of the Primes
- 1997: Donald E. Knuth: The Art of Computer Programming: Volume 1: Fundamental Algorithms (3rd ed.): $\S 1.2.7$: Harmonic Numbers: $(5)$
- 1997: David Wells: Curious and Interesting Numbers (2nd ed.): $0 \cdotp 5$
- 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): Entry: zeta function
- 2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.):
Entry: zeta function
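The series definition converges for $s > 1$, and the Basel Problem gives the closed form $\map \zeta 2 = \pi^2/6$; a quick numerical check in Python (truncating the series at a large but arbitrary cutoff):

```python
from math import pi

# Partial sum of the series definition of zeta(s) at s = 2,
# compared with the Basel problem's closed form zeta(2) = pi^2/6.
# The truncation error of the tail is roughly 1/200000.
zeta2 = sum(1.0 / n**2 for n in range(1, 200001))
print(zeta2)   # approaches pi**2 / 6 ~ 1.6449
```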
Profit and loss play an important role in running businesses. Students must have heard that company A made a profit of 50 lakhs in the year 2018. But do they know how the profit and loss are calculated? For this, students have to study Chapter 8 of ICSE Class 8, which deals with Profit, Loss and Discount. In this chapter, students will find questions related to daily-life problems. The experts at BYJU'S have provided the answers in PDF format for ICSE Class 8 Maths Selina Solutions Chapter 8 Profit Loss and Discount. The questions are solved in the easiest way so that students can understand the solutions. To download the PDF, click on the link below.

ICSE Class 8 Maths Chapter 8 – Profit Loss and Discount has only one exercise, i.e. 8 (A), containing a total of 15 questions. The step-by-step solutions to all these questions are provided below.

ICSE Class 8 Maths Selina Solutions Chapter 8 Profit Loss and Discount – Exercise 8 (A)

Question 1. Megha bought 10 note-books for Rs. 40 and sold them at Rs. 4.75 per note-book. Find her gain percent.

Solution: C.P. of 10 note-books = Rs. 40
S.P. of 10 note-books @ Rs. 4.75 per note-book \( =4.75 \times 10=\mathrm{Rs}.\,47.50\)
Gain = S.P. – C.P. = Rs. 47.50 – Rs. 40 = Rs. 7.50
Gain \( \%=\frac{\text{Gain}}{\text{C.P.}} \times 100 =\frac{7.50}{40} \times 100=\frac{750}{40} \% =\frac{75}{4} \%=18 \frac{3}{4} \%\)

Question 2. A fruit-seller buys oranges at 4 for Rs. 3 and sells them at 3 for Rs. 4. Find his profit percent.

Solution: Let the number of oranges bought = 12 (Note: L.C.M. of 4 and 3 = 12)
∴ C.P. of oranges \( =\mathrm{Rs}.\,\frac{3}{4} \times 12=\mathrm{Rs}.\,9\)
and S.P. of oranges \( =\mathrm{Rs}.\,\frac{4}{3} \times 12=\mathrm{Rs}.\,16\)
Profit = 16 – 9 = Rs. 7
Profit \( \%=\frac{\text{Profit}}{\text{C.P.}} \times 100 =\frac{7}{9} \times 100=\frac{700}{9} \%=77 \frac{7}{9} \%\)

Question 3. A man buys a certain number of articles at 15 for Rs. 112.50 and sells them at 12 for Rs.
108. Find: (i) his gain as percent; (ii) the number of articles sold to make a profit of Rs. 75.

Solution: Let the number of articles bought = 60 (∵ L.C.M. of 15 and 12 = 60)
∴ C.P. of the articles \( =\mathrm{Rs}.\,\frac{112.50}{15} \times 60 =\mathrm{Rs}.\,\frac{112.50 \times 60}{15}=112.50 \times 4=\mathrm{Rs}.\,450\)
and S.P. of the articles \( =\mathrm{Rs}.\,\frac{108}{12} \times 60 =\mathrm{Rs}.\,108 \times 5=\mathrm{Rs}.\,540\)
(i) Gain = S.P. – C.P. = Rs. 540 – Rs. 450 = Rs. 90
\( ∴ \text{Gain} \%=\frac{\text{Gain}}{\text{C.P.}} \times 100 =\frac{90}{450} \times 100=\frac{100}{5}=20 \%\)
(ii) To make a profit of Rs. 90, the number of articles to be sold = 60
To make a profit of Re. 1, the number of articles to be sold \( =\frac{60}{90}\)
To make a profit of Rs. 75, the number of articles to be sold \( =\frac{60}{90} \times 75=\frac{4500}{90}=50 \)

Question 4. A boy buys an old bicycle for Rs. 162 and spends Rs. 18 on its repairs before selling the bicycle for Rs. 207. Find his gain or loss percent.

Solution: Buying price of the old bicycle = Rs. 162
Money spent on repairs = Rs. 18
Real C.P. of the bicycle = 162 + 18 = Rs. 180
S.P. of the bicycle = Rs. 207
Profit = S.P. – C.P. = 207 – 180 = Rs. 27
\( \text{Gain} \%=\frac{\text{Profit}}{\text{C.P.}} \times 100=\frac{27}{180} \times 100=15 \%\)

Question 5. An article is bought from Jaipur for Rs. 4,800 and is sold in Delhi for Rs. 5,820. If Rs. 1,200 is spent on its transportation, etc., find the loss or the gain as percent.

Solution: Cost price = Rs. 4,800
Selling price = Rs. 5,820
Transport etc. charges = Rs. 1,200
Total cost price = Rs. 4,800 + Rs. 1,200 = Rs. 6,000
Loss = Rs. 6,000 – Rs. 5,820 = Rs. 180
\( ∴ \text{Loss} \%=\frac{180}{6000} \times 100=3 \%\)

Question 6. Mohit sold a T.V. for Rs. 3,600, gaining one-sixth of its selling price. Find: (i) the gain (ii) the cost price of the T.V.
(iii) the gain percent.

Solution: S.P. of T.V. = Rs. 3,600
(i) Gain \( =\frac{1}{6} \times 3600=\mathrm{Rs}.\,600\)
(ii) Cost price = S.P. – gain = 3600 – 600 = Rs. 3,000
(iii) Gain \( \%=\frac{600}{3000} \times 100=20 \% \)

Question 7. By selling a certain number of goods for Rs. 5,500, a shopkeeper loses an amount equal to one-tenth of their selling price. Find: (i) the loss incurred (ii) the cost price of the goods (iii) the loss as percent.

Solution: S.P. = Rs. 5,500
(i) Loss \( =\frac{1}{10} \times 5500=\mathrm{Rs}.\,550\)
(ii) C.P. = Rs. 5,500 + Rs. 550 = Rs. 6,050
(iii) Loss \( \%=\frac{550 \times 100}{6050}=\frac{100}{11} \%=9 \frac{1}{11} \%\)

Question 8. The selling price of a sofa-set is \( \frac{4}{5}\) times its cost price. Find the gain or the loss as percent.

Solution: Let the cost price (C.P.) = 1
S.P. \( =1 \times \frac{4}{5}=\frac{4}{5}\)
Loss = C.P. – S.P. \( =1-\frac{4}{5}=\frac{1}{5}\)
\( ∴ \text{Loss} \%=\frac{\text{Loss}}{\text{C.P.}} \times 100=\frac{1}{5} \times 100=20 \%\)

Question 9. The cost price of an article is 4/5 times its selling price. Find the loss or the gain as percent.

Solution: Let S.P. = 1, so C.P. \( =\frac{4}{5} \times 1=\frac{4}{5}\)
Gain = S.P. – C.P. \( =1-\frac{4}{5}=\frac{1}{5}\)
\( ∴ \text{Gain} \%=\frac{\text{Gain}}{\text{C.P.}} \times 100=\frac{\frac{1}{5}}{\frac{4}{5}} \times 100 =\frac{1}{5} \times \frac{5}{4} \times 100=25 \%\)

Question 10. A shopkeeper sells his goods at 80% of their cost price. Find the percent gain or loss.

Solution: Let C.P. of goods = Rs. 100
∴ S.P.
of goods \( =\frac{80}{100} \times 100=\mathrm{Rs}.\,80\)
Loss = C.P. – S.P. = Rs. 100 – Rs. 80 = Rs. 20
Loss \( \%=\frac{\text{Loss}}{\text{C.P.}} \times 100=\frac{20}{100} \times 100=20 \%\)

Question 11. The cost price of an article is 90% of its selling price. What is the profit or the loss as percent?

Solution: Let S.P. of the article = Rs. 100
∴ C.P. of the article \( =\frac{90}{100} \times 100=\mathrm{Rs}.\,90\)
Gain = Rs. 100 – Rs. 90 = Rs. 10
Gain \( \%=\frac{\text{Gain}}{\text{C.P.}} \times 100 =\frac{10}{90} \times 100=\frac{100}{9} \%=11 \frac{1}{9} \%\)

Question 12. The cost price of an article is 30 percent less than its selling price. Find the profit or loss as percent.

Solution: Let S.P. of the article = Rs. 100
30% of S.P. \( =\mathrm{Rs}.\,\frac{30}{100} \times 100=\mathrm{Rs}.\,30\)
∴ C.P. of the article = 100 – 30 = Rs. 70
Profit = S.P. – C.P. = Rs. 100 – Rs. 70 = Rs. 30
Profit \( \%=\frac{\text{Profit}}{\text{C.P.}} \times 100=\frac{30}{70} \times 100=\frac{300}{7} \%=42 \frac{6}{7} \%\)

Question 13. A shopkeeper bought 300 eggs at 80 paise each. 30 eggs were broken in transit, and he then sold the remaining eggs at one rupee each. Find his gain or loss as percent.

Solution: C.P. of 300 eggs at 80 paise each \( =300 \times 80=2400 \text{ paise} = \mathrm{Rs}.\,240\)
Number of eggs broken in transit = 30, so remaining eggs = 300 – 30 = 270
S.P. of eggs at Rs. 1 each \( =270 \times 1=\mathrm{Rs}.\,270\)
Gain = S.P. – C.P. = Rs. 270 – Rs. 240 = Rs. 30
Gain \( \%=\frac{\text{Gain}}{\text{C.P.}} \times 100 =\frac{30}{240} \times 100=\frac{100}{8} \%=12.5 \%\)

Question 14. A man sold his bicycle for Rs. 405, losing one-tenth of its cost price. Find: (i) its cost price; (ii) the loss percent.

Solution: (i) Let C.P. of the bicycle = Rs. x
∴ Loss \( =\mathrm{Rs}.\,\frac{x}{10}\)
S.P. = C.P. – Loss \( =x-\frac{x}{10}\)
But we are given S.P.
= Rs. 405.
\( ∴ x-\frac{x}{10}=405 \Rightarrow \frac{10x-x}{10}=405 \Rightarrow \frac{9x}{10}=405 \Rightarrow x=405 \times \frac{10}{9}=450\)
∴ C.P. = Rs. 450
(ii) Loss \( =\frac{x}{10}=\frac{450}{10}=\mathrm{Rs}.\,45\) [substituting x = 450]
Loss \( \%=\frac{\text{Loss}}{\text{C.P.}} \times 100=\frac{45}{450} \times 100=10 \%\)

Question 15. A man sold a radio-set for Rs. 250 and gained one-ninth of its cost price. Find: (i) its cost price; (ii) the profit percent.

Solution: (i) Let C.P. of the radio-set = Rs. x
Gain \( =\mathrm{Rs}.\,\frac{x}{9}\)
S.P. \( =\mathrm{Rs}.\,\left(x+\frac{x}{9}\right)=\mathrm{Rs}.\,\frac{9x+x}{9}=\mathrm{Rs}.\,\frac{10x}{9}\)
But we are given S.P. of the radio-set = Rs. 250
\( ∴ \frac{10x}{9}=250 \Rightarrow x=250 \times \frac{9}{10}=25 \times 9=225\)
∴ C.P. of the radio-set = Rs. 225
(ii) Profit \( =\mathrm{Rs}.\,\frac{x}{9}=\mathrm{Rs}.\,\frac{225}{9}=\mathrm{Rs}.\,25\) [substituting x = 225]
Profit \( \%=\frac{\text{Profit}}{\text{C.P.}} \times 100 =\frac{25}{225} \times 100=\frac{100}{9} \%=11 \frac{1}{9} \% \)

To get the Selina Solutions for the other chapters of Class 8 Maths, visit the ICSE Class 8 Maths Selina Solutions page.

ICSE Class 8 Maths Selina Solutions Chapter 8 – Profit Loss and Discount

The concepts learned in percentages are used here when students solve problems related to the profit or loss percentage. In this chapter, students will learn some new terms such as cost price, selling price, marked price and discount, and will get to know the formulas for solving problems related to them. Slightly more advanced problems are also provided in the exercise to make students think further.
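All of the solutions above rest on one formula; a short Python sketch (function name is my own) makes it explicit and checks it against two of the worked answers:

```python
# Gain% = (S.P. - C.P.) / C.P. * 100; a negative result is a loss%.
def gain_percent(cost_price, selling_price):
    return (selling_price - cost_price) / cost_price * 100

# Question 1: C.P. = Rs. 40, S.P. = Rs. 47.50 -> 18.75%
assert gain_percent(40, 47.50) == 18.75
# Question 6: C.P. = Rs. 3000, S.P. = Rs. 3600 -> 20%
assert abs(gain_percent(3000, 3600) - 20) < 1e-9
```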
I am trying to solve this eigenvalue problem: \begin{align} \mu \Psi(r) & = -\frac{1}{2}\left ( \Psi^{\prime \prime}(r) + \frac{2}{r} \Psi' (r)\right ) -4\pi \Psi(r) \int _0^\infty dr' r'^2 \frac{\Psi(r')^2}{r_>}, \end{align} where $\mu$ is the eigenvalue, $\Psi(r)$ is the eigenfunction I am solving for, and $r_>$ is the greater of $r$ and $r'$. The requirements of the system are \begin{align}\Psi'(0)& = 0, \cr \Psi(\infty) & = 0.\end{align} Since it involves an integral, the way I deal with it is to decouple it into two equations, \begin{align} \mu \Psi (r) & = -\frac{1}{2}\left ( \Psi^{\prime \prime}(r) + \frac{2}{r} \Psi'(r) \right ) +\Psi(r) \Phi(r), \\ \nabla^2 \Phi(r) & = 4\pi \Psi(r)^2, \end{align} with boundary conditions \begin{align} \Psi'(0)& = 0, \\ \Psi(\infty) & = 0,\\ \Phi'(0)& = 0, \\ \Phi(\infty) & = 0. \end{align} Any idea how to solve it?

Attempt 1: I manually tweak the boundary condition $\Psi(0)$ at zero, trying to find a solution $\Psi(r)$ that does not blow up at infinity: \begin{align} \mu \Psi (r) & = -\frac{1}{2}\left ( \Psi^{\prime \prime}(r) + \frac{2}{r} \Psi'(r) \right ) +\Psi(r) \Phi(r), \\ \nabla^2 \Phi(r) & = 4\pi \Psi(r)^2, \end{align} with boundary conditions \begin{align} \Psi'(0)& = 0, \\ \Psi(0) & = A,\\ \Phi'(0)& = 0, \\ \Phi(\infty) & = 0. \end{align} My current method is to guess a pair of input data $(\mu, A)$, use NDSolve to solve for $\Psi, \Phi$, and check whether the solution gives me a $\Psi$ whose absolute value decreases monotonically, i.e. vanishes at infinity, as suggested here. However, even with this method I have not had much success. So, (a) is there a good way to implement this algorithm; (b) is there a better/alternative way to attack this whole problem? As far as I know, (N)DEigensystem cannot handle this problem.

Edit:

Attempt 2: So I just tried it out; naively, if I use NDEigensystem directly as follows, it will not solve at all, which is not surprising.
rStart1 = 10^-3; rEnd1 = 5; epsilon = 0;
sollst1 = NDEigensystem[
    {-1/2*(D[ψ[r], r, r] + 2/r*D[ψ[r], r]) -
       4 Pi*ψ[r]*Integrate[rp^2*ψ[rp]^2/If[rp > r, rp, r], {rp, 0, Infinity}] +
       NeumannValue[0, r == rStart1]},
    ψ[r], {r, rStart1, rEnd1}, 8]; // AbsoluteTiming

Attempt 3: This time I have fixed $\mu$, kept the boundary conditions as \begin{align} \Psi'(0)& = 0, \\ \Psi(\infty) & = 0,\\ \Phi'(0)& = 0, \\ \Phi(\infty) & = 0, \end{align} and used the shooting method to select the normalization $\Psi(0)$. The following code works, but only for a very specific choice of rEnd1 and $\mu=-0.4$. Changing either of them gives me a trivial solution again; the code seems a bit fine-tuned in this sense.

rEnd1 = 3.7; rStart1 = 10^-3;
stableHunter3[μ_] := Module[{},
  epsilon = 0;
  eqn2 = {-μ*ψ[r] - 1/2*(D[ψ[r], r, r] + 2/r*D[ψ[r], r]) +
      ϕ[r]*ψ[r] (*-1/8*ψ[r]^3*) == 0,
    D[ϕ[r], r, r] + 2/r*D[ϕ[r], r] == 4 Pi*ψ[r]^2};
  bc2 = {ψ[rEnd1] == epsilon,
    (D[ψ[r], r] /. r -> rStart1) == epsilon,
    (D[ϕ[r], r] /. r -> rStart1) == epsilon,
    ϕ[rEnd1] == epsilon};
  sollst1 = Map[
    NDSolveValue[Flatten@{eqn2, bc2}, {ψ[r], ϕ[r]}, {r, rStart1, rEnd1},
      Method -> "BoundaryValues" -> {"Shooting",
        "StartingInitialConditions" -> {ψ[0] == #}},
      Method -> "StiffnessSwitching"] &,
    Range[-3, 3, 0.1]]]
funclst = stableHunter3[-0.4];
Plot[Evaluate[funclst /. r -> r00], {r00, rStart1, rEnd1}]

Attempt 4: Based on the suggestion from @bbgodfrey, an analysis of the asymptotic behavior suggests the following. When $r\rightarrow \infty$, \begin{align}\mu \Psi(r) & = -\frac{1}{2}\left ( \Psi^{\prime \prime}(r) + \frac{2}{r} \Psi' (r)\right )-4\pi \Psi(r) \int _0^\infty dr' r'^2 \frac{\Psi(r')^2}{r_>}, \cr& \approx -\frac{1}{2}\left ( \Psi^{\prime \prime}(r) + \frac{2}{r} \Psi' (r)\right ) -\frac{N}{r}\Psi(r),\end{align} where $N\equiv \int_0^\infty \Psi(r)^2 4\pi r^2 d r$. Here $\Phi(r)$ is not necessary, but if it is defined then $\Phi(r) \approx -\frac{N}{r}$ at large $r$. Then I can use the following code to solve for the eigenvalue $\mu$.
rStart1 = 10^-3; rEnd1 = 5; epsilon = 0; Nphy1 = 1;
sollst1 = NDEigensystem[
    {-1/2*(D[ψ[r], r, r] + 2/r*D[ψ[r], r]) - Nphy1/r*ψ[r] +
       NeumannValue[0, r == rStart1]
     (*, DirichletCondition[ψ[r] == epsilon, x == rStart1]*)},
    ψ[r], {r, rStart1, rEnd1}, 8]; // AbsoluteTiming
sollst1
sollst1[[2]] /. r -> rEnd1
Plot[Evaluate[%[[2]]], {r, rStart1, rEnd1}, PlotRange -> All]

{{-0.176753, 0.497306, -0.507498, 1.62112, 3.15356, 5.08955, 7.42803, 10.1706},
 {InterpolatingFunction[{{0.001, 5.}}, <>][r], InterpolatingFunction[{{0.001, 5.}}, <>][r],
  InterpolatingFunction[{{0.001, 5.}}, <>][r], InterpolatingFunction[{{0.001, 5.}}, <>][r],
  InterpolatingFunction[{{0.001, 5.}}, <>][r], InterpolatingFunction[{{0.001, 5.}}, <>][r],
  InterpolatingFunction[{{0.001, 5.}}, <>][r], InterpolatingFunction[{{0.001, 5.}}, <>][r]}}
{0.205938, -0.112739, 0.0259637, -0.094238, -0.0840003, -0.0769606}

I believe $\mu = -0.507498$ is the ground-state eigenvalue. However, when I use the boundary conditions at rEnd1 in place of the ones at infinity, I end up with only the trivial solution.

stableHunter3[μ_] := Module[{},
  eqn2 = {-μ*ψ[r] - 1/2*(D[ψ[r], r, r] + 2/r*D[ψ[r], r]) + ϕ[r]*ψ[r] == 0,
    D[ϕ[r], r, r] + 2/r*D[ϕ[r], r] == 4 Pi*ψ[r]^2};
  bc2 = {ψ[rEnd1] == 0.025963699315910634`,
    (D[ψ[r], r] /. r -> rStart1) == epsilon,
    (D[ϕ[r], r] /. r -> rStart1) == epsilon,
    ϕ[rEnd1] == Nphy1/rEnd1};
  sollst1 = NDSolveValue[Flatten@{eqn2, bc2}, {ψ[r], ϕ[r]}, {r, rStart1, rEnd1}]
  (*Map[NDSolveValue[Flatten@{eqn2, bc2}, {ψ[r], ϕ[r]}, {r, rStart1, rEnd1},
     Method -> "BoundaryValues" -> {"Shooting",
       "StartingInitialConditions" -> {ψ[0] == #}}] &, Range[-3, 3, 0.05]]*)]
sollst1 = stableHunter3[-0.5074977775084505`]
Plot[Evaluate[sollst1 /. r -> r00], {r00, rStart1, rEnd1}]

Any thoughts on how to proceed from here to solve for $\Psi(r)$ in the region where $r$ is small?

Attempt 5: Something weird happened. I am starting to believe it is related to the precision of NDSolve.
Here is the code, in which only the second element of the list gives me a nontrivial solution.

stableHunter3[μ_] := Module[{},
  eqn2 = {-μ*ψ[r] - 1/2*(D[ψ[r], r, r] + 2/r*D[ψ[r], r]) + ϕ[r]*ψ[r] == 0,
    D[ϕ[r], r, r] + 2/r*D[ϕ[r], r] == 4 Pi*ψ[r]^2};
  bc2 = {ψ[rEnd1] == 0.025963699315910634`,
    (D[ψ[r], r] /. r -> rStart1) == epsilon,
    (D[ϕ[r], r] /. r -> rStart1) == epsilon,
    ϕ[rEnd1] == -Nphy1/rEnd1};
  sollst1 = NDSolveValue[Flatten@{eqn2, bc2}, {ψ[r], ϕ[r]}, {r, rStart1, rEnd1}]
  (*Map[NDSolveValue[Flatten@{eqn2, bc2}, {ψ[r], ϕ[r]}, {r, rStart1, rEnd1},
     Method -> "BoundaryValues" -> {"Shooting",
       "StartingInitialConditions" -> {ψ[0] == #}}] &, Range[-3, 3, 0.05]]*)]
sollst1 = Map[stableHunter3[#] &,
  (*Range[-0.5074977775084505 - 0.02, -0.5074977775084505 + 0.02, 0.005]*)
  {-0.5074977775084505`, -0.5074977775084505` - 0, -0.5074977775084505` + 0.01}]
Plot[Evaluate[Flatten@sollst1 /. r -> r00], {r00, rStart1, rEnd1}, PlotRange -> All]
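As an independent cross-check on the asymptotic analysis of Attempt 4 (a Python/SciPy sketch, not part of the Mathematica workflow above): in the large-$r$ limit the equation reduces to a hydrogen-like problem $-\tfrac{1}{2}(\Psi'' + \tfrac{2}{r}\Psi') - \tfrac{N}{r}\Psi = \mu\Psi$, whose exact ground state for $N=1$ is $\mu = -\tfrac{1}{2}$, close to the $-0.507498$ that NDEigensystem returned on the truncated domain.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Substituting u(r) = r * Psi(r) turns the radial equation into
#   -1/2 u'' - (N/r) u = mu u,  u(0) = u(R) = 0,
# which we discretize by central differences on a uniform grid.
N_charge = 1.0
R, n = 30.0, 3000
h = R / n
r = h * np.arange(1, n)              # interior grid points

diag = 1.0 / h**2 - N_charge / r     # -1/2 u'' -> +1/h^2 on the diagonal
off = np.full(n - 2, -0.5 / h**2)    # ... and -1/(2 h^2) off the diagonal

mu0 = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))[0][0]
print(mu0)   # close to -0.5, the hydrogen-like ground-state eigenvalue
```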
To really understand this you should study the differential geometry of geodesics in curved spacetimes. I'll try to provide a simplified explanation.

Even objects "at rest" (in a given reference frame) are actually moving through spacetime, because spacetime is not just space, but also time: the apple is "getting older", i.e. moving through time. The "velocity" through spacetime is called the four-velocity, and its magnitude is always equal to the speed of light. Spacetime in a gravitational field is curved, so the time axis (in simple terms) is no longer orthogonal to the space axes. An apple moving at first only in the time direction (i.e. at rest in space) starts accelerating in space thanks to the curvature (the "mixing" of the space and time axes): the velocity in time becomes velocity in space. The acceleration happens because time flows more slowly as the gravitational potential decreases. The apple moves deeper into the gravitational field, so its velocity in the "time direction" changes (as time gets slower and slower). Since the magnitude of the four-velocity is conserved (always equal to the speed of light), the object must accelerate in space. This acceleration points in the direction of the decreasing gravitational potential.

Edit (based on the comments): I decided to clarify what the four-velocity is.

The 4-velocity is a four-vector, i.e. a vector with 4 components. The first component is the "speed through time" (how much coordinate time elapses per unit of proper time). The remaining 3 components are the classical velocity vector (the speed in the 3 spatial directions): $$ U=\left(c\frac{dt}{d\tau},\frac{dx}{d\tau},\frac{dy}{d\tau},\frac{dz}{d\tau}\right) $$ When you observe the apple in its rest frame (the apple is at rest, zero spatial velocity), the whole 4-velocity is in the "speed through time". This is because in the rest frame the coordinate time equals the proper time, so $\frac{dt}{d\tau} = 1$.
When you observe the apple from some other reference frame, in which the apple is moving at some speed, the coordinate time is no longer equal to the proper time. Time dilation means that less proper time is measured by the apple than the elapsed coordinate time (the apple's time runs slower than the time in the reference frame from which we are observing it). So in this frame the apple's "speed through time" is greater than the speed of light ($\frac{dt}{d\tau} > 1$), but its speed through space is also increasing. The magnitude of the 4-velocity always equals $c$, because it is an invariant (it does not depend on the choice of reference frame). It is defined as: $$ \left\|U\right\| =\sqrt{c^2\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dx}{d\tau}\right)^2-\left(\frac{dy}{d\tau}\right)^2-\left(\frac{dz}{d\tau}\right)^2} $$ Notice the minus signs in the expression: these come from the Minkowski metric. The components of the 4-velocity can change when you switch from one reference frame to another, but the magnitude stays unchanged (all the changes in the components "cancel out" in the magnitude).
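The invariance of the magnitude is easy to check numerically. The following sketch (in natural units with $c=1$, motion restricted to the $x$ axis) builds the four-velocity for several speeds and verifies that the components change while the Minkowski norm stays equal to $c$:

```python
import numpy as np

c = 1.0  # natural units

def four_velocity(vx):
    """Four-velocity (c*dt/dtau, dx/dtau, 0, 0) of a particle moving at speed vx along x."""
    gamma = 1.0 / np.sqrt(1.0 - vx**2 / c**2)
    return np.array([gamma * c, gamma * vx, 0.0, 0.0])

def minkowski_norm(u):
    """Invariant magnitude with signature (+,-,-,-)."""
    return np.sqrt(u[0]**2 - u[1]**2 - u[2]**2 - u[3]**2)

u_rest = four_velocity(0.0)
u_boost = four_velocity(0.6 * c)
assert not np.allclose(u_rest, u_boost)          # components are frame-dependent
for v in [0.0, 0.3 * c, 0.6 * c, 0.99 * c]:
    assert np.isclose(minkowski_norm(four_velocity(v)), c)   # the norm is not
```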
Variational Inference Last updated: 03-06-2018 It took me more than two weeks to finally get the essence of variational inference. The painful but fulfilling process brought me to appreciate the really difficult (at least for me) but beautiful math behind it. A couple of useful tutorials I found: D. M. Blei, A. Kucukelbir, and J. D. McAuliffe, "Variational Inference: A Review for Statisticians," J. Am. Stat. Assoc., vol. 112, no. 518, pp. 859–877, 2017. D. G. Tzikas, A. C. Likas and N. P. Galatsanos, "The variational approximation for Bayesian inference," IEEE Signal Processing Magazine, vol. 25, no. 6, pp. 131-146, November 2008. doi: 10.1109/MSP.2008.929620 https://am207.github.io/2017/wiki/VI.html Machine Learning: Variational Inference by Jordan Boyd-Graber Table of Contents Introduction Evidence Lower Bound (ELBO) Mean Field Variational Family Coordinate Ascent VI (CAVI) Applying VI on GMM

import numpy as np
import scipy as sp
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline

Introduction A motivating example As with expectation maximization, I start by describing a problem to motivate variational inference. Please refer to Prof. Blei's review above for more details. Let's start by considering a problem where we have data points sampled from a mixture of Gaussian distributions. Specifically, there are $K$ univariate Gaussian distributions with means $\mathbf{\mu} = \{ \mu_1, \ldots, \mu_K \}$ and unit variance ($\mathbf{\sigma}=\mathbf{1}$, for simplicity). Please refer to my EM post for details of this sample data. In a Bayesian setting, we can assume that all the means come from the same prior distribution, which is also a Gaussian, $\mathcal{N}(0, \sigma^2)$, with variance $\sigma^2$ being a hyperparameter.
Specifically, we can set up a very simple generative model: for each data point $x^{(i)}$, where $i=1,\ldots,n$: Sample a cluster assignment (the membership indicating which Gaussian mixture component it belongs to) $c^{(i)}$ uniformly: $c^{(i)} \sim \mathrm{Uniform}(K)$. Sample its value from the corresponding component: $x^{(i)} \sim \mathcal{N}(\mu_{c_i}, 1)$. This gives us a straightforward view of how the joint probability can be written out. Summing/integrating out the latent variables, we can obtain the marginal likelihood (i.e., evidence). Note that while it is possible to compute the individual terms within the integral (Gaussian prior and Gaussian likelihood), the overall complexity goes up to $\mathcal{O}(K^n)$ (all possible configurations). Therefore, we need to consider approximate inference due to this intractability. General situation Actually, the motivation of VI is very similar to that of EM, which is to come up with an approximation of point estimates of the latent variables. Instead of point estimates, VI tries to find variational distributions that serve as good proxies for the exact solution. Suppose we have $\mathbf{x}=\{ x^{(1)}, \ldots, x^{(n)}\}$ as observed data and $\mathbf{z}=\{ z^{(1)}, \ldots, z^{(n)}\}$ as latent variables. The inference problem is to find the posterior probability of the latent variables given the observations, $p(\mathbf{z} \vert \mathbf{x})$. Oftentimes, the denominator (the evidence) is intractable. Therefore, we need approximations to find a relatively good solution in a reasonable amount of time. VI is exactly what we need! Evidence Lower Bound (ELBO) In my EM post, we proved that the log evidence $\ln p(\mathbf{x})$ can actually be decomposed as follows (note that we will use an integral this time): where $\mathcal{L}(\mathbf{x})$ is defined as the ELBO, and the KL divergence is bounded and nonnegative.
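The two-step generative model above is a few lines of NumPy. The hyperparameter values here ($K=3$, $n=500$, $\sigma=5$) are illustrative choices, not values from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

K, n, sigma = 3, 500, 5.0                 # illustrative hyperparameters
mu = rng.normal(0.0, sigma, size=K)       # mu_k ~ N(0, sigma^2), the Gaussian prior on means

c = rng.integers(0, K, size=n)            # c_i ~ Uniform(K), cluster assignments
x = rng.normal(mu[c], 1.0)                # x_i ~ N(mu_{c_i}, 1), unit-variance components
```

Marginalizing the joint $p(\mathbf{x}, \mathbf{c}, \boldsymbol{\mu})$ over all $K^n$ assignment configurations is exactly the sum that makes the evidence intractable.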
If we further decompose the ELBO, we have: The last equation above shows that the ELBO trades off between two terms: The first term prefers $q(\mathbf{z})$ to be high where the complete likelihood $p(\mathbf{x}, \mathbf{z})$ is high. The second term encourages $q(\mathbf{z})$ to be diffuse across the space. Finally, we note that in EM we are able to compute $p(\mathbf{z}\vert \mathbf{x})$, so we can easily maximize the ELBO. VI is the way to go when we cannot. Mean Field Variational Family So far, we haven't said anything about what the $q$'s should be. In this note, we only look at a classical type, called the mean field variational family. Specifically, it assumes that the latent variables are mutually independent. This means that we can easily factorize the variational distribution into groups. By doing this, we are unable to capture the interdependence between the latent variables. There is a nice visualization in Blei et al. (2017). Coordinate Ascent VI (CAVI) By factorizing the variational distribution into individual products, we can easily apply coordinate ascent optimization on each factor. A common procedure to conduct CAVI is: Choose variational distributions $q$; Compute the ELBO; Optimize the individual $q_j$'s by taking the gradient for each latent variable; Repeat until the ELBO converges. Derivation of the optimal variational distributions: In fact, we can derive the optimal solutions without too much effort: Now, according to the definition of expectation, we have: We assume independence between the latent variables' variational distributions $q(z)$. Therefore we have: We can see that the first two terms can be combined into a negative KL divergence between the terms inside the $E_j\big[ \cdot \big]$. Therefore, we can write down the optimal solution as: Alternative way While the derivation through iterated expectation seems simpler, I personally still prefer taking partial derivatives with respect to the parameters of the variational distributions, as in the following example, which feels more natural to me.
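For reference, the decomposition of the log evidence that the text relies on can be written out explicitly (this is the standard identity, valid for any distribution $q$ over $\mathbf{z}$):

```latex
\log p(\mathbf{x})
  = \underbrace{\int q(\mathbf{z}) \log \frac{p(\mathbf{x},\mathbf{z})}{q(\mathbf{z})}\,d\mathbf{z}}_{\mathcal{L}(\mathbf{x})\ \text{(ELBO)}}
  \;+\;
  \underbrace{\int q(\mathbf{z}) \log \frac{q(\mathbf{z})}{p(\mathbf{z}\mid\mathbf{x})}\,d\mathbf{z}}_{\mathrm{KL}\left(q(\mathbf{z})\,\|\,p(\mathbf{z}\mid\mathbf{x})\right)\;\ge\;0}
```

Since the KL term is nonnegative, $\mathcal{L}(\mathbf{x}) \le \log p(\mathbf{x})$, and because $\log p(\mathbf{x})$ does not depend on $q$, maximizing the ELBO over $q$ is the same as minimizing the KL divergence to the true posterior. Expanding the ELBO gives the form used later: $\mathcal{L}(\mathbf{x}) = E_q[\log p(\mathbf{x},\mathbf{z})] - E_q[\log q(\mathbf{z})]$.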
After all, we will be using the ELBO to check convergence anyway. Applying VI on GMM Let's get back to our original problem with the univariate Gaussian mixtures with unit variance. The full parameterization is as follows: Note that $c_i$ is a vector of ones and zeros such that $c_{ij} = 1$ and $c_{il} = 0$ for $j\neq l$ (a.k.a. a one-hot vector). By mean field VI, we can introduce variational distributions for the two latent variables $\mathbf{c}$ and $\mathbf{\mu}$: Choose $q$ According to what we have above, we will choose the following variational distributions for $c$ and $\mu$, where $\phi_i$ is a vector of probabilities such that $p(c_i=j) = \phi_{ij}$. ELBO The most important thing is to write down the ELBO, the evidence lower bound, which is needed for (i) parameter updates and (ii) the convergence check. (I have also seen the convergence check done via the relative change of parameter estimates: if the parameters do not change much, VI stops, assuming it has converged.) Recall that $ELBO = E_q[\log p(x,z)] - E_q[\log q(z)]$. Let me split this task into two. Full joint probability The hidden/latent variables in this problem are $c$ and $\mu$. Since $p(c_i) = \dfrac{1}{K}$ is a constant, we drop it. We then expand $p(\mu_j)$: For $\log p(x_i \vert c_i, \mu)$, it is a bit tricky. Recall that $c_i$ is a one-hot vector, where only one of the elements is 1. We can make use of this property and rewrite: Combining all of the above, we can write the log full joint probability as: Entropy of variational distributions Thanks to the mean field assumption, we can factorize the joint variational distribution easily: Let's expand these two terms separately. Therefore, we have: Full ELBO Merging the results back, we have the ELBO written as: Parameter updates $\phi_{ij}$: This is a constrained optimization because $\sum_j \phi_{ij} = 1~\forall i$. However, we do not need to add a Lagrange multiplier, and the result can still be normalized (we are using a lot of $\propto$ here!)
$m_j$, $s_j^2$: Note that we are treating $s_j^2$ as a whole. Now that we have the ELBO and parameter update formulas, we can set up our own VI algorithm for this simple Gaussian mixture!

Python Implementation

import numpy as np

class UGMM(object):
    '''Univariate GMM with CAVI'''
    def __init__(self, X, K=2, sigma=1):
        self.X = X
        self.K = K
        self.N = self.X.shape[0]
        self.sigma2 = sigma**2

    def _init(self):
        self.phi = np.random.dirichlet([np.random.random()*np.random.randint(1, 10)]*self.K, self.N)
        self.m = np.random.randint(int(self.X.min()), high=int(self.X.max()), size=self.K).astype(float)
        self.m += self.X.max()*np.random.random(self.K)
        self.s2 = np.ones(self.K) * np.random.random(self.K)
        print('Init mean')
        print(self.m)
        print('Init s2')
        print(self.s2)

    def get_elbo(self):
        t1 = np.log(self.s2) - self.m/self.sigma2
        t1 = t1.sum()
        t2 = -0.5*np.add.outer(self.X**2, self.s2+self.m**2)
        t2 += np.outer(self.X, self.m)
        t2 -= np.log(self.phi)
        t2 *= self.phi
        t2 = t2.sum()
        return t1 + t2

    def fit(self, max_iter=100, tol=1e-10):
        self._init()
        self.elbo_values = [self.get_elbo()]
        self.m_history = [self.m]
        self.s2_history = [self.s2]
        for iter_ in range(1, max_iter+1):
            self._cavi()
            self.m_history.append(self.m)
            self.s2_history.append(self.s2)
            self.elbo_values.append(self.get_elbo())
            if iter_ % 5 == 0:
                print(iter_, self.m_history[iter_])
            if np.abs(self.elbo_values[-2] - self.elbo_values[-1]) <= tol:
                print('ELBO converged with ll %.3f at iteration %d'%(self.elbo_values[-1], iter_))
                break
            if iter_ == max_iter:
                print('ELBO ended with ll %.3f'%(self.elbo_values[-1]))

    def _cavi(self):
        self._update_phi()
        self._update_mu()

    def _update_phi(self):
        t1 = np.outer(self.X, self.m)
        t2 = -(0.5*self.m**2 + 0.5*self.s2)
        exponent = t1 + t2[np.newaxis, :]
        self.phi = np.exp(exponent)
        self.phi = self.phi / self.phi.sum(1)[:, np.newaxis]

    def _update_mu(self):
        self.m = (self.phi*self.X[:, np.newaxis]).sum(0) * (1/self.sigma2 + self.phi.sum(0))**(-1)
        assert self.m.size == self.K
        self.s2 = (1/self.sigma2 + self.phi.sum(0))**(-1)
        assert self.s2.size == self.K

Making data

num_components = 3
mu_arr = np.random.choice(np.arange(-10, 10, 2), num_components) +\
    np.random.random(num_components)
mu_arr

array([ 8.79153551,  6.29803456, -5.7042636 ])

SAMPLE = 1000
X = np.random.normal(loc=mu_arr[0], scale=1, size=SAMPLE)
for i, mu in enumerate(mu_arr[1:]):
    X = np.append(X, np.random.normal(loc=mu, scale=1, size=SAMPLE))

fig, ax = plt.subplots(figsize=(15, 4))
sns.distplot(X[:SAMPLE], ax=ax, rug=True)
sns.distplot(X[SAMPLE:SAMPLE*2], ax=ax, rug=True)
sns.distplot(X[SAMPLE*2:], ax=ax, rug=True)

<matplotlib.axes._subplots.AxesSubplot at 0x10f5784e0>

ugmm = UGMM(X, 3)
ugmm.fit()

Init mean
[9.62056838 2.48053419 8.95455044]
Init s2
[0.22102799 0.50256273 0.72923656]
5 [ 8.78575069 -5.69598804  6.32040619]
10 [ 8.77126102 -5.69598804  6.30384436]
15 [ 8.77083542 -5.69598804  6.30344752]
20 [ 8.77082412 -5.69598804  6.30343699]
25 [ 8.77082382 -5.69598804  6.30343671]
30 [ 8.77082381 -5.69598804  6.3034367 ]
35 [ 8.77082381 -5.69598804  6.3034367 ]
ELBO converged with ll -1001.987 at iteration 35

ugmm.phi.argmax(1)

array([0, 0, 0, ..., 1, 1, 1])

sorted(mu_arr)

[-5.704263600460798, 6.298034563379406, 8.791535506275245]

sorted(ugmm.m)

[-5.695988039984863, 6.303436701203107, 8.770823807705389]

fig, ax = plt.subplots(figsize=(15, 4))
sns.distplot(X[:SAMPLE], ax=ax, hist=True, norm_hist=True)
sns.distplot(np.random.normal(ugmm.m[0], 1, SAMPLE), color='k', hist=False, kde=True)
sns.distplot(X[SAMPLE:SAMPLE*2], ax=ax, hist=True, norm_hist=True)
sns.distplot(np.random.normal(ugmm.m[1], 1, SAMPLE), color='k', hist=False, kde=True)
sns.distplot(X[SAMPLE*2:], ax=ax, hist=True, norm_hist=True)
sns.distplot(np.random.normal(ugmm.m[2], 1, SAMPLE), color='k', hist=False, kde=True)
I'm not an expert in lens design. I need to build a lens with a fixed focal length $f$, lens diameter $D$, maximum thickness $d$, refractive index $n$, the half-angle $\theta$ of the light entering the lens, and the angle I want at the exit from the lens. From these, I want to find the radii of curvature $R_1, R_2$. In some optics books used in my physics course I found the lensmaker's equation (I cannot use the thin-lens approximation): $$\frac{1}{f} = (n-1)\left [ \frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)d}{nR_1R_2} \right ]$$ This equation doesn't let me play with the lens diameter, so, looking around, I found the relation between the diameter and the numerical aperture NA, together with Snell's law, but I'm not sure how they can help me. $$n_1\sin\theta_{in} = n_2\sin\theta_{out}$$ $$NA = n\sin\theta = n\sin\left [ \arctan \left ( \frac{D}{2f} \right ) \right ]$$ Can you help me figure out these radii, knowing all the previous variables, and show how to take the whole lens diameter into account? This is my geometry. The focal point is known since I have a diverging source and I want a planar wavefront after the lens.
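One numerical way to use the thick-lens lensmaker's equation is to fix one radius and solve the equation for the other; the diameter then enters separately through the numerical aperture. The sketch below does this with SciPy's root finder. All numeric values ($f$, $n$, $d$, $D$, $R_2$) are illustrative assumptions, not values from the question:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative inputs (assumptions): focal length, index, center thickness, diameter (mm, except n)
f, n, d, D = 50.0, 1.5, 5.0, 25.0
R2 = -60.0   # fix the second radius (negative: convex toward the image side) and solve for R1

def residual(R1):
    """Thick-lens lensmaker's equation, rearranged so the root is the desired R1."""
    return 1.0/f - (n - 1.0)*(1.0/R1 - 1.0/R2 + (n - 1.0)*d/(n*R1*R2))

R1 = brentq(residual, 1.0, 1000.0)   # bracket chosen so the residual changes sign

# The diameter only enters through the acceptance half-angle / numerical aperture (in air, n=1):
theta = np.arctan(D / (2.0*f))
NA = 1.0 * np.sin(theta)
```

Sweeping $R_2$ (or $d$) under this constraint gives a one-parameter family of lenses with the same focal length, from which one can pick the shape that best handles the desired entry and exit angles.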
Inaccessible

A cardinal $\kappa$ is inaccessible, also called strongly inaccessible, if it is an uncountable regular strong limit cardinal.
Latest revision as of 09:13, 4 May 2019

Inaccessible cardinals are the traditional entry-point to the large cardinal hierarchy, although weaker notions such as the worldly cardinals can still be viewed as large cardinals.

A cardinal $\kappa$ being inaccessible implies the following:

* $V_\kappa$ is a model of ZFC, and so inaccessible cardinals are worldly. The worldly cardinals are unbounded in $\kappa$, so $V_\kappa$ satisfies the existence of a proper class of worldly cardinals.
* $\kappa$ is an aleph fixed point and a beth fixed point, and consequently $V_\kappa=H_\kappa$.
* (Solovay) There is an inner model of a forcing extension satisfying ZF+DC in which every set of reals is Lebesgue measurable; in fact, this is equiconsistent with the existence of an inaccessible cardinal.
* For any $A\subseteq V_\kappa$, the set of all $\alpha<\kappa$ such that $\langle V_\alpha;\in,A\cap V_\alpha\rangle\prec\langle V_\kappa;\in,A\rangle$ is club in $\kappa$.

An ordinal $\alpha$ being inaccessible is equivalent to each of the following:

* $V_{\alpha+1}$ satisfies $\mathrm{KM}$.
* $\alpha>\omega$ and $V_\alpha$ is a Grothendieck universe.
* $\alpha$ is $\Pi_0^1$-indescribable.
* $\alpha$ is $\Sigma_1^1$-indescribable.
* $\alpha$ is $\Pi_2^0$-indescribable.
* $\alpha$ is $0$-indescribable.
* $\alpha$ is a nonzero limit ordinal and $\beth_\alpha=R_\alpha$, where $R_\beta$ is the $\beta$-th regular cardinal, i.e. the least regular $\gamma$ such that $\{\kappa\in\gamma:\mathrm{cf}(\kappa)=\kappa\}$ has order-type $\beta$.
* $\alpha = \beth_{R_\alpha}$.
* $\alpha = R_{\beth_\alpha}$.
* $\alpha$ is a weakly inaccessible strong limit cardinal (see weakly inaccessible below).

Weakly inaccessible cardinal

A cardinal $\kappa$ is weakly inaccessible if it is an uncountable regular limit cardinal. Under GCH, this is equivalent to inaccessibility, since under GCH every limit cardinal is a strong limit cardinal. So the difference between weak and strong inaccessibility only arises when GCH fails badly.
Every inaccessible cardinal is weakly inaccessible, but forcing arguments show that any inaccessible cardinal can become a non-inaccessible weakly inaccessible cardinal in a forcing extension, such as after adding an enormous number of Cohen reals (this forcing is c.c.c. and hence preserves all cardinals and cofinalities and hence also all regular limit cardinals). Meanwhile, every weakly inaccessible cardinal is fully inaccessible in any inner model of GCH, since it will remain a regular limit cardinal in that model and hence also be a strong limit there. In particular, every weakly inaccessible cardinal is inaccessible in the constructible universe $L$. Consequently, although the two large cardinal notions are not provably equivalent, they are equiconsistent. There are a few equivalent definitions of weakly inaccessible cardinals. In particular: Letting $R$ be the transfinite enumeration of regular cardinals, a limit ordinal $\alpha$ is weakly inaccessible if and only if $R_\alpha=\aleph_\alpha$ A nonzero cardinal $\kappa$ is weakly inaccessible if and only if $\kappa$ is regular and there are $\kappa$-many regular cardinals below $\kappa$; that is, $\kappa=R_\kappa$. A regular cardinal $\kappa$ is weakly inaccessible if and only if $\mathrm{REG}$ is unbounded in $\kappa$ (showing the correlation between weakly Mahlo cardinals and weakly inaccessible cardinals, as stationary in $\kappa$ is replaced with unbounded in $\kappa$) Levy collapse The Levy collapse of an inaccessible cardinal $\kappa$ is the $\lt\kappa$-support product of $\text{Coll}(\omega,\gamma)$ for all $\gamma\lt\kappa$. This forcing collapses all cardinals below $\kappa$ to $\omega$, but since it is $\kappa$-c.c., it preserves $\kappa$ itself, and hence ensures $\kappa=\omega_1$ in the forcing extension. Inaccessible to reals A cardinal $\kappa$ is inaccessible to reals if it is inaccessible in $L[x]$ for every real $x$. 
For example, after the Levy collapse of an inaccessible cardinal $\kappa$, which forces $\kappa=\omega_1$ in the extension, the cardinal $\kappa$ is of course no longer inaccessible, but it remains inaccessible to reals. Universes When $\kappa$ is inaccessible, then $V_\kappa$ provides a highly natural transitive model of set theory, a universe in which one can view a large part of classical mathematics as taking place. In what appears to be an instance of convergent evolution, the same universe concept arose in category theory out of the desire to provide a hierarchy of notions of smallness, so that one may form such categories as the category of all small groups, or small rings or small categories, without running into the difficulties of Russell's paradox. Namely, a Grothendieck universe is a transitive set $W$ that is closed under pairing, power set and unions. That is, (transitivity) If $b\in a\in W$, then $b\in W$. (pairing) If $a,b\in W$, then $\{a,b\}\in W$. (power set) If $a\in W$, then $P(a)\in W$. (union) If $a\in W$, then $\cup a\in W$. The Grothendieck universe axiom is the assertion that every set is an element of a Grothendieck universe. This is equivalent to the assertion that the inaccessible cardinals form a proper class. Degrees of inaccessibility A cardinal $\kappa$ is $1$-inaccessible if it is inaccessible and a limit of inaccessible cardinals. In other words, $\kappa$ is $1$-inaccessible if $\kappa$ is the $\kappa^{\rm th}$ inaccessible cardinal, that is, if $\kappa$ is a fixed point in the enumeration of all inaccessible cardinals. Equivalently, $\kappa$ is $1$-inaccessible if $V_\kappa$ is a universe and satisfies the universe axiom. More generally, $\kappa$ is $\alpha$-inaccessible if it is inaccessible and for every $\beta\lt\alpha$ it is a limit of $\beta$-inaccessible cardinals. 
$1$-inaccessibility is already consistency-wise stronger than the existence of a proper class of inaccessible cardinals, and $2$-inaccessibility is stronger than the existence of a proper class of $1$-inaccessible cardinals. More specifically, a cardinal $\kappa$ is $\alpha$-inaccessible if and only if for every $\beta<\alpha$: $$V_{\kappa+1}\models\mathrm{KM}+\text{There is a proper class of }\beta\text{-inaccessible cardinals}$$ As a result, if $\kappa$ is $\alpha$-inaccessible then for every $\beta<\alpha$: $$V_\kappa\models\mathrm{ZFC}+\text{There exists a }\beta\text{-inaccessible cardinal}$$ Therefore $2$-inaccessibility is weaker than $3$-inaccessibility, which is weaker than $4$-inaccessibility... all of which are weaker than $\omega$-inaccessibility, which is weaker than $(\omega+1)$-inaccessibility, which is weaker than $(\omega+2)$-inaccessibility... all of which are weaker than hyperinaccessibility, etc. Hyper-inaccessible and more A cardinal $\kappa$ is hyperinaccessible if it is $\kappa$-inaccessible. One may similarly define $\kappa$ to be $\alpha$-hyperinaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$ it is a limit of $\beta$-hyperinaccessible cardinals. Continuing, $\kappa$ is hyperhyperinaccessible if $\kappa$ is $\kappa$-hyperinaccessible. More generally, $\kappa$ is hyper${}^\alpha$-inaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$ it is $\kappa$-hyper${}^\beta$-inaccessible, where $\kappa$ is $\alpha$-hyper${}^\beta$-inaccessible if it is hyper${}^\beta$-inaccessible and for every $\gamma<\alpha$ it is a limit of $\gamma$-hyper${}^\beta$-inaccessible cardinals. Meta-ordinal terms are terms like $\Omega^\alpha \cdot \beta + \Omega^\gamma \cdot \delta + \cdots + \Omega^\epsilon \cdot \zeta + \theta$ where $\alpha, \beta, \ldots$ are ordinals, ordered as if $\Omega$ were an ordinal greater than all the others. $(\Omega \cdot \alpha + \beta)$-inaccessible denotes $\beta$-hyper${}^\alpha$-inaccessible, $\Omega^2$-inaccessible denotes hyper${}^\kappa$-inaccessible $\kappa$, etc.
Every Mahlo cardinal $\kappa$ is $\Omega^\alpha$-inaccessible for all $\alpha<\kappa$, and probably more. A similar hierarchy exists for Mahlo cardinals below weakly compact cardinals. All such properties can be killed softly by forcing, making them into any weaker property from this family.[1]
Measurement of transverse energy at midrapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV (American Physical Society, 2016-09) We report the transverse energy ($E_{\mathrm T}$) measured with ALICE at midrapidity in Pb-Pb collisions at ${\sqrt{s_{\mathrm {NN}}}}$ = 2.76 TeV as a function of centrality. The transverse energy was measured using ... Elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{\rm NN}}= 2.76$ TeV (Springer, 2016-09) The elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity ($|y| < 0.7$) is measured in Pb–Pb collisions at $\sqrt{s_{\rm NN}}= 2.76$ TeV with ALICE at the LHC. The particle azimuthal distribution with ... Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ... D-meson production in $p$–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and in $pp$ collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2016-11) The production cross sections of the prompt charmed mesons D$^0$, D$^+$, D$^{*+}$ and D$_{\rm s}^+$ were measured at mid-rapidity in p-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}=5.02$ TeV ... Azimuthal anisotropy of charged jet production in $\sqrt{s_{\rm NN}}=2.76$ TeV Pb–Pb collisions (Elsevier, 2016-02) This paper presents measurements of the azimuthal dependence of charged jet production in central and semi-central $\sqrt{s_{\rm NN}}=2.76$ TeV Pb–Pb collisions with respect to the second harmonic event plane, quantified ...
Particle identification in ALICE: a Bayesian approach (Springer Berlin Heidelberg, 2016-05-25) We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation ...
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV (Springer-verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
By the "noncompact $U(1)$ group", we mean a group that is isomorphic to $({\mathbb R},+)$. In other words, the elements of $U(1)$ are formally $\exp(i\phi)$ but the identification $\phi\sim \phi+2\pi k$ isn't imposed. When it's not imposed, it also means that the dual variable ("momentum") to $\phi$, the charge, isn't quantized. One may allow fields with arbitrary continuous charges $Q$ that transform by the factor $\exp(iQ\phi)$. It's still legitimate to call this a version of a $U(1)$ group because the Lie algebra of the group is still the same, ${\mathfrak u}(1)$. In the second part of the question (I am not 100% sure what you don't understand about the quote), you probably want to know why compactness is related to quantization. It's because the charge $Q$ is what determines how the phase $\phi$ of a complex field changes under gauge transformations. If we say that the gauge transformation multiplying fields by $\exp(iQ\phi)$ is equivalent for $\phi$ and $\phi+2\pi$, that is equivalent to saying that $Q$ is integer-valued, because the identity $\exp(iQ\phi)=\exp(iQ(\phi+2\pi))$ holds iff $Q\in{\mathbb Z}$. It's the same logic as the quantization of momentum on compact spaces, or of angular momentum for wave functions that depend on the spherical coordinates. He is explaining that the embedding of the $Q$ into a non-Abelian group pretty much implies that $Q$ is embedded into an $SU(2)$ group inside the non-Abelian group, and then $Q$ is quantized for the same mathematical reason why $J_z$ is quantized. I would only be repeating his explanation, because it seems utterly complete and comprehensible to me. Note that the quantization of $Q$ holds even if the $SU(2)$ is spontaneously broken to a $U(1)$. After all, we see such a thing in the electroweak theory. The group theory still works for the spontaneously broken $SU(2)$ group.

This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user Luboš Motl
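The key identity $\exp(iQ\phi)=\exp(iQ(\phi+2\pi))$ iff $Q\in{\mathbb Z}$ is easy to check numerically. A small sketch:

```python
import numpy as np

def is_single_valued(Q, tol=1e-12):
    """Check whether exp(i*Q*phi) is unchanged under the shift phi -> phi + 2*pi."""
    phi = np.linspace(0.0, 2.0 * np.pi, 7)
    return np.allclose(np.exp(1j * Q * phi), np.exp(1j * Q * (phi + 2.0 * np.pi)), atol=tol)

# Integer charges respect the compact identification phi ~ phi + 2*pi ...
assert all(is_single_valued(Q) for Q in [-2, -1, 0, 1, 3])
# ... while generic non-integer charges do not, so they only make sense
# on the noncompact (R, +) version of the group.
assert not is_single_valued(0.5)
assert not is_single_valued(np.sqrt(2))
```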
I have an eigenvalue problem: $$-\frac{d^2}{dx^2} \psi(x) +V(x)\psi(x) = E \psi(x)$$ where $V(x)$ is a complex periodic potential: $$V(x) = 4[\cos^2(x) + i 0.3 \sin(2x)]$$ It has been claimed that the eigenvalues of this problem are all real (this is always the case if the coefficient of $\sin$ is less than $0.5$; here it is $0.3$), although the operator is not Hermitian. To verify this, I have used the code provided here by @Jens, with periodic boundary conditions, and NDEigensystem. But Mathematica doesn't return anything. The code in the above link is (with an additional 1/2 for the kinetic term):

spectrum[n_, dim_: 10000][potential_, {var_, varMin_, varMax_}, kBloch_: 0] :=
 Module[{e, v, vRange, dx, grid, potentialGrid, eKin, ePot, min, interpolate},
  vRange = varMax - varMin;
  interpolate = ListInterpolation[Append[#, First[#]], {{varMin, varMax}},
     PeriodicInterpolation -> True] &;
  dx = N[vRange/dim];
  grid = Range[varMin, varMax, dx];
  eKin = -(1/2) NDSolve`FiniteDifferenceDerivative[2, grid,
       PeriodicInterpolation -> True]["DifferentiationMatrix"] -
    I kBloch NDSolve`FiniteDifferenceDerivative[1, grid,
       PeriodicInterpolation -> True]["DifferentiationMatrix"];
  potentialGrid = Table[potential + kBloch^2/2, {var, Most[grid]}];
  (* eKin is periodically interpolated, so its last element is internally
     dropped by FiniteDifferenceDerivative, as redundant. Therefore, I also
     have to drop the last grid element in potentialGrid. *)
  min = Min[potentialGrid];
  (* The matrix for the potential is shifted so its minimum entry is zero,
     guaranteeing that eigenvalues will be sorted in descending order: *)
  ePot = DiagonalMatrix[SparseArray[potentialGrid - min]];
  {e, v} = Eigensystem[eKin + ePot, -n];
  (* Final step: turn vectors on the spatial grid back into functions of x by
     interpolation. In the eigenvalues, the potential offset min is added back
     to recover the original energy scale. *)
  Append[Reverse /@ {e + min, Map[interpolate[#/Max[Abs[#]]] &, v]},
   interpolate[potentialGrid]]]

And the potential is defined (with an extra 1/2):

potential[x_] = 1/2 (4 (Cos[x]^2 + I 0.3 Sin[2 x]));
{eigenvals, eigenvecs, pot} = spectrum[7][potential[x], {x, -Pi/2, Pi/2}];
2 eigenvals
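For readers without Mathematica, a stripped-down cross-check of the reality claim can be sketched with plain NumPy. This is my own re-implementation, not the code above: a second-order periodic finite-difference Laplacian at Bloch momentum $k=0$, diagonalized with the general eigensolver since the matrix is non-Hermitian; the grid size and tolerance are my choices.

```python
import numpy as np

N = 201                                   # grid points over one period
x = np.linspace(-np.pi/2, np.pi/2, N, endpoint=False)
dx = np.pi / N

# periodic second-difference matrix for the kinetic term -(1/2) d^2/dx^2
D2 = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
D2[0, -1] = D2[-1, 0] = 1.0
kinetic = -0.5 * D2 / dx**2

# complex PT-symmetric potential, with the extra overall 1/2
V = 0.5 * 4 * (np.cos(x)**2 + 0.3j * np.sin(2 * x))
H = kinetic + np.diag(V)

evals = np.linalg.eigvals(H)
# expected numerically ~0 for coefficient 0.3 < 0.5 (unbroken PT symmetry)
print(np.max(np.abs(evals.imag)))
```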
Learning Objectives Calculate the kinetic energy of a particle given its mass and its velocity or momentum Evaluate the kinetic energy of a body, relative to different frames of reference It's plausible to suppose that the greater the velocity of a body, the greater effect it could have on other bodies. This does not depend on the direction of the velocity, only its magnitude. At the end of the seventeenth century, a quantity was introduced into mechanics to explain collisions between two perfectly elastic bodies, in which one body makes a head-on collision with an identical body at rest. The first body stops, and the second body moves off with the initial velocity of the first body. (If you have ever played billiards or croquet, or seen a model of Newton's Cradle, you have observed this type of collision.) The idea behind this quantity was related to the forces acting on a body and was referred to as "the energy of motion." Later on, during the eighteenth century, the name kinetic energy was given to energy of motion. With this history in mind, we can now state the classical definition of kinetic energy. Note that when we say "classical," we mean non-relativistic, that is, at speeds much less than the speed of light. At speeds comparable to the speed of light, the special theory of relativity requires a different expression for the kinetic energy of a particle, as discussed in Relativity. Since objects (or systems) of interest vary in complexity, we first define the kinetic energy of a particle with mass m.
Kinetic Energy The kinetic energy of a particle is one-half the product of the particle's mass m and the square of its speed \(v\): \[K = \frac{1}{2} mv^{2} \ldotp \label{7.6}\] We then extend this definition to any system of particles by adding up the kinetic energies of all the constituent particles: $$K = \sum_j \frac{1}{2} m_j v_j^{2} \ldotp \label{7.7}$$ Note that just as we can express Newton's second law in terms of either the rate of change of momentum or mass times the rate of change of velocity, so the kinetic energy of a particle can be expressed in terms of its mass and momentum (\(\vec{p}\) = m \(\vec{v}\)), instead of its mass and velocity. Since v = \(\frac{p}{m}\), we see that $$K = \frac{1}{2} m \left(\dfrac{p}{m}\right)^{2} = \frac{p^{2}}{2m}$$ also expresses the kinetic energy of a single particle. Sometimes, this expression is more convenient to use than Equation \(\ref{7.6}\). The units of kinetic energy are mass times the square of speed, or kg • m²/s². But the units of force are mass times acceleration, kg • m/s², so the units of kinetic energy are also the units of force times distance, which are the units of work, or joules. You will see in the next section that work and kinetic energy have the same units, because they are different forms of the same, more general, physical property. Example \(\PageIndex{1}\): Kinetic Energy of an Object What is the kinetic energy of an 80-kg athlete, running at 10 m/s? The Chicxulub crater in Yucatan, one of the largest existing impact craters on Earth, is thought to have been created by an asteroid, traveling at 22 km/s and releasing 4.2 x 10^23 J of kinetic energy upon impact. What was its mass? In nuclear reactors, thermal neutrons, traveling at about 2.2 km/s, play an important role. What is the kinetic energy of such a particle? Strategy To answer these questions, you can use the definition of kinetic energy in Equation \(\ref{7.6}\). You also have to look up the mass of a neutron.
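A quick numerical check that the mass-speed and mass-momentum forms agree (the mass and speed here are the athlete's values from Example 1):

```python
m, v = 80.0, 10.0            # kg, m/s
p = m * v                    # momentum, kg m/s
K_from_v = 0.5 * m * v**2    # K = (1/2) m v^2
K_from_p = p**2 / (2 * m)    # K = p^2 / (2m)
print(K_from_v, K_from_p)    # both 4000.0 J
```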
Solution Do not forget to convert km into m to do these calculations, although, to save space, we omitted showing these conversions. $$K = \frac{1}{2} (80\; kg)(10\; m/s)^{2} = 4.0\; kJ \ldotp \nonumber $$ $$m = \frac{2K}{v^{2}} = \frac{2(4.2 \times 10^{23}\; J)}{(22\; km/s)^{2}} = 1.7 \times 10^{15}\; kg \ldotp \nonumber$$ $$K = \frac{1}{2} (1.68 \times 10^{-27}\; kg) (2.2\; km/s)^{2} = 4.1 \times 10^{-21}\; J \ldotp \nonumber$$ Significance In this example, we used the way mass and speed are related to kinetic energy, and we encountered a very wide range of values for the kinetic energies. Different units are commonly used for such very large and very small values. The energy of the impactor in part (b) can be compared to the explosive yield of TNT and nuclear explosions, 1 megaton = 4.18 x 10^15 J. The Chicxulub asteroid's kinetic energy was about a hundred million megatons. At the other extreme, the energy of a subatomic particle is expressed in electron-volts, 1 eV = 1.6 x 10^−19 J. The thermal neutron in part (c) has a kinetic energy of about one fortieth of an electronvolt. Exercise \(\PageIndex{1}\) A car and a truck are each moving with the same kinetic energy. Assume that the truck has more mass than the car. Which has the greater speed? A car and a truck are each moving with the same speed. Which has the greater kinetic energy? Because velocity is a relative quantity, you can see that the value of kinetic energy must depend on your frame of reference. You can generally choose a frame of reference that is suited to the purpose of your analysis and that simplifies your calculations. One such frame of reference is the one in which the observations of the system are made (likely an external frame). Another choice is a frame that is attached to, or moves with, the system (likely an internal frame).
The equations for relative motion, discussed in Motion in Two and Three Dimensions, provide a link to calculating the kinetic energy of an object with respect to different frames of reference. Example \(\PageIndex{2}\): Kinetic Energy Relative to Different Frames A 75.0-kg person walks down the central aisle of a subway car at a speed of 1.50 m/s relative to the car, whereas the train is moving at 15.0 m/s relative to the tracks. What is the person's kinetic energy relative to the car? What is the person's kinetic energy relative to the tracks? What is the person's kinetic energy relative to a frame moving with the person? Strategy Since speeds are given, we can use \(\frac{1}{2} mv^{2}\) to calculate the person's kinetic energy. However, in part (a), the person's speed is relative to the subway car (as given); in part (b), it is relative to the tracks; and in part (c), it is zero. If we denote the car frame by C, the track frame by T, and the person by P, the relative velocities in part (b) are related by \(\vec{v}_{PT}\) = \(\vec{v}_{PC}\) + \(\vec{v}_{CT}\). We can assume that the central aisle and the tracks lie along the same line, but the direction the person is walking relative to the car isn't specified, so we will give an answer for each possibility, v PT = v CT ± v PC, as shown in Figure \(\PageIndex{1}\). Solution $$K = \dfrac{1}{2} (75.0\; kg)(1.50\; m/s)^{2} = 84.4\; J \ldotp \nonumber$$ $$v_{PT} = (15.0 \pm 1.50)\; m/s \ldotp \nonumber$$ Therefore, the two possible values for kinetic energy relative to the tracks are $$K = \dfrac{1}{2} (75.0\; kg)(13.5\; m/s)^{2} = 6.83\; kJ \nonumber $$ and $$K = \frac{1}{2} (75.0\; kg)(16.5\; m/s)^{2} = 10.2\; kJ \ldotp \nonumber$$ In a frame where v_P = 0, K = 0 as well. Significance You can see that the kinetic energy of an object can have very different values, depending on the frame of reference.
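The arithmetic in this example can be reproduced in a few lines (a sketch using the numbers from the problem statement):

```python
m = 75.0                       # kg
v_PC, v_CT = 1.50, 15.0        # person rel. car, car rel. tracks (m/s)

K_car = 0.5 * m * v_PC**2      # relative to the car: 84.4 J
# relative to the tracks, walking with (+) or against (-) the train's motion
K_tracks = [0.5 * m * (v_CT + s * v_PC)**2 for s in (+1, -1)]
print(K_car)       # 84.375
print(K_tracks)    # [10209.375, 6834.375] -> 10.2 kJ and 6.83 kJ
```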
However, the kinetic energy of an object can never be negative, since it is the product of the mass and the square of the speed, both of which are always positive or zero. Exercise \(\PageIndex{2}\) You are rowing a boat parallel to the banks of a river. Your kinetic energy relative to the banks is less than your kinetic energy relative to the water. Are you rowing with or against the current? The kinetic energy of a particle is a single quantity, but the kinetic energy of a system of particles can sometimes be divided into various types, depending on the system and its motion. For example: If all the particles in a system have the same velocity, the system is undergoing translational motion and has translational kinetic energy. If an object is rotating, it could have rotational kinetic energy. If it is vibrating, it could have vibrational kinetic energy. The kinetic energy of a system, relative to an internal frame of reference, may be called internal kinetic energy. The kinetic energy associated with random molecular motion may be called thermal energy. These names will be used in later chapters of the book, when appropriate. Regardless of the name, every kind of kinetic energy is the same physical quantity, representing energy associated with motion. Example \(\PageIndex{3}\): Special Names for Kinetic Energy A player lobs a mid-court pass with a 624-g basketball, which covers 15 m in 2 s. What is the basketball’s horizontal translational kinetic energy while in flight? An average molecule of air, in the basketball in part (a), has a mass of 29 u, and an average speed of 500 m/s, relative to the basketball. There are about 3 x 10 23molecules inside it, moving in random directions, when the ball is properly inflated. What is the average translational kinetic energy of the random motion of all the molecules inside, relative to the basketball? 
How fast would the basketball have to travel relative to the court, as in part (a), so as to have a kinetic energy equal to the amount in part (b)? Strategy In part (a), first find the horizontal speed of the basketball and then use the definition of kinetic energy in terms of mass and speed, K = \(\frac{1}{2} mv^{2}\). Then in part (b), convert unified units to kilograms and then use K = \(\frac{1}{2} mv^{2}\) to get the average translational kinetic energy of one molecule, relative to the basketball. Then multiply by the number of molecules to get the total result. Finally, in part (c), we can substitute the amount of kinetic energy in part (b), and the mass of the basketball in part (a), into the definition K = \(\frac{1}{2} mv^{2}\), and solve for v. Solution The horizontal speed is \(\frac{(15\; m)}{(2\; s)}\), so the horizontal kinetic energy of the basketball is $$\frac{1}{2} (0.624\; kg)(7.5\; m/s)^{2} = 17.6\; J \ldotp \nonumber$$ The average translational kinetic energy of a molecule is $$\frac{1}{2} (29\; u) (1.66 \times 10^{-27}\; kg/u) (500\; m/s)^{2} = 6.02 \times 10^{-21}\; J, \nonumber $$ and the total kinetic energy of all the molecules is $$(3 \times 10^{23})(6.02 \times 10^{-21}\; J) = 1.80\; kJ \ldotp \nonumber$$ $$v = \sqrt{\frac{2(1.8\; kJ)}{(0.624\; kg)}} = 76.0\; m/s \ldotp \nonumber$$ Significance In part (a), this kind of kinetic energy can be called the horizontal kinetic energy of an object (the basketball), relative to its surroundings (the court). If the basketball were spinning, all parts of it would have not just the average speed but also rotational kinetic energy. Part (b) reminds us that this kind of kinetic energy can be called internal or thermal kinetic energy. Notice that this energy is about a hundred times the energy in part (a). How to make use of thermal energy will be the subject of the chapters on thermodynamics.
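The three parts can be checked together in a short script (constants as quoted in the problem; 1 u = 1.66 × 10⁻²⁷ kg):

```python
import math

# (a) translational KE of the ball
m_ball = 0.624                      # kg
v_ball = 15.0 / 2.0                 # m/s, horizontal speed
K_ball = 0.5 * m_ball * v_ball**2   # ~17.6 J

# (b) thermal KE of the air inside, relative to the ball
m_mol = 29 * 1.66e-27               # kg per molecule
K_mol = 0.5 * m_mol * 500.0**2      # per molecule, ~6.02e-21 J
K_air = 3e23 * K_mol                # ~1.8 kJ total

# (c) speed giving the ball the same KE as (b)
v_equiv = math.sqrt(2 * K_air / m_ball)   # ~76 m/s

print(K_ball, K_air, v_equiv)
```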
In part (c), since the energy in part (b) is about 100 times that in part (a), the speed should be about 10 times as big, which it is (76 compared to 7.5 m/s). Contributors Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
Calculating the Heat Transfer Coefficient for Flat and Corrugated Plates In many engineering applications involving conjugate heat transfer, such as designing heat exchangers and heat sinks, it's important to calculate the heat transfer coefficient. Often determined with the aid of correlations and empirical relations, the heat transfer coefficient provides information about heat transfer between solids and fluids. In this blog post, we discuss and demonstrate how the COMSOL Multiphysics® software can be used to evaluate the heat transfer coefficient for plate geometries. What Is the Heat Transfer Coefficient? Let us consider a heated wall or surface over which a fluid is flowing. Heat transfer in the fluid is predominantly governed by convection. Similarly, convection is the primary mode of heat transport in the case of two fluids (through a solid surface), such as with heat exchangers. The rate at which heat transfer occurs in both cases is governed by a temperature difference and a proportionality coefficient called the heat transfer coefficient. The heat transfer coefficient is indicative of the effectiveness of heat transport between the surface and the fluid. Mathematically, h is the ratio of the heat flux at the wall to the temperature difference between the wall and fluid; i.e.,

(1) h = \frac{q^{\prime \prime}}{T_w - T_\infty}

where q^{\prime \prime} is the heat flux, T_w is the wall temperature, and T_\infty is the characteristic fluid temperature. The characteristic fluid temperature can also be the external temperature far from the wall or the bulk temperature in tubes. When the object is surrounded by an infinitely large volume of air, we assume that the air temperature far away from the object is a constant, known value. The heat transfer coefficient evaluated in this case is referred to as the external heat transfer coefficient.
With the above assumption, if we look closely at the wall (if the thickness of the wall is defined across the y direction, and y = 0 represents the surface/plane of the wall), it's clear that the No Slip condition at the wall results in the formation of a stagnant, thin film of fluid. Therefore, heat transfer through the fluid immediately adjacent to the wall happens purely due to conduction. This can be written mathematically (Ref. 1) as:

(2) q^{\prime \prime} = -k \left. \frac{\partial T}{\partial y} \right|_{y=0}

Here, k is the thermal conductivity of the fluid, with the T derivative being evaluated in the fluid. Combining Eq. (1) and Eq. (2) gives:

(3) h = \frac{-k \left. \partial T/\partial y \right|_{y=0}}{T_w - T_\infty}

Calculating the Heat Transfer Coefficient in COMSOL Multiphysics® Practically speaking, it is difficult to measure the temperature gradient at the wall. Additionally, it is desirable to have a computationally inexpensive approach for understanding the heat transfer at the wall. Therefore, nonanalytical ways of calculating the heat transfer coefficient are usually preferred. One common approach is using convective correlations defined by the dimensionless Nusselt number. These correlations are available for various cases, including natural and forced convection as well as internal and external flows, and give fast results. However, this approach can only be used for regular geometric shapes, such as horizontal and vertical walls, cylinders, and spheres. When complex shapes are involved, the heat transfer coefficient can instead be calculated by simulating the conjugate heat transfer phenomenon. Let's now discuss two different cases and approaches: Calculating the heat transfer coefficient in regular geometries (like a horizontal plate) using: Conjugate heat transfer analysis Convective correlations; i.e., without considering flow Calculating the heat transfer coefficient in irregular/complex geometries (like a corrugated plate) Note that the flow regime is an important consideration because the heat transfer coefficient is dependent on the velocity.
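As a toy illustration of the wall-gradient definition $h = -k\,(\partial T/\partial y)|_{y=0}/(T_w - T_\infty)$, the sketch below assumes a linear temperature profile across a thin film; every number (conductivity, temperatures, film thickness) is invented for illustration only:

```python
import numpy as np

k = 0.025                   # W/(m K), assumed conductivity of air
T_w, T_inf = 293.0, 283.0   # assumed wall and bulk temperatures, K
delta_T = 0.02              # assumed thermal film thickness, m

y = np.linspace(0.0, delta_T, 101)
T = T_w - (T_w - T_inf) * y / delta_T      # assumed linear profile

dTdy_wall = (T[1] - T[0]) / (y[1] - y[0])  # one-sided gradient at the wall
h = -k * dTdy_wall / (T_w - T_inf)
print(h)   # equals k / delta_T = 1.25 W/(m^2 K) for a linear profile
```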
In both cases, a pragmatic condition, such as a fast flow in a blower system or an electronic chip cooling device, needs to be considered. This indicates that it is necessary to model the cases as a turbulent flow coupled with heat transport. Example 1: Forced Convection and Flow Past a Horizontal Plate Let's consider the situation of modeling the flow past a horizontal flat plate with a length of 5 m, which is subjected to a constant and homogeneous heat flux of 10 W/m^2. The plate is placed in an airflow with an average velocity of 0.5 m/s and temperature of 283 K. The figure below shows the schematic of the problem definition, including the velocity and temperature profiles for a laminar flow inside the momentum boundary layer (say, \delta) and the thermal boundary layer (\delta_T), respectively. Schematics of laminar flow (top) and turbulent flow (bottom) past a horizontal plate. Conjugate Heat Transfer Analysis The numerical solution is obtained in COMSOL Multiphysics by using the Conjugate Heat Transfer interface, which couples the fluid flow and heat transfer phenomena. The velocity field and pressure are computed in the air domain, while the temperature is computed in the plate and in the air domain. The temperature distribution inside the plate and fluid is shown in the figure below. The thermal and momentum boundary layers formed inside the fluid domain can be seen in the region that goes from the wall to 2 cm above the plate. Temperature distribution (surface plot), isotherm at 11°C (red line), and velocity field (arrows) illustrating the thermal and momentum boundary layers next to the plate surface (anisotropic axis scale). From the simulation results, it is possible to evaluate the heat flux using the corresponding predefined postprocessing variable. Dividing it by the temperature difference (T_w-T_\infty) gives the heat transfer coefficient (Eq. 3).
The heat transfer coefficient along the plate obtained using the conjugate heat transfer analysis is plotted on a graph in a following section. Heat Transfer Coefficient Based on Nusselt Number Correlations The Nusselt number correlation for forced convection past a flat plate is available in the literature (Ref. 1, for example). In this second approach, the same model is solved without solving for the flow; that is, using heat transfer correlations. The computational domain is limited to the solid (plate). The heat loss from the hot plate to the cold fluid is defined using a Heat Flux boundary condition. This boundary condition contains an option to define the heat transfer coefficient using predefined Nusselt number correlations, as shown below. Note that this correlation is predefined in COMSOL Multiphysics. Settings for the Heat Flux boundary condition. With this approach, only the temperature distribution in the plate is computed. From the heat transfer coefficient defined in the Heat Flux boundary condition, it is possible to evaluate the heat flux at the plate surface, q=h\cdot(T_\infty-T). Evaluating the Heat Transfer Coefficient For both approaches described above, it is possible to evaluate the heat transfer coefficient along the plate. The figure below compares the heat transfer coefficient estimated using the two approaches. Comparison of the heat transfer coefficient along the flat plate estimated using a conjugate heat transfer simulation (blue line) and a Nusselt correlation (green line). We can see that the value obtained from the Nusselt number correlation is in close agreement with the value obtained from the full conjugate heat transfer simulation. A quantity of interest is the heat rate over the plate obtained in the two cases: Nusselt number correlation: 50 W/m Conjugate heat transfer: 49.884 W/m For certain calculations, the approach based on Nusselt number correlations is able to predict the heat flux with good enough accuracy.
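The correlation route can also be sketched outside COMSOL. The block below is a rough outline, not the blog's model: it uses the common laminar flat-plate average correlation Nu = 0.664 Re^{1/2} Pr^{1/3} with assumed textbook air properties near 283 K, so the resulting h is only indicative and will not match the plotted curves exactly.

```python
import math

# assumed air properties near 283 K
nu_air = 1.4e-5    # kinematic viscosity, m^2/s
k_air = 0.025      # thermal conductivity, W/(m K)
Pr = 0.71          # Prandtl number

U, L = 0.5, 5.0    # free-stream velocity (m/s) and plate length (m)

Re = U * L / nu_air
assert Re < 5e5                          # laminar regime for this correlation
Nu = 0.664 * math.sqrt(Re) * Pr**(1/3)   # average Nusselt number
h_avg = Nu * k_air / L                   # average heat transfer coefficient
print(Re, Nu, h_avg)
```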
Next, we examine a case with an uncommon shape, where Nusselt number correlations are not readily available and the only possible approach is to run a conjugate heat transfer simulation. Example 2: Flow Past a Corrugated Horizontal Plate Let's consider a similar configuration as in the first case, except that the plate has a corrugated top surface. The figure below shows a schematic of the problem definition. In this model, the corrugations of the top plate are considered in one section of the geometry. The rest of the plate is flat. Schematic of the flow past a horizontal plate. Here, the flow field close to the wall has recirculation zones that enhance the heat transfer rate. In the image below, we can see the temperature distribution and velocity streamlines. Temperature distribution in degrees Celsius (surface) and the velocity field (streamlines). The left plot below shows the heat transfer coefficient along the length of the corrugated plate. With a geometry such as that of the wavy plate, the heat transfer coefficient depends on the temperature fields; the velocity fields; and geometric parameters of the corrugations, such as the height. Hence, we observe an enhanced heat transfer coefficient as compared to the flat plate (right image below). Heat transfer coefficient along the corrugated plate (left) and along the flat plate (right). While considering complex geometries containing corrugated surfaces, the conjugate heat transfer approach may be computationally expensive, and alternative approaches are desirable. A good approximation is to reduce the geometric complexity by representing the surface as flat and extrapolating to it the heat transfer coefficient obtained from the corrugated-plate simulation, taking into account geometric parameters such as the corrugation height, the flow velocity fields, and the temperature variations on the surface.
It is interesting to note that even if the surface is not truly isothermal or does not carry a strictly constant heat flux, the heat transfer coefficient is still of interest for some geometries within a given range, as long as the configuration stays close to the one for which it was derived. To check, we can consider a simple case wherein the heat transfer coefficients are calculated across a range of flow velocities in the corrugated plate geometry. The data can be used to obtain an average heat transfer coefficient and can be extrapolated to the flat plate geometry model. The total heat loss from the surface, or the heat transfer coefficient obtained from flow simulations, can be investigated to understand the validity of the approximations. Concluding Thoughts In this blog post, we discussed how to calculate the heat transfer coefficient using two methods. With the conjugate heat transfer solution, you can use the built-in heat flux variables available in COMSOL Multiphysics. Using the Heat Flux boundary condition with Nusselt number correlations, you can simulate problems involving simple shapes. We also discussed how to reduce geometric complexities to obtain the heat transfer coefficient for complex geometries. Next Steps Learn more about the specialized features for modeling heat transfer in the COMSOL® software by clicking the button below. Try the approaches discussed here in the following tutorials: Natural Convection Cooling of a Vacuum Flask Nonisothermal Turbulent Flow Over a Flat Plate Nonisothermal Laminar Flow in a Circular Tube Reference A. Bejan et al., Heat Transfer Handbook, John Wiley & Sons, 2003.
A stationary stochastic process has a spectral density of $$ S_{XX}(\omega) = 1 - \frac{|\omega|}{8 \pi}. $$ What is the mean square value of the process? I think I figured it out. There is a property of the spectral density function that says that, for a stationary process, $$ E[X^2(t)] = \int_{-\infty}^{\infty}S_{XX}(\omega)\,d\omega $$ Since the spectral density function must be non-negative, it is supported on $-8\pi \leq \omega \leq 8\pi$, so the mean square is $$ E[X^2(t)] = \int_{-8\pi}^{8\pi}\left(1 - \frac{|\omega|}{8\pi}\right)\,d\omega = 8 \pi $$
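The integral is just the area of a triangle with base $16\pi$ and height $1$; a quick trapezoid-rule check (the grid resolution is an arbitrary choice) confirms the value $8\pi$:

```python
import numpy as np

w = np.linspace(-8 * np.pi, 8 * np.pi, 200001)
S = 1.0 - np.abs(w) / (8 * np.pi)          # triangular spectral density
mean_square = np.sum(0.5 * (S[:-1] + S[1:]) * np.diff(w))
print(mean_square, 8 * np.pi)              # both ~25.13
```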
Indium compounds give a blue-violet flame test. The atomic emission responsible for this blue-violet color has a wavelength of 451 nm. Obtain the energy of a single photon of this wavelength. Solution: We will first find the frequency. To do so, we will write the wavelength, 451 nm, in meters: 451\text{ nm}\,=\,4.51\times 10^{-7}\text{ m} Next, we will use this equation to find the frequency (where v is the frequency, c is the speed of light, and \lambda is the wavelength): v\,=\,\dfrac{c}{\lambda} Speed of light: c\,=\,2.998\times 10^8 \dfrac{m}{s} Substituting the values gives us: v\,=\,6.6474\times 10^{14}\text{/s} To figure out the energy of a single photon, we will use the following equation (where E is the photon energy, h is Planck's constant, and v is the frequency): E\,=\,hv Remember that Planck's constant h\,=\,6.626\times 10^{-34}\text{ J}\cdot\text{s} Again, substituting the values, we have: E\,=\,4.404\times 10^{-19}\text{ J}
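Both steps (frequency from wavelength, then photon energy) can be verified with a short calculation using the constants quoted above:

```python
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck's constant, J s
lam = 451e-9       # wavelength, m

nu = c / lam       # frequency, ~6.647e14 /s
E = h * nu         # photon energy, ~4.404e-19 J
print(nu, E)
```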
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest. Nah, I have a pretty garbage question. Let me spell it out. I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$. For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$. This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin. Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle. 
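In the simplest case of a trivial line bundle over $\Bbb R$, the equivalence relation is concrete: two local sections have the same $r$-jet at $0$ iff their Taylor coefficients agree up to degree $r$. A toy sketch (polynomials stand in for local sections; the helper name is mine):

```python
def r_jet(coeffs, r):
    """r-jet at 0 of a polynomial given by coefficients, lowest degree first."""
    padded = list(coeffs) + [0.0] * max(0, r + 1 - len(coeffs))
    return tuple(padded[:r + 1])

sin_taylor = [0.0, 1.0, 0.0, -1/6, 0.0, 1/120]  # sin x up to degree 5
cubic = [0.0, 1.0, 0.0, -1/6]                   # x - x^3/6

print(r_jet(sin_taylor, 3) == r_jet(cubic, 3))  # True: same 3-jet
print(r_jet(sin_taylor, 5) == r_jet(cubic, 5))  # False: 5-jets differ
```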
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$ $$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$ @user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, because deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (a non-degenerate interval has positive measure). The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$. @RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea. The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$. Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinality of an $\varepsilon$-cover $P$ of $M$; that is, for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$.... The same result should be true for abstract Riemannian manifolds.
Do you know how to prove it in that case? I think there you really do need some kind of PDEs to construct good charts. I might be way overcomplicating this. If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$? I think so by the squeeze theorem or something. this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$ but then we can replace all of those $U_i$'s with balls, incurring some fixed error In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid... @BalarkaSen what is this ok but this does confirm that what I'm trying to do is wrong haha In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas... Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. 
If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function? I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest. Nah, I have a pretty garbage question. Let me spell it out. I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$. For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to the first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$. This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$, which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$, which is the space of $r$-th order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving the origin. Then declare $J^r p : J^r E \to M$ to be the bundle whose fiber over $x$ is $J^r_x E$; you can set up the transition functions etc. no problem, so all the topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$, where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$ $$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$ @user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, because deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure). The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$. @RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea. The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described. It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation. The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-th order Taylor series expansions of sections of $E$ defined near $x$. Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinality of an $\varepsilon$-cover $P$ of $M$; that is, for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$.... The same result should be true for abstract Riemannian manifolds.
Do you know how to prove it in that case? I think there you really do need some kind of PDEs to construct good charts. I might be way overcomplicating this. If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $\operatorname{diam}(U_i)\le\delta$ we set $\operatorname{diam}(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$? I think so by the squeeze theorem or something. this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$ but then we can replace all of those $U_i$'s with balls, incurring some fixed error In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set $S$ in a Euclidean space $\mathbb{R}^n$, or more generally in a metric space $(X, d)$. It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand. To calculate this dimension for a fractal $S$, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid... @BalarkaSen what is this ok but this does confirm that what I'm trying to do is wrong haha In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas... Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$.
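The box-counting recipe quoted above can be tried on a concrete set. A minimal sketch, using the standard middle-thirds Cantor set (whose box-counting dimension is $\log 2/\log 3 \approx 0.631$); the set is represented exactly with integer endpoints to avoid floating-point grid artifacts:

```python
import math

def cantor_left_endpoints(depth):
    """Integer left endpoints (in units of 3**-depth) of the 2**depth
    intervals in the depth-th middle-thirds Cantor approximation."""
    ends = [0]
    for d in range(depth):
        shift = 2 * 3 ** (depth - d - 1)
        ends = [a for e in ends for a in (e, e + shift)]
    return ends

depth = 10
pts = cantor_left_endpoints(depth)

def box_count(k):
    """N(eps) for eps = 3**-k: number of distinct grid boxes of side eps hit."""
    return len({p // 3 ** (depth - k) for p in pts})

# Fit the slope of log N(eps) against log(1/eps): the box-counting dimension.
ks = range(1, depth + 1)
xs = [k * math.log(3) for k in ks]            # log(1/eps)
ys = [math.log(box_count(k)) for k in ks]
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(slope)  # log(2)/log(3) ~ 0.6309
```

Since the grid scales divide the construction scales, the counts here are exactly $N(3^{-k}) = 2^k$, and the fitted slope recovers $\log 2/\log 3$.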
If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function? I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
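For what it's worth, the quantity is uniformly bounded: $abn - a[bn] = a(bn - [bn])$ has absolute value less than $|a|$, and taking integer parts changes a difference by less than $1$, so $|[abn] - [a[bn]]| \le |a| + 1$. A quick empirical check (the sample values of $a$ and $b$ are arbitrary):

```python
import math

def floor_diff(a, b, n):
    """|[abn] - [a[bn]]| with [.] the greatest integer (floor) function."""
    return abs(math.floor(a * b * n) - math.floor(a * math.floor(b * n)))

a, b = math.pi, math.sqrt(2)   # arbitrary sample reals
worst = max(floor_diff(a, b, n) for n in range(-10_000, 10_000))
print(worst)  # stays within the |a| + 1 bound
```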
Now one of the tricky things about the solution to the wave equation expressed here \[y(x,t) - y_0 = A \sin{\left( 2 \pi \dfrac{t}{T} \pm 2 \pi \dfrac{x}{\lambda} + \phi \right)}\] is that it is a function of both space (the distance along the \(x\)-axis) and a function of time (the value of the time variable, \(t\)). One way to make visualising the equation easier is to think of either \(x\) or \(t\) as being fixed at some value, so that \(y\) depends on only one variable. This is exactly what we must do in order to graph the function \(y(x, t)\) on a simple 2D graph. We will describe these two graphs for the same example wave below. Holding \(t\) constant: Displacement \(y\) vs. position \(x\). This graph shows the displacement of the entire wave at a particular time \(t\), hence the name "Snapshot". The graph serves as an apt visual representation of a transverse wave. Although it does not visually represent the form of a longitudinal wave, the graph is correct for both kinds of wave. Holding \(x\) constant: Displacement \(y\) vs. time \(t\). This graph plots the motion of a particular piece (fixed \(x\)) of the wave against time. This graph reminds us that this particular piece of the medium is undergoing simple harmonic motion. To convey all of the information contained in \(y(x,t)\) requires both graphs. Alternatively, almost all of the same information can be conveyed using two displacement vs. position graphs for two different known times, or two displacement vs. time graphs for two different known positions. Exercise Using the two graphs above, can you find the equation for the wave? You should be able to get numerical values for all the variables except one. Which one can’t you determine? Hint: Which way is the wave travelling (left or right)? You will need this to determine the + or - sign. Which parameters do you get from which graph? Which parameters require both graphs?
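The two fixed-variable views can be generated directly from the travelling-wave expression. A minimal sketch (all numerical parameters here are made-up examples, not values read off the graphs above):

```python
import math

def y(x, t, A=0.02, T=0.5, lam=2.0, phi=0.0, sign=-1):
    """Travelling wave y(x,t) = A sin(2*pi*t/T + sign*2*pi*x/lam + phi).
    sign=-1 corresponds to a wave moving in the +x direction."""
    return A * math.sin(2 * math.pi * t / T + sign * 2 * math.pi * x / lam + phi)

# "Snapshot" graph: hold t fixed and vary x.
snapshot = [y(x, t=0.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]

# "History" graph: hold x fixed and vary t.
history = [y(0.0, t) for t in (0.0, 0.125, 0.25, 0.375, 0.5)]

print(snapshot)
print(history)
```

Note that the snapshot repeats after one wavelength and the history after one period, which is why a single graph can never pin down both \(T\) and \(\lambda\).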
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? I am a bit confused about angular momentum in classical physics. For an orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t.
the old coordinate), angular momentum should still be conserved, right? I calculated a quite absurd result: it is no longer conserved (there is an additional term that varies with time) in the new coordinate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the first term varies with time (here $\vec{R}$ is the shift of coordinate; $\vec{R}$ is constant, and $\vec{p}$ is sort of rotating). Would anyone be kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title), is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet. Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10 @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan Book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but it's written by a guy so can't be a romantic novel... besides what decent stories don't involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with Kate Winslet, can't beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me to another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
I was doing some computations for research purposes, which led me to this integral: $$I(n) = \int_0^{\infty} (t^2+t^4)^n e^{-t^2-t^4}\,dt.$$ This is very suggestively written so as to employ a parametric differentiation technique as so: $$I(n) = \left(-\frac{\partial}{\partial \alpha}\right)^n\int_0^{\infty}e^{-\alpha(t^2+t^4)}\,dt\,\Bigg|_{\alpha=1}.$$ This integral has a nice, closed form expression: $$\int_0^{\infty} e^{-\alpha(t^2+t^4)}\,dt = \frac{1}{4} e^{\frac{\alpha}{8}}K_{\frac{1}{4}}\left(\frac{\alpha}{8}\right),$$ where $K_{\nu}$ is the modified Bessel function of the second kind. From here, I would have to employ $n$ differentiations, which would be pretty messy to work out due to the product rule. $n$ applications of the product rule do have a nice combinatorial expression, but it is far from explicit. Moreover, Bessel functions can get pretty complicated after differentiating, so this seems like a bad approach. Instead I decided to run some examples on Mathematica and computed the first 22 of these and noticed a very surprising pattern. In what follows $I_{\nu}$ is the modified Bessel function of the first kind.
$$I(0) = \frac{1}{4} e^{\frac{1}{8}}K_{\frac{1}{4}}\left(\frac{1}{8}\right)$$ $$I(1) = \frac{1}{32} e^{\frac{1}{8}}\left(K_{\frac{1}{4}}\left(\frac{1}{8}\right) + K_{\frac{3}{4}}\left(\frac{1}{8}\right)\right)$$ $$I(2) = \frac{3}{128\sqrt{2}} e^{\frac{1}{8}}\pi \left(3 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + I_{\frac{1}{4}}\left(\frac{1}{8}\right) - I_{\frac{3}{4}}\left(\frac{1}{8}\right) + I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$ $$I(3) = \frac{1}{256\sqrt{2}} e^{\frac{1}{8}}\pi \left(39 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 17 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 14 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 14 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$ $$I(4) = \frac{1}{2048\sqrt{2}} e^{\frac{1}{8}} \pi \left(1029 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 367 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 349 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 349 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$ $$I(5) = \frac{9}{8192\sqrt{2}} e^{\frac{1}{8}} \pi \left(1953 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 619 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 643 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 643 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$ $$I(6) = \frac{1}{16384\sqrt{2}} e^{\frac{1}{8}} \pi \left(185157 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 53131 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 59572 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 59572 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$ Repeat ad nauseam. Each of the terms in the denominator seems to be a power of $2$, the third and fourth terms seem to have the same coefficient (modulo a sign) and the signs are $+$, $+$, $-$, $+$. The "nice" output seems to suggest to me that there is a closed-form expression for $I(n)$ in general but I haven't the slightest clue as to how to come up with it. Can anyone shed some light on the matter? A PDF with more expressions can be found here. (Mathematica output.)
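The $n=0$ case can at least be sanity-checked numerically. A small sketch (assumes `scipy`; it only verifies the quoted Bessel-$K$ expression for $\int_0^\infty e^{-(t^2+t^4)}\,dt$, not the general pattern):

```python
import math
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind

def I(n):
    """I(n) = integral of (t^2+t^4)^n * exp(-t^2-t^4) over [0, inf)."""
    return quad(lambda t: (t**2 + t**4)**n * math.exp(-t**2 - t**4),
                0, math.inf)[0]

# Closed form quoted above for n = 0 (alpha = 1 in the parametric integral).
closed_form_I0 = 0.25 * math.exp(1 / 8) * kv(1 / 4, 1 / 8)
print(I(0), closed_form_I0)  # the two values agree
```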
Definition:Conditional Preference Definition Let $G$ be a lottery. Let $P$ be a player of $G$. Let $\Xi$ be the event space of $G$. Let $f$ and $g$ be two lotteries in $G$. Let $S \subseteq \Xi$ be an event. A conditional preference is a preference relation $\succsim_S$ such that: $f \succsim_S g$ if and only if $f$ would be at least as desirable to $P$ as $g$, if $P$ were aware that the true state of the world was in $S$. The notation $a \sim_S b$ is defined as: $a \sim_S b$ if and only if $a \succsim_S b$ and $b \succsim_S a$ The notation $a \succ_S b$ is defined as: $a \succ_S b$ if and only if $a \succsim_S b$ and $a \not \sim_S b$ When no conditioning event $S$ is mentioned, the notation $a \succsim_\Omega b$, $a \succ_\Omega b$ and $a \sim_\Omega b$ can be used, which mean the same as $a \succsim b$, $a \succ b$ and $a \sim b$.
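The derived relations follow mechanically from the weak relation. A toy encoding (the lotteries `"f"`, `"g"`, the events, and the preference data are all invented for illustration):

```python
# weak[S] holds the ordered pairs (a, b) with a >=_S b for each event S.
weak = {
    "S": {("f", "g"), ("f", "f"), ("g", "g")},                   # f >_S g
    "Omega": {("f", "g"), ("g", "f"), ("f", "f"), ("g", "g")},   # f ~ g
}

def succsim(a, b, S="Omega"):
    """a >=_S b: a is at least as desirable as b given the event S."""
    return (a, b) in weak[S]

def sim(a, b, S="Omega"):
    """a ~_S b  iff  a >=_S b and b >=_S a."""
    return succsim(a, b, S) and succsim(b, a, S)

def strict(a, b, S="Omega"):
    """a >_S b  iff  a >=_S b and not a ~_S b."""
    return succsim(a, b, S) and not sim(a, b, S)
```

With this data, `strict("f", "g", "S")` holds, while unconditionally (`S = "Omega"`) the two lotteries are indifferent.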
The number of ways to select four marbles, one of which is yellow, would in this case be $${}_1C_1\cdot{}_4C_3=1\cdot 4=4,$$ so the probability of selecting the yellow marble is $$\frac{4}{{}_5C_4}=\frac45.$$ Alternately, we can proceed stepwise as follows: There's a $\frac45$ chance that the first marble isn't yellow. If the first marble isn't yellow, then there's a $\frac34$ chance that the second marble isn't yellow. If the first two marbles aren't yellow, then there's a $\frac23$ chance that the third marble isn't yellow. If the first three marbles aren't yellow, then there's a $\frac12$ chance that the fourth marble isn't yellow. Therefore, there's a $$\frac45\cdot\frac34\cdot\frac23\cdot\frac12=\frac15$$ chance that none of the four marbles drawn is yellow, so there's a $$1-\frac15=\frac45$$ chance that one of the four marbles is yellow. As a third approach (which I'll discuss in more detail), since there's only one yellow marble, then to get the probability that the yellow marble was chosen, we need only add the probability of the following distinct events: (i) the yellow marble was chosen first, (ii) the yellow marble was chosen second, (iii) the yellow marble was chosen third, (iv) the yellow marble was chosen fourth. Hopefully, you see why these events have no overlap (mutually exclusive), and why together they comprise all the possible ways that the yellow marble could be chosen in this circumstance. 
We already know that $$P(\text{yellow first})=\frac15.\tag{i}$$ If yellow is chosen second, then some other marble was chosen first--there are ${}_4C_1=4$ ways this can happen out of ${}_5C_1=5$ possibilities--leaving $1$ yellow marble out of a total of $4$ remaining, so $$P(\text{yellow second})=\frac45\cdot\frac14=\frac15.\tag{ii}$$ If yellow is chosen third, then two non-yellow marbles were chosen first--there are ${}_4C_2=6$ ways this can happen out of ${}_5C_2=10$ possibilities--leaving $1$ yellow marble out of a total of $3$ remaining, so $$P(\text{yellow third})=\frac{6}{10}\cdot\frac13=\frac15.\tag{iii}$$ If the yellow marble is chosen fourth, then three non-yellow marbles were chosen first--there are ${}_4C_3=4$ ways this can happen out of ${}_5C_3=10$ possibilities--leaving $1$ yellow marble out of a total of $2$ remaining, so $$P(\text{yellow fourth})=\frac{4}{10}\cdot\frac12=\frac15.\tag{iv}$$ Thus, $$\begin{align}P(\text{yellow chosen}) &= P(\text{yellow first})+P(\text{yellow second})+P(\text{yellow third})+P(\text{yellow fourth})\\ &= \frac15+\frac15+\frac15+\frac15\\ &= \frac45.\end{align}$$
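All three approaches agree with a brute-force enumeration over the $\binom{5}{4}$ equally likely draws (a short check, assuming one yellow marble among five):

```python
from itertools import combinations
from fractions import Fraction

marbles = ["Y", "B1", "B2", "B3", "B4"]   # one yellow, four non-yellow
draws = list(combinations(marbles, 4))    # all 5C4 = 5 unordered draws

p_yellow = Fraction(sum("Y" in d for d in draws), len(draws))
print(p_yellow)  # 4/5
```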
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every Hermitian matrix satisfies this property: more specifically, all and only Hermitian matrices have this property" hah? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$.
Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $\det(T - \lambda I)$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form: $$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices.
If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$ is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the $t$ parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that give unitary evolution for a specific $t$ Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal. Then $$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$ Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire for about 2 days, meaning he may not be able to award the bounty himself? That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
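A quick numerical illustration of the easy direction of the discussion above (assumes `numpy` and `scipy`; the matrices are random test data): Hermitian $A$ gives unitary $e^{iA}$, while a non-normal counterexample does not.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2          # Hermitian: A equals its conjugate transpose
U = expm(1j * A)

# e^{iA} is unitary when A is Hermitian: U U^dagger = I up to rounding.
unitary_err = np.linalg.norm(U @ U.conj().T - np.eye(4))
print(unitary_err)  # tiny (machine precision)

# A non-Hermitian, non-normal A generally breaks this.
N = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent, so expm(1j*N) = I + 1j*N
V = expm(1j * N)
print(np.linalg.norm(V @ V.conj().T - np.eye(2)))  # clearly nonzero
```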
Write out the simple equations $$\begin{align}Y_j &= a_0 Z_j + a_1 Z_{j-1} + a_2 Z_{j-2}\\Y_{j-1} &= a_0 Z_{j-1} + a_1 Z_{j-2} + a_2 Z_{j-3}\end{align}$$ There are some very simple cases that make $Y_j \perp Y_{j-1}$ due to the independence assumption of the random variables $\{Z_i\}_{i\in\mathbb{Z}}$. An example is $a_0 \in \mathbb{R}\setminus \{0\},\, a_1 = 0,\, a_2 = 0$. Not sure if you were looking for a complete solution but this should help get you started. Also, an easy check for RVs which are not independent is using the contrapositive form of the common theorem $$X\perp Y \implies E[XY] = E[X]E[Y]$$ Note that the converse of this statement is not true. Proof Assertion $a_1a_0 + a_2a_1 = 0 \iff Y_j \perp Y_{j-1}$ Define $\mu = E[Z]$ ($\implies$) Suppose $a_1a_0 + a_2a_1 = 0$. There are two cases where this is possible. Case 1, suppose $a_1 = 0$. The equations become $$\begin{align}Y_j &= a_0 Z_j + a_2 Z_{j-2}\\Y_{j-1} &= a_0 Z_{j-1} + a_2 Z_{j-3}\end{align}$$ The $\sigma$-algebras they generate satisfy $\sigma(Y_j) \subseteq \sigma(Z_j, Z_{j-2})$ and $\sigma(Y_{j-1}) \subseteq \sigma(Z_{j-1}, Z_{j-3})$, and the two generating pairs are independent of each other. Thus $Y_j \perp Y_{j-1}$. This could be more gruesomely detailed but I take some for granted. See this for more details including definitions etc. Case 2, suppose $a_2 = 0$ and $a_0 = 0$. The equations become $$\begin{align}Y_j &= a_1 Z_{j-1} \\Y_{j-1} &= a_1 Z_{j-2}\end{align}$$ The same $\sigma$-algebra argument applies more easily but a more elegant solution presents itself in the form of the CDF.
$$\begin{align}F_{Y_j, Y_{j-1}}(y_j, y_{j-1}) &= P(Y_j \leq y_j \text{ and } Y_{j-1} \leq y_{j-1}) \\& = F_{Z_{j-1}}(y_j/a_1)F_{Z_{j-2}}(y_{j-1}/a_1)\\& = F_{Y_j}(y_j)F_{Y_{j-1}}(y_{j-1})\end{align}$$ ($\impliedby$) Suppose $Y_j \perp Y_{j-1}$. By the theorem, we know that $E[Y_j Y_{j-1}] = E[Y_j]E[Y_{j-1}]$. Calculating these values separately, $$\begin{align}E[Y_jY_{j-1}] & = (a_0^2 + a_0a_1 + a_0 a_2 + a_1^2 + a_1 a_2 + a_2 a_0 + a_2^2 )\mu^2 \\& + (a_1a_0 + a_2a_1)E[Z^2]\end{align}$$ $$\begin{align}E[Y_j]E[Y_{j-1}] & = (a_0^2 + a_0a_1 + a_0 a_2 + a_1^2 + a_1 a_2 + a_2 a_0 + a_2^2)\mu^2\\&+ (a_1a_0 + a_2a_1)\mu^2\end{align}$$ In the non-degenerate case, when the distribution of $Z$ is not a constant, the variance is strictly positive, so that $E[Z^2] - \mu^2 > 0$ and so $E[Z^2] > \mu^2$ and, more importantly, $E[Z^2] \neq \mu^2$. Thus for the equality $E[Y_j Y_{j-1}] = E[Y_j]E[Y_{j-1}]$ to hold, it must be the case that $a_1a_0 + a_2a_1 = 0$.
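The covariance identity behind the ($\impliedby$) direction, $\operatorname{Cov}(Y_j, Y_{j-1}) = (a_1a_0 + a_2a_1)\operatorname{Var}(Z)$, is easy to spot-check by simulation (assumes `numpy`; the coefficients are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, a2 = 1.0, 2.0, 3.0           # arbitrary test coefficients
Z = rng.standard_normal(1_000_000)   # iid, mean 0, variance 1

# Y_j = a0 Z_j + a1 Z_{j-1} + a2 Z_{j-2}
Y = a0 * Z[2:] + a1 * Z[1:-1] + a2 * Z[:-2]

empirical = np.cov(Y[1:], Y[:-1])[0, 1]   # lag-1 covariance of (Y_j, Y_{j-1})
theory = a1 * (a0 + a2)                   # (a1*a0 + a2*a1) * Var(Z)
print(empirical, theory)
```

With these coefficients the covariance is nonzero, so $Y_j$ and $Y_{j-1}$ cannot be independent, consistent with the assertion.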
The Euler-Bernoulli hypothesis disregards the effects of shear deformation and stress concentration in the elementary theory of beam bending; hence it is suitable for thin beams but not for deep beams, since it is based on the assumption that the transverse normal to the neutral axis remains normal during and after bending, implying that the transverse shear strain is zero. Since the theory neglects transverse shear deformation, it underestimates deflections in thick beams, where shear deformation effects are significant. Timoshenko [1] showed the effect of shear deformation on the transverse vibration of prismatic bars. This theory is now widely referred to as Timoshenko beam theory or first order shear deformation theory (FSDT) in the literature. In this theory, however, the transverse shear strain distribution is assumed to be constant through the thickness of the beam, and it thus requires a shear correction factor to appropriately represent the strain energy of deformation. Cowper [2] has given a refined expression for the shear correction factor for different cross-sections of beams. The accuracy of Timoshenko beam theory for transverse vibrations of simply supported beams in respect of the fundamental frequency was verified by Cowper [3] with a plane stress exact elasticity solution. To remove the discrepancies in classical and first order shear deformation theories, higher order or refined shear deformation theories were developed; these are available in the open literature for static and vibration analysis of beams. Krishna Murthy [4], Baluch et al. [5], and Bhimaraddi and Chandrashekhara [6] presented parabolic shear deformation theories assuming a higher-order variation of axial displacement in terms of the thickness coordinate. These theories satisfy shear-stress-free boundary conditions on the top and bottom surfaces of the beam and thus obviate the need for a shear correction factor.
Kant and Gupta [7], and Heyliger and Reddy [8] presented finite element models based on higher order shear deformation for uniform rectangular beams. However, these displacement-based finite element models are not free from the phenomenon of shear locking [9, 10]. Dahake and Ghugal [11] studied the flexural analysis of a thick simply supported beam using a trigonometric shear deformation theory. Ghugal and Dahake [12, 13] gave the flexural solution for the beam subjected to parabolic loading. Sawant and Dahake [14] developed a new hyperbolic shear deformation theory. Chavan and Dahake [15, 16] analyzed a clamped-clamped beam using a hyperbolic shear deformation theory. The displacements and stresses for a thick beam were given by Nimbalkar and Dahake [17]. Jadhav and Dahake [18] presented a bending analysis of a deep cantilever beam using steel as the material. Manal et al. [19] investigated deep fixed beams using a new displacement field. Patil and Dahake [20] carried out finite element analysis using 2D plane stress elements for a thick beam. Dahake et al. [21] studied the flexural analysis of a thick fixed beam subjected to cosine load. Tupe et al. [22] compared various displacement fields for the static analysis of thick isotropic beams. In the literature, most researchers have used steel as the beam material. As many parts of spacecraft and airplane structures are made of aluminum due to its low weight density, in this research an attempt has been made to analyze an aluminum deep cantilever beam subjected to cosine load. The beam under consideration occupies, in the Cartesian coordinate system, the region $0 \le x \le L$, $0 \le y \le b$, $-h/2 \le z \le h/2$, where $x$, $y$, $z$ are Cartesian coordinates, $L$ and $b$ are the length and width of the beam in the $x$ and $y$ directions respectively, and $h$ is the thickness of the beam in the $z$-direction. The beam is made up of homogeneous, linearly elastic isotropic material.
The displacement field of the present beam theory is of the form: $\begin{array}{l} {u(x,z)=-z\frac{dw}{dx} +\frac{h}{\pi } \sin \frac{\pi z}{h}\, \phi (x)} \\ {w(x,z)=w(x)} \end{array}$ (1) where $u$ is the axial displacement in the x direction and $w$ is the transverse displacement in the z direction of the beam. The sinusoidal function is assigned according to the shear stress distribution through the thickness of the beam. The function $\phi(x)$ represents the rotation of the beam at the neutral axis and is an unknown function to be determined. Normal strain $\varepsilon _{x} =\frac{\partial u}{\partial x} =-z\frac{d^{2} w}{dx^{2} } +\frac{h}{\pi } \sin \frac{\pi z}{h} \frac{d\phi }{dx}$ (2) Shear strain $\gamma_{zx} = \frac{\partial u}{\partial z} + \frac{\partial w}{\partial x} = \cos \frac{\pi z}{h}\, \phi(x)$ (3) Stress-strain relationships $\sigma_x = E\,\varepsilon_x, \qquad \tau_{zx} = G\,\gamma_{zx}$ (4) Using the expressions for strains and stresses, (2) through (4), and the principle of virtual work, variationally consistent governing differential equations and boundary conditions for the beam under consideration can be obtained. The principle of virtual work applied to the beam leads to Eqn. (5), where the symbol $\delta$ denotes the variational operator. Employing Green's theorem in Eqn. (5) successively, we obtain the coupled Euler-Lagrange equations, which are the governing differential equations and associated boundary conditions of the beam. The governing differential equations obtained are Eqns. (6) and (7). The associated consistent natural boundary conditions, at the ends x = 0 and x = L, are given by Eqn. (8). Thus the boundary value problem of beam bending is given by the above variationally consistent governing differential equations and boundary conditions. The general solution for the transverse displacement w(x) and the warping function φ(x) is obtained from Eqns. (6) and (7) by the method of solution of linear differential equations with constant coefficients. Integrating and rearranging the first governing Eqn.
(6), we obtain Eqn. (9), where Q(x) is the generalized shear force for the beam. The second governing Eqn. (7) is rearranged in the form of Eqn. (10). A single equation in terms of φ is then obtained from Eqns. (9) and (10) as Eqn. (11); the constants appearing in Eqns. (10) and (11) follow from this elimination. The general solution of Eqn. (11) is given by Eqn. (12). The equation of transverse displacement w(x) is obtained by substituting the expression for φ(x) from Eqn. (12) and then integrating thrice with respect to x; the general solution for w(x) is Eqn. (13), where the arbitrary constants of integration can be obtained by imposing natural (forced) and/or geometric (kinematic) boundary conditions of the beam. In order to prove the efficacy of the present theory, a numerical example is considered. For the static flexural analysis, a uniform beam of rectangular cross section, having span length L, width b and thickness h, made of a homogeneous, elastic, isotropic material is considered. The following material properties for the beam are used. Table 1. Properties of Aluminum 6061-T6, 6061-T651 [13]: Density 2700 kg/m³; Ultimate Tensile Strength 310 MPa; Modulus of Elasticity 68.9 GPa; Poisson's Ratio 0.33. The beam has its origin at the left-hand fixed support at x = 0 and is free at x = L. The beam is subjected to a cosine load on the surface z = +h/2, acting in the downward z direction with maximum intensity of load. Fig. 1.
Cantilever beam with cosine load. Boundary conditions associated with this problem are as follows. At the free end (x = L) the bending moment and shear force vanish; at the fixed end (x = 0), w = dw/dx = φ = 0. General expressions obtained for w(x) and φ(x) are given by Eqn. (14). The axial displacement and stresses obtained based on the above solutions follow accordingly. In this paper, the results for inplane displacement, transverse displacement, and inplane and transverse stresses are presented in non-dimensional form. The transverse shear stresses are obtained directly from the constitutive relation (CR) and, alternatively, by integration of the equilibrium equation (EE) of two-dimensional elasticity. The transverse shear stress satisfies the stress-free boundary conditions on the top and bottom surfaces of the beam when obtained by either approach.

Table 2: Non-dimensional axial displacement (x = L, z = h/2), transverse deflection (x = L, z = 0), axial stress (x = 0, z = h/2), and maximum transverse shear stresses by CR (x = 0.01L, z = 0) and by EE (x, z = 0) of the cantilever beam subjected to cosine load, for aspect ratios 4 and 10.

| Aspect ratio | Source | Model | Axial displacement | Transverse deflection | Axial stress | Shear stress (CR) | Shear stress (EE) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 4 | Present | TSDT | -67.5989 | 6.1819 | 36.7529 | 1.8181 | -2.7877 |
| 4 | Sawant and Dahake [14] | HPSDT | -70.024 | 6.1928 | 39.8104 | 2.1609 | -4.5581 |
| 4 | Krishna Murty [4] | HSDT | -71.2291 | 6.1860 | 37.2887 | 1.9004 | -2.8916 |
| 4 | Timoshenko [1] | FSDT | 23.1543 | 6.5444 | 22.2081 | 0.3076 | 3.7597 |
| 4 | Bernoulli-Euler | ETB | 23.1543 | 5.7541 | 22.2081 | — | 3.7597 |
| 10 | Present | TSDT | -1055.4548 | 5.8244 | 176.7877 | 7.7501 | 3.2052 |
| 10 | Sawant and Dahake [14] | HPSDT | -1061.5175 | 5.8256 | 178.4137 | 8.3023 | 3.8042 |
| 10 | Krishna Murty [4] | HSDT | -1064.5122 | 5.8255 | 172.1017 | 7.8208 | 3.7523 |
| 10 | Timoshenko [1] | FSDT | 361.7856 | 5.8805 | 138.8010 | 4.8073 | 9.3993 |
| 10 | Bernoulli-Euler | ETB | 361.7856 | 5.7541 | 138.8010 | — | 9.3993 |

Fig. 2. Variation of axial displacement through the thickness of the cantilever beam at (x = L, z) for aspect ratio 4. Fig.
3. Variation of axial displacement through the thickness of the cantilever beam at (x = L, z) for aspect ratio 10. Fig. 4. Variation of maximum transverse displacement of the beam at (x = L, z = 0) with aspect ratio S. Fig. 5. Variation of axial stress through the thickness of the beam at (x = 0, z) for aspect ratio 4. Fig. 6. Variation of axial stress through the thickness of the beam at (x = 0, z) for aspect ratio 10. Fig. 7. Variation of transverse shear stress through the thickness of the beam at (x = 0.01L, z), obtained using CR, for aspect ratio 4. Fig. 8. Variation of transverse shear stress through the thickness of the beam at (x = 0.01L, z), obtained using CR, for aspect ratio 10. Fig. 9. Variation of transverse shear stress through the thickness of the beam at (x = 0.01L, z), obtained using EE, for aspect ratio 4. Fig. 10. Variation of transverse shear stress through the thickness of the beam at (x = 0, z), obtained using EE, for aspect ratio 10. a) Axial displacement: The present theory gives realistic results for this displacement component, commensurate with the other shear deformation theories. For the cantilever beam with various loads, the results of the present theory nearly match those of the other higher order theories. b) Transverse deflection: For the cantilever beam with cosine load, the transverse deflection given by the present theory is in excellent agreement with that of other higher order shear deformation theories. c) Axial stress: The axial stress and its distribution across the thickness given by the present theory are in excellent agreement with those of higher order shear deformation theories.
d) Transverse shear stresses: For the cantilever beam with cosine load, the transverse shear stress and its distribution through the thickness obtained from the constitutive relation are in close agreement with those of other higher order refined theories; however, the constitutive relation cannot predict the effect of stress concentration at the built-in end of the beam. The effect of stress concentration on the variation of transverse shear stress is predicted by the present theory through the use of the equilibrium equation of two-dimensional elasticity, and realistic variations of these stresses at the built-in end are presented. Hence the use of the equilibrium equation is indispensable for predicting the effect of stress concentration in accordance with the higher order / equivalent refined shear deformation theories. In general, the present theory gives accurate results, as seen from the numerical example studied, and it is capable of predicting the local effects in the vicinity of the built-in end of the cantilever beam. This validates the efficacy and credibility of the trigonometric shear deformation theory.
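The kinematic claim behind Eqns. (1)–(3) — that the sinusoidal displacement field yields a cosine shear-strain distribution that vanishes on the top and bottom faces — can be checked numerically. Below is a minimal sketch in plain Python; the sample w(x) and φ(x) are arbitrary illustrative choices, not the solution of the beam problem:

```python
import math

# Displacement field of the trigonometric theory, Eqn. (1):
#   u(x, z) = -z * w'(x) + (h/pi) * sin(pi*z/h) * phi(x)
# The shear strain gamma_zx = du/dz + dw/dx should reduce to
#   cos(pi*z/h) * phi(x), which vanishes at z = +/- h/2 (stress-free faces).

h = 0.1                                  # beam thickness (arbitrary value)
dw = math.cos                            # derivative of the sample w(x) = sin(x)
phi = lambda x: 0.7 * math.cos(2 * x)    # sample rotation function (illustrative)

def u(x, z):
    return -z * dw(x) + (h / math.pi) * math.sin(math.pi * z / h) * phi(x)

def gamma_zx(x, z, eps=1e-6):
    # central-difference approximation of du/dz, plus dw/dx
    du_dz = (u(x, z + eps) - u(x, z - eps)) / (2 * eps)
    return du_dz + dw(x)

x, z = 0.3, 0.02
assert abs(gamma_zx(x, z) - math.cos(math.pi * z / h) * phi(x)) < 1e-8
assert abs(gamma_zx(x, h / 2)) < 1e-8    # zero shear strain on the top surface
print("shear strain matches cos(pi*z/h)*phi(x)")
```

The same check also explains why no shear correction factor is needed: the cosine distribution already satisfies the traction-free surface conditions exactly.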
NumericalAccuracyParameters¶ class NumericalAccuracyParameters(density_mesh_cutoff=None, k_point_sampling=None, radial_step_size=None, density_cutoff=None, interaction_max_range=None, number_of_reciprocal_points=None, reciprocal_energy_cutoff=None, bands_per_electron=None, occupation_method=None, exx_grid_cutoff=None, compensation_charge_mesh_cutoff=None, grid_mesh_cutoff=None, electron_temperature=None)¶ Class for representing the parameters that set the numerical accuracy of a calculation. Some parameter defaults are specific to each calculator; see HuckelCalculator, SlaterKosterCalculator, SemiEmpiricalCalculator, LCAOCalculator, PlaneWaveCalculator, DeviceHuckelCalculator, DeviceSlaterKosterCalculator, DeviceSemiEmpiricalCalculator, or DeviceLCAOCalculator. Parameters: density_mesh_cutoff (PhysicalQuantity of type energy | GridSampling | OptimizedFFTGridSampling) – The mesh cutoff used to determine the density grid sampling. The mesh cutoff must be a positive energy or a GridSampling object. Default: specific for each calculator. k_point_sampling (list of int | MonkhorstPackGrid) – The k-point sampling in reciprocal space, given by three Monkhorst-Pack indices or a MonkhorstPackGrid object. Default: specific for each calculator. radial_step_size (PhysicalQuantity of type length) – The maximum sampling step size in all the radial grids. Must be positive. Default: specific for each calculator. density_cutoff (float) – The density cutoff determines the limit below which a density is considered to be zero. Smaller values therefore lead to longer ranges and less sparsity of the models. Must be positive. Default: 1.0e-6 interaction_max_range (PhysicalQuantity of type length) – The maximum allowed interaction distance between two orbitals. Default: specific for each calculator. number_of_reciprocal_points (int) – The number of reciprocal points used for evaluating two-center integrals. Must be larger than 1.
Default: 1024 reciprocal_energy_cutoff (PhysicalQuantity of type energy) – The energy cutoff in reciprocal space used for evaluating the two-center integrals. Must be positive. Default: 1250 * Hartree bands_per_electron (float) – The number of bands per electron. The number must be 1.0 or larger. Only used by the PlaneWaveCalculator. Default: 1.2 occupation_method (FermiDirac | GaussianSmearing | MethfesselPaxton | ColdSmearing) – The method used to calculate state occupations. Default: specific for each calculator. exx_grid_cutoff (PhysicalQuantity | GridSampling | OptimizedFFTGridSampling) – The energy cutoff/grid sampling that determines the grid size used for representing the local exact exchange potential in plane-wave hybrid functional calculations. For lossless calculations the grid size should be twice that needed to represent the wave functions; however, one may use a smaller grid for faster calculations. Default: the same value as used for density_mesh_cutoff. compensation_charge_mesh_cutoff (PhysicalQuantity | GridSampling | OptimizedFFTGridSampling) – The energy cutoff/grid sampling that determines the grid size on which the compensation charge is stored in a PAW calculation. Default: the same value as used for density_mesh_cutoff. electron_temperature (PhysicalQuantity of type temperature) – Deprecated since version 2016.1: use occupation_method=FermiDirac(electron_temperature) instead. The electron temperature used in determining the shape of the Fermi function. Must be positive. bandsPerElectron()¶ Returns: The bands per electron. Return type: float compensationChargeMeshCutoff()¶ Returns: The energy cutoff/grid sampling used for the compensation charge. Return type: PhysicalQuantity | GridSampling | OptimizedFFTGridSampling densityCutoff()¶ Returns: The density cutoff. Return type: float densityMeshCutoff()¶ Returns: The density mesh cutoff. Return type: PhysicalQuantity of type energy | GridSampling electronTemperature()¶ Returns: The electron temperature.
Return type: PhysicalQuantity of type energy exxGridCutoff()¶ Returns: The energy cutoff/grid sampling used for the exact exchange potential. Return type: PhysicalQuantity | GridSampling | OptimizedFFTGridSampling gridMeshCutoff()¶ Deprecated: use densityMeshCutoff() instead. Returns: The density mesh cutoff. Return type: PhysicalQuantity of type energy | GridSampling interactionMaxRange()¶ Returns: The interaction max range. Return type: PhysicalQuantity of type length numberOfReciprocalPoints()¶ Returns: The number of reciprocal points used in two-center integration. Return type: int occupationMethod()¶ Returns: The occupation method. Return type: FermiDirac | GaussianSmearing | MethfesselPaxton | ColdSmearing

Usage Examples¶

Define the k-point sampling and real space grid mesh cutoff:

numerical_accuracy_parameters = NumericalAccuracyParameters(
    density_mesh_cutoff=12.0*Hartree,
    k_point_sampling=MonkhorstPackGrid(2, 1, 1),
    electron_temperature=200*Kelvin,
)
calculator = HuckelCalculator(
    iteration_control_parameters=iteration_control_parameters,
)

Specify the electron temperature in units of eV instead of Kelvin:

numerical_accuracy_parameters = NumericalAccuracyParameters(
    electron_temperature=0.02 * electronVolt/boltzmann_constant
)

Notes¶

The distance between the points in the real space grid, \(\Delta x\), is related to the density_mesh_cutoff, \(E^\mathrm{grid}\), through \[\Delta x = \frac{\pi \hbar}{\sqrt{2 m E^\mathrm{grid}}}.\] In atomic units \(m = \hbar = 1\); thus, for energies in Hartree and distances in Bohr, \(\Delta x = \pi/\sqrt{2 E^\mathrm{grid}}\). When setting interaction_max_range, some matrix elements are set to zero. For very long ranged basis sets this can make the overlap matrix ill defined at certain k-points (i.e. it is not positive definite), in which case the matrix diagonalization routine will give a segmentation fault. The cure is to change the interaction_max_range, i.e.
either make it very large to include all long range elements, or make it small so no long range elements are included.
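The mesh-cutoff relation in the Notes is easy to evaluate directly. A minimal sketch in plain Python (independent of the package itself; the function name is my own), converting a density_mesh_cutoff in Hartree to a grid spacing in Bohr:

```python
import math

def grid_spacing_bohr(mesh_cutoff_hartree):
    """Grid spacing dx = pi / sqrt(2 * E_grid) in atomic units
    (m = hbar = 1; energies in Hartree, distances in Bohr)."""
    if mesh_cutoff_hartree <= 0:
        raise ValueError("mesh cutoff must be a positive energy")
    return math.pi / math.sqrt(2.0 * mesh_cutoff_hartree)

# The 12 Hartree cutoff from the usage example gives a spacing of
# roughly 0.64 Bohr; quadrupling the cutoff halves the spacing.
dx = grid_spacing_bohr(12.0)
print(f"{dx:.3f} Bohr")
assert abs(grid_spacing_bohr(48.0) - dx / 2) < 1e-12
```

This inverse-square-root scaling is why tightening the cutoff is an expensive way to refine the grid: a 4x larger cutoff only doubles the resolution in each direction.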
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($\nu_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_T$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Gaussian Approximation to Binomial Random Variables

I was reading a paper on collapsed variational inference for latent Dirichlet allocation, where the classic and smart Gaussian approximation to Binomial variables was used to reduce computational complexity. It has been a long time since I first learnt this technique in my very first statistics class, and I would like to revisit this idea.

```python
import numpy as np
import scipy as sp
import scipy.stats
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline
```

Single coin flip

I always feel like the example of coin flipping is used a lot. Yet it is simple and intuitive for understanding some (complex) concepts. Let's say we are flipping a coin with a probability of $p$ to get a head ($H$) and $1-p$ to get a tail ($T$). The outcome $X$ is thus a random variable that can only have two possible values: $X \in \{H, T\}$. For simplicity, we can use $0$ ($1$) to denote $T$ ($H$). In this case, we say that $X$ follows a Bernoulli distribution. We can write the probability mass function as: $$P(X=x) = p^x (1-p)^{1-x}, \quad x \in \{0, 1\}$$ Then, according to the definitions of expectation and variance, we have $$E[X] = p, \qquad V[X] = p(1-p)$$

Repetitive coin flips

How about we extend the above experiment a bit, simply by repeating it over and over again? If we define a random variable $Y$ to denote the number of $1$'s across $n$ coin flips, we say $Y$ follows a Binomial distribution such that: $$P(Y=k) = \binom{n}{k} p^k (1-p)^{n-k}$$ where $k$ is the number of $1$'s. The coefficient $\binom{n}{k}$ counts the possible permutations of $0$'s and $1$'s in the final outcome. Note that we can write $Y = \sum_i X_i$ since $X_i \in \{0,1\}$. It's not difficult to get $E[Y] = np$ and $V[Y] = np(1-p)$ by linearity of expectation and variance (for independent flips).

Visualization of $Y$

I like visualizations, because they help me better feel such abstract concepts. Let's use an example of $(n, p)$ to see what the Binomial distribution ($Y$) really looks like. Specifically, we will vary $n$ and see how the shape of the distribution differs.
```python
n_list = [10, 30, 50]
p = 0.3
k_list = [np.arange(0, n + 1) for n in n_list]  # support of Y

fig, ax = plt.subplots(figsize=(8, 5))
for i, n in enumerate(n_list):
    pmf = sp.stats.binom.pmf(k_list[i], n, p)
    _ = ax.bar(k_list[i], pmf, alpha=.6, label='n=%d' % n)
_ = ax.legend()
```

It seems like when $n$ gets larger, the distribution becomes more symmetric. This, at least to me, is some visual evidence that the Gaussian approximation may actually work when $n$ is large enough.

Central limit theorem (CLT)

CLT is amazing! It implies that no matter what type of distributions we are working with, we can apply things that work for Gaussian distributions to them. Specifically, I quote the first paragraph in Wiki below:

In probability theory, the central limit theorem (CLT) establishes that, in most situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.

In the context of Binomial and Bernoulli random variables, we know that $Y = \sum_i X_i$. According to the CLT, for large $n$, $$Y \approx \mathcal{N}(nE[X],\, nV[X])$$ Combined with the equations above, we have: $$Y \approx \mathcal{N}(np,\, np(1-p))$$

Validation

Up to this point, we see that $Y \sim \mathrm{Binomial}(n, p)$ can be approximated by $\mathcal{N}(np, np(1-p))$ given a relatively large $n$. Let's try different $n$'s to see how well such an approximation can perform!
```python
n_list = [1, 3, 7, 10]
p = 0.7
k_list = [np.arange(0, n + 1) for n in n_list]
x_list = [np.linspace(-2, 2 * n, 1000) for n in n_list]

fig, ax_list = plt.subplots(nrows=1, ncols=len(n_list), figsize=(4 * len(n_list), 3))
for i, n in enumerate(n_list):
    ax = ax_list[i]
    binom_p = sp.stats.binom.pmf(k_list[i], n, p)
    # note: scipy's norm takes the standard deviation as scale, not the variance
    gauss_p = sp.stats.norm.pdf(x_list[i], n * p, np.sqrt(n * p * (1 - p)))
    ax.bar(k_list[i], binom_p, color='seagreen', alpha=.6, label='Binomial')
    ax.plot(x_list[i], gauss_p, label='Gaussian', color='seagreen')
    ax.set_title('$n=%d$' % n)
    ax.legend()
fig.tight_layout()
```

Now we can actually see that it is pretty good!
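To go beyond eyeballing the plots, we can measure how fast the approximation improves with $n$. A small self-contained check (pure standard library, no plotting; the helper names are mine) comparing the Binomial pmf against the Gaussian density evaluated at the integers:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def max_abs_error(n, p):
    """Largest pointwise gap between the Binomial(n, p) pmf and the
    N(np, np(1-p)) density over the integer support."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    return max(abs(binom_pmf(k, n, p) - normal_pdf(k, mu, sigma))
               for k in range(n + 1))

p = 0.3
errors = {n: max_abs_error(n, p) for n in (10, 40, 160)}
print(errors)
# the pointwise (local-CLT) error shrinks as n grows
assert errors[160] < errors[40] < errors[10]
```

The monotone shrinkage is exactly what the CLT intuition from the plots suggests: the bars and the curve agree ever more closely as $n$ grows.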
Within a certain system of measures, conversion factors are typically exact. In imperial units, this means that a foot is always twelve inches, a yard is always three feet and a mile is always 1760 yards. With the exact conversion, we can use multiplication to see that: $$1~\mathrm{yd} = 36'' \pm 0''\\1~\mathrm{mi} = 5280' \pm 0'\\1~\mathrm{mi} = 63360'' \pm 0''$$ Likewise, the conversion from cubic imperial units — which are defined as $\text{imperial unit}\times\text{imperial unit}\times\text{imperial unit}$ — to other cubic imperial units is equally exact. $$1~\mathrm{yd^3} = 1~\mathrm{yd}\times 1~\mathrm{yd}\times 1~\mathrm{yd} = 3' \times 3' \times 3' = 27~\mathrm{ft^3}$$ This is akin to the exactness of the conversion of metric units, except that the metric factors are always powers of 10. But there are also semi-metric units, such as the German ‘metric pound’, by definition $500~\mathrm{g}$. Conversion from grams to German metric pounds is as exact as conversion from grams to kilograms. This becomes different if you leave your system of measures and compare different systems. Except in a few very special cases, where two systems chose physically identical starting and/or ending points but with different step sizes, [1] the conversion factors will be nonexact. Any nonexact conversion factor will introduce an error of its own into the equation, and how to deal with these is best shown in Martin’s answer. Note that even if the transformation is exact at a starting and ending point, measured values will still have an inherent uncertainty in them, as Martin pointed out. The uncertainty is transformed across the conversion and does not change magnitude. [1]: See for example temperature. A measured temperature of $15~\mathrm{^\circ C}$ can be transformed into an exact value in kelvin because the kelvin and Celsius scales are identical except for an additive offset. Thankfully, the error is also identical.
$$(15.00 \pm 0.005)~\mathrm{^\circ C} = (288.15 \pm 0.005)~\mathrm{K}$$ Similarly, the Réaumur temperature scale is defined such that the boiling point of water is $80~\mathrm{^\circ r}$ while the zero point is identical to that of the Celsius scale. Therefore, the step size is different and $1~\mathrm{^\circ C} = 0.8~\mathrm{^\circ r}$. Upon converting, we get a similarly exact value but a different error. $$(15.00 \pm 0.005)~\mathrm{^\circ C} = (12.00 \pm 0.004)~\mathrm{^\circ r}$$
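The two temperature conversions above differ only in whether the scale factor touches the uncertainty. A minimal sketch (the function name is my own, not from any measurement library) of a linear conversion $y = a\,x + b$, where the offset $b$ leaves the error untouched and the factor $a$ scales it:

```python
def convert(value, error, factor=1.0, offset=0.0):
    """Linear unit conversion y = factor * x + offset.
    The absolute uncertainty is scaled by |factor|; offsets drop out."""
    return factor * value + offset, abs(factor) * error

# Celsius -> Kelvin: pure offset, so the uncertainty is unchanged
k, dk = convert(15.00, 0.005, offset=273.15)
assert abs(k - 288.15) < 1e-9 and dk == 0.005

# Celsius -> Reaumur: pure scale factor of 0.8, so the uncertainty scales too
r, dr = convert(15.00, 0.005, factor=0.8)
assert abs(r - 12.00) < 1e-12 and abs(dr - 0.004) < 1e-12
print(k, dk, r, dr)
```

For an exact conversion factor, this is the whole story; a nonexact factor would add its own relative uncertainty on top, combined in quadrature with that of the measurement.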
I think that the problem stems from the action of the operator $\hat p$. Please correct me if I am mistaken. The action of the operator $\hat p$ in the quantum space is defined as $\langle x|\hat p|a\rangle = -i\hbar\,\partial_x \langle x|a\rangle$ if the state $|a\rangle$ does not depend on $x$. In fact, if the state $|a\rangle$ depended on $x$, for instance $|a\rangle = f(x)|b\rangle$ for some scalar function $f(x)$, then the equation $\langle x|\hat p|a\rangle = \langle x|\hat p f(x)|b\rangle = -i\hbar\,\partial_x \langle x|f(x)|b\rangle = -i\hbar\,\partial_x \big(f(x)\langle x|b\rangle\big)$ would be badly defined, as it could be evaluated in a different way: $\langle x|\hat p|a\rangle = \langle x|\hat p f(x)|b\rangle = f(x)\langle x|\hat p|b\rangle = f(x)(-i\hbar)\,\partial_x \langle x|b\rangle$. The second evaluation comes from the fact that, in standard quantum mechanics, it is postulated that any operator acts on ket vectors and not on scalars (with the exception of the time-reversal operator, which is of no use here). The commutator relation $\left[\hat x, \hat p\right]=i\hbar$ is obtained from the action of the operator $\hat p$ as defined above. Thus, it follows straightforwardly that such a commutation relation cannot in general be used in a scalar product $\langle x|\dots|\mathrm{ket}\rangle$ if the ket state on the right depends on $x$. Having said that, when you take the trace of the commutator $\left[\hat x, \hat p\right]$, you are computing $\mathrm{Tr}\Big[\left[\hat x, \hat p\right]\Big]=\int dx\, \langle x|(\hat x\hat p-\hat p\hat x)|x\rangle=\int dx\, \langle x|(x\hat p-\hat p x)|x\rangle$, where in the last step I have just extracted the eigenvalues from the eigenstates $|x\rangle$. In the above equation you have a scalar product where the ket on the right depends on $x$; thus you have to be careful in the evaluation and cannot use the $xp$-commutation relation straight away. With a little care, one can see from the above equation that the trace indeed gives zero, $\int dx\, \langle x|(x\hat p-\hat p x)|x\rangle=\int dx \,x\,\langle x|(\hat p-\hat p )|x\rangle=0$, as it should.
Whereas, if you had used the $xp$-commutation relation from the outset, you would have wrongly found $\mathrm{Tr}\Big[\left[\hat x, \hat p\right]\Big]=\mathrm{Tr}\Big[i\hbar\Big]=i\hbar$. Edited after Joe's comment: In the last equation I forgot the dimensionality of the space. It must be modified to $\mathrm{Tr}\Big[\left[\hat x, \hat p\right]\Big]=\mathrm{Tr}\Big[i\hbar\Big]=i\hbar\,D$, where $D$ is the dimension of the quantum space in which the trace is taken. Thanks Joe.
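The trace identity at the heart of this argument can be checked in finite dimensions, where everything is well defined: for any two matrices, $\mathrm{Tr}[A,B]=0$ by cyclicity of the trace, which is exactly why $[\hat x,\hat p]=i\hbar\,\mathbb{1}$ admits no finite-dimensional representation. A quick numerical sketch (arbitrary random matrices standing in for truncated $\hat x$ and $\hat p$):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 50

# Two arbitrary D x D complex matrices standing in for truncated x and p
A = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
B = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))

commutator = A @ B - B @ A
trace = np.trace(commutator)

# Tr(AB) = Tr(BA), so the trace of any commutator vanishes...
assert abs(trace) < 1e-9
# ...while Tr(i*hbar*I) = i*hbar*D != 0: the canonical commutation
# relation cannot hold exactly for finite-dimensional matrices.
print(abs(trace))
```

In infinite dimensions the trace of $[\hat x,\hat p]$ is simply not defined (the operators are unbounded and the would-be trace diverges), which is the careful resolution of the apparent paradox.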
International conference on Function Spaces and Approximation Theory dedicated to the 110th anniversary of S. M. Nikol'skii May 25, 2015 14:55–15:20, Function Spaces, Moscow, Steklov Mathematical Institute of RAS My Japanese book «Theory of Besov spaces, including a remark on the space $\mathcal S'$ over $\mathcal P$» Y. Sawano Tokyo Metropolitan University Abstract: Let ${\mathcal S}'$ denote the set of all Schwartz distributions and ${\mathcal P}$ the set of all polynomials. If we define ${\mathcal S}_\infty$ to be the set of all $f \in {\mathcal S}$ such that $\int_{{\mathbb R}^n}x^\alpha f(x)\, dx=0$ for all $\alpha$, we can consider the dual space ${\mathcal S}_\infty'$. We know that ${\mathcal S}_\infty'$ is isomorphic to ${\mathcal S}'/{\mathcal P}$ as linear spaces, but it seems to me that this is also true topologically. In my Japanese book I wrote a proof, but I had made a mistake; recently I corrected the proof. My result is as follows. Theorem. Equip ${\mathcal S}'$ and ${\mathcal S}'_\infty$ with the weak star topology. Then the restriction mapping from ${\mathcal S}'$ to ${\mathcal S}_\infty'$ is open. Materials: abstract.pdf (84.9 Kb) Language: English References: S. Nakamura, T. Noi, Y. Sawano, "Generalized Morrey spaces and trace operator" (to appear); Y. Sawano, Theory of Besov Spaces, Nihon-Hyoronsha, 2011, 440 pp. (in Japanese)
Permutation is an arrangement of objects in a definite order. When we look at the schedules of trains, buses and flights, we may wonder how they are scheduled for the public's convenience. Permutations are very helpful in preparing these departure and arrival schedules. We also come across licence plates of vehicles, which consist of a few letters and digits; such codes can be counted using permutations. Representation of Permutation: We can represent a permutation in many ways: \(\large \mathbf{P^{n}_{k}}\), \(\large \mathbf{_{n}P_{k}}\), \(\large \mathbf{^{n}P_{k}}\), \(\large \mathbf{P_{n,k}}\), \(\large \mathbf{P(n,k)}\) Definition of Permutation: Basically, a permutation is an arrangement of objects in a particular way. While dealing with permutations one should be concerned with the selection as well as the arrangement; in short, ordering is essential in permutations. Types of Permutation: Permutations can be classified into three categories: permutation of n different objects when repetition is not allowed; permutation when repetition is allowed; and permutation when the objects are not distinct (permutation of multisets). Let us understand these cases in detail. (1) Permutation of n different objects (repetition not allowed): If n is a positive integer and r is a whole number such that r ≤ n, then P(n, r) represents the number of all possible arrangements or permutations of n distinct objects taken r at a time. It can also be represented as \(^{n}P_{r}\). P(n, r) = n(n-1)(n-2)(n-3)…… up to r factors \(\Rightarrow\) P(n, r) = n(n-1)(n-2)(n-3)……(n – r + 1) \(\large \Rightarrow P(n,r) = \frac{n!}{(n-r)!}\) Example: How many 3 letter words, with or without meaning, can be formed out of the letters of the word SWING when repetition of letters is not allowed? Solution: Here n = 5, as the word SWING has 5 letters.
Since we have to frame 3 letter words, with or without meaning and without repetition, the total number of permutations is: \(\large P(n,r) = \frac{5!}{(5-3)!} = \frac{5 \times 4 \times 3 \times 2 \times 1}{2 \times 1} = 60\) (2) Permutation when repetition is allowed: When the number of objects is n and we select r of them, each choice can be made in n different ways. Thus the number of permutations of objects when repetition is allowed is \(\large n \times n \times n \times \dots\) (r times), which is given as \(\large n^{r}\). Example: How many 3 letter words, with or without meaning, can be formed out of the letters of the word SMOKE when repetition of letters is allowed? Solution: The number of objects in this case is 5, as the word SMOKE has 5 letters, and r = 3, as a 3 letter word has to be chosen. Thus the number of permutations (when repetition is allowed) is \(\large 5^{3} = 125\). (3) Permutation of multisets: The number of permutations of n objects when \(p_{1}\) objects are of one kind, \(p_{2}\) objects are of a second kind, \(p_{3}\) objects are of a third kind, ……, \(p_{k}\) objects are of a kth kind, and the remaining objects are all different, is: \(\large \mathbf{\frac{n!}{p_{1}!\; p_{2}!\; p_{3}! \dots p_{k}!}}\) Fundamental Counting Principle: According to this principle, "If one operation can be performed in m ways and there are n ways of performing a second operation, then the number of ways of performing the two operations together is m × n". This principle can be extended to the case in which the different operations can be performed in m, n, p, . . . ways. In this case the number of ways of performing all the operations one after the other is m × n × p × . . .
and so on. Read More: Examples on Permutation. Example 1: In how many ways can 6 children be arranged in a line such that (i) two particular children are always together, (ii) two particular children are never together? Solution: (i) Treating the two particular children as a single unit gives 5 units, which can be arranged in 5! = 120 ways; within the unit the two children can be arranged in 2! ways. Hence the total number of arrangements is \( 5! \times 2! = 120 \times 2 = 240\). (ii) The total number of arrangements of 6 children is 6! = 720, of which 240 have the two particular children together. Therefore, the number of arrangements in which the two particular children are never together is 720 – 240 = 480. Example 2: Out of 5 objects, 3 are to be selected and arranged. Therefore, \(P_{3}^{5} = \frac{5!}{(5-3)!}\) = 60. Example 3: In how many ways can 4 women and 5 men be seated in a row of 9 positions so that the women occupy the even places? There are 9 positions; the even positions are the 2nd, 4th, 6th and 8th places. These four places can be occupied by the 4 women in P(4, 4) ways = 4! = 4 · 3 · 2 · 1 = 24 ways. The remaining 5 positions can be occupied by the 5 men in P(5, 5) = 5! = 5 · 4 · 3 · 2 · 1 = 120 ways. Therefore, by the Fundamental Counting Principle, the total number of seating arrangements = 24 × 120 = 2880. Practice Problems: (1) How many numbers lying between 100 and 1000 can be formed with the digits 1, 2, 3, 4, 5, if repetition of digits is not allowed? (2) Seven athletes are participating in a race; in how many ways can the first three prizes be won? To solve more problems or to take a test, download BYJU'S – The Learning App from Google Play Store. Learn about Combination and the Difference between Permutation and Combination with BYJU'S – The Learning App.
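The counting rules above are easy to sanity-check with Python's standard library (math.perm, math.comb and math.factorial exist in Python 3.8+; the example word and variable names are my own):

```python
import math
from itertools import permutations

# (1) No repetition: P(n, r) = n! / (n - r)!
assert math.perm(5, 3) == 60            # 3-letter words from SWING

# (2) Repetition allowed: n**r
assert 5 ** 3 == 125                    # 3-letter words from SMOKE

# (3) Multiset: n! / (p1! * p2! * ...), e.g. arrangements of "BALLOON"
word = "BALLOON"
counts = {c: word.count(c) for c in set(word)}
multiset = math.factorial(len(word))
for c in counts.values():
    multiset //= math.factorial(c)
# brute-force confirmation: count the distinct orderings directly
assert multiset == len(set(permutations(word)))

# Example: 6 children, two particular ones together vs. never together
total = math.factorial(6)
together = math.factorial(5) * math.factorial(2)
assert (total, together, total - together) == (720, 240, 480)
print(multiset)
```

The brute-force check in rule (3) is only feasible for short words, but it makes the division-by-factorials formula concrete: dividing out the 2! orderings of each repeated letter removes exactly the duplicates.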
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z})$ = $1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \dots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person; not going to classify as RHV yet, as other users have already put the situation under control, it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I have ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity, a use of the inference rule MP seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only on the FOL axioms (without the equality axioms). This would allow us, in some cases, to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation of why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is of the same order (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1$. I have to prove that $f(z)=z^{n}$. I tried it as follows: as $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\ldots,a_{n-1}$ all zero, because by the triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested in $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
Introduction For a few months I have been doing calculations with imaginary powers, and I have come across the equation that most of you are familiar with by now: $$x^{\left(y\cdot i\right)}=\cos \left(y\cdot \ln \left(x\right)\right)+i\cdot \sin \left(y\cdot \ln \left(x\right)\right)$$ $x$ and $y$ are both considered as variables in this instance. Question How does this equation come about? I have a feeling it has something to do with $e^x$ being its own derivative; however, I haven't done any calculus yet at my school (since in the United Kingdom, calculus isn't taught until A level). I was wondering if anyone would be able to give a helpful explanation of why this is true, not that I'm doubting it, just looking for some more knowledge. I have actually created a Desmos graph for any of you who are interested; it uses real numbers and maps them onto an imaginary plane. Here is the link: Hope that you like the graph and hopefully you can help me out with this question.
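The identity in the question can be checked numerically with Python's complex arithmetic (a sketch; the sample values of $x$ and $y$ are arbitrary, with $x > 0$ so that $\ln x$ is real):

```python
import cmath
import math

# Arbitrary sample values, chosen only for illustration
x, y = 2.0, 3.0

lhs = x ** (y * 1j)  # x raised to a purely imaginary power
rhs = cmath.cos(y * math.log(x)) + 1j * cmath.sin(y * math.log(x))

assert abs(lhs - rhs) < 1e-12
```

The identity is Euler's formula applied to $x^{yi} = e^{yi\ln x}$, which is why the natural logarithm appears.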
Consider the following background information: I have a sphere that is equally divided into two hemispheres P and S. There is a plane that separates two different zones. The upper zone is called A and the lower zone ... I have a sphere with radius $r$ that is equally divided into two hemispheres P and S. There is a plane that separates two different zones A and B. The upper zone is called A and the lower zone is called B. $\alpha$ is ... I have a sphere with radius R. Inside the sphere I have two circles. One circle is fixed, defined by $\alpha$. Another circle can rotate, and the orientation of that circle can be defined by $\beta$. From ... I am using the spherical coordinate system $r,\theta, \phi$ as in this picture. Consider I have a sphere with radius $R$, divided into two hemispheres $S$ and $P$. With this given information I am able ... Given two ellipses that take up regions $E_1$ and $E_2$ in $\mathbb{R^2}$, with the following properties: centers defined in the Cartesian coordinate system, $(c_1, 0)$ for $E_1$ and $(c_2, 0)$ for $... Let's say I have a sphere (determined by its center and radius) and two planes, each of which cuts the sphere. Individually, there will be two spherical caps. Let's suppose that both spherical caps ...
Recently, I was thinking about various justifications for the definition of 0! (the factorial of zero), which is $$0!=1$$ The assumed value of 1 may seem quite obvious if you consider the recursive formula. However, it did not satisfy me "mathematically". That's why I decided to write these few sentences. I will give motivations for the less advanced readers, but there will also be motivations for slightly more advanced insiders. ⭐️ Factorial in Scalar Calculator ⭐️ Factorial and recurrence For an integer n > 0 the factorial is defined as follows $$n!=n\times (n-1)\times (n-2)\times \ldots \times 2\times 1$$ You can easily see that the recursive formula below follows $$n!=n\times (n-1)!$$ $$1!=1$$ ⭐️ 0! = 1 – motivation based on recurrence A small transformation of $$n!=n\times (n-1)!$$ gives $$(n-1)!=\frac{n!}{n}$$ Substituting n = 1 $$(1-1)!=\frac{1!}{1}$$ $$0!=1!=1$$ This explanation, although easy, does not (in my opinion) provide a deep enough understanding of "why this should be the best option". ⭐️ Factorial n! counts the possible distinct sequences of n distinct objects (permutations) Let's assume we have a set containing n elements $$\{1,2,\ldots,n\}$$ Now let's count the possible orderings of elements in this set: n ways of selecting the first element (because we have the whole set available); n-1 ways of selecting the second element (because the first was already selected, there are n-1 left); n-2 ways of selecting the third element (because two were already selected, there are n-2 left); … n-(k-1) ways of selecting element number k (because k-1 were already selected, n-(k-1) remain); 2 ways of selecting element number n-1 (because n-2 were selected, 2 still remain); 1 way of selecting element number n (because n-1 were selected, only one remains). Finally, counting all possible ways, we get $$n\times (n-1)\times (n-2)\times \ldots \times 2\times 1=n!$$ Conclusion: The factorial of n counts the number of permutations of a set containing n elements.
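The recursive formula above translates directly into code; a minimal sketch with 0! = 1 as the base case:

```python
def fact(n):
    """Factorial via the recurrence n! = n * (n-1)!, with 0! = 1 as base case."""
    return 1 if n == 0 else n * fact(n - 1)

assert [fact(n) for n in range(6)] == [1, 1, 2, 6, 24, 120]
```

Note that without the base case 0! = 1, the recurrence would never terminate, which is itself a small argument for giving 0! a definite value.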
⭐️ k-permutations of n, sometimes called partial permutations or variations The k-permutations of n are the different ordered arrangements of a k-element subset of an n-set. The number of such k-permutations of n is $$P_k^n = n\times (n-1)\times (n-2)\times\ldots\times \bigg(n-(k-1)\bigg) = \frac{n!}{(n-k)!}$$ It is easy to see that an n-permutation of n is an ordinary permutation, so $$P_n^n=n!$$ $$n! = \frac{n!}{(n-n)!} = \frac{n!}{0!}$$ The next insight into why 0!=1 is the correct definition comes from the fact that for any n > 0 we should have $$0! \times n! = n!$$ ⭐️ Function as a mapping of sets A function $$f:A\to B$$ where for every a ∈ A there is f(a) = b ∈ B, defines a relationship between the elements a and b. We can say that the elements a ∈ A and b ∈ B are in the relation "f" if and only if f(a) = b. ⭐️ Function as a subset of a Cartesian product A function is a binary relation, meaning a function can be expressed as a subset of a Cartesian product. $$(a,b)\in f \subseteq A\times B \iff f(a)=b$$ ⭐️ Injective function An injective function is a function that preserves distinctness: it never maps distinct elements of its domain to the same element of its codomain. In short: $$x\neq y \Rightarrow f(x) \neq f(y)$$ ⭐️ Surjective function A function f is surjective (or onto) if for every element b in the codomain there is at least one element a in the domain such that f(a)=b. It is not required that a be unique. $$f:A\to B$$ $${\large \displaystyle\forall_{b \in B} \quad\displaystyle\exists_{a\in A}\quad}f(a)=b$$ ⭐️ Bijective function A bijective function, or one-to-one correspondence, is a function where each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set. There are no unpaired elements. In mathematical terms, a bijective function is both an injective and a surjective mapping of a set A to a set B.
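The k-permutation formula can be sanity-checked with Python's standard library (a sketch; math.perm is available from Python 3.8):

```python
from math import perm, factorial

n, k = 5, 3
assert perm(n, k) == factorial(n) // factorial(n - k) == 60

# The n-permutation of n is an ordinary permutation: P(n, n) = n!/(n-n)! = n!/0!
assert perm(n, n) == factorial(n) // factorial(0) == 120
```

The last line only works out because factorial(0) == 1, mirroring the argument in the text.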
⭐️ Bijective function vs permutation A permutation is a function that returns an ordering of a set, i.e. if we consider the n-element set {1, 2, …, n} then a permutation will be a function $$p:\{1, 2, \ldots, n\}\to\{1, 2, \ldots, n\}$$ satisfying the bijective function condition. By asking about the number of permutations we can equally ask about the number of different bijections from a given set into itself. ⭐️ Empty function An empty function is any function whose domain is the empty set. $$f:\emptyset\to B$$ The empty function's "graph" is the empty set, as the Cartesian product of the domain and codomain is empty. $$\emptyset\times B = \emptyset$$ The empty function preserves distinctness (is injective), because in the domain (the empty set) there are no two different elements for which the values of the function could be equal. ⭐️ A special case of the empty function Let's analyse the function that maps the empty set to the empty set $$f:\emptyset\to\emptyset$$ Such a function is a bijection, because it is an injective function (as shown above) and there is no element in the codomain (the codomain is the empty set) that is not in relation to the elements of the domain. Please note that there is exactly one such bijection, which is a result of the fact that a function is a subset of the Cartesian product of the domain and codomain. In this case there is only one possible set: $$f:\emptyset\to\emptyset$$ $$\emptyset\times\emptyset = \emptyset$$ The empty set has exactly one subset, which is the empty set itself – thus such a bijection is uniquely defined. ⭐️ 0! = 1 vs the empty function I wrote above that the number of permutations of an n-element set equals the number of distinct bijective functions from this set into itself. Accordingly, the permutation of the 0-element set corresponds to the bijection from the empty set into the empty set. The special case of the empty function gives exactly 1 – and I presented the proof that there exists only one such function 🙂 A pretty deep insight into why 0! should be 1.
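The "exactly one bijection from the empty set to itself" argument has a direct combinatorial counterpart in Python's itertools (a sketch):

```python
from itertools import permutations

# The empty sequence has exactly one permutation: the empty one.
# This mirrors the unique empty bijection from the empty set to itself.
perms_of_empty = list(permutations([]))
assert perms_of_empty == [()]
assert len(perms_of_empty) == 1  # consistent with 0! = 1
```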
⭐️ The Gamma function In mathematics, the Gamma function is one of the extensions of the factorial function, with its argument shifted down by 1, to the real and complex numbers. $$\Gamma(z)=\displaystyle\int_0^{+\infty}t^{z-1}e^{-t}dt$$ After integration by parts we get the recursive formula $$\Gamma(z+1)=z\cdot\Gamma(z)$$ Let's compute the value of $\Gamma(1)$: $$\Gamma(1)=\displaystyle\int_0^{+\infty}e^{-t}dt=\Big[-e^{-t}\Big]_0^{+\infty}=1$$ It follows that $$\Gamma(n+1)=n!$$ $$0! = \Gamma(1) = 1$$ ⭐️ Scalar support for the Gamma function Functions in Scalar Calculator that support the Gamma special function: Gamma(x) – Gamma special function Γ(s); sgnGamma(x) – signum of the Gamma special function Γ(s); logGamma(x) – log-Gamma special function lnΓ(s); diGamma(x) – digamma function, the logarithmic derivative of the Gamma special function, ψ(x); GammaL(s,x) – lower incomplete Gamma special function γ(s,x); GammaU(s,x) – upper incomplete Gamma special function Γ(s,x); GammaP(s,x), GammaRegL(s,x) – lower regularized P Gamma special function P(s,x); GammaQ(s,x), GammaRegU(s,x) – upper regularized Q Gamma special function Q(s,x). Gamma function chart. Gamma function vs factorial chart. ⭐️ The number e and its relation to the factorial Based on the Taylor series expansion of $e^x$ it is easy to show that $$e=\displaystyle\sum_{n=0}^\infty\frac{1}{n!}=\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+\ldots$$ Sequence convergence chart. This is fascinating, as it shows an even stronger relation of the factorial to e. Thanks for reading! All the best 🙂
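Both facts from the post, Γ(n+1) = n! and the series for e, can be verified numerically with the standard library (a sketch):

```python
from math import gamma, factorial, e, isclose

# Gamma(1) = 0! = 1, and more generally Gamma(n+1) = n!
assert isclose(gamma(1), 1.0)
assert all(isclose(gamma(n + 1), factorial(n), rel_tol=1e-12) for n in range(10))

# Partial sums of sum 1/n! converge rapidly to e; the n = 0 term uses 0! = 1
approx = sum(1.0 / factorial(n) for n in range(20))
assert isclose(approx, e, rel_tol=1e-12)
```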
An $n$-dimensional (closed) pseudomanifold is a finite simplicial complex $X$ such that (i) every simplex is a face of an $n$-simplex (ii) every $(n-1)$-simplex is a face of exactly two $n$-simplices (iii) Given any two $n$-simplices $\sigma, \tau \in X$ there is a sequence of $n$-simplices $\sigma_0 = \sigma, \ldots, \sigma_k = \tau$ such that $\sigma_i \cap \sigma_{i+1}$ is an $(n-1)$-dimensional simplex for each $0 \leq i \leq k-1$. These conditions imply that the polyhedron of a pseudomanifold is path-connected. Is it true that if a finite simplicial complex $X$ satisfies (i) and (ii) and has a path-connected polyhedron then it satisfies (iii)?
How to Automate Meshing in Frequency Bands for Acoustic Simulations Think of the curved lid of an elegant grand piano. The curve corresponds to the strings’ length, which corresponds to the perception of the pitch. This visual represents an important element of acoustics: Our perception of pitch is logarithmic. This means that there is a large frequency range involved in acoustics phenomena. In turn, when modeling acoustics problems, there is a large wavelength range to be meshed. But how? Introduction to Free-Field FEM Wave Problems A large frequency range needs to be computed, which means large wavelength ranges need to be resolved by the mesh. To efficiently mesh large frequency ranges, we can optimize the mesh element size by remeshing for a given frequency range when using finite element method (FEM) interfaces in the COMSOL Multiphysics® software. The finite element method is implemented in most interfaces in COMSOL Multiphysics, including the Pressure Acoustics, Frequency Domain and the Pressure Acoustics, Transient interfaces. Other interfaces in the Acoustics Module are optimized for their intended purpose by implementing the boundary element method (BEM), ray tracing, or dG-FEM (time explicit). When using the Pressure Acoustics interface, FEM uses a mesh to discretize the geometry and solves the acoustic wave equation at these points. The full, continuous solution is interpolated from these points. An automotive muffler with a porous lining, modeled using the pressure acoustics functionality in the COMSOL® software. When meshing an FEM model, we need to get a good approximation of the geometry and include details of the physics. When using the Pressure Acoustics interface, we always need to resolve the acoustic waves. A good mesh resolves the geometry and the physics of the model, but a great mesh accurately solves the problem and also uses the smallest number of mesh elements possible. 
In this blog post, we will look at how to mesh free-field/open-ended problems with the fewest mesh points. Mesh elements are made up of nodes. For a linear mesh element, the nodes are located at the vertices. Second-order polynomial interpolation is the default shape function for wave equations in COMSOL Multiphysics. Second-order (or quadratic) elements have one additional node along the length of the element and resolve waves accurately. For free-field wave problems, we need about 10 or 12 nodes per wavelength to resolve the wave. Consequently, for wave-based modeling with quadratic elements, we need 5 or 6 second-order elements per wavelength ($h_\textrm{max} = \lambda_0/5$). For short wavelengths (higher frequencies), the element size needs to be smaller than at lower frequencies. Audio applications, which are concerned with human perception, have a frequency range of 20 Hz to 20 kHz. In air at room temperature, audio problems have a wavelength range from about 17 m down to 17 mm. If we were to compute over the entire human auditory frequency range with one mesh, we would need to resolve the wavelengths that correspond to 20 kHz. At the high-frequency end, this leads to a maximum element size, or spatial resolution, of (17 mm/5 ≈) 3.4 mm. Resolving the mesh for the highest frequency leads to an excessively dense mesh for the low-frequency predictions. At 20 Hz, the wavelength is 17 m and would have about 5000 elements per wavelength, far more than the 5 or 6 that are required. Each node corresponds to a memory allocation for the computer. While this dense mesh approach is great from an accuracy perspective, the excessively dense mesh takes up computational resources and consequently takes longer to compute.
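The element-size rule of thumb above can be sketched in a few lines (a sketch assuming air at room temperature with c = 343 m/s; the function name is illustrative, not part of COMSOL):

```python
C_AIR = 343.0  # speed of sound in air at room temperature, m/s (assumption)

def hmax(f_hz, elements_per_wavelength=5):
    """Maximum mesh element size for quadratic elements at frequency f_hz (Hz)."""
    wavelength = C_AIR / f_hz
    return wavelength / elements_per_wavelength

print(hmax(20e3))  # roughly 3.4 mm at 20 kHz
print(hmax(20.0))  # roughly 3.4 m at 20 Hz: a vastly coarser mesh suffices
```

The three-orders-of-magnitude gap between the two results is exactly why a single mesh for the whole audio range is wasteful.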
Efficient Meshes in COMSOL Multiphysics® Setup for a Single-Octave Mesh To avoid an inefficient meshing approach, we can split the problem into smaller frequency bands (initially, one octave), where the mesh for each frequency band is resolved according to its upper frequency limit. In this example, the center frequency, $f_{C,n}$, is referenced from $f_0$, the prescribed frequency: $f_{C,n} = 2^n \times f_0$, where $n$ is the octave band number from the reference (positive $n$ gives higher-pitch octaves, negative $n$ gives lower-pitch octaves). The upper and lower frequency band limits are defined from the center-band frequency: $f_L = 2^{-\frac{1}{2}} \times f_{C,n}$, $f_U = 2^{\frac{1}{2}} \times f_{C,n}$. Note that $f_U$ is twice $f_L$ (thus one octave higher). Defining the octaves in the model parameters. We can use these parameters in the frequency-domain study using the range() function to define a logarithmic distribution of points within each band: $10^{\textrm{range}(\log_{10}(f_L),\, df_\textrm{log},\, \log_{10}(f_U) - df)}$. The logarithmic frequency spacing, $df_\textrm{log} = (\log_{10}(f_U)-\log_{10}(f_L))/(N-1)$, is the log-frequency range divided by $N-1$, where $N$ is the number of frequencies per band. Setting the frequencies solved for in each octave band. The maximum mesh element size (traditionally given the variable name hmax) is then taken from the upper limit of the given frequency band: hmax = 343[m/s]/f_U/5. Note that if you do not know the speed of sound, you can use comp1.mat1.def.cs(23[degC]) to access the speed of sound for the first material (in a list), defined in Component 1 at 23°C. If you are using the built-in material Air, the speed of sound comes from the ideal gas law, so the fluid temperature is a required input. The custom mesh sequence with the parameter hmax applied to the Maximum element size. The Maximum element size is applied to the mesh on the Size node.
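The band parameters and the logarithmic frequency distribution can be mirrored outside COMSOL for checking (a sketch; f0 = 1000 Hz and N = 10 are illustrative choices, not values from the model, and the optional band fraction b anticipates the nth-octave generalization discussed later in the post):

```python
import math

def band_limits(n, b=1, f0=1000.0):
    """Center, lower, and upper frequencies of 1/b-octave band n relative to f0.

    b = 1 gives full octave bands; b = 3 gives third-octave bands, etc.
    """
    fc = 2.0 ** (n / b) * f0
    return fc, 2.0 ** (-1.0 / (2 * b)) * fc, 2.0 ** (1.0 / (2 * b)) * fc

def band_frequencies(n, N=10, b=1):
    """N log-spaced frequencies in band n, mirroring the range() expression."""
    _, fl, fu = band_limits(n, b)
    df_log = (math.log10(fu) - math.log10(fl)) / (N - 1)
    return [10.0 ** (math.log10(fl) + i * df_log) for i in range(N)]

fc, fl, fu = band_limits(0)
assert abs(fu - 2.0 * fl) < 1e-9  # one octave: f_U is twice f_L
```

Adjacent bands tile the spectrum seamlessly because the upper limit of band n coincides with the lower limit of band n+1.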
The elements can be smaller than this constraint if smaller geometry details need to be resolved, as shown in the figure below. The smallest element is controlled by the Minimum element size setting. The Curvature factor and Resolution of narrow regions settings are also important mesh settings. The mesh element quality is shown on top for two octave bands. Setup for Multiple Octave Bands If the COMSOL Multiphysics model is set up as described above, it yields one octave's worth of frequencies. However, we need up to 10 octaves for our audio investigations. We therefore use a parametric sweep over n, such that each value of n is an octave and the upper and lower frequency limits change accordingly. To implement a parametric sweep in COMSOL Multiphysics, a Parametric Sweep study step is added to the study to change the frequency bands. The benefit of working with parameters is that all of the frequency band limits change automatically when the parameter sweep variable n changes. The parameter n is the natural choice for the parameter sweep because each value of n corresponds to a frequency band. Setting it up in this way means that the original frequency is now the reference frequency and must be chosen appropriately. For the results shown below, the same frequencies were also computed over the same range with a single mesh resolved for the highest frequency. The study that splits the mesh according to the octave band number took 32 s, whereas the single-mesh approach took 79 s. This shows a significant saving of time and computational resources. The instantaneous pressure is shown on the bottom for the different frequencies and meshes. The Octave Band plot type is used to calculate the required response. Ensure that the line markers are placed at data points. Alternatively, to obtain a continuous line, change the x-Axis Data to Expression and enter freq, the variable for frequency. Plotting the continuous line. Choose Point Graph and ensure that the plot settings are set up as shown above.
Setup for nth-Octave Bands The previous discussion sets up the problem in octave bands. However, you can use the general form $f_{C,n} = 2^{\frac{n}{b}} \times f_0$, $f_L = 2^{-\frac{1}{2b}} \times f_{C,n}$, $f_U = 2^{\frac{1}{2b}} \times f_{C,n}$, to allow fractions of octave bands. In the above setup, let b = 3 for third-octave bands or b = 6 for sixth-octave bands. The narrower the frequency band, the more times the meshing sequence runs, so there is a balance to be struck. The parameters that set up the general meshing procedure in any octave band are located in the Remeshing in Frequency Bands model. It is easy to save the necessary parameters in a .txt file and load them when setting up a new model. This avoids having to enter them every time. Discussion and Caveats of Meshing in Frequency Bands for Acoustics Simulations The method presented in this blog post uses a canonical geometry to clearly illustrate the process of optimizing the mesh. Consequently, the meshing routine takes relatively little time. For realistic geometries, the meshing routine may take longer and the benefits may be less marked. In this instance, you should defeature or use virtual operations to remove any physically irrelevant geometry. For some problems, the temperature or density of the fluid may change significantly over the computational domain. If this occurs, the speed of sound will change and must be included in the model. The mesh must be dense enough to reflect this. This discussion is not relevant to the Ray Tracing, Pressure Acoustics, Boundary Element, or Acoustic Diffusion interfaces. With care, the information in this blog post can be applied to free-field problems of the Aeroacoustics and Thermoviscous Acoustics interfaces or the dG-FEM-based Ultrasound interfaces. The convective effect of the flow alters the wavelength, and a sophisticated mesh should reflect this up- or downstream of a source.
The Linearized Navier-Stokes and Linearized Euler interfaces use linear interpolation by default, so 10 or 12 elements are required per wavelength. The Thermoviscous Acoustics interface is designed to resolve the acoustic boundary layer. The thickness of this layer is also frequency dependent, and a method similar to the one discussed here can be used for efficient meshing and resolution of the layer. Finally, the discussion in this blog post explicitly assumes that the wavelength is known. This assumption usually holds for free-field modeling; however, for bounded, resonant problems, the total sound field depends on the boundary condition values and the locations of the boundaries. This means that the pressure amplitudes can have shapes with an analogous wavelength that can be significantly shorter than the free-field wavelength. To get an accurate solution, you must perform a mesh convergence study. Conclusion This blog post has demonstrated that remeshing in frequency bands can save a significant amount of time. In COMSOL Multiphysics, this is implemented by parameterizing the upper and lower frequency band limits. The approach demonstrated here is applicable to interfaces that implement FEM and have quadratic interpolation. Next Steps Try it yourself: Click the button below to access the MPH-file for the model discussed in this blog post. Note that you must log into COMSOL Access and have a valid software license to download the file. Read More Learn more about how to enhance your meshing processes on the COMSOL Blog:
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W); 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca... I am a bit confused about angular momentum in classical physics. For an orbital motion of a point mass: if we pick a new coordinate system (that doesn't move w.r.t.
the old coordinate), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time in the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'} = (\vec{R}+\vec{r}) \times \vec{p} = \vec{R} \times \vec{p} + \vec L$, where the first term varies with time. Here $\vec R$ is the shift of coordinates; $\vec R$ is constant, and $\vec p$ is sort of rotating.) Would anyone be kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia. @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of an Ending (different from the movie with the same title), is far from anything I would've been able to read even two years ago, but I absolutely loved it. I've just started watching The Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet. Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. 
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$. Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$. Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape, but this leads me to another question I forgot ages ago: If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields, each coming from some point in the cloud?
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. 
the old coordinate), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved in the new coordinate; there is an additional term that varies with time: $\vec{L'}=\vec{r'} \times \vec{p'}=(\vec{R}+\vec{r}) \times \vec{p}=\vec{R} \times \vec{p} + \vec{L}$, where the first term varies with time. (Here $\vec{R}$ is the shift of coordinate; $\vec{R}$ is constant, while $\vec{p}$ is, roughly, rotating.) Would anyone be kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia. @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of an Ending (different from the movie with the same title), is far from anything I would've been able to read even two years ago, but I absolutely loved it. I've just started watching The Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet. Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. 
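The angular-momentum puzzle above can be checked numerically: for a circular orbit, $\vec L$ about the force centre is constant, but after shifting the origin by a constant $\vec R$ the extra term $\vec R\times\vec p$ rotates with the orbit, so $\vec{L'}$ is not conserved. A minimal sketch (the orbit and all names are my illustration):

```python
import numpy as np

# Circular orbit of unit mass about the origin: r(t) = (cos t, sin t, 0),
# p(t) = dr/dt = (-sin t, cos t, 0).  L = r x p is then constant in time.
def r(t): return np.array([np.cos(t), np.sin(t), 0.0])
def p(t): return np.array([-np.sin(t), np.cos(t), 0.0])

R = np.array([2.0, 0.0, 0.0])   # constant shift of the coordinate origin

def L(t):          # angular momentum about the force centre
    return np.cross(r(t), p(t))

def L_shifted(t):  # angular momentum about the shifted origin: (R + r) x p
    return np.cross(R + r(t), p(t))

for t in (0.0, 1.0, 2.0):
    print(t, L(t), L_shifted(t))

# L(t) stays (0, 0, 1) for every t, while the z-component of L_shifted(t)
# is 1 + 2*cos(t) and oscillates: conservation about a point only holds
# when the torque about that point vanishes.
```

So the calculation in the question is correct; what fails is the assumption that the torque about the shifted point is still zero.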
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series. Although if you like epic fantasy, Malazan Book of the Fallen is fantastic. @Mithrandir24601 lol it has some love story but it's written by a guy so can't be a romantic novel... besides, what decent stories don't involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with Kate Winslet, can't beat that, right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll-worthy and cringy, or boring and predictable with OK writing. A notable exception is Steven Erikson. @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy, where it's not in the focus so much and just evolves in a reasonable, if predictable, way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots. @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. 
2015-09-24 Summarizing, it seems that Heegaard splittings of $0$-manifolds are problematic, or need a different definition, and similarly trisections of $1$-manifolds are problematic; in both cases, the problem is having part of the dimension be $(-1)$-dimensional. But Heegaard splittings of (closed) $1$-manifolds were fine. Now let's do Heegaard splittings of closed surfaces. Here are two options: Option 1: A "genus $g$" Heegaard splitting of a closed surface $\Sigma$ is a decomposition $\Sigma=\Sigma_1 \cup \Sigma_2$ such that (1) each $\Sigma_i \cong \natural^g S^1 \times B^1$ and (2) $\Sigma_1 \cap \Sigma_2 \cong \#^g S^1 \times S^0$. In other words, each $\Sigma_i$ is a planar surface with $g+1$ boundary components, a.k.a. a $g$-punctured disk (well, a disk with $g$ open disks removed). So just put your surface flat on the table and slice parallel to the table, like this: Note that this doesn't give us much choice. There is only one way to do this for a fixed surface of (actual) genus $h$. And maybe we really want to think of $\Sigma$ as just the double of some surface with boundary, i.e. a $2$-dimensional $1$-handlebody. So... Option 2: A Heegaard splitting of a closed surface $\Sigma$ is a decomposition $\Sigma=\Sigma_1 \cup \Sigma_2$ such that (1) each $\Sigma_i$ is a $2$-dimensional $1$-handlebody and (2) $\Sigma_1 \cap \Sigma_2$ is a disjoint union of circles. Now the formalism is breaking down, but this again is something special in low dimensions, that $2$-dimensional $1$-handlebodies are not completely determined by the number of $1$-handles. This suggests also that breaking the formalism a little might help with the $0$- and $1$-dimensional examples that failed. I'll return to this tomorrow, I think. Let's just end with a picture of this kind of Heegaard splitting: Here each half is a once-punctured torus. All I actually did was cut along the central "neck" first, but then modify that cut by a Dehn twist to make it look fancy.
ISSN: 1078-0947 eISSN: 1553-5231 Discrete & Continuous Dynamical Systems - A, January 2007, Volume 17, Issue 1 Abstract: We study the first positive Neumann eigenvalue $\mu_1$ of the Laplace operator on a planar domain $\Omega$. We are particularly interested in how the size of $\mu_1$ depends on the size and geometry of $\Omega$. A notion of the intrinsic diameter of $\Omega$ is proposed and various examples are provided to illustrate the effect of the intrinsic diameter and its interplay with the geometry of the domain. Abstract: We discuss one parameter families of unimodal maps, with negative Schwarzian derivative, unfolding a saddle-node bifurcation. We show that there is a parameter set of positive but not full Lebesgue density at the bifurcation, for which the maps exhibit absolutely continuous invariant measures which are supported on the largest possible interval. We prove that these measures converge weakly to an atomic measure supported on the orbit of the saddle-node point. Using these measures we analyze the intermittent time series that result from the destruction of the periodic attractor in the saddle-node bifurcation and prove asymptotic formulae for the frequency with which orbits visit the region previously occupied by the periodic attractor. Abstract: This paper studies questions regarding the local and global asymptotic stability of analytic autonomous ordinary differential equations in $\mathbb{R}^n$. It is well-known that such stability can be characterized in terms of Liapunov functions. The authors prove similar results for the more geometrically motivated Dulac functions. In particular it holds that any analytic autonomous ordinary differential equation having a critical point which is a global attractor admits a Dulac function. These results can be used to give criteria of global attraction in two-dimensional systems. 
Abstract: In this work we characterize those shift spaces which can support a 1-block quasi-group operation and show the analogue of Kitchens' result: any such shift is conjugate to a product of a full shift with a finite shift. Moreover, we prove that every expansive automorphism on a compact zero-dimensional quasi-group that verifies the medial property, commutativity and has period 2, is isomorphic to the shift map on a product of a finite quasi-group with a full shift. Abstract: Using the characteristic equation approach, the problem of asymptotic stability of linear neutral systems with multiple time delays is investigated in this paper. New delay-independent stability criteria are derived in terms of the spectral radius of corresponding modulus matrices. The structure information of the system matrices is taken into consideration in the proposed stability criteria, thus the conservatism found in the literature can be significantly reduced. The explicit nature of the construction permits us to directly express the algebraic criteria in terms of the plant parameters, thus checking of stability by our criteria can be carried out rather simply. Numerical examples are given to demonstrate the validity of the new criteria and to compare them with previous results. Abstract: The aim of this paper is to define and study a new kind of entropy-like invariants in the case of a probability space and a compact metric topological group of continuous endomorphisms. These new invariants are only non-zero for non-invertible maps, but many propositions can be described and the analogue of the well-known variational principle can be established. Abstract: We study a Schrödinger equation with a nonlocal nonlinearity, which has been considered as a model for ultra-short laser pulses. An interesting feature of this equation is that the underlying dynamical system possesses a bounded non-compact global attractor, actually a ball in $L^2(R)$. 
Existence and instability of standing waves are also proved. Abstract: Motivated by the study of actions of $\mathbb{Z}^{2}$ and more general groups, and their non-cocompact subgroup actions, we investigate entropy-type invariants for deterministic systems. In particular, we define a new isomorphism invariant, the entropy dimension, and look at its behaviour on examples. We also look at other natural notions suitable for processes. Abstract: In this paper we study a boundary value problem with the one-dimensional $p$-Laplacian. Assuming complete resonance at $+\infty$ and partial resonance at $0^+$, the existence of at least one positive solution is proved. By strengthening our assumptions we can guarantee strict positivity of the obtained solution. Abstract: Existence of the global attractor is proved for the strong solutions to the 3D viscous Primitive Equations (PEs) modeling large scale ocean and atmosphere dynamics. This result is obtained under the natural assumption that the external heat source $Q$ is square integrable. Furthermore, it is shown in [20] that the fractal and Hausdorff dimensions of the global attractor for the 3D viscous PEs are both finite. Abstract: In this paper the global well-posedness in $L^2$ and $H^m$ of the Cauchy problem is proved for nonlinear Schrödinger-type equations. This we do by establishing regular Strichartz estimates for the corresponding linear equations and some nonlinear a priori estimates in the framework of Besov spaces. We further establish the regularity of the $H^m$-solution to the Cauchy problem. Abstract: For conformal hyperbolic flows, we establish explicit formulas for the Hausdorff dimension and for the pointwise dimension of an arbitrary invariant measure. We emphasize that these measures are not necessarily ergodic. The formula for the pointwise dimension is expressed in terms of the local entropy and of the Lyapunov exponents. 
We note that this formula was obtained before only in the special case of (ergodic) equilibrium measures, and these always possess a local product structure (which is not the case for arbitrary invariant measures). The formula for the pointwise dimension allows us to show that the Hausdorff dimension of a (nonergodic) invariant measure is equal to the essential supremum of the Hausdorff dimension of the measures in an ergodic decomposition. Abstract: We study the changes of the Bowen-Ruelle-Sinai measures along an arc that starts at an Anosov diffeomorphism on a two-torus and reaches the boundary of its stability component while a flat homoclinic tangency or a first cubic heteroclinic tangency is happening. The outermost diffeomorphisms of such arcs are not hyperbolic but are conjugate to the original Anosov diffeomorphism and share similar ergodic traits. In particular, the torus is a global attractor with a fully supported physical measure.
I'm confused about the impact that a mean-reverting stock price process has on the value of an option on it. Several sources say that there is indeed an impact on the price of an option: Lo and Wang (1995). Yet another source seems to say that mean reversion has no impact on the price of an option: "The drift term of the process has no impact on the price of a call option, since we know that under the correct pricing measure we need the discounted stock price to have zero drift. This is achieved by changing the drift of the original process, rendering any initial drift term irrelevant" - Mark Joshi, Quant Job Interview Questions and Answers. So I guess my ultimate question is: if the stock price follows the process $$ dS_t=\alpha(\mu-S_t)dt+\sigma S_tdZ$$ is the price of an option on the stock just equal to the BSM price with $\sigma_{BSM} = \sigma$? It would make sense to me that there is no effect, because the replicating portfolio argument still works, and we end up with the same PDE and boundary conditions, which would give the same price.
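Joshi's point can be made concrete by looking at the inputs of the Black-Scholes-Merton formula itself: the call price depends on $S_0$, $K$, $r$, $T$ and $\sigma$, but the real-world drift never appears, because pricing is done under the risk-neutral measure. A minimal sketch (function and parameter names are mine):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0: float, K: float, r: float, T: float, sigma: float) -> float:
    """Black-Scholes-Merton price of a European call.

    Note the signature: the real-world drift of the stock is not an input;
    only the volatility sigma enters, which is Joshi's point above.
    """
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(100.0, 100.0, 0.05, 1.0, 0.2), 4))   # ≈ 10.4506
```

This only settles the question when the volatility term is unchanged by the measure change, as in the $\sigma S_t\,dZ$ process quoted above; mean reversion in *volatility* (the Lo-Wang setting is different in this respect) can still matter.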
Consider a sequence of $n$ independent Bernoulli trials drawn from a list of biases $p_1,p_2,...,p_n\in[0,1]$, respectively. We set the random variable $X$ to be the sum of these trials. On Wikipedia, the distribution of $X$ is called the Poisson binomial distribution. We define the sample mean and sample variance of our list of Bernoulli biases as $$ \bar{p}=\frac{1}{n}\sum_{i=1}^n p_i $$ and $$ \sigma_p^2 =\frac{1}{n}\sum_{i=1}^n(p_i-\bar{p})^2 =\frac{1}{n}\sum_{i=1}^n p_i^2 - \bar{p}^2. $$ Since the trials are independent, it is easy to compute that $$ \mathbb{E}[X] = \sum_{i=1}^n p_i = n\bar{p} $$ and \begin{align*} \mathbb{Var}[X] &= \sum_{i=1}^n p_i(1-p_i) \\ &= n\bar{p} - n(\sigma_p^2+\bar{p}^2) \\ &= n\bar{p}(1-\bar{p}) - n\sigma_p^2. \end{align*} The expected value of $X$ is not surprising. Also, when $\sigma_p^2=0$ we must have $\bar{p}=p_1=\cdots=p_n$, and so $X$ is binomially distributed, which matches $\mathbb{Var}[X]$ computed above. My confusion is this: why does the variance of $X$ go down as the sample variance $\sigma_p^2$ goes up (with $\bar{p}$ and $n$ fixed)? I find this very counter-intuitive, and would appreciate an explanation. I would expect that with a greater variance of biases, there would be a broader distribution of possible sums of the result...
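The identity $\mathbb{Var}[X]=\sum_i p_i(1-p_i)$ can be checked numerically, and it makes the effect visible: with $\bar p$ fixed, spreading the $p_i$ toward 0 and 1 pushes each summand $p_i(1-p_i)$ down, so the variance shrinks. A quick sketch (the example bias lists are mine):

```python
import numpy as np

def pb_variance(p):
    """Variance of a Poisson binomial sum of independent Bernoulli(p_i)."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * (1.0 - p)))

# Three bias lists with the same n = 4 and the same mean 0.5,
# but increasing spread sigma_p^2:
uniform = [0.5, 0.5, 0.5, 0.5]      # sigma_p^2 = 0  -> binomial case
spread  = [0.2, 0.4, 0.6, 0.8]      # moderate spread
extreme = [0.0, 0.0, 1.0, 1.0]      # maximal spread -> X is constant 2

for p in (uniform, spread, extreme):
    print(p, pb_variance(p))
# Variance falls (1.0, then 0.8, then 0.0) as the spread of the p_i grows.

# Monte Carlo cross-check of the formula on the middle list:
rng = np.random.default_rng(1)
samples = (rng.random((200_000, 4)) < spread).sum(axis=1)
print(round(samples.var(), 2))
```

The extreme case is the intuition breaker: biases of exactly 0 and 1 make each trial deterministic, so the sum has no variance at all, even though the biases themselves are maximally spread out.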
Layer Sensitivity. You can specify relative errors in layer thicknesses. For each disturbed design, the variations \(\Delta MF_i\) are computed with respect to the design target \[ \Delta MF_i=|MF(d_1,...,d_i(1+\delta_{H,L}),...,d_m)-MF(d_1,...,d_m)|\] or to the theoretical spectral characteristic \(S\) (Eq. (2)): \[ \Delta MF_i=|S(d_1,...,d_i(1+\delta_{H,L}),...,d_m)-S(d_1,...,d_m)|.\] Then, layer sensitivities \(LS(i)\) are calculated and ranged from 0 to 100% (Fig. 2): \[ LS(i)=\frac{\Delta MF_i}{\max_{i=1,...,m}\Delta MF_i}\cdot 100\%\] The design is obtained using the WDM option. A stack is a set of thick media separated by gaps; each surface can be coated by a multilayer. In Figs. 6-8, an example of a stack is presented. The stack consists of two substrates, BK7 and B270, separated by an air gap; the surfaces are coated with AR_1, AR2, AR3, and AR4 12-layer coatings. Layer materials are TiO2 and SiO2. Sensitivities of layers in different coatings are shown by different colors (Fig. 9). The sensitivities were calculated assuming relative thickness errors of 0.5%, 1%, and 2%.
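The normalization in the \(LS(i)\) formula above is straightforward to implement; a sketch with made-up \(\Delta MF_i\) values (the function name and the numbers are mine, not from the documentation):

```python
def layer_sensitivities(delta_mf):
    """Range the merit-function variations Delta MF_i to 0..100%:
    LS(i) = Delta MF_i / max_i Delta MF_i * 100%."""
    peak = max(delta_mf)
    return [100.0 * d / peak for d in delta_mf]

# Hypothetical merit-function variations for a 5-layer design:
dmf = [0.02, 0.10, 0.05, 0.01, 0.04]
print([round(ls, 6) for ls in layer_sensitivities(dmf)])
# The most sensitive layer (here the 2nd) is ranged to exactly 100%.
```

The ranking, not the absolute scale, is what matters here: it tells the designer which layers need the tightest thickness control.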
With which notation do you feel uncomfortable? closed as not constructive by Loop Space, Chris Schommer-Pries, Qiaochu Yuan, Scott Morrison♦ Mar 19 '10 at 6:10 As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. If this question can be reworded to fit the rules in the help center, please edit the question. There is a famous anecdote about Barry Mazur coming up with the worst notation possible at a seminar talk in order to annoy Serge Lang. Mazur defined $\Xi$ to be a complex number and considered the quotient of the conjugate of $\Xi$ and $\Xi$: $$\frac{\overline{\Xi}}{\Xi}.$$ This looks even better on a blackboard since $\Xi$ is drawn as three horizontal lines. My favorite example of bad notation is using $\textrm{sin}^2(x)$ for $(\textrm{sin}(x))^2$ and $\textrm{sin}^{-1}(x)$ for $\textrm{arcsin}(x)$, since this is basically the same notation used for two different things ($\textrm{sin}^2(x)$ should mean $\textrm{sin}(\textrm{sin}(x))$ if $\textrm{sin}^{-1}(x)$ means $\textrm{arcsin}(x)$). It might not be horrible, since it rarely leads to confusion, but it is inconsistent notation, which should be avoided in general. I personally hate the notation $x \mid y$, for "$x$ divides $y$". Of course, I'm used to reading it by now, but a general principle I follow and recommend is: Never use a symmetric symbol to denote an asymmetric relation! I never liked the notation ${\mathbb Z}_p$ for the ring of residue classes modulo $p$. At one point, it confused the hell out of me, and this confusion is easily avoided by writing $C_p$, $C(p)$ or ${\mathbb Z}/p$. Mathematicians are really quite bad when it comes to notation. They should learn from programming languages people. 
Bad notation actually makes it difficult for students to understand the concepts. Here are some really bad ones: Using $f(x)$ to denote both the value of $f$ at $x$ and the function $f$ itself. Because of this, students in programming classes cannot tell the difference between $f$ (the function) and $f(x)$ (the function applied to an argument). When I was a student nobody ever managed to explain to me why $dy/dx$ made sense. What is $dy$ and what is $dx$? They're not numbers, yet we divide them (I am just giving a student's perspective). In Lagrangian mechanics and calculus of variations people take the partial derivative of the Lagrangian $L$ with respect to $\dot q$, where $\dot q$ itself is the derivative of the coordinate $q$ with respect to time. That's crazy. The summation convention, e.g., that ${\Gamma^{ij}}_j$ actually means $\sum_j {\Gamma^{ij}}_j$, is useful but very hard to get used to. In category theory I wish people sometimes used any notation as opposed to nameless arrows which are introduced in accompanying text as "the evident arrow". Physicists will hate me for this, but I never liked Einstein's summation convention, nor the famous bra ($\langle\phi|$) and ket ($|\psi\rangle$) notation. Both notations make easy things look unnecessarily complicated, and especially the bra-ket notation is no fun to use in LaTeX. My candidate would be the (internal) direct sum of subspaces $U \oplus V$ in linear algebra. As an operator it is equivalent to sum but with the side effect of implying that $U \cap V = \lbrace 0\rbrace$. Whenever I had a chance to teach linear algebra I found this terribly confusing for students. I think composition of arrows $f:X\to Y$ and $g:Y\to Z$ should be written $fg$ not $gf$. 
First of all it would make the notation $\hom(X,Y)\to\hom(Y,Z)\to \hom(X,Z)$ much more natural: $\hom(E,X)$ should be a left $\hom(E,E)$ module because $E$ is on the left :) Secondly, diagrams are written from left to right (even stronger: almost anything in the western world is written left to right). And I think the strange $(-1)$ needed when shifting complexes is an effect of this twisted notation. The notation $]a,b[$ for open intervals and its ilk. Sorry, Bourbaki. Writing a finite field of size $q$ as $\mathrm{GF}(q)$ instead of as $\mathbf{F}_q$ always rubbed me the wrong way. I know where it comes from (Galois Field), and I think it is still widely used in computer science, and maybe in some allied areas of discrete math, but I still dislike it. As Trevor Wooley used to always say in class, "Vinogradov's notation sucks... the constants away." For those who don't know, Vinogradov's notation in this context is $f(x)\ll g(x)$ meaning $f(x) = O(g(x))$ (if you prefer big-O notation, that is). I rather dislike the notation $$\int_{\Omega}f(x)\,\mu(dx)$$ myself. I realize that just as the integral sign is a generalized summation sign, the $dx$ in $\mu(dx)$ would stand for some small measurable set of which you take the measure, but it still rubs me the wrong way. Is it only because I was brought up with the $\int\cdots\,d\mu(x)$ notation? The latter nicely generalizes the notation for the Stieltjes integral at least. I get very frustrated when an author or speaker writes "Let $X\colon= A\sqcup B$..." to mean: $A$ and $B$ are disjoint sets (in whatever the appropriate universe is), and let $X\colon= A\cup B$. If they just meant "form the disjoint union of $A$ and $B$" this would be fine. But I've seen speakers later use the fact that $A$ and $B$ are disjoint, which was never stated anywhere except as above. You should never hide an assumption implicitly in your notation. The use of squared brackets $\left[...\right]$ for anything. 
It's not bad per se, but unfortunately it is used both as a substitute for $\left(...\right)$ and as a notation for the floor function. And there are cases when it takes a while to figure out which of these is meant - I'm not making this up. The word "character" meaning: a 1-dimensional representation, a representation, a trace form of a representation, a formal linear combination of representations, a formal linear combination of trace forms of representations. The word "adjoint", and the corresponding notation $A\mapsto A^{\ast}$, having two completely unrelated meanings. The term "symplectic group" used to mean the group $U(n,{\mathbb H})$. It's as if people called $U(n)$ and $GL(n,{\mathbb R})$ by some single name. My personal pet peeve of notation HAS to be algebraists writing functions on the right a la Herstein's "Topics in Algebra". I don't know why they do it when everyone else doesn't. I think one of them got up one day and decided they wanted to be cooler than everyone else, seriously... I don't like (but maybe for a bad reason) the notation $F\vdash G$ for "$F$ is left adjoint to $G$". Any comment? A cute idea, but for which I have yet to find supporters, is D. G. Northcott's notation (used at least in [Northcott, D. G. A first course of homological algebra. Cambridge University Press, London, 1973. xi+206 pp. MR0323867]) for maps in a commutative diagram, which consists in enumerating the names of the objects placed at the vertices along the way of the composition. Thus, if there is only one map in sight from $M$ to $N$, he writes it simply $MN$, so he has formulas looking like $$A'A(ABB'') = A'ABB'' = A'B'BB'' = 0.$$ He also writes maps on the right, so his $$xMN=0$$ means that the image of $x$ under the map from $M$ to $N$ is zero. I would not say this is among the worst notations ever, though. Students have big difficulties when first confronted with the $o(\cdot)$ and $O(\cdot)$ notation. 
The term $o(x^3)$, e.g., does not denote a certain function evaluated at $x^3$, but a function of $x$, defined by the context, that converges to zero when divided by $x^3$. I have struggled with 'dx'. I've spent years trying to study every different approach to calculus that I could find to try and make sense of it. I read about the limit definitions in my first book, vector calculus with them as pullbacks of linear transformations or flows/flux, differential forms from the bridge project, k-forms, nonstandard analysis which enlarges $\mathbb{R}$ to give you infinitesimals (and unbounded numbers) but the same first-order properties and lets the integral be defined as a sum, constructive analysis using a monad to take the closure of the rationals to give the reals... but I am still just as confused as ever. I understand that the mathematical notation doesn't have a compositional semantics, but I still don't really get it - one of the problems is that, despite not really understanding it or having any abstract definition of it, I can still get correct answers, and I really hope this doesn't become a theme as I study more topics in mathematics. $p < q$ as in "the forcing condition $p$ is stronger than $q$". I hate the shortcut $ab$ for $a\cdot b$. Everyone gets used to it, BUT it creates a very deep problem with all other notation; say, you never can be sure what $f(x+y)$ or $2\!\tfrac23$ might be... Also, in modern mathematics people do not multiply things too often, so it does not make sense to have such a shortcut. Yet the shortcut $x^n$ is a really bad one. One cannot use upper indexes after this. It would be easy to write $x^{\cdot n}$ instead.
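Two of the complaints in the thread above, conflating a function $f$ with its value $f(x)$, and the $\sin^2(x)$ ambiguity, are distinctions that programming languages do force you to keep straight. A tiny illustration (the example is mine):

```python
def f(x):
    return x * x

g = f          # the function itself: a value you can pass around
y = f(3)       # the function applied to an argument: a number

print(type(g).__name__, y)

# The sin^2(x) ambiguity from the thread: composition vs pointwise power.
import math
compose_twice = math.sin(math.sin(1.0))   # "sin(sin(x))" reading
square        = math.sin(1.0) ** 2        # "(sin(x))^2" reading
print(compose_twice == square)            # the two readings disagree
```

In mathematical notation both readings are written $\sin^2(x)$ somewhere in the literature; in code the two are syntactically distinct expressions, which is exactly the point the answer makes.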
1. What is a geometric sequence? 2. How is the common ratio of a geometric sequence found? 3. What is the procedure for determining whether a sequence is geometric? 4. What is the difference between an arithmetic sequence and a geometric sequence? 5. Describe how exponential functions and geometric sequences are similar. How are they different? For the following exercises, find the common ratio for the geometric sequence. 6. [latex]1,3,9,27,81,…[/latex]. 7. [latex]-0.125,0.25,-0.5,1,-2,…[/latex]. 8. [latex]-2,-\frac{1}{2},-\frac{1}{8},-\frac{1}{32},-\frac{1}{128},…[/latex]. For the following exercises, determine whether the sequence is geometric. If so, find the common ratio. 9. [latex]-6,-12,-24,-48,-96,…[/latex]. 10. [latex]5,5.2,5.4,5.6,5.8,…[/latex]. 11. [latex]-1,\frac{1}{2},-\frac{1}{4},\frac{1}{8},-\frac{1}{16},…[/latex]. 12. [latex]6,8,11,15,20,…[/latex]. 13. [latex]0.8,4,20,100,500,…[/latex]. For the following exercises, write the first five terms of the geometric sequence, given the first term and common ratio. 14. [latex]\begin{array}{cc}{a}_{1}=8,& r=0.3\end{array}[/latex] 15. [latex]\begin{array}{cc}{a}_{1}=5,& r=\frac{1}{5}\end{array}[/latex] For the following exercises, write the first five terms of the geometric sequence, given any two terms. 16. [latex]\begin{array}{cc}{a}_{7}=64,& {a}_{10}=512\end{array}[/latex] 17. [latex]\begin{array}{cc}{a}_{6}=25,& {a}_{8}=6.25\end{array}[/latex] For the following exercises, find the specified term for the geometric sequence, given the first term and common ratio. 18. The first term is [latex]2[/latex], and the common ratio is [latex]3[/latex]. Find the 5th term. 19. The first term is [latex]16[/latex] and the common ratio is [latex]-\frac{1}{3}[/latex]. Find the 4th term. For the following exercises, find the specified term for the geometric sequence, given the first four terms. 20. [latex]{a}_{n}=\left\{-1,2,-4,8,…\right\}[/latex]. Find [latex]{a}_{12}[/latex]. 21.
[latex]{a}_{n}=\left\{-2,\frac{2}{3},-\frac{2}{9},\frac{2}{27},…\right\}[/latex]. Find [latex]{a}_{7}[/latex]. For the following exercises, write the first five terms of the geometric sequence. 22. [latex]\begin{array}{cc}{a}_{1}=-486,& {a}_{n}=-\frac{1}{3}{a}_{n - 1}\end{array}[/latex] 23. [latex]\begin{array}{cc}{a}_{1}=7,& {a}_{n}=0.2{a}_{n - 1}\end{array}[/latex] For the following exercises, write a recursive formula for each geometric sequence. 24. [latex]{a}_{n}=\left\{-1,5,-25,125,…\right\}[/latex] 25. [latex]{a}_{n}=\left\{-32,-16,-8,-4,…\right\}[/latex] 26. [latex]{a}_{n}=\left\{14,56,224,896,…\right\}[/latex] 27. [latex]{a}_{n}=\left\{10,-3,0.9,-0.27,…\right\}[/latex] 28. [latex]{a}_{n}=\left\{0.61,1.83,5.49,16.47,…\right\}[/latex] 29. [latex]{a}_{n}=\left\{\frac{3}{5},\frac{1}{10},\frac{1}{60},\frac{1}{360},…\right\}[/latex] 30. [latex]{a}_{n}=\left\{-2,\frac{4}{3},-\frac{8}{9},\frac{16}{27},…\right\}[/latex] 31. [latex]{a}_{n}=\left\{\frac{1}{512},-\frac{1}{128},\frac{1}{32},-\frac{1}{8},…\right\}[/latex] For the following exercises, write the first five terms of the geometric sequence. 32. [latex]{a}_{n}=-4\cdot {5}^{n - 1}[/latex] 33. [latex]{a}_{n}=12\cdot {\left(-\frac{1}{2}\right)}^{n - 1}[/latex] For the following exercises, write an explicit formula for each geometric sequence. 34. [latex]{a}_{n}=\left\{-2,-4,-8,-16,…\right\}[/latex] 35. [latex]{a}_{n}=\left\{1,3,9,27,…\right\}[/latex] 36. [latex]{a}_{n}=\left\{-4,-12,-36,-108,…\right\}[/latex] 37. [latex]{a}_{n}=\left\{0.8,-4,20,-100,…\right\}[/latex] 38. [latex]{a}_{n}=\left\{-1.25,-5,-20,-80,…\right\}[/latex] 39. [latex]{a}_{n}=\left\{-1,-\frac{4}{5},-\frac{16}{25},-\frac{64}{125},…\right\}[/latex] 40. [latex]{a}_{n}=\left\{2,\frac{1}{3},\frac{1}{18},\frac{1}{108},…\right\}[/latex] 41. [latex]{a}_{n}=\left\{3,-1,\frac{1}{3},-\frac{1}{9},…\right\}[/latex] For the following exercises, find the specified term for the geometric sequence given. 42.
Let [latex]{a}_{1}=4[/latex], [latex]{a}_{n}=-3{a}_{n - 1}[/latex]. Find [latex]{a}_{8}[/latex]. 43. Let [latex]{a}_{n}=-{\left(-\frac{1}{3}\right)}^{n - 1}[/latex]. Find [latex]{a}_{12}[/latex]. For the following exercises, find the number of terms in the given finite geometric sequence. 44. [latex]{a}_{n}=\left\{-1,3,-9,…,2187\right\}[/latex] 45. [latex]{a}_{n}=\left\{2,1,\frac{1}{2},…,\frac{1}{1024}\right\}[/latex] For the following exercises, determine whether the graph shown represents a geometric sequence. 46. 47. For the following exercises, use the information provided to graph the first five terms of the geometric sequence. 48. [latex]\begin{array}{cc}{a}_{1}=1,& r=\frac{1}{2}\end{array}[/latex] 49. [latex]\begin{array}{cc}{a}_{1}=3,& {a}_{n}=2{a}_{n - 1}\end{array}[/latex] 50. [latex]{a}_{n}=27\cdot {0.3}^{n - 1}[/latex] 51. Use recursive formulas to give two examples of geometric sequences whose 3rd terms are [latex]200[/latex]. 52. Use explicit formulas to give two examples of geometric sequences whose 7th terms are [latex]1024[/latex]. 53. Find the 5th term of the geometric sequence [latex]\left\{b,4b,16b,…\right\}[/latex]. 54. Find the 7th term of the geometric sequence [latex]\left\{64a\left(-b\right),32a\left(-3b\right),16a\left(-9b\right),…\right\}[/latex]. 55. At which term does the sequence [latex]\left\{10,12,14.4,17.28,…\right\}[/latex] exceed [latex]100[/latex]? 56. At which term does the sequence [latex]\left\{\frac{1}{2187},\frac{1}{729},\frac{1}{243},\frac{1}{81},…\right\}[/latex] begin to have integer values? 57. For which term does the geometric sequence [latex]{a}_{n}=-36{\left(\frac{2}{3}\right)}^{n - 1}[/latex] first have a non-integer value? 58. Use the recursive formula to write a geometric sequence whose common ratio is an integer. Show the first four terms, and then find the 10th term. 59. Use the explicit formula to write a geometric sequence whose common ratio is a decimal number between 0 and 1.
Show the first four terms, and then find the 8th term. 60. Is it possible for a sequence to be both arithmetic and geometric? If so, give an example.
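For checking answers to exercises like these, the defining formulas — the common ratio of consecutive terms and the explicit form [latex]{a}_{n}={a}_{1}\cdot {r}^{n-1}[/latex] — fit in a few lines of code (a sketch in Python; the function names are my own):

```python
def common_ratio(seq):
    """Ratio r of a geometric sequence, taken from its first two terms."""
    return seq[1] / seq[0]

def is_geometric(seq, tol=1e-9):
    """A sequence is geometric when every consecutive ratio equals r."""
    r = common_ratio(seq)
    return all(abs(b / a - r) < tol for a, b in zip(seq, seq[1:]))

def nth_term(a1, r, n):
    """Explicit formula: a_n = a_1 * r**(n - 1)."""
    return a1 * r ** (n - 1)

print(common_ratio([1, 3, 9, 27, 81]))   # 3.0  (exercise 6)
print(is_geometric([6, 8, 11, 15, 20]))  # False (exercise 12)
print(nth_term(2, 3, 5))                 # 162  (exercise 18)
```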
CryptoDB Paper: Searchable Encryption with Optimal Locality: Achieving Sublogarithmic Read Efficiency Authors: Ioannis Demertzis Dimitrios Papadopoulos Charalampos Papamanthou Download: DOI: 10.1007/978-3-319-96884-1_13 Presentation: Slides Conference: CRYPTO 2018 Abstract: We propose the first linear-space searchable encryption scheme with constant locality and sublogarithmic read efficiency, strictly improving the previously best known read efficiency bound (Asharov et al., STOC 2016) from $\varTheta(\log N \log\log N)$ to $O(\log^{\gamma} N)$ where $\gamma = \frac{2}{3} + \delta$ for any fixed $\delta > 0$ and where $N$ is the number of keyword-document pairs. Our scheme employs four different allocation algorithms for storing the keyword lists, depending on the size of the list considered each time. For our construction we develop (i) new probability bounds for the offline two-choice allocation problem; and (ii) a new I/O-efficient oblivious RAM with $\tilde{O}(n^{1/3})$ bandwidth overhead and zero failure probability, both of which can be of independent interest. Video from CRYPTO 2018 BibTeX
@inproceedings{crypto-2018-28845,
  title={Searchable Encryption with Optimal Locality: Achieving Sublogarithmic Read Efficiency},
  booktitle={Advances in Cryptology – CRYPTO 2018},
  series={Lecture Notes in Computer Science},
  publisher={Springer},
  volume={10991},
  pages={371-406},
  doi={10.1007/978-3-319-96884-1_13},
  author={Ioannis Demertzis and Dimitrios Papadopoulos and Charalampos Papamanthou},
  year=2018
}
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Bootstrap Introduction In this course, we will rely on a method called the Bootstrap to approximate the sampling distribution of our statistics, instead of relying so directly on the Central Limit Theorem. The name bootstrap shows up a lot these days, and I’m positive you have used this word to describe something different from what we’ll talk about here. Our Bootstrap has nothing to do with compilers nor CSS libraries. After estimating population parameters, a natural next question is: how certain are we in our estimate? By approximating sampling distributions, the (statistical) Bootstrap will be our primary means of quantifying uncertainty in our estimates. Such quantifications will primarily come in the form of confidence intervals. Sampling Distributions The Bootstrap is a method to approximate the sampling distribution of an arbitrary statistic. The sampling distribution of a statistic is to be thought of as the collection of statistics you’d have if you repeatedly resampled the population and calculated the statistic of interest on each new sample. From this collection of resampled statistics we can estimate the standard deviation of our estimator. The plot below attempts to visualize this idea, albeit for a finite number of resamples R. Different from the last time we saw a visualization of the sampling distribution, this time we have no data. By sampling from the (assumed) population, instead of from our original sample, our code is truer to the theory of sampling distributions, although further away from applied statistics. Compare the code below to the example found in the Section Normal Distribution to see what the difference between resampling from data and resampling from an assumed population looks like in terms of code.
import numpy as np
import pandas as pd
import bplot as bp
from scipy.stats import norm as normal

bp.LaTeX()
bp.dpi(300)

R = 1001
N = 99
mus = np.full((R,), np.nan)
for r in range(R):
    mus[r] = np.random.gamma(2, 1/2, N).mean()  # sample directly from Gamma(2, 2)

bp.density(mus)
bp.rug(mus)
bp.labels(x='$\mu$', y='Density', size=18)

<matplotlib.axes._subplots.AxesSubplot at 0x11c4c8f98>

The plot above represents a finite approximation to the sampling distribution of the sample mean coming from a $\text{Gamma}(2, 2)$ population. Despite the fact that a $\text{Gamma}(2, 2)$ probability density function is right skewed, we see that the sampling distribution is shaped like the probability density function for a Normal distribution, centered at $1$ with standard deviation $\mathbb{D}(X)/\sqrt{N} = \left(1/\sqrt{2}\right)/\sqrt{99}$. The normality of the sample means, each computed from a Gamma sample, is due to the Central Limit Theorem. Percentiles Another informative attribute of random variables is the percentile. The $p$% percentile $\pi_p$ puts $p$% of the area under the random variable’s probability density function to the left of $\pi_p$. A picture will help. Consider a standard normal distribution, where $\pi_{.84} \approx 1$. Further, $\pi_{0.5} = 0$ for the standard normal distribution since the $\text{Normal}(0, 1)$ distribution is centered at, and perfectly symmetric about, $0$. The more common name for $\pi_{0.5}$ is the median.

import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 101)
fx = normal.pdf(x)
bp.curve(x, fx)
bp.line_v(1, 0, normal.pdf(1))
bp.labels(x='x', y='Density', size=18)
plt.text(-1, 0.1, '$\sim 0.84$', size=16)
plt.text(1.05, 0.01, '$\sim 0.16$', size=16)
plt.text(1, -0.075, '$\pi_{.84}$', size=16)

Text(1, -0.075, '$\\pi_{.84}$')

Python will calculate these values for us with the following code.
print(normal.ppf(.84))
print(normal.ppf(0.5))

0.994457883209753
0.0

When working with data, instead of a probability density function, find a sample percentile by first sorting the data into ascending order. With sorted data, find the value, not necessarily within the dataset, that puts approximately $p$% of the data to the left of the value of interest. Since Python will more often than not do these calculations for us, we just need to remember that it will interpolate between any two numbers in a dataset so as to best, in some sense, apply the definition of percentile to data. Uncertainty in Estimates We are slowly changing our thinking about the sample mean. Before this class, most people would think of the sample mean as a single quantity. Now, we are to think of the sample mean as one of potentially many possible values we could get by resampling the population and performing the same calculation on each new sample. Each new sample mean would provide a new estimate of the population mean, but none would be exactly right. How can we account for the uncertainty in our estimates? A confidence interval is the interval analogue of the sample mean: a lower and an upper bound, two numbers calculated from data used to estimate the population parameter of interest. The word confidence suggests that we want this random interval to capture the parameter of interest with some sort of degree of accuracy under repeated sampling. We next blend the sampling distribution, as estimated by the Bootstrap, with percentiles to build a confidence interval for a parameter. Assume you are interested in the mean of the $\text{Gamma}(2, 2)$ distribution above. Since we know the population parameters $\alpha = 2$ and $\beta = 2$, we could certainly do the math and calculate the mean by hand. Instead, let’s use $\alpha = 2$ and $\beta = 2$ to generate new data, but then pretend we don’t know these parameters when calculating a confidence interval.
This will allow us to check our method’s accuracy. Above, we approximated the sampling distribution of the sample mean of a $\text{Gamma}(2, 2)$ distribution by repeatedly resampling from the assumed known population. Let’s take a step closer to applying the Bootstrap and pretend that we have only one sample of data from this $\text{Gamma}(2, 2)$ population. Then we’ll estimate the sampling distribution via the Bootstrap. The vector of sample means, mus, allows us to estimate two percentiles, $\pi_{0.025}$ and $\pi_{0.975}$. The estimated percentiles will form our $95$% confidence interval.

R = 1001
N = 99
d = np.random.gamma(2, 1/2, N)
mus = np.full((R,), np.nan)
for r in range(R):
    idx = np.random.choice(N, N)
    mus[r] = d[idx].mean()

np.round(np.percentile(mus, [2.5, 97.5]), 2)

array([0.79, 1.03])

For me, the interval is $(0.79, 1.03)$. The Bootstrap is inherently a stochastic procedure, so if you rerun the code above you might get slightly different numbers. Since $\mathbb{E}(X) = \frac{\alpha}{\beta} = 2/2 = 1$, we see that this interval is indeed reasonably accurate. Within the framework of the Bootstrap, the only way to increase accuracy is to increase the sample size. The only way to stabilize the randomness seen by rerunning the code above, due to the random sampling, is to increase R. Try increasing both N and R to see the effects. Pay attention to accuracy, how close the interval is to the true mean $1$, and to the precision, how many decimal places stay the same after each run. Take care to separate the ideas of accuracy and precision in your mind. To interpret this interval, we say: we are $95$% confident that the true population mean is between $0.79$ and $1.03$. You should memorize the structure of this phrase. Notice that we used data to make a statement about the population parameter of interest. This is the crux of statistics: identify a parameter of interest, collect data about it, estimate the parameter, and quantify the uncertainty in your estimate.
If we were to repeat this analysis an infinite number of times, $95$% of the intervals created would include the true population mean. This is the operational definition of percent confidence. Remarkably, this procedure guarantees that (approximately) $95$% of all confidence intervals made will capture the true population mean. To apply this procedure to real data, we would repeatedly resample, with replacement and with equal probability, from the original dataset. Next we continue with the example about birth weights of animals from the Order Carnivora found in Section Assumed Normality. Example

carnivora = pd.read_csv("https://raw.githubusercontent.com/roualdes/data/master/carnivora.csv")
bw = carnivora["BW"].dropna()

N = bw.size  # sample size
R = 1001  # number of resamples
mus = np.full((R,), np.nan)  # this is called what?
for r in range(R):
    idx = np.random.choice(bw.index, N)  # resample index
    mus[r] = bw[idx].mean()  # index a vector with a vector, then take the mean

np.round(np.percentile(mus, [5, 95]), 2)  # 90% confidence interval

array([182.31, 324.93])

We are $90$% confident that the population mean birth weight of animals from the Order Carnivora is between $182$ and $325$ grams. Again, you might get slightly different numbers if you rerun the code above; increasing R should stabilize this issue, at the cost of more computation.
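The whole recipe above — resample with replacement, compute the statistic on each resample, read off the percentiles — can be condensed into one small helper (a sketch with numpy; the name bootstrap_ci is my own):

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, R=1001, level=0.95):
    """Percentile-bootstrap confidence interval for stat(data)."""
    data = np.asarray(data)
    N = data.size
    stats = np.empty(R)
    for r in range(R):
        idx = np.random.choice(N, N)   # resample indices, with replacement
        stats[r] = stat(data[idx])
    alpha = (1 - level) / 2
    return np.percentile(stats, [100 * alpha, 100 * (1 - alpha)])

data = np.random.gamma(2, 1/2, 99)   # stand-in for a real dataset
lo, hi = bootstrap_ci(data)
print(lo, hi)
```

Since each bootstrap mean lies between the smallest and largest data value, the interval always lands inside the range of the data.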
In Part VI, we saw an outline of the Pinocchio zk-SNARK. We were missing two things – an HH that supports both addition and multiplication that is needed for the verifier’s checks, and a transition from an interactive protocol to a non-interactive proof system. In this post we will see that using elliptic curves we can obtain a limited, but sufficient for our purposes, form of HH that supports multiplication. We will then show that this limited HH also suffices to convert our protocol to the desired non-interactive system. We begin by introducing elliptic curves and explaining how they give us the necessary HH. Elliptic curves and their pairings Assume :math:`p` is a prime larger than :math:`3`, and take some :math:`u,v\in\mathbb{F}_p` such that :math:`4u^3+27v^2\neq 0`. We look at the equation :math:`Y^2=X^3 +u\cdot X +v` An elliptic curve :math:`{\mathcal C}` is the set of points :math:`(x,y)` [1] that satisfy such an equation. These curves give us an interesting way to construct groups. The group elements will be the points :math:`(x,y)\in \mathbb{F}^2_p` that are on the curve, i.e., that satisfy the equation, together with a special point :math:`{\mathcal O}`, that for technical reasons is sometimes referred to as the “point at infinity”, and serves as the identity element, i.e. the zero of the group. The question now is how we add two points :math:`P=(x_1,y_1),Q=(x_2,y_2)` to get a third. The addition rule is derived from a somewhat abstract object called the divisor class group of the curve. For our purposes, all you have to know about this divisor class group is that it imposes the following constraint on the definition of addition: the sum of points on any line must be zero, i.e., :math:`{\mathcal O}`. Let’s see how the addition rule is derived from this constraint. Look at a vertical line, defined by an equation of the form :math:`X=c`. Suppose this line intersects the curve at a point :math:`P=(x_1,y_1)`.
Because the curve equation is of the form :math:`Y^2=f(X)`, if :math:`(x_1,y_1)` is on the curve, so is the point :math:`Q:=(x_1,-y_1)`. Moreover, since it’s a vertical line and the curve equation is of degree two in :math:`Y`, we can be sure these are the only points where the line and curve intersect. Thus, we must have :math:`P+Q={\mathcal O}`, which means :math:`P=-Q`; that is, :math:`Q` is the inverse of :math:`P` in the group. Now let us look at points :math:`P` and :math:`Q` that have different first coordinates – that is, :math:`x_1\neq x_2` – and see how to add them. We pass a line through :math:`P` and :math:`Q`. Since the curve is defined by a degree three polynomial in :math:`X` and already intersects this (non-vertical) line at two points, it is guaranteed to intersect the line at a third point, which we denote :math:`R=(x,y)`, and no other points. So we must have :math:`P+Q+R={\mathcal O}`, which means :math:`P+Q=-R`; and we know by now that :math:`-R` is obtained from :math:`R` by flipping the second coordinate from :math:`y` to :math:`-y`. Thus, we have derived the addition rule for our group: given points :math:`P` and :math:`Q`, pass a line through them, and then take the “mirror” point of the third intersection point of the line as the addition result. [2] This group is usually called :math:`{\mathcal C}(\mathbb{F}_p)` – as it consists of points on the curve :math:`{\mathcal C}` with coordinates in :math:`\mathbb{F}_p` – but let’s denote it by :math:`G_1` from now on. Assume for simplicity that the number of elements in :math:`G_1` is a prime number :math:`r`, different from :math:`p`. This is often the case, for example in the curve that Zcash is currently using. In this case, any element :math:`g\in G_1` different from :math:`{\mathcal O}` generates :math:`G_1`. The smallest integer :math:`k` such that :math:`r` divides :math:`p^k-1` is called the embedding degree of the curve.
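The chord-and-tangent rule derived above translates directly into code. A toy sketch (my own illustration, not from the post): the curve :math:`Y^2=X^3+2X+2` over :math:`\mathbb{F}_{17}` is a standard textbook example, far too small for cryptography, with :math:`{\mathcal O}` represented by None:

```python
def ec_add(P, Q, p=17, u=2):
    """Add points on y^2 = x^3 + u*x + v over F_p by the chord/tangent rule.

    The identity O is represented by None."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                       # P + (-P) = O (vertical line)
    if P == Q:
        s = (3 * x1 * x1 + u) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p                         # mirror the third intersection
    return (x3, y3)

P = (5, 1)               # a point on y^2 = x^3 + 2x + 2 over F_17
print(ec_add(P, P))      # (6, 3)
```

On this curve the group has 19 elements, so adding P to itself 19 times returns O, matching the claim that any non-identity element generates the group when the order is prime. (The modular inverse via pow(x, -1, p) needs Python 3.8+.)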
It is conjectured that when :math:`k` is not too small, say, at least :math:`6`, then the discrete logarithm problem in :math:`G_1`, i.e. finding :math:`\alpha` from :math:`g` and :math:`\alpha \cdot g`, is very hard. (In BN curves [3] currently used by Zcash :math:`k=12`.) The multiplicative group of :math:`\mathbb{F}_{p^k}` contains a subgroup of order :math:`r` that we denote :math:`G_T`. We can look at curve points with coordinates in :math:`\mathbb{F}_{p^k}` and not just in :math:`\mathbb{F}_p`. Under the same addition rule, these points also form a group together with :math:`{\mathcal O}` called :math:`{\mathcal C}(\mathbb{F}_{p^k})`. Note that :math:`{\mathcal C}(\mathbb{F}_{p^k})` clearly contains :math:`G_1`. Besides :math:`G_1`, :math:`{\mathcal C}(\mathbb{F}_{p^k})` will contain an additional subgroup :math:`G_2` of order :math:`r` (in fact, :math:`r-1` additional subgroups of order :math:`r`). Fix generators :math:`g\in G_1,h\in G_2`. It turns out that there is an efficient map, called the Tate reduced pairing, taking a pair of elements from :math:`G_1` and :math:`G_2` into an element of :math:`G_T`, such that :math:`\mathrm{Tate}(g,h)=\mathbf{g}` for a generator :math:`\mathbf{g}` of :math:`G_T`, and given a pair of elements :math:`a,b \in \mathbb{F}_r`, we have :math:`\mathrm{Tate}(a\cdot g,b\cdot h)=\mathbf{g}^{ab}`. Defining :math:`\mathrm{Tate}` is a bit beyond the scope of this series, and relies on concepts from algebraic geometry, most prominently that of divisors. Here’s a sketch of :math:`\mathrm{Tate}`’s definition: [4] For :math:`a\in\mathbb{F}_p` the polynomial :math:`(X-a)^r` has a zero of multiplicity :math:`r` at the point :math:`a`, and no other zeroes. For a point :math:`P\in G_1`, divisors enable us to prove there exists a function :math:`f_P` from the curve to :math:`\mathbb{F}_p` that also has, in some precise sense, a zero of multiplicity :math:`r` at :math:`P` and no other zeroes. 
:math:`\mathrm{Tate}(P,Q)` is then defined as :math:`f_P(Q)^{(p^k-1)/r}`. It may not seem at all clear what this definition has to do with the stated properties, and indeed the proof that :math:`\mathrm{Tate}` has these properties is quite complex. Defining :math:`E_1(x) := x\cdot g, E_2(x):=x\cdot h, E(x):=x\cdot \mathbf{g}`, we get a weak version of an HH that supports both addition and multiplication: :math:`E_1,E_2,E` are HHs that support addition, and given the hidings :math:`E_1(x)`, :math:`E_2(y)` we can compute :math:`E(xy)`. In other words, if we have the ”right” hidings of :math:`x` and :math:`y` we can get a (different) hiding of :math:`xy`. But for example, if we had hidings of :math:`x,y,z` we couldn’t get a hiding of :math:`xyz`. We move on to discussing non-interactive proof systems. We begin by explaining exactly what we mean by ‘non-interactive’. Non-interactive proofs in the common reference string model The strongest and most intuitive notion of a non-interactive proof is probably the following. In order to prove a certain claim, a prover broadcasts a single message to all parties, with no prior communication of any kind; and anyone reading this message would be convinced of the prover’s claim. This can be shown to be impossible in most cases. [5] A slightly relaxed notion of non-interactive proof is to allow a common reference string (CRS). In the CRS model, before any proofs are constructed, there is a setup phase where a string is constructed according to a certain randomized process and broadcast to all parties. This string is called the CRS and is then used to help construct and verify proofs. The assumption is that the randomness used in the creation of the CRS is not known to any party – as knowledge of this randomness might enable constructing proofs of false claims. We will explain how in the CRS model we can convert the verifiable blind evaluation protocol of Part IV into a non-interactive proof system. 
As the protocol of Part VI consisted of a few such subprotocols it can be turned into a non-interactive proof system in a similar way. A non-interactive evaluation protocol The non-interactive version of the evaluation protocol basically consists of publishing Bob’s first message as the CRS. Recall that the purpose of the protocol is to obtain the hiding :math:`E(P(s))` of Alice’s polynomial :math:`P` at a randomly chosen :math:`s\in\mathbb{F}_r`. Setup: Random :math:`\alpha\in \mathbb{F}_r^*,s\in\mathbb{F}_r` are chosen and the CRS: :math:`(E_1(1),E_1(s),\ldots,E_1(s^d),` :math:`E_2(\alpha),E_2(\alpha s),\ldots,E_2(\alpha s^d))` is published. Proof: Alice computes :math:`a=E_1(P(s))` and :math:`b=E_2(\alpha P(s))` using the elements of the CRS, and the fact that :math:`E_1` and :math:`E_2` support linear combinations. Verification: Fix the :math:`x,y\in \mathbb{F}_r` such that :math:`a=E_1(x)` and :math:`b=E_2(y)`. Bob computes :math:`E(\alpha x)=\mathrm{Tate}(E_1(x),E_2(\alpha))` and :math:`E(y)=\mathrm{Tate}(E_1(1),E_2(y))`, and checks that they are equal. (If they are equal it implies :math:`\alpha x =y`.) As explained in Part IV, Alice can only construct :math:`a,b` that will pass the verification check if :math:`a` is the hiding of :math:`P(s)` for a polynomial :math:`P` of degree :math:`d` known to her. The main difference here is that Bob does not need to know :math:`\alpha` for the verification check, as he can use the pairing function to compute :math:`E(\alpha x)` only from :math:`E_1(x)` and :math:`E_2(\alpha)`. Thus, he does not need to construct and send the first message himself, and this message can simply be fixed in the CRS. [1] You may ask ‘The set of points from where?’. We mean the set of points with coordinates in the algebraic closure of :math:`\mathbb{F}_p`. Also, the curve has an affine and projective version.
When we are referring to the projective version we also include the “point at infinity” :math:`{\mathcal O}` as an element of the curve. [2] We did not address the case of adding :math:`P` to itself. This is done by using the line that is tangent to the curve at :math:`P`, and taking :math:`R` to be the second intersection point of this line with the curve. [3] https://eprint.iacr.org/2005/133.pdf [4] The pairing Zcash actually uses is the optimal Ate pairing, which is based on the Tate reduced pairing, and can be computed more efficiently than :math:`\mathrm{Tate}`. [5] In computational complexity theory terms, one can show that only languages in BPP have non-interactive zero-knowledge proofs in this strong sense. The type of claims we need to prove in Zcash transactions, e.g. ‘I know a hash preimage of this string’, correspond to the complexity class NP which is believed to be much larger than BPP. [6] The images used were taken from the following article and are used under the creative commons license.
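The notion of an additively homomorphic hiding used throughout the post can itself be sketched with plain modular exponentiation, :math:`E(x) = g^x \bmod p` (a toy with tiny parameters of my own choosing; real constructions use the elliptic-curve groups above): anyone holding the hidings of :math:`x` and :math:`y` can compute the hiding of :math:`x+y`, yet recovering :math:`x` from :math:`E(x)` is a discrete-log problem.

```python
p, g = 23, 5   # toy parameters: g = 5 generates the multiplicative group mod 23

def E(x):
    """Additively homomorphic hiding: E(x) = g^x mod p."""
    return pow(g, x, p)

# Supports addition: E(x) * E(y) = g^(x+y) = E(x + y)  (mod p)
assert E(3) * E(4) % p == E(3 + 4)

print(E(3), E(4), E(7))
```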
In math mode one can do $\hbar$, which produces an h with a little line through the top of it. I want to do the same thing, except with the letter d instead. Is there a generalization of $\hbar$ that works for other letters besides just h? You can create a specific command \dbar for this purpose.

\newcommand{\dbar}{d\hspace*{-0.08em}\bar{}\hspace*{0.1em}}

Full Code

\documentclass{article}
\newcommand{\dbar}{d\hspace*{-0.08em}\bar{}\hspace*{0.1em}}
\begin{document}
$\hbar$, $\dbar$.
\end{document}

produces

There is code in the Comprehensive List of Symbols, but it's wrong: what's suggested is

\newcommand{\dbar}{{\mathchar'26\mkern-12mu d}}

but one needs to compensate the amount of backup, which is larger than the width of the bar by 3mu:

\documentclass{article}
\newcommand{\dbar}{{\mkern3mu\mathchar'26\mkern-12mu d}}
\begin{document}
$32\lambda^2 \dbar_w$
$32\lambda^2 d_w$
$32\lambda^2 \hat{d}_w$
\end{document}

The fact that the width is 9mu is confirmed by the definition of \hbar in Plain TeX:

\hbar:macro:->{\mathchar '26\mkern -9muh}

Of course, different math fonts may need different amounts of spacing. A possibly better definition is

\newcommand{\dbar}{{d\mkern-7mu\mathchar'26\mkern-2mu}}

so that the bar doesn't protrude as much on the right:

\documentclass{article}
\newcommand{\dbar}{{d\mkern-7mu\mathchar'26\mkern-2mu}}
\begin{document}
$d\dbar d$
$h\hbar h$
\end{document}

For PDFLaTeX: As recommended by Sigur. You should load the package lmodern as well, or the output will be pixelated without it.

% arara: pdflatex
\documentclass{article}
\usepackage{lmodern}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\begin{document}
in text \dj{} and math $\textit{\dj}$
\end{document}

For Lua- or XeLaTeX: The output is the same as above. You can use the unicode U+0111 or copy paste that symbol directly into your code.
% arara: lualatex
\documentclass{article}
\usepackage{fontspec}
\begin{document}
in text \symbol{"0111} and math $\textit{\symbol{"0111}}$
\end{document}

The package unicode-math does not contain this symbol yet. It just contains the unicode U+00F0 with the command $\matheth$ which could be an alternative. You can find fonts that support that symbol on your system by clicking here. Here are some font examples. Choose one and write your macro like

\newcommand*{\dbar}{{\fontspec{font_of_your_choice}\symbol{"0111}}}

% arara: lualatex
\documentclass{article}
\usepackage{fontspec}
\usepackage{booktabs}
\begin{document}
\begin{tabular}{ll}
\toprule
Font & Example\\
\midrule
Latin Modern & \symbol{"0111}\\
Code2000 & \setmainfont{Code2000.ttf}\symbol{"0111}\\
Comic Sans MS & \setmainfont{comic.ttf}\symbol{"0111}\\
Consolas & \setmainfont{consola.ttf}\symbol{"0111}\\
DejaVu Sans & \setmainfont{DejaVuSans.ttf}\symbol{"0111}\\
EB Garamond & \setmainfont{EB Garamond}\symbol{"0111}\\
Linux Libertine & \setmainfont{Linux Libertine O}\symbol{"0111}\\
Quivira & \setmainfont{quivira.otf}\symbol{"0111}\\
XITS & \setmainfont{xits-regular.otf}\symbol{"0111}\\
\bottomrule
\end{tabular}
\end{document}
Let $X_1, \dots, X_n$ be a random sample of size $n$ from the continuous distribution with pdf $f_X(x|\theta) = \frac{e^{-x}}{1-e^{-\theta}} I(x)_{[0,\theta]} I(\theta)_{(0, \infty)}$. (1) Find the maximum likelihood estimator for $\theta$. (2) Find the maximum likelihood estimator for the median, $\lambda$, of this distribution. For (1), I got $X_{(n)}$ because I thought if $\theta$ was minimized, it would maximize the function. For (2), I know I set up the distribution to solve for the value of the median. I think that once I find this value I will use it in relation to the MLE from (1) to find the MLE for (2). I got $m = \ln(2) - \ln(e^{-\theta} + 1)$ but this doesn't seem right and I am not sure where I am messing up. Any assistance is greatly appreciated.
Four of these evaluated a propensity for sharing with no guarantee of reciprocity, while four considered a mutual sharing arrangement. PAIRS metric scoring and weighting The total cooperative sustainability metric is the weighted sum of the identified potential impacts within each sector. Three questions determine the relative weighting by evaluating the economic importance, future risk, and geographic compatibility of partnerships within each sector. Several general questions address the social and political amicability of a partnership between the two communities. The formula for calculating the cooperative sustainability metric (CSM) is expressed in Eq. 1, where $i$ represents each of the five economic sectors. $$ \text{CSM} = \sum\limits_{i=1}^{5} \left(\text{Sector Sustainability}\right)_i + \text{General Amicability} $$ (1) The disparity in available data for quantifiable indicators determined that a normalization approach would be best. With responses to each question worth between 0 and 3 points, qualitative indicators can be evaluated alongside more precise quantitative measures. Three points are given to responses which indicated both a high degree of existing sustainability and a large potential for improvement. Two points were given to answers which indicated a moderate to low existing sustainability but a large potential for improvement. One point was given for responses indicating a high degree of existing sustainability with little to no foreseeable future improvement. No points were awarded to responses indicating both a low existing sustainability and little expected improvement. Each question is evaluated three times, once for each city independently, and once treating both cities as a single larger entity. The values assigned to the responses of the individual cities are averaged and used to normalize the combined city response.
Values >1 indicate that a combination or partnership of the cities demonstrates a greater potential for improved sustainability. The responses to the questions of each sector are normalized and weighted according to Eq. 2. $$ \text{Sector Sustainability} = \frac{\max\left( City_i, Combined \right)}{\frac{1}{n}\sum\nolimits_{i=1}^{n} City_i} \times W_f \quad (2) $$ In Eq. 2, the variables $n$ and $W_f$ represent the number of cities being compared and the sector weighting factor, respectively. The number of cities is nominally 2, but multicity partnerships are feasible as well. The relative importance of each sector is weighted by a factor which evaluates the importance of each sector to the cities in question. Each section of the cooperative sustainability metric begins with three true/false questions, a, b, and c, to determine the weighting factor for each sector as $W_f = 1 + 3 \times (\text{number of true answers to a, b, and c})$. As such, the weighting factor of each sector can vary from 1 to 10. The following examples are from the water portion of the metric.
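To make the two formulas concrete, here is a minimal Python sketch of the scoring described above. The function names, and the reading of Eq. 2's max as ranging over both the individual-city scores and the combined score, are my own assumptions, not part of the published metric:

```python
from typing import Dict, List

def sector_sustainability(city_scores: List[float], combined_score: float,
                          weight: float) -> float:
    # Eq. 2: best of the individual and combined scores, normalized by the
    # mean individual-city score, times the sector weighting factor W_f
    n = len(city_scores)
    mean_individual = sum(city_scores) / n
    return max(city_scores + [combined_score]) / mean_individual * weight

def cooperative_sustainability(sector_scores: Dict[str, float],
                               general_amicability: float) -> float:
    # Eq. 1: sum of the (already weighted) sector scores plus amicability
    return sum(sector_scores.values()) + general_amicability

# two cities scoring 2.0 each, combined score 3.0, weighting factor 4
s = sector_sustainability([2.0, 2.0], 3.0, weight=4.0)
assert s == 6.0
```

A normalized ratio above 1 before weighting (here 3.0 / 2.0 = 1.5) is exactly the "partnership beats going it alone" signal the text describes.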
Definition:Upper Closure/Set Definition Let $T \subseteq S$. The upper closure of $T$ (in $S$) is defined as: $T^\succeq := \bigcup \left\{{t^\succeq: t \in T}\right\}$ where $t^\succeq$ denotes the upper closure of $t$ in $S$. That is: $T^\succeq := \left\{ {u \in S: \exists t \in T: t \preceq u}\right\}$ $a^\preccurlyeq := \left\{{b \in S: b \preccurlyeq a}\right\}$: the lower closure of $a \in S$: everything in $S$ that precedes $a$ $a^\succcurlyeq := \left\{{b \in S: a \preccurlyeq b}\right\}$: the upper closure of $a \in S$: everything in $S$ that succeeds $a$ $a^\prec := \left\{{b \in S: b \preccurlyeq a \land a \ne b}\right\}$: the strict lower closure of $a \in S$: everything in $S$ that strictly precedes $a$ $a^\succ := \left\{{b \in S: a \preccurlyeq b \land a \ne b}\right\}$: the strict upper closure of $a \in S$: everything in $S$ that strictly succeeds $a$. $\displaystyle T^\preccurlyeq := \bigcup \left\{{t^\preccurlyeq: t \in T}\right\}$: the lower closure of $T \subseteq S$: everything in $S$ that precedes some element of $T$ $\displaystyle T^\succcurlyeq := \bigcup \left\{{t^\succcurlyeq: t \in T}\right\}$: the upper closure of $T \subseteq S$: everything in $S$ that succeeds some element of $T$ $\displaystyle T^\prec := \bigcup \left\{{t^\prec: t \in T}\right\}$: the strict lower closure of $T \subseteq S$: everything in $S$ that strictly precedes some element of $T$ $\displaystyle T^\succ := \bigcup \left\{{t^\succ: t \in T}\right\}$: the strict upper closure of $T \subseteq S$: everything in $S$ that strictly succeeds some element of $T$. The astute reader may point out that, for example, $a^\preccurlyeq$ is ambiguous as to whether it means: The lower closure of $a$ with respect to $\preccurlyeq$ The upper closure of $a$ with respect to the dual ordering $\succcurlyeq$ By Lower Closure is Dual to Upper Closure and Strict Lower Closure is Dual to Strict Upper Closure, the two are seen to be equal.
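A finite-set illustration of these closure operators may help; this is a sketch, with `preceq` standing in for an arbitrary ordering (divisibility in the example below), and the function names are my own:

```python
def upper_closure(T, S, preceq):
    # T^⪰ = union of the t^⪰: everything in S that succeeds some t in T
    return {u for u in S if any(preceq(t, u) for t in T)}

def strict_upper_closure(T, S, preceq):
    # T^≻: u must strictly succeed some t, i.e. t ⪯ u and t ≠ u
    return {u for u in S if any(preceq(t, u) and t != u for t in T)}

# example: the divisibility ordering on {1, ..., 12}
S = set(range(1, 13))
divides = lambda a, b: b % a == 0
assert upper_closure({4}, S, divides) == {4, 8, 12}
assert strict_upper_closure({4}, S, divides) == {8, 12}
```

The lower closures are obtained the same way with the arguments of `preceq` swapped, which mirrors the duality remark above.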
Also denoted as Other notations for closure operators include: ${\downarrow} a, {\bar \downarrow} a$ for lower closure of $a \in S$ ${\uparrow} a, {\bar \uparrow} a$ for upper closure of $a \in S$ ${\downarrow} a, {\dot \downarrow} a$ for strict lower closure of $a \in S$ ${\uparrow} a, {\dot \uparrow} a$ for strict upper closure of $a \in S$ However, as there is considerable inconsistency in the literature as to exactly which of these arrow notations is being used at any one time, its use is not endorsed on $\mathsf{Pr} \infty \mathsf{fWiki}$. Also see
Proof that \(\pi\) is irrational. Assume \(\pi\) is rational, that is, assume it is of the form \(\frac{a}{b}\) where \(a\) and \(b\) are both positive integers. Let\[\begin{align} f(x) &= \frac{x^n (a-bx)^n}{n!} \\ F(x) &= f(x) + \cdots + (-1)^j f^{[2j]}(x) + \cdots + (-1)^n f^{[2n]}(x) \end{align}\] where \(f^{[k]}\) denotes the \(k\)-th derivative of \(f\). Then:
1. \(f(x)\) has integer coefficients except for the factor \(\frac{1}{n!}\)
2. \(f(x) = f(\pi - x)\)
3. \(0 \leq f(x) \leq \frac{\pi^n a^n}{n!}\) for \(0 \leq x \leq \pi\)
4. For \(0 \leq j < n\), the \(j\)-th derivative of \(f\) equals 0 at 0 and \(\pi\)
5. For \(j \geq n\), the \(j\)-th derivative of \(f\) is an integer at 0 and \(\pi\) (from 1.)
6. \(F(0)\) and \(F(\pi)\) are integers (from 4., 5.)
7. \(F(x) + F''(x) = f(x)\), since \(f\) has degree \(2n\) and the higher derivatives telescope away
8. \((F'(x) \sin x - F(x) \cos x)' = f(x) \sin x\) (from 7.)
9. \(\int_0^\pi f(x) \sin x \, dx = F(0) + F(\pi)\) is an integer (from 6., 8.)
10. For large \(n\), this integral is strictly between 0 and 1 (from 3., since \(f(x)\sin x > 0\) on \((0, \pi)\))
Contradiction. So \(\pi\) is irrational.
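The two computational steps, \(F + F'' = f\) and hence \((F'\sin x - F\cos x)' = f\sin x\), can be checked symbolically for any fixed degree. A sketch (not part of the original proof) using SymPy with \(n = 2\):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
n = 2  # any fixed degree; the identity holds for each n

f = x**n * (a - b*x)**n / sp.factorial(n)
# F = f - f'' + f'''' - ... ; derivatives of order > 2n vanish
F = sum((-1)**j * sp.diff(f, x, 2*j) for j in range(n + 1))

# (F' sin x - F cos x)' = F'' sin x + F sin x = (F + F'') sin x = f sin x
lhs = sp.diff(sp.diff(F, x) * sp.sin(x) - F * sp.cos(x), x)
assert sp.simplify(lhs - f * sp.sin(x)) == 0
```

Integrating both sides of the verified identity over \([0, \pi]\) is exactly step 9 of the proof.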
Actually, it's not predefined that the right-hand side of the x axis is positive and the upward y direction is positive. For example, a lot of the time in computer science, we take the downward y direction as positive (since text flows from top to bottom, it makes it easier to think of it that way). However, if you arbitrarily choose the +ve x-axis to be the right direction and the +ve y-axis to be upward, then you can take the matrix that corresponds to your rotation by 90 degrees, make it act on the basis vectors $(1, 0)$ and $(0, 1)$, and see what that gets us. The 2-d rotation matrix $R(\theta)$ that rotates the plane by $\theta$ counterclockwise is given by $$R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}$$ Substituting $\theta = 90^\circ$ gives $$ T = R(90^{\circ}) = \begin{bmatrix} 0 & -1 \\ 1 & 0\end{bmatrix}$$ We can see that $T (1, 0) = (0, 1)$ and $T(0, 1) = (-1, 0)$, and hence the +x axis $(1, 0)$ goes to the +y axis $(0, 1)$ and the +y axis $(0, 1)$ goes to the -x axis $(-1, 0)$
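The claim about where the basis vectors land is easy to verify numerically; a sketch using NumPy:

```python
import numpy as np

theta = np.pi / 2  # 90 degrees
# counterclockwise rotation matrix acting on column vectors
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

ex, ey = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose(R @ ex, [0, 1])   # +x axis goes to +y
assert np.allclose(R @ ey, [-1, 0])  # +y axis goes to -x
```

Flipping the sign convention (or transposing the matrix) gives the clockwise rotation, which is what the "y grows downward" screen convention effectively does.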
Global existence and blow-up results for an equation of Kirchhoff type on $\mathbb R^N$ DOI: http://dx.doi.org/10.12775/TMNA.2001.006 Abstract We discuss the asymptotic behaviour of solutions for the nonlocal quasilinear hyperbolic problem of Kirchhoff Type $$ u_{tt}-\phi (x)\Vert\nabla u(t)\Vert^{2}\Delta u+\delta u_{t} = |u|^{a}u,\quad x\in {\mathbb R}^N,\ t\geq 0,$$ with initial conditions $ u(x,0) = u_0 (x)$ and $u_t(x,0) = u_1 (x)$, in the case where $N \geq 3$, $\delta \geq 0$ and $(\phi (x))^{-1} =g (x)$ is a positive function lying in $L^{N/2}(\mathbb R^N)\cap L^{\infty}(\mathbb R^N )$. When the initial energy $ E(u_{0},u_{1})$, which corresponds to the problem, is non-negative and small, there exists a unique global solution in time. When the initial energy $E(u_{0},u_{1})$ is negative, the solution blows-up in finite time. A combination of the modified potential well method and the concavity method is widely used. Keywords Quasilinear hyperbolic equations; global solution; blow-up; dissipation; potential well; concavity method; unbounded domains; Kirchhoff strings; generalised Sobolev spaces; weighted $L^p$ spaces
I am writing a math book and I am interested in how I should typeset it correctly. I know that the main thing is consistency across the whole document, but I am really interested in the conventions recommended by respectable persons and societies, i.e. Knuth or the AMS. I've already asked this question to find the truth. Now I am editing my book. I come here with the question of which spacing is correct among the following variants: \begin{align}&\exists m \in \bbR{:}\quad \forall n \in \bbN\quad x_n \ge m.\\&\exists m \in \bbR \mathpunct{:} \forall n \in \bbN\quad x_n \ge m.\\&\exists m \in \bbR : \forall n \in \bbN\quad x_n \ge m.\\&\exists m \in \bbR\colon \forall n \in \bbN\quad x_n \ge m.\\\end{align} Personally, I love the first variant, but it seems not so comfortable, so I don't think it is used by typographers.
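For reference, here is a minimal compilable sketch of the last variant. The spacing behavior is the point: `\colon` gives the tight punctuation spacing used for maps such as $f\colon A \to B$, while a bare `:` is spaced as a binary relation; which one suits a "such that" reading is exactly the judgment call conventions differ on (the `\bbR`/`\bbN` macros below are assumed, defined here for self-containment):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\bbR}{\mathbb{R}}
\newcommand{\bbN}{\mathbb{N}}
\begin{document}
% tight punctuation spacing after the quantified variable:
\[ \exists m \in \bbR\colon \forall n \in \bbN \quad x_n \ge m. \]
% relation spacing, symmetric around the colon:
\[ \exists m \in \bbR : \forall n \in \bbN \quad x_n \ge m. \]
\end{document}
```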
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x(t) - 0 \| < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it multiplies a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than $2$, $1/2$ Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, which is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
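The shortcut discussed above (verify the coordinate formula once, then lean on the ring properties of $\Bbb{Q}$) can also be outsourced to a computer algebra system; a sketch, with `otimes` as my own name for the multiplication rule:

```python
import sympy as sp

a, b, c, d, e, f_, delta = sp.symbols('a b c d e f delta')

def otimes(p, q):
    # (a + b*sqrt(delta)) * (c + d*sqrt(delta)) in coordinate form:
    # (ac + bd*delta) + (bc + ad)*sqrt(delta)
    (pa, pb), (qc, qd) = p, q
    return (pa*qc + pb*qd*delta, pb*qc + pa*qd)

alpha, beta, gamma = (a, b), (c, d), (e, f_)
left = otimes(otimes(alpha, beta), gamma)
right = otimes(alpha, otimes(beta, gamma))

# both coordinates agree after expansion, so otimes is associative
assert all(sp.expand(l - r) == 0 for l, r in zip(left, right))
```

Expanding both sides reduces the check to polynomial identities over $\Bbb{Q}[a,\dots,f,\delta]$, which is exactly the "use the ring structure of the coordinates" argument.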
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that derive CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wondered whether I could show that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed Put it another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else? I need to finish that book to comment typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
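The partial-sum construction above can be made concrete with exact rational arithmetic: each partial sum is a rational $p/q$ with $q = b^{n!}$, and the deeper partial sums witness the Liouville inequality $0 < |x - p/q| < 1/q^n$. A sketch with $b = 10$, using a deeper partial sum as a stand-in for the limit $L$:

```python
from fractions import Fraction
from math import factorial

b = 10

def partial(M):
    # partial sums of the Liouville constant: sum_{k=1}^{M} 1/b^(k!)
    return sum(Fraction(1, b**factorial(k)) for k in range(1, M + 1))

# For each n, p/q = partial(n) with q = b^(n!) satisfies the Liouville
# inequality against the deeper partial sum x standing in for the limit.
for n in range(1, 5):
    x = partial(n + 2)
    p_q = partial(n)
    q = b ** factorial(n)
    assert 0 < x - p_q < Fraction(1, q**n)
```

Because `Fraction` is exact, no floating-point error can fake the inequality; only finitely many terms are ever materialized, in keeping with the finitist framing.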
This is just an elaboration of Martin's astute comment above: Let $N = \phi^{-1} \{0 \}$. Then $\nu A = \nu (A \setminus N)$. Furthermore, $N^c = \cup_k \Delta_k$, where $\Delta_k = \phi^{-1} (\frac{1}{k}, \infty)$. We can bound $\mu \Delta_k$ as follows: $\|\phi\|_1 \ge \nu \Delta_k = \int_{\Delta_k} \phi d \mu \ge \frac{1}{k} \mu \Delta_k$, and so $\mu \Delta_k \le k \|\phi\|_1$. We have $A \setminus N = \cup_k (A \cap \Delta_k)$, and since $\Delta_k$ is increasing, we have $\nu (A \cap \Delta_n) \to \nu(\cup_k (A \cap \Delta_k))$. Let $\epsilon>0$, then choose $n$ such that $\nu(\cup_k (A \cap \Delta_k)) - \nu (A \cap \Delta_n)< \frac{\epsilon}{2}$. Now we need to approximate $A \cap \Delta_n$ by a compact set. Since $\mu$ is a Radon measure, there exists a sequence of compact sets $C_k \subset A \cap \Delta_n$ such that $\mu C_k \to \mu (A \cap \Delta_n)$. Without loss of generality (finite unions of compact sets are still compact) we may assume that the $C_k$ are increasing. We have $\mu((A \cap \Delta_n) \setminus \cup_k C_k) = 0$, hence $1_{C_k}(x) \to 1_{A \cap \Delta_n}(x)$ a.e. [$\mu$]. Since $1_{C_k}(x) \le 1$, the dominated convergence theorem gives $\int_{C_k} \phi d \mu \to \int_{A \cap \Delta_n} \phi d \mu $. Hence for some $k$, we have $\int_{A \cap \Delta_n} \phi d \mu - \int_{C_k} \phi d \mu < \frac{\epsilon}{2}$, and so \begin{eqnarray}\nu A &=& \nu (A \setminus N) \\&=& \nu ( \cup_k (A \cap \Delta_k)) \\&<& \nu (A \cap \Delta_n) + \frac{\epsilon}{2} \\&=& \int_{A \cap \Delta_n} \phi d \mu + \frac{\epsilon}{2} \\&<& \int_{C_k} \phi d \mu + \frac{\epsilon}{2} + \frac{\epsilon}{2} \\&=& \nu C_k + \epsilon\end{eqnarray}Hence $\nu A = \sup \{ \nu C | C \subset A, C \text{ compact} \}$.$\nu$ is bounded by $\nu X = \|\phi\|_1$, hence is locally finite, and so $\nu$ is a Radon measure. If you want to get outer regularity, let $A$ be a Borel set, and let $C_k \subset A^c$ be a sequence of compact sets such that $\nu C_k \to \nu A^c$.
Now let $U_k = C_k^c$. Then $U_k$ is open and $A \subset U_k$. Since $\nu$ is finite, we have $\nu X = \nu A + \nu A^c = \nu U_k + \nu C_k$. It follows that $\nu U_k \to \nu A$, hence $\nu A = \inf \{ \nu U | A \subset U, U \text{ open} \}$, and so $\nu$ is outer regular.
Definition:Ordering/Definition 2 Definition Let $S$ be a set. An ordering on $S$ is a relation $\mathcal R$ on $S$ such that: $(1): \quad \mathcal R \circ \mathcal R = \mathcal R$ $(2): \quad \mathcal R \cap \mathcal R^{-1} = \Delta_S$ where: $\circ$ denotes relation composition $\mathcal R^{-1}$ denotes the inverse of $\mathcal R$ $\Delta_S$ denotes the diagonal relation on $S$. Symbols used to denote a general ordering relation are usually variants on $\preceq$, $\le$ and so on. On $\mathsf{Pr} \infty \mathsf{fWiki}$, to denote a general ordering relation it is recommended to use $\preceq$ and its variants: $\preccurlyeq$ $\curlyeqprec$ $\leqslant$ $\leqq$ $\eqslantless$ The symbol $\subseteq$ is universally reserved for the subset relation. $a \preceq b$ can be read as: $a$ precedes, or is the same as, $b$. Similarly: $a \preceq b$ can be read as: $b$ succeeds, or is the same as, $a$. If, for two elements $a, b \in S$, it is not the case that $a \preceq b$, then the symbols $a \npreceq b$ and $b \nsucceq a$ can be used. It is not demanded of an ordering $\preceq$, defined in its most general form on a set $S$, that every pair of elements of $S$ is related by $\preceq$. They may be, or they may not be, depending on the specific nature of both $S$ and $\preceq$. It is wise to be certain of what is meant. As a consequence, on $\mathsf{Pr} \infty \mathsf{fWiki}$ we resolve any ambiguity by reserving the terms for the objects in question as follows: Also see Results about orderings can be found here.
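Both defining conditions can be checked mechanically for a finite relation; a sketch using the usual $\le$ on a four-element set (condition (1) packages reflexivity and transitivity, condition (2) packages antisymmetry plus reflexivity):

```python
from itertools import product

S = {1, 2, 3, 4}
R = {(x, y) for x, y in product(S, S) if x <= y}  # the usual ordering

# R ∘ R = {(x, z) : there is y with (x, y) in R and (y, z) in R}
compose = {(x, z) for (x, y1) in R for (y2, z) in R if y1 == y2}
inverse = {(y, x) for (x, y) in R}
diagonal = {(x, x) for x in S}

assert compose == R              # (1): R ∘ R = R
assert R & inverse == diagonal   # (2): R ∩ R⁻¹ = Δ_S
```

Swapping in a relation that is not antisymmetric (e.g. "same parity") makes the second assertion fail, which is a quick way to see what each condition is doing.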
When considering a bilattice you need to distinguish two types of sites. A B A B -------o-------o---------------o-------o---- |<--a-->|<------b------>| For instance you can denote the two kinds of sites with letters $A$ and $B$ as shown above. You now have two different creation and destruction operators: $c_{iA}^{\dagger}$ and $c_{iA}$ in order to create or annihilate a particle on $A$ sites, and $c_{iB}^{\dagger}$ and $c_{iB}$ for $B$ sites. You have to be careful with the indices $i$. With a simple lattice the sites had indices like this: A A A A A -------o-------o-------o-------o-------o i-1 i i+1 i+2 i+3 |<--a-->|<--a-->|<--a-->| But now the indices are different since there are two kinds of sites: A B A B -------o-------o---------------o-------o---- i i i+1 i+1 |<--a-->|<------b------>| All this changes the expressions of the Fourier transforms and the Hamiltonian:$$c_{kA}=\frac{1}{\sqrt{V/2}}\sum_{i}{c_{iA} e^{i k r_{i}}} \quad \text{where} \quad r_i=i(a+b)$$$$c_{kB}=\frac{1}{\sqrt{V/2}}\sum_{i}{c_{iB} e^{i k r_{i}}} \quad \text{where} \quad r_i=i(a+b)+a$$ EDIT: The volume of the system has to be divided by two in the Fourier transforms since there are now two sites in each primitive cell (there was only one before). END EDIT The Hamiltonian now takes the form:$$H=-\sum_{i}{t_s (c^{\dagger}_{iA} c_{iB} + c^{\dagger}_{iB} c_{iA})+t_l (c^{\dagger}_{iB} c_{(i+1)A} + c^{\dagger}_{(i+1)A} c_{iB})}$$where I have considered a one-dimensional problem. Since the distance between sites is not always the same, you might want to consider two different hopping parameters: $t_s$ for short jumps and $t_l$ for long ones. EDIT: I had forgotten terms in the Hamiltonian; nearest-neighbor hopping terms must be present in both directions, $(A,i)\rightarrow (B,i)$ and $(B,i) \rightarrow(A,i)$. The long jumps are $(A,i+1)\rightarrow (B,i)$ and $(B,i)\rightarrow (A,i+1)$.
END EDIT If you want to obtain a diagonal form for your Hamiltonian, you can try to find a $2\times2$ matrix $M$ such that:$$H=-\sum_{k}{(c^{\dagger}_{kA} c^{\dagger}_{kB})M\binom{c_{kA}}{c_{kB}}}$$$M$ will contain hopping parameters $t_s$ and $t_l$, once you have diagonalized $M$ your problem is solved.This post imported from StackExchange Physics at 2014-05-04 11:30 (UCT), posted by SE-user ChocoPouce
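Carrying out that last step numerically: in k-space the $2\times2$ Bloch matrix for the Hamiltonian above has off-diagonal element $t_s + t_l e^{ikd}$ (with $d = a + b$ the cell length), so its eigenvalues come in pairs $\pm|t_s + t_l e^{ikd}|$. A sketch with hypothetical hopping values:

```python
import numpy as np

# hypothetical parameter values: t_s, t_l are the short/long hoppings
t_s, t_l, d = 1.0, 0.5, 1.0
ks = np.linspace(-np.pi / d, np.pi / d, 201)

bands = []
for k in ks:
    h = t_s + t_l * np.exp(1j * k * d)        # off-diagonal element of M(k)
    M = -np.array([[0, h], [np.conj(h), 0]])  # Bloch matrix, Hermitian
    bands.append(np.linalg.eigvalsh(M))
bands = np.array(bands)

# the two bands are ±|t_s + t_l e^{ikd}|; the gap at k = ±π/d is 2|t_s − t_l|
assert np.allclose(np.abs(bands[:, 1]), np.abs(t_s + t_l * np.exp(1j * ks * d)))
```

With $t_s \ne t_l$ the spectrum is gapped at the zone boundary, which is the expected two-site-basis (dimerized chain) result.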
Discrete space In topology, a discrete space is a particularly simple example of a topological space or similar structure, one in which the points form a discontinuous sequence, meaning they are isolated from each other in a certain sense. The discrete topology is the finest topology that can be given on a set, i.e., it defines all subsets as open sets. In particular, each singleton is an open set in the discrete topology. Definitions Given a set X: the discrete topology on X is defined by letting every subset of X be open (and hence also closed), and X is a discrete topological space if it is equipped with its discrete topology; the discrete uniformity on X is defined by letting every superset of the diagonal {(x, x) : x is in X} in X × X be an entourage, and X is a discrete uniform space if it is equipped with its discrete uniformity. the discrete metric $\rho$ on X is defined by $$\rho(x,y) = \begin{cases} 1 & \text{if } x \neq y, \\ 0 & \text{if } x = y \end{cases}$$ for any $x, y \in X$. In this case $(X,\rho)$ is called a discrete metric space or a space of isolated points. a set S is discrete in a metric space (X,d), for $S \subseteq X$, if for every $x \in S$, there exists some $\delta > 0$ (depending on x) such that $d(x,y) > \delta$ for all $y \in S \setminus \{x\}$; such a set consists of isolated points. A set S is uniformly discrete in the metric space (X,d), for $S \subseteq X$, if there exists ε > 0 such that for any two distinct $x, y \in S$, $d(x, y) > ε$. A metric space (E,d) is said to be uniformly discrete if there exists a "packing radius" $r > 0$ such that, for any $x, y \in E$, one has either $x = y$ or $d(x,y) > r$. [1] The topology underlying a metric space can be discrete, without the metric being uniformly discrete: for example the usual metric on the set {1, 1/2, 1/4, 1/8, ...} of real numbers.
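The discrete metric just defined can be sanity-checked mechanically; a sketch verifying the metric axioms on a small point set:

```python
from itertools import product

def discrete_metric(x, y):
    # rho(x, y) = 1 if x != y, else 0
    return 0 if x == y else 1

pts = ['a', 'b', 'c']
for x, y, z in product(pts, repeat=3):
    # symmetry, identity of indiscernibles, triangle inequality
    assert discrete_metric(x, y) == discrete_metric(y, x)
    assert (discrete_metric(x, y) == 0) == (x == y)
    assert discrete_metric(x, z) <= discrete_metric(x, y) + discrete_metric(y, z)
```

Note also that the open ball of radius 1/2 around any point is the singleton containing it, which is why this metric induces the discrete topology.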
Properties The underlying uniformity on a discrete metric space is the discrete uniformity, and the underlying topology on a discrete uniform space is the discrete topology. Thus, the different notions of discrete space are compatible with one another. On the other hand, the underlying topology of a non-discrete uniform or metric space can be discrete; an example is the metric space X := {1/n : n = 1,2,3,...} (with metric inherited from the real line and given by d(x, y) = |x − y|). Obviously, this is not the discrete metric; also, this space is not complete and hence not discrete as a uniform space. Nevertheless, it is discrete as a topological space. We say that X is topologically discrete but not uniformly discrete or metrically discrete. Additionally: The topological dimension of a discrete space is equal to 0. A topological space is discrete if and only if its singletons are open, which is the case if and only if it doesn't contain any accumulation points. The singletons form a basis for the discrete topology. A uniform space X is discrete if and only if the diagonal {(x, x) : x is in X} is an entourage. Every discrete topological space satisfies each of the separation axioms; in particular, every discrete space is Hausdorff, that is, separated. A discrete space is compact if and only if it is finite. Every discrete uniform or metric space is complete. Combining the above two facts, every discrete uniform or metric space is totally bounded if and only if it is finite. Every discrete metric space is bounded. Every discrete space is first-countable; it is moreover second-countable if and only if it is countable. Every discrete space with at least two points is totally disconnected. Every non-empty discrete space is of second category. Any two discrete spaces with the same cardinality are homeomorphic. Every discrete space is metrizable (by the discrete metric). A finite space is metrizable only if it is discrete.
If X is a topological space and Y is a set carrying the discrete topology, then X is evenly covered by X × Y (the projection map is the desired covering) The subspace topology on the integers as a subspace of the real line is the discrete topology. A discrete space is separable if and only if it is countable. Any function from a discrete topological space to another topological space is continuous, and any function from a discrete uniform space to another uniform space is uniformly continuous. That is, the discrete space X is free on the set X in the category of topological spaces and continuous maps or in the category of uniform spaces and uniformly continuous maps. These facts are examples of a much broader phenomenon, in which discrete structures are usually free on sets. With metric spaces, things are more complicated, because there are several categories of metric spaces, depending on what is chosen for the morphisms. Certainly the discrete metric space is free when the morphisms are all uniformly continuous maps or all continuous maps, but this says nothing interesting about the metric structure, only the uniform or topological structure. Categories more relevant to the metric structure can be found by limiting the morphisms to Lipschitz continuous maps or to short maps; however, these categories don't have free objects (on more than one element). However, the discrete metric space is free in the category of bounded metric spaces and Lipschitz continuous maps, and it is free in the category of metric spaces bounded by 1 and short maps. That is, any function from a discrete metric space to another bounded metric space is Lipschitz continuous, and any function from a discrete metric space to another metric space bounded by 1 is short. Going the other direction, a function f from a topological space Y to a discrete space X is continuous if and only if it is locally constant in the sense that every point in Y has a neighborhood on which f is constant.
Uses A discrete structure is often used as the "default structure" on a set that doesn't carry any other natural topology, uniformity, or metric; discrete structures can often be used as "extreme" examples to test particular suppositions. For example, any group can be considered as a topological group by giving it the discrete topology, implying that theorems about topological groups apply to all groups. Indeed, analysts may refer to the ordinary, non-topological groups studied by algebraists as "discrete groups". In some cases, this can be usefully applied, for example in combination with Pontryagin duality. A 0-dimensional manifold (or differentiable or analytical manifold) is nothing but a discrete topological space. We can therefore view any discrete group as a 0-dimensional Lie group. A product of countably infinite copies of the discrete space of natural numbers is homeomorphic to the space of irrational numbers, with the homeomorphism given by the continued fraction expansion. A product of countably infinite copies of the discrete space {0,1} is homeomorphic to the Cantor set; and in fact uniformly homeomorphic to the Cantor set if we use the product uniformity on the product. Such a homeomorphism is given by using ternary notation of numbers. (See Cantor space.) Indiscrete spaces In some ways, the opposite of the discrete topology is the trivial topology (also called the indiscrete topology), which has the fewest possible open sets (just the empty set and the space itself). Where the discrete topology is initial or free, the indiscrete topology is final or cofree: every function from a topological space to an indiscrete space is continuous, etc. Quotation Stanislaw Ulam characterized Los Angeles as "a discrete space, in which there is an hour's drive between points". [2] See also References [1] Pleasants, Peter A.B. (2000). "Designer quasicrystals: Cut-and-project sets with pre-assigned properties". In Baake, Michael. Directions in mathematical quasicrystals.
CRM Monograph Series 13. Providence, RI: Stanislaw Ulam's autobiography, Adventures of a Mathematician.
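The ternary description of the homeomorphism onto the Cantor set mentioned above can be sketched in code: a 0/1 sequence maps to the point whose ternary expansion uses only the digits 0 and 2. A minimal sketch using exact rational arithmetic; the function names are my own:

```python
from fractions import Fraction

def cantor_point(bits):
    """Map a finite 0/1 sequence to a Cantor set point: bit b_i
    contributes ternary digit 2*b_i, i.e. x = sum(2*b_i / 3**(i+1))."""
    return sum(Fraction(2 * b, 3 ** (i + 1)) for i, b in enumerate(bits))

def in_cantor_set(x, depth):
    """Check, up to `depth` ternary digits, that x's expansion avoids
    the digit 1 -- the defining property of the Cantor set."""
    for _ in range(depth):
        x *= 3
        digit = int(x)  # x stays a Fraction in [0, 3)
        if digit == 1:
            return False
        x -= digit
    return True

p = cantor_point([1, 0, 1])  # 2/3 + 0 + 2/27 = 20/27
```

Points such as 1/2 (ternary 0.111...) are correctly rejected, while every image of `cantor_point` passes the digit check.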
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$ (dividing a phase difference by $k$ gives the path difference back); see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
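To make the conversion concrete (phase difference $\Delta\varphi = k\,\Delta x$ with $k = 2\pi/\lambda$), a minimal sketch; the function name is my own:

```python
import math

def phase_difference(path_diff, wavelength):
    """Convert a path difference to a phase difference via
    phase = k * path, where k = 2*pi/lambda is the wavenumber."""
    k = 2 * math.pi / wavelength
    return k * path_diff

# A path difference of half a wavelength gives a phase difference of pi
# (destructive interference), e.g. 250 nm for 500 nm light.
dphi = phase_difference(250e-9, 500e-9)
```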
Date of Award 1-1-2012 Document Type Campus Access Dissertation Department Mathematics First Advisor Lu, Linyuan Abstract This dissertation mainly comes from my recent study of fractional chromatic numbers of graphs, spectra of edge-independent random graphs, Laplacian spectra of hypergraphs, and loose Laplacian spectra of random hypergraphs. For a graph $G$, let $\chi_f(G)$ be the fractional chromatic number of $G$. Based on the study of independence numbers of triangle-free graphs with maximum degree at most three, Heckman and Thomas conjectured that $\chi_f(G) \leq 3-\frac{1}{5}$ if $G$ is triangle-free and has maximum degree at most three. Since the fractional chromatic number of the generalized Petersen graph $P(7,2)$ is $3-\frac{1}{5}$, the conjecture is tight if it is true. The first result on this conjecture is due to Hatami and Zhu, who proved $\chi_f(G) \leq 3-\frac{3}{64}$. We prove $\chi_f(G) \leq 3-\frac{3}{43}$. We also consider the following general question: what is the fractional chromatic number of a $K_{\Delta}$-free graph with maximum degree $\Delta$ for $\Delta \geq 3$? Heckman and Thomas' conjecture is a special case of this question for $\Delta=3$. We are able to prove that, except for two graphs, the fractional chromatic number of each $K_{\Delta}$-free graph with maximum degree $\Delta$ is at most $\Delta-\frac{2}{67}$. There is a large literature addressing the spectra of random graphs. Recently, a new random graph model (edge-independent random graphs) has attracted increasing attention. Let $A(G)$ and $L(G)$ be the adjacency matrix and the Laplacian matrix of an edge-independent graph $G$. Oliveira and Chung-Radcliffe showed that the eigenvalues of $A(G)$ (and $L(G)$) can be approximated by those of the ``expectation'' of $A$ (and the ``expectation'' of $L$) with an error term involving the maximum expected degree (and the minimum expected degree) and the number of vertices.
We improve previous results by removing the $\sqrt{\ln n}$-factor from the error terms under a slightly stronger condition. Laplacians of graphs are studied extensively in the literature, and there have been some attempts to investigate the Laplacian matrices of hypergraphs. For an $r$-uniform hypergraph $H$, we will define the $s$-th Laplacian matrix $L^{(s)}(H)$ for each $1 \leq s \leq r-1$; we will also show some applications of Laplacians of hypergraphs. A natural question is: what are the Laplacian eigenvalues of a random hypergraph? Let $H^r(n,p)$ be a random hypergraph. For each $1 \leq s \leq r/2$, we prove that the eigenvalues of the $s$-th Laplacian $L^{(s)}(H^{r}(n,p))$ can be approximated by those of the complete hypergraph. Moreover, we show that the distribution of eigenvalues of $L^{(s)}(H^r(n,p))$ satisfies the Semicircle Law for $1 \leq s \leq r/2$. Recommended Citation Peng, X. (2012). Fractional Chromatic Numbers and Spectra of Graphs. (Doctoral dissertation). Retrieved from https://scholarcommons.sc.edu/etd/1610
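As a quick independent check of the tightness witness named in the abstract: for any $n$-vertex graph, $\chi_f(G) \geq n/\alpha(G)$ where $\alpha$ is the independence number, and for the generalized Petersen graph $P(7,2)$ a brute-force computation gives $\alpha = 5$, so $\chi_f(P(7,2)) \geq 14/5 = 3 - \frac{1}{5}$. A sketch (my own illustration, not code from the dissertation):

```python
from itertools import combinations

def petersen_7_2_edges():
    """Edges of P(7,2): outer 7-cycle on 0..6, inner vertices 7..13
    joined with step 2, and spokes i -- i+7."""
    edges = set()
    for i in range(7):
        edges.add(frozenset((i, (i + 1) % 7)))          # outer cycle
        edges.add(frozenset((7 + i, 7 + (i + 2) % 7)))  # inner step-2 cycle
        edges.add(frozenset((i, 7 + i)))                # spokes
    return edges

def independence_number(n, edges):
    """Brute-force maximum independent set, largest sizes first."""
    for k in range(n, 0, -1):
        for s in combinations(range(n), k):
            sset = set(s)
            if all(not e <= sset for e in edges):
                return k
    return 0

alpha = independence_number(14, petersen_7_2_edges())
lower_bound = 14 / alpha  # 14/5 = 2.8 = 3 - 1/5
```

The $n/\alpha$ bound is only a lower bound in general, but here it matches the known value of $\chi_f(P(7,2))$ quoted in the abstract.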
The Annals of Statistics Ann. Statist. Volume 7, Number 6 (1979), 1264-1276. Estimation of the Inverse Covariance Matrix: Random Mixtures of the Inverse Wishart Matrix and the Identity Abstract Let $S_{p \times p}$ have a nonsingular Wishart distribution with unknown matrix $\Sigma$ and $k$ degrees of freedom. For two different loss functions, estimators of $\Sigma^{-1}$ are given which dominate the obvious estimators $aS^{-1}, 0 < a \leqslant k - p - 1$. Our class of estimators $\mathscr{C}$ includes random mixtures of $S^{-1}$ and $I$. A subclass $\mathscr{C}_0 \subset \mathscr{C}$ was given by Haff. Here, we show that any member of $\mathscr{C}_0$ is dominated in $\mathscr{C}$. Some troublesome aspects of the estimation problem are discussed, and the theory is supplemented by simulation results. Article information Source Ann. Statist., Volume 7, Number 6 (1979), 1264-1276. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176344845 Digital Object Identifier doi:10.1214/aos/1176344845 Mathematical Reviews number (MathSciNet) MR550149 Zentralblatt MATH identifier 0436.62046 JSTOR links.jstor.org Citation Haff, L. R. Estimation of the Inverse Covariance Matrix: Random Mixtures of the Inverse Wishart Matrix and the Identity. Ann. Statist. 7 (1979), no. 6, 1264--1276. doi:10.1214/aos/1176344845. https://projecteuclid.org/euclid.aos/1176344845 Corrections See Correction: L. R. Haff. Corrections to "Estimation of the Inverse Covariance Matrix: Random Mixtures of the Inverse Wishart Matrix and the Identity". Ann. Statist., Volume 9, Number 5 (1981), 1132--1132.
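The calibration behind the "obvious" estimators $aS^{-1}$ in the abstract is the inverse-Wishart mean: if $S \sim W_p(\Sigma, k)$ with $k > p+1$, then $E[S^{-1}] = \Sigma^{-1}/(k-p-1)$, so $a = k-p-1$ makes $aS^{-1}$ unbiased for $\Sigma^{-1}$. A seeded Monte Carlo sanity check for $p=2$, $\Sigma = I$ (my own illustration; this is not Haff's dominating estimator):

```python
import random

random.seed(0)
p, k, reps = 2, 20, 2000
acc = [[0.0, 0.0], [0.0, 0.0]]  # running mean of (k-p-1) * S^{-1}
for _ in range(reps):
    # Build S as the sum of k outer products of standard normal 2-vectors,
    # i.e. a Wishart_2(I, k) sample.
    s00 = s01 = s11 = 0.0
    for _ in range(k):
        x, y = random.gauss(0, 1), random.gauss(0, 1)
        s00 += x * x
        s01 += x * y
        s11 += y * y
    det = s00 * s11 - s01 * s01
    inv = [[s11 / det, -s01 / det], [-s01 / det, s00 / det]]
    for i in range(2):
        for j in range(2):
            acc[i][j] += (k - p - 1) * inv[i][j] / reps
```

With 2000 replicates the entrywise averages should sit close to the identity matrix, consistent with the unbiasedness of $(k-p-1)S^{-1}$.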
The setup is as follows: $k/\mathbb{Q}_p$ is a finite extension, $\mathfrak{p}$ is the maximal ideal of $\mathcal{O}_k$, $q=\#(\mathcal{O}_k/\mathfrak{p})$, and $k'/k$ is a finite unramified extension of degree $d$. It's known that for a relative Lubin-Tate formal group $\mathcal{F}$ relative to $k'/k$ with parameter $\xi$ ($\xi\in\mathcal{O}_k$ with $\mathrm{ord}_{\mathfrak{p}}(\xi)=d$), it gives an abelian extension tower $k'(\mathcal{F}[\mathfrak{p}^n])$ of $k$ with degree $[k'(\mathcal{F}[\mathfrak{p}^n]):k']=(q-1)q^{n-1}$, and each $k'(\mathcal{F}[\mathfrak{p}^n])/k'$ is totally ramified. My question is: if a tower $\{k_n'\}$ is given such that for any $n$, $k_n'/k$ is abelian, $k_n'/k'$ is totally ramified, and $[k_n':k']=(q-1)q^{n-1}$, then can we find a relative Lubin-Tate formal group $\mathcal{F}$ such that $k_n'=k'(\mathcal{F}[\mathfrak{p}^n])$? If not, are there any criteria for it? In fact I'm considering the following special case: let $K$ be an imaginary quadratic field, $p$ be a prime split in $K$, $\mathfrak{p}$ be a prime of $K$ above $p$, $H$ be the Hilbert class field of $K$, and $w$ be a prime of $H$ above $\mathfrak{p}$. Let $H_n$ be the ring class field of $K$ of conductor $p^n$; then $w$ is totally ramified in $H_n/H$ and we have $[H_n:H]=2(p-1)p^{n-1}/\#\mathcal{O}_K^\times$. I'd like to know the answer to the above question for $k=K_{\mathfrak{p}}\cong\mathbb{Q}_p$, $k'=H_w$, $k_n'=H_{n,w}$, i.e. whether the anti-cyclotomic tower $\{H_{n,w}\}$ comes from a (relative) Lubin-Tate formal group. (Of course it's true for the cyclotomic tower and the $\mathbb{Z}_p$-extension tower unramified outside $\mathfrak{p}$. And it looks like we are in trouble when $d_K=-3,-4$.)
EDIT: If we assume $k=\mathbb{Q}_p$ (as in the special case I'm considering) and $p\geq 3$, then by the fact $\mathbb{Q}_p^{\mathrm{ab}}=\mathbb{Q}_p^{\mathrm{unr}}(\mu_{p^\infty})$ and that the open subgroup of $\mathrm{Gal}(\mathbb{Q}_p^{\mathrm{unr}}(\mu_{p^\infty})/\mathbb{Q}_p^{\mathrm{unr}})\cong\mathbb{Z}_p^\times$ of index $(p-1)p^{n-1}$ is unique, we can conclude that $\mathbb{Q}_p^{\mathrm{unr}}k_n'=\mathbb{Q}_p^{\mathrm{unr}}(\mu_{p^n})$ for any $n$. Can we obtain more information from this?
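The degree count $[k'(\mathcal{F}[\mathfrak{p}^n]):k'] = (q-1)q^{n-1}$ specializes, for $k=\mathbb{Q}_p$ (so $q=p$) and the cyclotomic tower $\mathbb{Q}_p(\mu_{p^n})$, to Euler's totient $\varphi(p^n)$. A quick sanity check of that identity (my own illustration, not part of the question):

```python
from math import gcd

def phi_prime_power(p, n):
    """phi(p^n) = (p-1) * p^(n-1) for a prime p, matching (q-1)q^(n-1)."""
    return (p - 1) * p ** (n - 1)

def phi_bruteforce(m):
    """Euler's totient by direct counting, for cross-checking."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

checks = all(phi_prime_power(p, n) == phi_bruteforce(p ** n)
             for p in (3, 5, 7) for n in (1, 2, 3))
```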
Definition:Group Direct Product/Finite Product Definition Let $\struct {G_1, \circ_1}, \struct {G_2, \circ_2}, \ldots, \struct {G_n, \circ_n}$ be groups. Let $\displaystyle G = \prod_{k \mathop = 1}^n G_k$ be their cartesian product. Let $\circ$ be the operation defined on $G$ by: $\tuple {g_1, g_2, \ldots, g_n} \circ \tuple {h_1, h_2, \ldots, h_n} := \tuple {g_1 \circ_1 h_1, g_2 \circ_2 h_2, \ldots, g_n \circ_n h_n}$ for all ordered $n$-tuples in $G$. The group $\struct {G, \circ}$ is called the (external) direct product of $\struct {G_1, \circ_1}, \struct {G_2, \circ_2}, \ldots, \struct {G_n, \circ_n}$. Also see External Direct Product of Groups is Group/Finite Product, where it is proved that $\struct {G, \circ}$ is a group.
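The componentwise operation above can be illustrated directly; a minimal sketch (example and names my own) for the direct product of $\Z_2$ and $\Z_3$ under addition:

```python
from itertools import product

def direct_product_op(g, h, moduli=(2, 3)):
    """Apply each component group's operation coordinate-wise:
    here, addition mod 2 in the first slot and mod 3 in the second."""
    return tuple((gi + hi) % m for gi, hi, m in zip(g, h, moduli))

G = list(product(range(2), range(3)))  # the 6 elements of Z_2 x Z_3

# Repeatedly applying (1, 1) visits every element before returning to
# the identity, so Z_2 x Z_3 is cyclic of order 6.
e, g = (0, 0), (1, 1)
orbit, x = [], e
for _ in range(6):
    x = direct_product_op(x, g)
    orbit.append(x)
```

That $(1,1)$ generates the whole group is consistent with the Chinese remainder theorem: $\Z_2 \times \Z_3 \cong \Z_6$ since $\gcd(2,3)=1$.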
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value. I have a problem with showing that the limit of the following function $$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$ is equal to $1$ as $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" it's not a function, how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it, all it's going to do is confuse people. In fact there was a big controversy about it, since using it in obvious ways suggested by the notation leads to wrong results. @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow) Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0 but if he first differentiates and then integrates it's not 0. Does anyone know?
@Rubio The options are available to me and I've known about them the whole time but I have to admit that it feels a bit rude if I act like an attribution vigilante that goes around flagging everything and leaving comments. I don't know how the process behind the scenes works but what I have done up to this point is leave a comment then wait for a while. Normally I get a response or I flag after some time has passed. I'm guessing you say this because I've forgotten to flag several times You can always leave a friendly comment if you like, but flagging gets eyes on it to get the problem addressed - ideally before people start answering it. something we don't want is for people to farm rep off someone else's content, which we see occasionally; but even beyond that, SE in general and we in particular dislike it when people post content they didn't create without properly acknowledging its source. And most of the creative effort here is in the question. So yeah, it's best to flag it when you see it. That'll put it into the queue for reviewers to agree (or not) - so don't worry that you're single-handedly (-footedly?) stomping on people :) Unfortunately, a significant part of the time, the asker never supplies the origin. Sometimes they self-delete the question rather than just tell us where it came from. Other times they ignore the request and the whole thing, including whatever effort people put into answering, gets discarded when the question is deleted. Okay. This is the first Riley I've written, and it gets progressively harder as you go along, so here goes. I wrote this, and then realized that I used a mispronunciation of the target, so I had to sloppily improvise. I apologize. Anyway, I hope you enjoy it! My prefix is just shy of white, Yet... IBaNTsJTtStPMP means "I'm Bad at Naming Things, so Just Try to Solve this Patterned Masyu Puzzle!". The original Masyu rules apply. Make a single loop with lines passing through the centers of cells, horizontally or vertically.
The loop never crosses itself, branches off, or goe... This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series. If a word conforms to a certain rule, I call it an Etienne Word™. Use the following examples to find the rule: These are not the only examples of Etienne Wo... This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series. If a word conforms to a certain rule, I call it an Eternal Word™. Use the following examples to find the rule: $$% set Title text. (spaces around the text ARE ... Introduction: I am an enthusiastic geometry student, preparing for my first quiz. Yet while revising I accidentally spilt my coffee onto my notes. Can you rescue me and draw me a diagram so that I can revise it for tomorrow's test? Thank you very much! My Notes:
Sometimes you are this word
Remove the first letter, does not change the meaning
Remove the first two letters, still feels the same
Remove the first three letters and you find a way
Remove the first four letters and you get a number
The letters rearranged is a surname
Wh... – "Sssslither..." Brigitte jumped. The voice had whispered almost directly into her ear, yet there was nobody to be seen. She looked at the ground beneath her feet. Was something moving? She was probably imagining things again. – Did you hear something? she asked her guide, Skaylee.... The creator of this masyu forgot to add the final stone, so the puzzle remains incomplete. Finish his job for him by placing one additional stone (either black or white) on the board so that the result is a uniquely solvable masyu. Normal masyu rules apply. So here's a standard Nurikabe puzzle. I'll be using the final (solved) grid for my upcoming local puzzle competition logo as it will spell the abbreviation of the competition name. So, what does it spell? Rules (adapted from Nikoli): Fill in the cells under the following rules....
I've designed a set of dominoes puzzles that I call Donimoes. You slide the dominoes like the cars in Nob Yoshigahara's Rush Hour puzzle, always along their long axis. The goal of Blocking Donimoes is to slide all the dominoes into a rectangle, without sliding any matching numbers next to each ot... I am mud that will trap you. I am a colloid hydrogel. What am I? Take the first half of me and add me to this: I am dangerous to wolves and werewolves alike. Some people even say that I am dangerous to unholy things. Use the creator of Poirot to find out: What am I? Now, take another word for ... Clark who is consecutive in nature, lives in California near the 100th street. Today he decided to take his palindromic boat and visit France. He booked a room which has a number of thrice a prime. Then he ordered Taco and Cola for his breakfast. The online food delivery site asked him to enter t... Suppose you are sitting comfortably in your universe admiring the word SING. Just then, Q enters your universe and insists that you insert the string "IMMER" into your precious word to create a new word for his amusement. Okay, you can make the word IMMERSING... But then you realize, you can a...
You! I see you walking there
Nary a worry or a care
Come, listen to me speak
My mind is strong, though my body is weak.
I've got a riddle for you to ponder
Something to think about whilst you wander
It's a classic Riley, a word split in three
For a prefix, an... @OmegaKrypton rather a poor solution, I think, but I'll try it anyway: Quarrel = cross words. When combined heartlessly: put them together by removing the middle space. Thus, crosswords. Nonstop: remove the final letter. We've made crossword = feature in daily newspaper. I saw this photo on LinkedIn: Is this a puzzle?
If so, what does it mean and what is a solution? What I've found so far: $a = \pi r^2$ is clearly the area of a disk of radius $r$; $2\pi r$ is clearly its circumference; $\displaystyle \int\dfrac{dx}{\sin x} = \ln\left(\left| \tan \dfrac{x}{2}\right|\...
You can take the expression $C=\frac{\delta Q}{\mathrm dT}$ as the infinitesimal version of$$C=\frac{Q}{\Delta T}$$or a formal rewrite of$$\delta Q=C\mathrm dT$$which, however, doesn't make sense in the language of differential forms as division by the form $\mathrm dT$ is not defined. Let's take a look at the meaning of $\delta Q=C\mathrm dT$ assuming differential forms: By the second law of thermodynamics, $\delta Q = T\mathrm dS$. The $\delta$ has no special meaning, it's just a reminder that we're dealing with a differential form and not a function (we can't write $\mathrm dQ$ here as the form is not exact, ie not the differential of some state function $Q$). Thermodynamical systems are in general at least two-dimensional and allow different choices of coordinates, so assume $S$ is represented by a function of temperature and another variable, eg $S=S(V,T)$ or $S=S(P,T)$. The definition of heat capacity from above assumes that $S$ is a function of $T$ alone as the right-hand side doesn't contain terms with $\mathrm dV$ or $\mathrm dP$. In general, we thus need a further restriction on permitted processes, like $V=\mathrm{const}$ or $P=\mathrm{const}$, which yields $C_V$ or $C_P$ respectively. Under this assumption, we have$$\mathrm dS = \frac{\partial S}{\partial T} \mathrm dT$$ie$$C\mathrm dT = \delta Q = T\frac{\partial S}{\partial T} \mathrm dT$$and finally$$C = T\frac{\partial S}{\partial T}$$ A further note for the more mathematically inclined: Geometrically, the restrictions $V=\mathrm{const}$ or $P=\mathrm{const}$ define a 1-dimensional submanifold where the pullback of $\delta Q$ via the natural embedding will be (locally) exact. In fact, this pullback needs to be included to make the equations above conform to the notation used in differential geometry: Let $\nu$ be our embedding with $\mathrm d\tau = \nu^*\mathrm dT$ non-degenerate. 
There's a function $C_\nu$ and (as $\nu^*\delta Q$ is closed) another function $Q_\nu$ (or rather a family of locally defined functions) with$$\nu^*\delta Q = C_\nu \mathrm d\tau = \mathrm dQ_\nu$$that is$$C_\nu = \frac{\partial Q_\nu}{\partial\tau}$$In the case of $V=\mathrm{const}$, $Q_\nu$ is the pullback of the internal energy $U$, whereas in the case of $P=\mathrm{const}$, $Q_\nu$ is the pullback of the enthalpy $H$. In physicist's notation this reads$$C_V = \left(\frac{\partial U}{\partial T}\right)_V \\C_P = \left(\frac{\partial H}{\partial T}\right)_P$$
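As a concrete instance of $C = T\,\partial S/\partial T$ under a constraint (a standard textbook example for the monatomic ideal gas, not taken from the answer above):

```latex
S(T,V) = nR\left(\tfrac{3}{2}\ln T + \ln V\right) + \text{const}
\qquad\Rightarrow\qquad
C_V = T\left(\frac{\partial S}{\partial T}\right)_V
    = T\cdot\frac{3nR}{2T} = \frac{3}{2}\,nR
```

and, since $H = U + PV = \tfrac{5}{2}nRT$ for this gas,

```latex
C_P = \left(\frac{\partial H}{\partial T}\right)_P
    = \frac{\mathrm d}{\mathrm dT}\left(\tfrac{5}{2}\,nRT\right)
    = \frac{5}{2}\,nR = C_V + nR
```

which recovers the familiar Mayer relation $C_P - C_V = nR$.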
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly. Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$. February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah). February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maxima of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette. Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process.
In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison). Wednesday, February 27 at 1:10pm Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar March 28, Shamgar Gurevitch, UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M). April 4, Philip Matchett Wood, UW-Madison Title: Outliers in the spectrum for products of independent random matrices Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations.
Joint work with Natalie Coston and Sean O'Rourke. April 11, Eviatar Procaccia, Texas A&M Title: Stabilization of Diffusion Limited Aggregation in a Wedge. Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems. April 18, Andrea Agazzi, Duke Title: Large Deviations Theory for Chemical Reaction Networks Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true: the jump rates of this class of models are typically neither globally Lipschitz nor bounded away from zero, and both blowup and absorption are quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence, these results also allow for the estimation of transition times between metastable states of this class of processes. April 25, Kavita Ramanan, Brown Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs Abstract: Many applications can be modeled as a large system of homogeneous particles interacting on a graph, in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles.
When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdős-Rényi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu. April 26, Colloquium, Kavita Ramanan, Brown Title: Tales of Random Projections Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramér's theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better-behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
In this section we contrast the classical and quantum mechanical treatments of the harmonic oscillator, and we describe some of the properties that can be calculated using the quantum mechanical harmonic oscillator model. The problems at the end of the chapter require that you do some of these calculations, which involve the evaluation of non-trivial integrals. Methods for evaluating such integrals are provided in a detailed math supplement. These integrals are important; they will also appear in later chapters on electronic structure. Working through the problems with the support of that supplement will give you the opportunity to engage the mathematics on your own terms and deepen your understanding of the material in this section. For a classical oscillator as described in Section 6.2, we know exactly the position, velocity, and momentum as functions of time. The frequency of the oscillator (or normal mode) is determined by the effective mass M and the effective force constant K of the oscillating system and does not change unless one of these quantities is changed. There are no restrictions on the energy of the oscillator, and changes in the energy of the oscillator produce changes in the amplitude of the vibrations experienced by the oscillator. For the quantum mechanical oscillator, the oscillation frequency of a given normal mode is still controlled by the mass and the force constant (or, equivalently, by the associated potential energy function). However, the energy of the oscillator is limited to certain values. The allowed quantized energy levels are equally spaced and are related to the oscillator frequency as given by Equation \(\ref{6-30}\).
\[E_v = \left ( v + \dfrac {1}{2} \right ) \hbar \omega \label {6-30}\] with \[v = 0, 1, 2, 3, \cdots \] In a quantum mechanical oscillator, we cannot specify the position of the oscillator (the exact displacement from the equilibrium position) or its velocity as a function of time; we can only talk about the probability of the oscillator being displaced from equilibrium by a certain amount. This probability is given by \[Pr [Q \text{ to } Q + dQ] = \Psi ^*_v (Q) \Psi _v (Q)\, dQ \label {6-32}\] We can, however, calculate the average displacement and the mean square displacement of the atoms relative to their equilibrium positions. This average is just \(\left \langle Q \right \rangle\), the expectation value for Q, and the mean square displacement is \(\left \langle Q^2 \right \rangle\), the expectation value for \(Q^2\). Similarly we can calculate the average momentum \(\left \langle P_Q \right \rangle\) and the mean square momentum \(\left \langle P^2_Q \right \rangle\), but we cannot specify the momentum as a function of time. Physically, what do we expect to find for the average displacement and the average momentum? Since the potential energy function is symmetric around Q = 0, we expect values of Q > 0 to be just as likely as Q < 0, so the average value of Q should be zero. Likewise, motion in either direction is equally likely, so the average momentum should be zero as well. These results for the average displacement and average momentum do not mean that the harmonic oscillator is sitting still. As in the particle-in-a-box case, we can imagine the quantum mechanical harmonic oscillator as moving back and forth and therefore having an average momentum of zero. Since the lowest allowed harmonic oscillator energy, \(E_0\), is \(\dfrac{\hbar \omega}{2}\) and not 0, the atoms in a molecule must be moving even in the lowest vibrational energy state. This phenomenon is called the zero-point energy or the zero-point motion, and it stands in direct contrast to the classical picture of a vibrating molecule.
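The expectation values just introduced can be checked numerically. Below is a minimal sketch (an illustration, not part of the text) that works in the dimensionless variable x = Q/β, where the oscillator eigenfunctions are Hermite polynomials times a Gaussian; it confirms that ⟨x⟩ = 0 for every state and that ⟨x²⟩ = v + 1/2:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def psi(v, x):
    # v-th harmonic oscillator eigenfunction in the dimensionless variable x = Q/beta
    coef = np.zeros(v + 1)
    coef[v] = 1.0  # select the physicists' Hermite polynomial H_v
    norm = 1.0 / sqrt(2.0**v * factorial(v) * sqrt(pi))
    return norm * hermval(x, coef) * np.exp(-x**2 / 2)

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

moments = {}
for v in range(3):
    density = psi(v, x) ** 2                  # probability density, Eq. (6-32)
    mean_x = np.sum(x * density) * dx         # <x>: zero by symmetry
    mean_x2 = np.sum(x**2 * density) * dx     # <x^2>: equals v + 1/2
    moments[v] = (mean_x, mean_x2)
```

In physical units this gives \(\left \langle Q^2 \right \rangle = (v + \tfrac{1}{2})\beta^2\), so the root-mean-square displacement grows with the quantum number v.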
Classically, the lowest energy available to an oscillator is zero, which means the momentum also is zero, and the oscillator is not moving. Exercise 6.23b Compare the quantum mechanical harmonic oscillator to the classical harmonic oscillator at v = 1 and v = 50. Since the average values of the displacement and momentum are all zero and do not facilitate comparisons among the various normal modes and energy levels, we need to find other quantities that can be used for this purpose. We can use the root-mean-square deviation of the displacement (also known as the standard deviation of the displacement) and the root-mean-square momentum as measures of the uncertainty in the oscillator's position and momentum. These uncertainties are calculated in Problem 3 at the end of this chapter. For a molecular vibration, these quantities represent the standard deviation in the bond length and the standard deviation in the momentum of the atoms from the average values of zero, so they provide us with a measure of the relative displacement and the momentum associated with each normal mode in all its allowed energy levels. These are important quantities to determine because vibrational excitation changes the size and symmetry (or shape) of molecules. Such changes affect chemical reactivity, the absorption and emission of radiation, and the dissipation of energy in radiationless transitions. In Problem 2, we show that the product of the standard deviations for the displacement and the momentum, \(\sigma_Q\) and \(\sigma_p\), satisfies the Heisenberg Uncertainty Principle. \[\sigma_Q \sigma_p \ge \dfrac{\hbar}{2}\] The harmonic oscillator wavefunctions form an orthonormal set, which means that all functions in the set are normalized individually \[\int \limits _{-\infty}^{\infty} \Psi ^*_v (x) \Psi _v (x) dx = 1 \label {6-33}\] and are orthogonal to each other \[\int \limits _{-\infty}^{\infty} \Psi ^*_{v'} (x) \Psi _v (x) dx = 0 \label {6-34}\] for \(v' \ne v\).
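Equations \(\ref{6-33}\) and \(\ref{6-34}\) can also be verified numerically. A short sketch (in dimensionless units; an illustration, not from the text) builds the overlap matrix for the first four states and checks that it is the identity:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def psi(v, x):
    # Normalized oscillator eigenfunction in dimensionless units
    coef = np.zeros(v + 1)
    coef[v] = 1.0
    return hermval(x, coef) * np.exp(-x**2 / 2) / sqrt(2.0**v * factorial(v) * sqrt(pi))

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# S[a, b] approximates the integral of psi_a * psi_b over the real line;
# orthonormality means S should come out as the identity matrix.
S = np.array([[np.sum(psi(a, x) * psi(b, x)) * dx for b in range(4)]
              for a in range(4)])
```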
The fact that a family of wavefunctions forms an orthonormal set is often helpful in simplifying complicated integrals. We will use these properties in Section 6.6, for example, when we determine the harmonic oscillator selection rules for vibrational transitions in a molecule and calculate the absorption coefficients for the absorption of infrared radiation. Finally, we can calculate the probability that a harmonic oscillator is in the classically forbidden region. What does this tantalizing statement mean? Classically, the maximum extension of an oscillator is obtained by equating the total energy of the oscillator to the potential energy, because at the maximum extension all the energy is in the form of potential energy. If all the energy weren't in the form of potential energy at this point, the oscillator would have kinetic energy and momentum and could continue to extend further away from its rest position. Interestingly, as we show below, the wavefunctions of the quantum mechanical oscillator extend beyond the classical limit, i.e. beyond where the particle can be according to classical mechanics. The lowest allowed energy for the quantum mechanical oscillator is called the zero-point energy, \(E_0 = \dfrac {\hbar \omega}{2} \). Using the classical picture described in the preceding paragraph, this total energy must equal the potential energy of the oscillator at its maximum extension. We define this classical limit of the amplitude of the oscillator displacement as \(Q_0\). 
When we equate the zero-point energy for a particular normal mode to the potential energy of the oscillator in that normal mode, we obtain \[ \dfrac {\hbar \omega}{2} = \dfrac {KQ^2_0}{2} \label {6-35}\] Recall that K is the effective force constant of the oscillator in a particular normal mode and that the frequency of the normal mode is given by Equation \(\ref{6-31}\), which is \[\omega = \sqrt {\dfrac {K}{M}} \label {6-31}\] Solving for \(Q_0\) in Equation \(\ref{6-35}\) by substituting for \(\omega\) and rearranging, we obtain the very interesting result \[Q^2_0 = \dfrac {\hbar \omega}{K} = \dfrac {\hbar}{M\omega} = \dfrac {\hbar}{\sqrt {KM}} = {\beta}^2 \label {6-36}\] Here we see that \(\beta\), the parameter we introduced in Equation 6-20, is more than just a way to collect variables; \(\beta\) has physical significance. It is the classical limit to the amplitude (maximum extension) of an oscillator with energy \(E_0 = \dfrac {\hbar \omega}{2} \). Because \(\beta\) has this meaning, the variable x gives the displacement of the oscillator from its equilibrium position in units of the maximum classically allowed displacement for the v = 0 state (the lowest energy state). Exercise 6.24 The HCl equilibrium bond length is 0.127 nm and the v = 0 to v = 1 transition is observed in the infrared at 2886 cm\(^{-1}\). Compute the vibrational energy of HCl in its lowest state. Compute the classical limit for the stretching of the HCl bond from its equilibrium length in this state. What percent of the equilibrium bond length is this extension? The classical limit, \(Q_0\), for the lowest-energy state is given by Equation \(\ref{6-36}\); i.e., \(Q_0 = \pm \beta\), or \(x = \dfrac {Q_0}{\beta} = \pm 1 \). Examination of the quantum mechanical wavefunction for the lowest-energy state reveals that the wavefunction \(\Psi_0(x)\) extends beyond these points.
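A sketch of the arithmetic behind Exercise 6.24. The ¹H and ³⁵Cl masses and the physical constants are assumed values from standard tables (they are not given in the text); the reduced mass of HCl plays the role of the effective mass M:

```python
from math import pi, sqrt

hbar = 1.054571817e-34                  # J s
c_cm = 2.99792458e10                    # speed of light in cm/s, so cm^-1 works directly
u = 1.66053907e-27                      # kg per atomic mass unit
m_H, m_Cl = 1.00783 * u, 34.96885 * u   # assumed 1H and 35Cl atomic masses

nu_tilde = 2886.0                       # observed v = 0 -> 1 transition, cm^-1
mu = m_H * m_Cl / (m_H + m_Cl)          # reduced (effective) mass M
omega = 2 * pi * c_cm * nu_tilde        # angular frequency, rad/s

E0 = 0.5 * hbar * omega                 # zero-point energy, J
beta = sqrt(hbar / (mu * omega))        # classical limit Q0, in m (Eq. 6-36)
percent = 100 * beta / 0.127e-9         # extension as % of the 0.127 nm bond length
```

The classical limit comes out near 0.011 nm, roughly 9% of the equilibrium bond length, which is why vibrational excitation noticeably changes molecular size.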
Higher energy states have higher total energies, so the classical limits to the amplitude of the displacement will be larger for these states. Exercise 6.25 Mark x = +1 and x = -1 on the graph of \(|\Psi _0 (x)|^2\) in Figure 6.7 and note whether the wavefunction is zero at these points. The observation that the wavefunctions are not zero at the classical limit means that the quantum mechanical oscillator has a finite probability of having a displacement that is larger than what is classically possible. The oscillator can be in a region of space where the potential energy is greater than the total energy. Classically, when the potential energy equals the total energy, the kinetic energy and the velocity are zero, and the oscillator cannot pass this point. A quantum mechanical oscillator, however, has a finite probability of passing this point. For a molecular vibration, this property means that the amplitude of the vibration is larger than it would be in a classical picture. In some situations, a larger amplitude vibration can enhance the chemical reactivity of a molecule. Exercise 6.26 Plot the probability density for the v = 0 and v = 1 states. Mark the classical limits on each of the plots; the limits are different because the total energy is different for v = 0 and v = 1. Shade in the regions of the probability densities that extend beyond the classical limit. The fact that a quantum mechanical oscillator has a finite probability of entering the classically forbidden region of space is a consequence of the wave property of matter and the Heisenberg Uncertainty Principle. A wave changes gradually, and the wavefunction approaches zero gradually as the potential energy approaches infinity. We should be able to calculate the probability that the quantum mechanical harmonic oscillator is in the classically forbidden region for the lowest energy state, the state with v = 0.
The classically forbidden region is shown by the shading of the regions beyond \(Q_0\) in the graph you constructed for Exercise 6.26. The area of this shaded region gives the probability that the bond oscillation will extend into the forbidden region. To calculate this probability, we use \[ Pr [ \text {forbidden}] = 1 - Pr [ \text {allowed}] \label {6-37}\] because the integral from 0 to \(Q_0\) for the allowed region can be found in integral tables while the integral from \(Q_0\) to ∞ cannot. The form of the integral, Pr[allowed], to evaluate is \[Pr [ \text {allowed}] = 2 \int \limits _0^{Q_0} \Psi _0^* (Q) \Psi _0 (Q) dQ \label {6-38}\] The factor 2 appears in Equation \(\ref{6-38}\) because of the symmetry of the wavefunction, which extends from \(-Q_0\) to \(+Q_0\). To evaluate the integral in Equation \(\ref{6-38}\), use the wavefunction and do the integration in terms of x, Equation (6-29). Recall that for v = 0, Q = \(Q_0\) corresponds to x = 1. Including the normalization constant, Equation \(\ref{6-28}\) produces \[Pr [ \text {allowed}] = \dfrac {2}{\sqrt {\pi}} \int \limits _0^1 \exp (-x^2) dx \label {6-39}\] The integral in Equation \(\ref{6-39}\) is the error function, erf, and can only be evaluated numerically. Values can be found in books of mathematical tables or obtained with Mathcad. When the limit of integration is 1, erf(1) = 0.843 and Pr[forbidden] = 0.157. This result means that the quantum mechanical oscillator can be found in the forbidden region about 16% of the time. This effect is substantial and leads to the phenomenon called quantum mechanical tunneling. Exercise 6.27 Numerically verify that Pr[allowed] in Equation (6-39) equals 0.843. To obtain a value for the integral, do not use symbolic integration or symbolic equals. Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
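Exercise 6.27 can be done in a few lines of Python: the standard library's math.erf gives the closed form, and a simple midpoint sum evaluates the integral directly so that no symbolic integration is involved. This sketch is an illustration, not part of the original text:

```python
from math import erf, exp, pi, sqrt

# Closed form of Eq. (6-39): Pr[allowed] = erf(1)
pr_allowed = erf(1.0)

# Direct midpoint-rule quadrature of (2/sqrt(pi)) * integral_0^1 exp(-x^2) dx
n = 100_000
dx = 1.0 / n
quad = (2.0 / sqrt(pi)) * sum(exp(-((i + 0.5) * dx) ** 2) for i in range(n)) * dx

pr_forbidden = 1.0 - pr_allowed   # probability of the classically forbidden region
```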
Update: I went over this answer and clarified some parts. Most importantly, I expanded the Forces section to connect better with the question. I like your reasoning and you actually come to the right conclusions, so congratulations on that! But understanding the relation between forces and particles isn't that simple, and in my opinion the best one can do is provide you with the bottom-up description of how one arrives at the notion of force when one starts with particles. So here comes the firmware you wanted. I hope you won't find it too long-winded. Particle physics So let's start with particle physics. The building blocks are particles and interactions between them. That's all there is to it. Imagine you have a bunch of particles of various types (massive, massless, scalar, vector, charged, color-charged and so on) and at first you could suppose that all kinds of processes between these particles are allowed (e.g. three photons meeting at a point and creating a gluon and a quark; or seven electrons meeting at a point and creating four electrons, a photon and three gravitons). Physics could indeed look like this and it would be an incomprehensible mess if it did. Fortunately for us, there are a few organizing principles that make particle physics reasonably simple (but not too simple, mind you!). These principles are known as conservation laws. After having done a large number of experiments, we became convinced that electric charge is conserved (the total charge is the same before and after the experiment). We have also found that momentum is conserved. And lots of other things too. This means that processes such as the ones I mentioned before are already ruled out because they violate some of these laws. Only processes that survive the (very strict) conservation requirements are to be considered possible in a theory that could describe our world. Another important principle is that we want our interactions simple.
This one is not of an experimental nature, but it is appealing, and in any case it's easier to start with simpler interactions and introduce more complex ones only if that doesn't work. Again fortunately for us, it turns out basic interactions are very simple. At a given interaction point there is always just a small number of particles. Namely:

- two: a particle changing direction
- three: a particle absorbing another particle, e.g. $e^- + \gamma \to e^-$, or one particle decaying to two other particles, e.g. $W^- \to e^- + \bar\nu_e$
- four: these don't have as nice an interpretation as the above ones; to give an example, one has e.g. two gluons going in and two gluons going out

So one example of such a simple process is an electron absorbing a photon. This violates no conservation law and actually turns out to be the building block of a theory of electromagnetism. Also, the fact that there is a nice theory for this interaction is connected to the fact that charge is conserved (and in general there is a relation between conservation of quantities and the way we build our theories), but this connection is better left for another question. Back to the forces So, you are asking yourself what all that long and boring talk was about, aren't you? The main point is: our world (as we currently understand it) is indeed described by all those different species of particles that are omnipresent everywhere and interact through the bizarre interactions allowed by the conservation laws. So when one wants to understand the electromagnetic force all the way down, there is no other way (actually, there is one and I will mention it in the end; but I didn't want to over-complicate the picture) than to imagine a huge number of photons flying all around, being absorbed and emitted by charged particles all the time. So let's illustrate this on your problem of the Coulomb interaction between two electrons.
The complete contribution to the force between the two electrons consists of all the possible combinations of elementary processes. E.g., the first electron emits a photon, which then flies to the other electron and gets absorbed; or the first electron emits a photon, which changes into an electron-positron pair that quickly recombines into another photon, which then flies to the second electron and gets absorbed. There is a huge number of these processes to take into account, but actually the simplest ones contribute the most. But while we're at the Coulomb force, I'd like to mention a striking difference from the classical case. There the theory tells you that you have an EM field even when only one electron is present. But in quantum theory this wouldn't make sense. The electron would need to emit photons (because this is what corresponds to the field) but they would have nowhere to fly to. Besides, the electron would be losing energy and so wouldn't be stable. And there are various other reasons why this is not possible. What I am getting at is that a single electron doesn't produce any EM field until it meets another charged particle! Actually, this should make sense if you think about it for a while. How do you detect there is an electron if nothing else at all is present? The simple answer is: you're out of luck, you won't detect it. You always need some test particles. So the classical picture of an electrostatic EM field of a point particle describes only what would happen if another particle were inserted into that field. The above talk is part of a bigger bundle of issues with the measurement (and indeed even the very definition) of the mass, charge and other properties of a system in quantum field theory. These issues are resolved by means of renormalization, but let's leave that for another day.
Quantum fields Well, it turns out all of the above talk about particles (although visually appealing and technically very useful) is just an approximation to the more precise picture: there exists just one quantum field for every particle type, and the huge number of particles everywhere corresponds to sharp local peaks of that field. These fields then interact by means of quite complex interactions that reduce to the usual particle picture when one looks at what those peaks are doing as they come close together. This field view can be quite enlightening for certain topics and quite useless for others. One place where it is actually illuminating is when one is trying to understand the spontaneous appearance of so-called virtual particle-antiparticle pairs. It's not clear where they would appear from as particles. But from the point of view of the field, they are just local excitations. One should imagine a quantum field as a sheet that is wiggling around all the time (by means of its inherent quantum wigglage) and from time to time wiggling hugely enough to create a peak that corresponds to the mentioned pair. This post imported from StackExchange Physics at 2014-04-01 16:26 (UCT), posted by SE-user Marek
Abstract Our main result is a nontrivial lower bound for the distortion of some specific knots. In particular, we show that the distortion of the torus knot $T_{p,q}$ satisfies $\delta(T_{p,q}) \geq \frac 1{160}\min(p,q)$. This answers a 1983 question of Gromov.
[bing] R. H. Bing, "An alternative proof that $3$-manifolds can be triangulated," Ann. of Math., vol. 69, pp. 37-65, 1959.
[dennesullivan] E. Denne and J. M. Sullivan, "The distortion of a knotted curve," Proc. Amer. Math. Soc., vol. 137, iss. 3, pp. 1139-1148, 2009.
[foxwild] R. H. Fox, "A remarkable simple closed curve," Ann. of Math., vol. 50, pp. 264-265, 1949.
[freedman] M. H. Freedman, Z. He, and Z. Wang, "Möbius energy of knots and unknots," Ann. of Math., vol. 139, iss. 1, pp. 1-50, 1994.
[gromov1] M. Gromov, "Homotopical effects of dilatation," J. Differential Geom., vol. 13, iss. 3, pp. 303-310, 1978.
[gromov2] M. Gromov, "Filling Riemannian manifolds," J. Differential Geom., vol. 18, iss. 1, pp. 1-147, 1983.
[hamilton] A. J. S. Hamilton, "The triangulation of $3$-manifolds," Quart. J. Math. Oxford Ser., vol. 27, iss. 105, pp. 63-70, 1976.
[kirsie] R. C. Kirby and L. C. Siebenmann, Foundational Essays on Topological Manifolds, Smoothings, and Triangulations, with notes by J. Milnor and M. Atiyah, Ann. of Math. Studies, vol. 88, Princeton, N.J.: Princeton Univ. Press, 1977.
[moise] E. E. Moise, "Affine structures in $3$-manifolds. V. The triangulation theorem and Hauptvermutung," Ann. of Math., vol. 56, pp. 96-114, 1952.
[mullikin] C. A. S. Mullikin, "A class of curves in every knot type where chords of high distortion are common," Topology Appl., vol. 154, iss. 14, pp. 2697-2708, 2007.
[munkres] J. Munkres, "Obstructions to the smoothing of piecewise-differentiable homeomorphisms," Ann. of Math., vol. 72, pp. 521-554, 1960.
[ohara1] J. O'Hara, "Family of energy functionals of knots," Topology Appl., vol. 48, iss. 2, pp. 147-161, 1992.
[ohara2] J. O'Hara, "Energy functionals of knots. II," Topology Appl., vol. 56, iss. 1, pp. 45-61, 1994.
Some results on ordinary words of standard Reed-Solomon codes 1 Mathematical College, Sichuan University, Chengdu 610064, P. R. China; 2 Department of Mathematics, Sichuan Tourism University, Chengdu 610100, P. R. China A received word ${u}\in{\mathbb F}_q^{q-1}$ is called an ordinary word of $RS_q({\mathbb F}_q^*, k)$ if the error distance satisfies $d({u}, RS_q({\mathbb F}_q^*, k))=n-\deg(u(x))$, where $n = q-1$ and $u(x)$ is the Lagrange interpolation polynomial of $u$. In this paper, we make use of the polynomial method and, in particular, the König-Rados theorem on the number of nonzero solutions of polynomial equations over finite fields, to show that if $q\geq 4$ and $2\leq{k}\leq{q-2}$, then a received word ${u}\in{\mathbb F}_q^{q-1}$ of degree $q-2$ is an ordinary word of $RS_q({\mathbb F}_q^*, k)$ if and only if its Lagrange interpolation polynomial $u(x)$ is of the form $$u(x)=\lambda\sum\limits_{i=k}^{q-2}a^{q-2-i}x^i+f_{\leq k-1}(x)$$ with $a, \lambda\in{\mathbb F}_q^*$ and $f_{\leq k-1}(x)\in {\mathbb F}_q[x]$ of degree at most $k-1$. This partially answers an open problem proposed by J. Y. Li and D. Q. Wan in [On the subset sum problem over finite fields, Finite Fields Appl., 14 (2008), 911-929]. Citation: Xiaofan Xu, Yongchao Xu, Shaofang Hong. Some results on ordinary words of standard Reed-Solomon codes. AIMS Mathematics, 2019, 4(5): 1336-1347. doi: 10.3934/math.2019.5.1336 References: 1. Q. Cheng and E. Murray, On deciding deep holes of Reed-Solomon codes. In: J. Y. Cai, S. B. Cooper, H. Zhu (eds), Theory and Applications of Models of Computation, TAMC 2007, Lecture Notes in Computer Science, vol. 4484, Springer, Berlin, Heidelberg. 2. S. F. Hong and R. J. Wu, On deep holes of generalized Reed-Solomon codes, AIMS Math., 1 (2016), 96-101. 3. J. Y. Li and D. Q. Wan, On the subset sum problem over finite fields, Finite Fields Th. App., 14 (2008), 911-929. 4. Y. J. Li and D.
Q. Wan, On error distance of Reed-Solomon codes, Sci. China Math., 51 (2008), 1982-1988. 5. Y. J. Li and G. Z. Zhu, On the error distance of extended Reed-Solomon codes, Adv. Math. Commun., 10 (2016), 413-427. 6. R. Lidl and H. Niederreiter, Finite Fields, Encyclopedia of Mathematics and its Applications, 2nd ed., Cambridge: Cambridge University Press, 1997. 7. G. Rados, Zur Theorie der Congruenzen höheren Grades, J. reine angew. Math., 99 (1886), 258-260. 8. G. Raussnitz, Zur Theorie der Congruenzen höheren Grades, Math. Naturw. Ber. Ungarn., 1 (1882/83), 266-278. 9. R. J. Wu and S. F. Hong, On deep holes of standard Reed-Solomon codes, Sci. China Math., 55 (2012), 2447-2455. 10. X. F. Xu, S. F. Hong and Y. C. Xu, On deep holes of primitive projective Reed-Solomon codes, Sci. Sin. Math., 48 (2018), 1087-1094. 11. X. F. Xu and Y. C. Xu, Some results on deep holes of generalized projective Reed-Solomon codes, AIMS Math., 4 (2019), 176-192. 12. G. Z. Zhu and D. Q. Wan, Computing error distance of Reed-Solomon codes. In: TAMC 2012, Proceedings of the 9th Annual International Conference on Theory and Applications of Models of Computation, (2012), 214-224. © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
Fit the Thomas Point Process by Minimum Contrast Fits the Thomas point process to a point pattern dataset by the Method of Minimum Contrast using the K function. Usage thomas.estK(X, startpar=c(kappa=1,scale=1), lambda=NULL, q = 1/4, p = 2, rmin = NULL, rmax = NULL, ...) Arguments X Data to which the Thomas model will be fitted. Either a point pattern or a summary statistic. See Details. startpar Vector of starting values for the parameters of the Thomas process. lambda Optional. An estimate of the intensity of the point process. q,p Optional. Exponents for the contrast criterion. rmin, rmax Optional. The interval of \(r\) values for the contrast criterion. ... Optional arguments passed to optim to control the optimisation algorithm. See Details. Details This algorithm fits the Thomas point process model to a point pattern dataset by the Method of Minimum Contrast, using the \(K\) function. The argument X can be either a point pattern: an object of class "ppp" representing a point pattern dataset. The \(K\) function of the point pattern will be computed using Kest, and the method of minimum contrast will be applied to this. a summary statistic: an object of class "fv" containing the values of a summary statistic, computed for a point pattern dataset. The summary statistic should be the \(K\) function, and this object should have been obtained by a call to Kest or one of its relatives. The algorithm fits the Thomas point process to X by finding the parameters of the Thomas model which give the closest match between the theoretical \(K\) function of the Thomas process and the observed \(K\) function. For a more detailed explanation of the Method of Minimum Contrast, see mincontrast. The Thomas point process is described in Moller and Waagepetersen (2003, pp. 61--62).
It is a cluster process formed by taking a pattern of parent points, generated according to a Poisson process with intensity \(\kappa\), and around each parent point, generating a random number of offspring points, such that the number of offspring of each parent is a Poisson random variable with mean \(\mu\), and the locations of the offspring points of one parent are independent and isotropically Normally distributed around the parent point with standard deviation \(\sigma\) which is equal to the parameter scale. The named vector of starting values can use either sigma2 (\(\sigma^2\)) or scale as the name of the second component, but the latter is recommended for consistency with other cluster models. The theoretical \(K\)-function of the Thomas process is $$ K(r) = \pi r^2 + \frac 1 \kappa (1 - \exp(-\frac{r^2}{4\sigma^2})). $$ The theoretical intensity of the Thomas process is \(\lambda = \kappa \mu\). In this algorithm, the Method of Minimum Contrast is first used to find optimal values of the parameters \(\kappa\) and \(\sigma^2\). Then the remaining parameter \(\mu\) is inferred from the estimated intensity \(\lambda\). If the argument lambda is provided, then this is used as the value of \(\lambda\). Otherwise, if X is a point pattern, then \(\lambda\) will be estimated from X. If X is a summary statistic and lambda is missing, then the intensity \(\lambda\) cannot be estimated, and the parameter \(\mu\) will be returned as NA. The remaining arguments rmin, rmax, q, p control the method of minimum contrast; see mincontrast. The Thomas process can be simulated using rThomas. Homogeneous or inhomogeneous Thomas process models can also be fitted using the function kppm. The optimisation algorithm can be controlled through the additional arguments "..." which are passed to the optimisation function optim.
For example, to constrain the parameter values to a certain range, use the argument method="L-BFGS-B" to select an optimisation algorithm that respects box constraints, and use the arguments lower and upper to specify (vectors of) minimum and maximum values for each parameter. Value An object of class "minconfit". There are methods for printing and plotting this object. It contains the following main components: a vector of fitted parameter values; a function value table (object of class "fv") containing the observed values of the summary statistic (observed) and the theoretical values of the summary statistic computed from the fitted model parameters. References Diggle, P. J., Besag, J. and Gleaves, J. T. (1976) Statistical analysis of spatial point patterns by means of distance methods. Biometrics 32, 659--667. Moller, J. and Waagepetersen, R. (2003). Statistical Inference and Simulation for Spatial Point Processes. Chapman and Hall/CRC, Boca Raton. Thomas, M. (1949) A generalisation of Poisson's binomial limit for use in ecology. Biometrika 36, 18--25. Waagepetersen, R. (2007) An estimating function approach to inference for inhomogeneous Neyman-Scott processes. Biometrics 63, 252--258. See Also Aliases thomas.estK Examples # NOT RUN {
data(redwood)
u <- thomas.estK(redwood, c(kappa=10, scale=0.1))
u
plot(u)
# }
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
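The contrast criterion itself is simple enough to sketch outside R. The following Python snippet is a toy re-implementation for illustration (not spatstat code; all helper names are ours): it minimises the discretised contrast \(\sum_r |\hat K(r)^q - K_\theta(r)^q|^p\) for the theoretical Thomas \(K\)-function given above, and recovers known parameters from a synthetic \(\hat K\).

```python
import numpy as np
from scipy.optimize import minimize

def k_thomas(r, kappa, sigma):
    """Theoretical K-function of the Thomas process (Moller & Waagepetersen 2003)."""
    return np.pi * r**2 + (1.0 - np.exp(-r**2 / (4.0 * sigma**2))) / kappa

def min_contrast(r, k_obs, startpar, q=0.25, p=2):
    """Minimise the discretised contrast sum |K_obs^q - K_model^q|^p over (kappa, sigma)."""
    def contrast(theta):
        kappa, sigma = theta
        if kappa <= 0 or sigma <= 0:   # keep the optimiser in the valid region
            return np.inf
        return np.sum(np.abs(k_obs**q - k_thomas(r, kappa, sigma)**q)**p)
    res = minimize(contrast, startpar, method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 5000})
    return res.x

# Sanity check: build an "observed" K from known parameters and recover them.
r = np.linspace(0.01, 0.25, 100)
k_obs = k_thomas(r, kappa=10.0, sigma=0.1)
kappa_hat, sigma_hat = min_contrast(r, k_obs, startpar=(5.0, 0.05))
print(kappa_hat, sigma_hat)
```

In practice `thomas.estK` estimates \(\hat K\) from data with `Kest`, which is what makes the restriction to `rmin`/`rmax` and the dampening exponent `q` matter; with exact model values, as here, the contrast has its minimum at the true parameters.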
Hyperbolic Embeddings with a Hopefully Right Amount of Hyperbole by Chris De Sa, Albert Gu, Chris Ré, and Fred Sala Valuable knowledge is encoded in structured data such as carefully curated databases, graphs of disease interactions, and even low-level information like hierarchies of synonyms. Embedding these structured, discrete objects in a way that can be used with modern machine learning methods, including deep learning, is challenging. Fundamentally, the problem is that these objects are discrete and structured, while much of machine learning works on continuous and unstructured data. Recent work proposes using a fancy-sounding technique—hyperbolic geometry—to encode these structures. This post describes the magic of hyperbolic embeddings, and some of our recent progress solving the underlying optimization problems. We cover a lot of the basics, and, for experts, we show a simple linear-time approach that offers excellent quality (0.989 in the MAP graph reconstruction metric on WordNet synonyms—better than previously published approaches—with just two dimensions!), how to solve the optimal recovery problem and dimensionality reduction (called principal geodesic analysis), and tradeoffs and theoretical properties for these strategies; these give us a new simple and scalable PyTorch-based implementation that we hope people can extend! Hyperbolic embeddings have captured the attention of the machine learning community through two exciting recent proposals. The motivation is to combine structural information with continuous representations suitable for machine learning methods. One example is embedding taxonomies (such as Wikipedia categories, lexical databases like WordNet, and phylogenetic relations). The big goal when embedding a space into another is to preserve distances and more complex relationships. It turns out that hyperbolic space can better embed graphs (particularly hierarchical graphs like trees) than is possible in Euclidean space.
Even better—angles in the hyperbolic world are the same as in Euclidean space, suggesting that hyperbolic embeddings are useful for downstream applications (and not just a quirky theoretical idea). In this post, we describe the exciting properties of hyperbolic space and introduce a combinatorial construction (building on an elegant algorithm by Sarkar) for embedding tree-like graphs. In our work, we extend this construction with tools from coding theory. We also solve what is called the exact recovery problem (given only distance information, recover the underlying hyperbolic points). These ideas reveal trade-offs involving precision, dimensionality, and fidelity that affect all hyperbolic embeddings. We built a scalable PyTorch implementation using the insights we gained from our exploration. We also investigated the advantages of embedding structured data in hyperbolic space for certain tasks within natural language processing and relationship prediction using knowledge bases. We hope our effort inspires further development of techniques for constructing hyperbolic embeddings and incorporating them into more applications. What’s Great About Hyperbolic Space? Following prior work, our hyperbolic spaces of choice are the Poincaré disk, in which all points are in the interior of the unit disk in two dimensions, and the Poincaré ball, its higher-dimensional cousin. Hyperbolic space is a beautiful and sometimes weird place. The “shortest paths”, called geodesics, are curved in hyperbolic space. The figure shows parallel lines (geodesics); for example, there are infinitely many lines through a point parallel to a given line. The picture is from the Wikipedia article that contains much more information (or see Geometry). The hyperbolic distance between two points \(x\) and \(y\) has a particularly nice form: \[ d_H(x,y) = \mathsf{acosh}\left(1+2\frac{\|x-y\|^2}{(1-\|x\|^2)(1-\|y\|^2)} \right). \] Consider the distance from the origin \(d_H(O,x)\). 
Notice that as \(x\) approaches the edge of the disk, the distance grows toward infinity, and the space is increasingly densely packed. Let's dig one level deeper to see how we can embed short paths that define graph-like structures—in a much better way than we could in Euclidean space. The goal when embedding a graph G into a space \(V\) is to preserve the graph distance (the shortest path between a pair of vertices) in the space \(V\). If \(x\) and \(y\) are two vertices and their graph distance is \(d(x,y)\), we would like the embedding to have \(d_V(x,y)\) close to \(d(x,y)\). Now consider two children \(x\), \(y\) of parent \(z\) in a tree. We place \(z\) at the origin \(O\). The graph distance is \(d(x,y) = d(x,O) + d(O,y)\), so normalizing we get \(\frac{d(x,y)}{d(x,O)+d(O,y)} = 1\). Time to embed \(x\) and \(y\) in two-dimensional space. We start with \(x\) and \(y\) at the origin, and with equal speed move them toward the edge of the unit disk. What happens to their normalized distance? In Euclidean space, \(\frac{d_E(x,y)}{d_E(x,O)+d_E(O,y)}\) is a constant, as shown by the blue line in the plot above, so Euclidean distance does not seem to capture the graph-like structure! But in hyperbolic space, \(\frac{d_H(x,y)}{d_H(x,O)+d_H(O,y)}\) (red line) approaches \(1\). We can't exactly achieve the graph distance in hyperbolic space, but we can get arbitrarily close! The tradeoff is that we have to make the edges increasingly long, which pushes the points out toward the edge of the disk. Now that we understand this basic idea, let's actually see a simple way to embed a tree, due to a remarkably simple construction of Sarkar. A Simple Combinatorial Way to Create Hyperbolic Embeddings. Sarkar knows! Since hyperbolic space is tree-like, it's natural to consider embedding trees—which we can do with arbitrarily low distortion, for any number of dimensions! In our paper, we show how to extend this technique to arbitrary graphs, a problem with a lot of history in Euclidean space.
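The distance formula and the normalized-distance claim are easy to check numerically. Here is a small Python sketch (helper names are ours) that moves two children of an origin-placed parent toward the disk edge along perpendicular geodesics and watches the ratio climb toward 1:

```python
import math

def poincare_dist(x, y):
    """Hyperbolic distance between points x, y inside the unit disk (Poincare model)."""
    diff2 = sum((a - b) ** 2 for a, b in zip(x, y))
    denom = (1.0 - sum(a * a for a in x)) * (1.0 - sum(b * b for b in y))
    return math.acosh(1.0 + 2.0 * diff2 / denom)

# Two children x, y of a parent placed at the origin O, pushed toward the edge:
# the normalized distance d(x,y) / (d(x,O) + d(O,y)) approaches 1.
O = (0.0, 0.0)
ratios = []
for t in (0.5, 0.9, 0.99, 0.999):
    x, y = (t, 0.0), (0.0, t)    # equal speed along two perpendicular radii
    ratios.append(poincare_dist(x, y) / (poincare_dist(x, O) + poincare_dist(O, y)))
print([round(r, 3) for r in ratios])   # monotonically increasing toward 1
```

In Euclidean space the same ratio would stay frozen at \(\sqrt{2}/2\) for this configuration, which is the blue-line-versus-red-line picture described above.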
We use a two-step strategy for embedding graphs into hyperbolic space: Embed a graph G = (V, E) into a tree T. Embed T into the Poincaré ball. To do task 2, we build on an elegant construction by Sarkar for two-dimensional hyperbolic space; we call this the combinatorial construction. The idea is to embed the root at the origin and recursively embed the children of each node in the tree by spacing them around a sphere centered at the parent. As we will see, the radius of the sphere can be set to precisely control the distortion. There are two factors to good embeddings: local and global. Locally, the children of each node should be spread out on the sphere around the parent. Globally, subtrees should be separated from each other; in hyperbolic space, the separation is determined by the sphere radius. Observe as we increase the radius (from 1 to 2 to 3), each subtree becomes increasingly separated: How do we precisely measure the quality of an embedding? We might want a local measure that checks whether neighbors remain closest to each other—but doesn’t care about the explicit distances. We also want a global measure that keeps track of these explicit values. We use MAP for our local measure; MAP captures how well each vertex's neighborhoods are preserved. For our global measure, we use distortion \[ \text{distortion}(G) = \sum_{x,y\in G, x\neq y} \frac{|d(x,y)-d_H(x,y)|}{d(x,y)} \] On the left, with poor separation, some nodes are closer to nodes from other subtrees than to their actual neighbors—leading to a poor MAP of 0.19. The middle embedding, with better separation, has MAP=0.68. The well-separated right embedding gets to a perfect MAP of 1.0. Now we turn to the global case—preserving the distances to minimize distortion. Recall that the further out towards the edge of the disk two points are, the closer their distance is to the path through the origin; the idea is to place them far enough to make the distortion as low as we desire. 
When we do this, the norms of the points get closer and closer to 1. For example, to store the point \((0.9999999,0.0)\), we need to represent numbers with at least 7 decimal points. In general, to store coordinate \(a\), we need roughly \(-\log_2(1-|a|)\) bits. The idea is illustrated by the right-most embedding above, where most of the points are clustered near the edge of the disk. This produces a tradeoff involving precision: the largest norm of a point is determined by the longest path in the tree (which we do not control) and the desired distortion (which we do control). That is, the better distortion we want, the more we pay for it in precision. Such precision-dimension-fidelity tradeoffs are unavoidable, and we show why in the paper! We can see where hyperbolic embeddings shine (short, bushy trees) and struggle (trees with long paths). Below we have three types of tree embeddings (designed to get a distortion under 0.1) with 31 nodes: the three-chains tree requires twice as many bits to represent vertices. The example tree only has 31 nodes—the effect gets much worse with large graphs.

Tree          Vertices   Max. Degree   Longest Path   Distortion   Bits of Precision
Three Chains  31         3             10             0.01         50
Binary        31         3             4              0.01         23
Star          31         30            1              0.04         18

A few other comments about the two-step strategy for embedding graphs: The combinatorial construction embeds trees with arbitrarily low distortion and is very fast! There is a large literature on embedding general graphs into trees, so that we can apply the strategy to general graphs and inherit a large number of beautiful results including bounds on the distortion. We describe in the paper the explicit link between precision and dimension. It turns out that in higher dimensions, one can use fewer bits of precision to represent the numbers.
How to use the higher dimensions appropriately is challenging, and we propose some techniques inspired by coding theory (a natural connection, since coding theory is all about fighting noise by separating codewords in space). This simple strategy is already quite useful, and it's able to quickly generate embeddings that have higher accuracy than other approaches to hyperbolic embeddings. We can also use it to warm start our PyTorch implementation (below). A simple modification enables us to embed forests: select a large distance \(s\), much larger than all intra-component distances. Generate a new connected graph \(G'\) where the distance between each pair of nodes lying in different components is set to \(s\), and embed \(G'\). Getting It Exactly Right: From Distances to Points with Hyperbolic Multidimensional Scaling A closely related problem is to recover a set of hyperbolic points \(x_1, x_2, \ldots, x_N\) without directly observing them. Instead, we are only shown the distances between points \(d_H(x_i,x_j)\). Our work shows we can (and how to) find these points! Even better, our technique doesn't necessarily require iteratively solving an optimization problem. A key technical step in the algorithm is normalizing the matrix of distances so that the center of mass of the points is at the origin. This mirrors the Euclidean solution to the problem (multidimensional scaling, or MDS). We study the properties of h-MDS under various noisy conditions (more technically involved!). We consider h-MDS's sensitivity to noise through a perturbation analysis. Not surprisingly, h-MDS solutions that involve points near the edge of the disk are more sensitive to noise than traditional Euclidean MDS. We study dimensionality reduction with principal geodesic analysis (PGA), which mirrors the case in which there is stochastic noise around the hyperplane.
PyTorch FTW We used the insights from our study to build a scalable PyTorch-based implementation for generating hyperbolic embeddings. Our implementation minimizes a loss function based on the hyperbolic distance; a key idea is to use the squared hyperbolic distance in the loss function, as the derivative of the (unsquared) hyperbolic distance function contains a singularity. Even better—our approaches (combinatorial, h-MDS) can be used to warm-start the PyTorch implementation. You can try producing your own hyperbolic embeddings with our code! Takeaways: Hyperbolic space is awesome and has many cool properties. These properties make it possible to embed graphs very well. There are fast ways to embed (combinatorial), optimal solutions (h-MDS), and lots of interesting theory! Try our (preliminary!) code and check out our paper for all the details. Data Release On our website, we release our hyperbolic entity embeddings in GloVe format; these can be integrated into applications related to knowledge base completion, or supplied as features to various NLP tasks such as question answering. We provide the hyperbolic embeddings for chosen hierarchies from WordNet, Wikidata and MusicBrainz, and also evaluate the intrinsic quality of the embeddings through hyperbolic analogues of "similarity" and "analogy". We thank Charles Srisuwananukorn and Theo Rekatsinas for their helpful suggestions.
In order to present affine spaces in this fashion you need a variation of operads called clones or cartesian operads (which eventually are equivalent to Lawvere theories). You can find some references on nLab. A clone is like a symmetric operad, but with an action of all maps between finite sets, not just bijections. However, it is more convenient to phrase the definition as follows. A clone is a functor $C \colon \mathsf{FinSet} \to \mathsf{Set}$ equipped with a family of substitution operators, i.e. for each pair of finite sets $S$ and $T$ we specify a function $(C S)^T \times C T \to C S$, and distinguished projection operations $\pi_s \in C S$ for each $s \in S$. These are supposed to satisfy some axioms (see below). I will call the clone for affine spaces $\mathbb{A}$. (And I will refrain from using vocabulary from probability theory; this would go better with convex spaces.) For each $S$, $\mathbb{A} S$ is the set of formal affine combinations of elements of $S$; functoriality of $\mathbb{A}$ is given by grouping coefficients of points together when they are identified by a function between finite sets. Substitution operations evaluate affine combinations of affine combinations. If I have a combination $b \in \mathbb{A} T$ and a family $a \colon T \to \mathbb{A}S$, I will denote the resulting substitution by $a \bullet b \in \mathbb{A}S$. (It is really just a shorthand for $s \mapsto \sum_{t \in T} a_{t, s} b_t$.) Moreover, $\pi_s = e_s$ is the trivial combination with coefficient $1$ at $s$ and $0$ elsewhere. We can now give the following definition. An affine space is a set $E$ equipped with operations $E^S \times \mathbb{A}S \to E$, $(x , a) \mapsto x \bullet a$ subject to the following axioms. For every finite set $S$ and all $x \colon S \to E$ and $s \in S$, $x \bullet e_s = x_s$. For every pair of finite sets $S$ and $T$ and all $x \colon S \to E$, $a \colon T \to \mathbb{A} S$ and $b \in \mathbb{A} T$, $x \bullet (a \bullet b) = (x \bullet a) \bullet b$.
For every function between finite sets $\phi \colon S \to S'$ and all $x \colon S' \to E$ and $a \in \mathbb{A} S$, $(x \phi) \bullet a = x \bullet (\phi a)$. In particular, $\mathbb{A} T$ for a fixed $T$ carries a standard structure of an affine space given by the structure maps of $\mathbb{A}$. In this case, the axioms specialize to some standard facts about formal affine combinations and they are in fact the same as axioms saying that $\mathbb{A}$ is a clone in the first place. This way you can recover the full definition of a clone which I neglected to write down above.
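To make the formalism concrete, here is a toy Python sketch of formal affine combinations and the substitution operator $a \bullet b$. All names here are ours, for illustration only: a combination over a finite set is a dict of coefficients summing to $1$ (negative coefficients are allowed, which is what distinguishes affine from convex combinations).

```python
from collections import defaultdict

def affine(comb):
    """A formal affine combination: coefficients must sum to 1."""
    assert abs(sum(comb.values()) - 1.0) < 1e-9, "coefficients must sum to 1"
    return dict(comb)

def proj(s):
    """The projection pi_s = e_s: coefficient 1 at s and 0 elsewhere."""
    return {s: 1.0}

def substitute(a, b):
    """Substitution a . b: b is a combination over T, a maps each t in T to a
    combination over S; the result is s -> sum_t a[t][s] * b[t]."""
    out = defaultdict(float)
    for t, bt in b.items():
        for s, coeff in a[t].items():
            out[s] += coeff * bt
    return affine(dict(out))

# Example: the midpoint of (the midpoint of x and y) and z is 1/4 x + 1/4 y + 1/2 z.
b = affine({"m": 0.5, "z": 0.5})                       # combination over T = {m, z}
a = {"m": affine({"x": 0.5, "y": 0.5}), "z": proj("z")}
result = substitute(a, b)
print(result)
```

The two affine-space axioms are exactly the statements that `substitute` is associative in the appropriate sense and that substituting into a projection returns the corresponding combination unchanged.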
In Praise of Odds Problem Solution setup 1 The tied games are of no consequence in determining the winner of the series and can be ignored. We know that the odds of $A$ winning a game are $p:q,$ so that a step towards $A$ winning the series has the probability $\displaystyle p'=\frac{p}{p+q}.$ For $B$ the probability is $\displaystyle q'=\frac{q}{p+q}.$ Naturally, $p'+q'=1$ and $p':q'=p:q.$ Solution setup 2 Let $A$ denote a game won by $A,$ $B$ a game won by $B,$ and $T$ a game that ended in a tie, with respective probabilities $p,$ $q,$ and $r=1-p-q.$ The ties are of no consequence in determining the winner of the series and can be ignored. A series might look like the one below: $TATTBATBTTTBTTA \ldots$ We shall split the sequence into "consequential" pieces by grouping each won game with the preceding ties: $(TA)(TTB)(A)(TB)(TTTB)(TTA) \ldots$ What matters at the end of the day is the last game in a group. The groups that end with $A$ contribute to the advancement of $A.$ These are: $A, TA, TTA, TTTA, \ldots,$ each exclusive of any other. The corresponding probabilities are $p, rp, r^2p,\ldots$ that add up to $\displaystyle p+rp+r^2p+\ldots = p\cdot\frac{1}{1-r}=\frac{p}{p+q},$ which we shall denote as $p'.$ $q'$ is defined similarly, $\displaystyle q'=\frac{q}{p+q}.$ Solution 1 Either way the problem reduces to a win/lose sequence with the probabilities $p'$ and $q'.$ The last two games in a series are bound to be won by the same team, and the games beforehand come in pairs of one win and one loss, in either order; each such pair has the probability $p'q'+q'p'=2p'q'.$ A sequence of $n$ such pairs comes with the probability $(2p'q')^n.$ It follows that $A$ wins a series of $2n+2$ games with the probability $(2p'q')^n(p')^2.$ The total probability of $A$ being declared a winner is $\displaystyle\begin{align} P(A)&=\sum_{n=0}^{\infty}(2p'q')^n(p')^2=(p')^2\cdot\frac{1}{1-2p'q'}\\ &=\frac{p^2}{(p+q)^2}\cdot\frac{(p+q)^2}{(p+q)^2-2pq}=\frac{p^2}{p^2+q^2}.
\end{align}$ Solution 2 The last two games in a series are bound to be won by the same team, and the games beforehand come in pairs of one win and one loss for the two teams. The odds of $A$ vs $B$ winning two games in a row are $p^2:q^2.$ It follows that $A$ wins the series with the probability $\displaystyle \frac{p^2}{p^2+q^2}.$ Acknowledgment This is a slight modification of a problem from Chapter 12 of Ross Honsberger's Mathematical Delights (MAA, 2004). Honsberger refers to problem 1582(a) (The Mathematics Magazine) proposed by the Western Maryland College Problems Group and solved by the OSWEGO Problems Group (Solution 1).
Many important biological reactions, such as the formation of double-stranded DNA from two complementary strands, can be described using second order kinetics. In a second-order reaction, the sum of the exponents in the rate law is equal to two. The two most common forms of second-order reactions will be discussed in detail in this section. Reaction Rate Integration of the second-order rate law \[\dfrac{d[A]}{dt}=-k[A]^2\] yields \[ \dfrac{1}{[A]} = \dfrac{1}{[A]_0}+kt\] which is easily rearranged into a form of the equation for a straight line and yields plots similar to the one shown on the left below. The half-life is given by \[ t_{1/2}=\dfrac{1}{k[A]_0}\] Notice that the half-life of a second-order reaction depends on the initial concentration, in contrast to first-order reactions. For this reason, the concept of half-life is far less useful for second-order reactions. Case 1: Identical Reactants Two of the same reactant (A) combine in a single elementary step. \[A + A \longrightarrow P\] \[2A \longrightarrow P\] The reaction rate for this step can be written as \[\text{Rate} = - \dfrac{1}{2} \dfrac{d[A]}{dt} = + \dfrac{d[P]}{dt}\] and the rate of loss of reactant \(A\) is \[\dfrac{d[A]}{dt}= -k[A][A] = -k[A]^2\] where \(k\) is a second order rate constant with units of \(M^{-1}\,min^{-1}\) or \(M^{-1}\,s^{-1}\). Therefore, doubling the concentration of reactant A will quadruple the rate of the reaction. In this particular case, another reactant (\(B\)) could be present with \(A\); however, its concentration does not affect the rate of the reaction, i.e., the reaction order with respect to B is zero, and we can express the rate law as \(v = k[A]^2[B]^0\).
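The integrated law and the half-life expression are easy to check numerically; here is a minimal Python sketch (the rate constant and concentration are made-up values for illustration):

```python
def conc(a0, k, t):
    """Integrated second-order law: 1/[A] = 1/[A]0 + k t, solved for [A](t)."""
    return 1.0 / (1.0 / a0 + k * t)

def half_life(a0, k):
    """t_1/2 = 1 / (k [A]0) for a second-order reaction."""
    return 1.0 / (k * a0)

k = 0.30        # M^-1 s^-1 (illustrative value)
a0 = 0.10       # M
t_half = half_life(a0, k)
print(conc(a0, k, t_half))      # exactly half of a0
print(half_life(a0 / 2, k))     # halving [A]0 doubles the half-life
```

The second print line is the concentration-dependence the text emphasizes: unlike a first-order reaction, each time the remaining concentration halves, the next half-life doubles.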
Case 2: Different Reactants Two different reactants (A and B) combine in a single elementary step: \[A + B \longrightarrow P\] The reaction rate for this step can be written as \[\text{Rate} = - \dfrac{d[A]}{dt}= - \dfrac{d[B]}{dt}= + \dfrac{d[P]}{dt}\] and the rate of loss of reactant \(A\) is \[ \dfrac{d[A]}{dt}= - k[A][B]\] where the reaction order with respect to each reactant is 1. This means that when the concentration of reactant A is doubled, the rate of the reaction will double, and quadrupling the concentration of reactant B in a separate experiment will quadruple the rate. If we double the concentration of A and quadruple the concentration of B at the same time, then the reaction rate is increased by a factor of 8. This relationship holds true for any varying concentrations of A or B. Derivative and Integral Forms To describe how the rate of a second-order reaction changes with concentration of reactants or products, the differential (derivative) rate equation is used as well as the integrated rate equation. The differential rate law can show us how the rate of the reaction changes in time, while the integrated rate equation shows how the concentration of species changes over time. The latter form, when graphed, yields a linear function and is, therefore, more convenient to look at. Nonetheless, both of these equations can be derived from the above expression for the reaction rate. Plotting these equations can also help us determine whether or not a certain reaction is second-order. Case 1: A + A → P (Second Order Reaction with Single Reactant) The rate at which A decreases can be expressed using the differential rate equation. \[-\dfrac{d[A]}{dt} = k[A]^2\] The equation can then be rearranged: \[\dfrac{d[A]}{[A]^2} = -k\,dt\] Since we are interested in the change in concentration of A over a period of time, we integrate between \(t = 0\) and \(t\), the time of interest.
\[ \int_{[A]_o}^{[A]_t} \dfrac{d[A]}{[A]^2} = -k \int_0^t dt\] To solve this, we use the following rule of integration (power rule): \[\int \dfrac{dx}{x^2} = -\dfrac{1}{x} + C\] We then obtain the integrated rate equation: \[\dfrac{1}{[A]_t} - \dfrac{1}{[A]_o} = kt\] Upon rearrangement of the integrated rate equation, we obtain an equation of a line: \[\dfrac{1}{[A]_t} = kt + \dfrac{1}{[A]_o}\] The crucial part of this process is not understanding precisely how to derive the integrated rate law equation; rather, it is important to understand how the equation directly relates to the graph, which provides a linear relationship. In this case, and for all second order reactions, the linear plot of \(\dfrac{1}{[A]_t}\) versus time will yield the graph below. This graph is useful in a variety of ways. If we only know the concentrations at specific times for a reaction, we can attempt to create a graph similar to the one above. If the graph yields a straight line, then the reaction in question must be second order. In addition, with this graph we can find the slope of the line, and this slope is \(k\), the rate constant. The slope can be found by finding the "rise" and then dividing it by the "run" of the line. For an example of how to find the slope, please see the example section below. There are alternative graphs that could be drawn. The plot of \([A]_t\) versus time would result in a straight line if the reaction were zeroth order. It does, however, yield less information for a second order graph. This is because the graphs of a first order and of a second order reaction both look like similar decay curves. The only obvious difference, as seen in the graph below, is that the concentration of reactants approaches zero more slowly in a second-order reaction than in a first-order one.
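As a sketch of this order-determination recipe, the following Python snippet generates ideal second-order data and recovers \(k\) as the slope of \(1/[A]\) versus \(t\) (the helper function and numerical values are ours, for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic second-order data: [A](t) = 1/(1/[A]0 + k t) with k = 0.10, [A]0 = 1.0
k_true, a0 = 0.10, 1.0
ts = [0.0, 10.0, 20.0, 30.0, 40.0]
concs = [1.0 / (1.0 / a0 + k_true * t) for t in ts]

# The plot of 1/[A] versus t is linear for a second-order reaction;
# its slope is k and its intercept is 1/[A]0.
slope, intercept = fit_line(ts, [1.0 / c for c in concs])
print(slope, intercept)
```

With real (noisy) data the same fit would be run on all three candidate transformations ([A], ln[A], 1/[A]) and the most linear one identifies the order, exactly as in the practice problems below.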
Case 2: A + B → P (Second Order Reaction with multiple reactants) As before, the rate at which \(A\) decreases can be expressed using the differential rate equation: \[ \dfrac{d[A]}{dt} = -k[A][B] \] Two situations can be identified. Situation 2a: \([A]_0 \neq [B]_0\) Situation 2a is the case in which the initial concentrations of the two reactants are not equal. Let \(x\) be the concentration of each species reacted at time \(t\). Let \([A]_0 = a\) and \([B]_0 = b\); then \([A] = a-x\) and \([B] = b-x\). The rate law becomes: \[\dfrac{dx}{dt} = k([A]_o - x)([B]_o - x)\] which can be rearranged to: \[\dfrac{dx}{([A]_o - x)([B]_o - x)} = k\,dt\] We integrate between \(t = 0\) (when \(x = 0\)) and \(t\), the time of interest. \[ \int_0^x \dfrac{dx}{([A]_o - x)([B]_o - x)} = k \int_0^t dt \] To solve this integral, we use the method of partial fractions: \[ \dfrac{1}{(a - x)(b - x)} = \dfrac{1}{b - a}\left(\dfrac{1}{a - x} - \dfrac{1}{b - x}\right)\] Evaluating the integral gives us: \[ \int_0^x \dfrac{dx}{([A]_o - x)([B]_o - x)} = \dfrac{1}{[B]_o - [A]_o}\left(\ln\dfrac{[A]_o}{[A]_o - x} - \ln\dfrac{[B]_o}{[B]_o - x}\right) \] Applying the rule of logarithms, the equation simplifies to: \[\int _0^x \dfrac{dx}{([A]_o - x)([B]_o - x)} = \dfrac{1}{[B]_o - [A]_o} \ln \dfrac{[B][A]_o}{[A][B]_o} \] We then obtain the integrated rate equation (under the condition that \([A]_o\) and \([B]_o\) are not equal): \[ \dfrac{1}{[B]_o - [A]_o}\ln \dfrac{[B][A]_o}{[A][B]_o} = kt \] Upon rearrangement of the integrated rate equation, we obtain: \[ \ln\dfrac{[B][A]_o}{[A][B]_o} = k([B]_o - [A]_o)t \] Hence, from the last equation, we can see that a linear plot of \(\ln\dfrac{[A]_o[B]}{[A][B]_o}\) versus time is characteristic of second-order reactions.
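To sanity-check the integrated law, one can solve it explicitly for \(x\) and verify that the logarithmic expression is indeed linear in \(t\). A small Python sketch, with made-up values of \([A]_0\), \([B]_0\) and \(k\):

```python
import math

# Explicit solution of dx/dt = k([A]0 - x)([B]0 - x), x(0) = 0, with [A]0 != [B]0:
# the integrated law gives (b - x)/(a - x) = (b/a) * exp(k (b - a) t),
# which can be solved directly for x.
def x_of_t(a, b, k, t):
    R = (b / a) * math.exp(k * (b - a) * t)   # the ratio (b - x)/(a - x)
    return (R * a - b) / (R - 1.0)

a, b, k = 1.0, 0.4, 0.25          # made-up [A]0, [B]0 (M) and k (M^-1 s^-1)
ks = []
for t in (1.0, 5.0, 20.0):
    x = x_of_t(a, b, k, t)
    A, B = a - x, b - x
    # the integrated-law expression divided by t should recover k at every t
    ks.append(math.log(B * a / (A * b)) / ((b - a) * t))
print(ks)
```

Because the recovered value is constant in \(t\), a plot of \(\ln([B][A]_0/([A][B]_0))\) against time for such data is a straight line through the origin with slope \(k([B]_0-[A]_0)\), as claimed.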
This graph can be used in the same manner as the graph in the section above, or the equation can be written in the other way: \[\ln\dfrac{[A]}{[B]} = k([A]_o - [B]_o)t+\ln\dfrac{[A]_o}{[B]_o}\] which is of the form \( y = ax + b\) with a slope of \(a= k([A]_0-[B]_0)\) and a y-intercept of \( b = \ln \dfrac{[A]_0}{[B]_0}\). Situation 2b: \([A]_0 =[B]_0\) Because \(A + B \rightarrow P\) and \(A\) and \(B\) react with a 1 to 1 stoichiometry, \([A]= [A]_0 -x\) and \([B] = [B]_0 -x\); at any time \(t\), \([A] = [B]\), and the rate law becomes \[\text{rate} = k[A][B] = k[A][A] = k[A]^2.\] Thus this situation reduces to Case 1. Example 1 The following chemical equation represents the thermal decomposition of gas \(E\) into \(K\) and \(G\) at 200 °C: \[ 5E_{(g)} \rightarrow 4K_{(g)}+G_{(g)} \] This reaction follows a second order rate law with respect to \(E\). Suppose that the rate constant at 200 °C is \(4.0 \times 10^{-2}\, M^{-1}s^{-1}\) and the initial concentration is \(0.050\; M\). What is the initial rate of decomposition of \(E\)? SOLUTION In terms of the overall reaction rate, \[ \text{Rate (initial)} = - \dfrac{1}{5} \dfrac{d[E]}{dt}\] but the rate of decomposition of \(E\) itself is the rate of loss of \(E\), given by the rate law \[ -\dfrac{d[E]}{dt} = k [E]_i^2 \] Substituting the given values of \(k\) and \([E]_i\) (the units of \(k\), \(M^{-1}s^{-1}\), confirm that the rate law is second order): \[\text{Initial rate} = (4.0 \times 10^{-2}\, M^{-1}s^{-1})(0.050\,M)^2 =1 \times 10^{-4} \, Ms^{-1}\] Half-Life Another characteristic used to determine the order of a reaction from experimental data is the half-life (\(t_{1/2}\)). By definition, the half life of any reaction is the amount of time it takes to consume half of the starting material. For a second-order reaction, the half-life is inversely related to the initial concentration of the reactant (A). For a second-order reaction, each successive half-life is twice as long as the one before.
Consider the reaction \(2A \rightarrow P\). We can find an expression for the half-life of a second-order reaction by using the previously derived integrated rate equation: \[\dfrac{1}{[A]_t} - \dfrac{1}{[A]_o} = kt\] When \(t = t_{1/2}\), the concentration has fallen to half its initial value: \[[A]_{t_{1/2}} = \dfrac{1}{2}[A]_o\] Our integrated rate equation becomes: \[\dfrac{1}{\dfrac{1}{2}[A]_o} - \dfrac{1}{[A]_o} = kt_{1/2}\] After a series of algebraic steps, \[\dfrac{2}{[A]_o} - \dfrac{1}{[A]_o} = kt_{1/2}\] \[\dfrac{1}{[A]_o} = kt_{1/2}\] we obtain the equation for the half-life of a second-order reaction: \[t_{1/2} = \dfrac{1}{k[A]_o}\] This inverse relationship suggests that as the initial concentration of reactant is increased, there is a higher probability of the two reactant molecules interacting to form product. Consequently, the reactant will be consumed in a shorter amount of time, i.e. the reaction will have a shorter half-life. This equation also implies that since the half-life is longer when the concentrations are low, species decaying according to second-order kinetics may exist for a longer amount of time if their initial concentrations are small. Note that for the second scenario, in which \(A + B \rightarrow P\) with \([A]_o \neq [B]_o\), the time it takes to consume one-half of \(A\) is not the same as the time it takes to consume one-half of \(B\). Because of this, we cannot define a general equation for the half-life of this type of second-order reaction. Example \(\PageIndex{1}\) A second-order reaction has a single reactant with initial concentration \([A]_0=4.50 \times 10^{-5}\,M\) and rate constant \(k=0.89\, M^{-1}s^{-1}\). What is the half-life of the reaction?
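Both the half-life formula and the doubling of successive half-lives can be verified numerically from the integrated rate law. This sketch (plain Python; the values of \(k\) and \([A]_0\) are taken from the example above) assumes nothing beyond the equations just derived:

```python
def conc(A0, k, t):
    """[A](t) from the integrated second-order law 1/[A] = kt + 1/[A]0."""
    return 1.0 / (k * t + 1.0 / A0)

def half_life(k, A0):
    """t_1/2 = 1 / (k [A]0) for a second-order reaction."""
    return 1.0 / (k * A0)

A0, k = 4.50e-5, 0.89            # M and M^-1 s^-1, from the example
t_half = half_life(k, A0)        # about 2.50e4 s
print(t_half, conc(A0, k, t_half) / A0)   # the second value is 0.5

# The second half-life (A0/2 -> A0/4) is twice as long as the first:
t_quarter = 3.0 / (k * A0)       # from 1/(A0/4) = kt + 1/A0
print((t_quarter - t_half) / t_half)      # ratio is 2 (up to rounding)
```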
SOLUTION \[ t_{1/2} = \dfrac{1}{k[A]_0} = \dfrac{1}{(0.89\, M^{-1}s^{-1})(4.50 \times 10^{-5}\, M)} = 2.50 \times 10^4 \,s \] Summary For \(2A \rightarrow P\): differential form \(-\dfrac{d[A]}{dt} = k[A]^2\); integral form \(\dfrac{1}{[A]_t} = kt + \dfrac{1}{[A]_o}\); half-life \(t_{1/2} = \dfrac{1}{k[A]_o}\). For \(A + B \rightarrow P\): differential form \(-\dfrac{d[A]}{dt} = k[A][B]\); integral form \(\dfrac{1}{[B]_o - [A]_o}\ln\dfrac{[B][A]_o}{[A][B]_o} = kt\); half-life cannot be easily defined, since \(t_{1/2}\) for \(A\) and \(B\) are different. A plot of \(1/[A]\) versus time tests whether a reaction is second order: the reaction is second order if the graph is a straight line. Practice Problems 1. Given the following information, determine the order of the reaction and the value of the rate constant \(k\). Concentration (M): 1.0, 0.50, 0.33; Time (s): 10, 20, 30. *Hint: begin by graphing. 2. Using the following information, determine the half-life of this reaction, assuming there is a single reactant. Concentration (M): 2.0, 1.3, 0.9633; Time (s): 0, 10, 20. 3. Given the information from the previous problem, what is the concentration after 5 minutes? Solutions 1. Make graphs of concentration vs. time (zeroth order), natural log of concentration vs. time (first order), and one over concentration vs. time (second order), and determine which graph results in a straight line. That graph reflects the order of the reaction. For this problem, the straight line is in the third graph, meaning the reaction is second order. The transformed values are: 1/Concentration (M\(^{-1}\)): 1, 2, 3; Time (s): 10, 20, 30. The slope can be found by taking the "rise" over the "run" between two points, e.g. (10, 1) and (20, 2): the "rise" is the vertical distance between the points (2 − 1 = 1) and the "run" is the horizontal distance (20 − 10 = 10), so the slope is 1/10 = 0.1. The value of \(k\), therefore, is 0.1 M\(^{-1}\)s\(^{-1}\). 2. Determine the order of the reaction and the rate constant \(k\) using the tactics described in the previous problem.
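The graphical procedure in Solution 1 can be automated: transform the concentrations for each candidate order, fit a straight line to each transform, and pick the transform with the smallest residual. A minimal sketch (plain Python; the least-squares helper is hand-rolled, and comparing raw residuals across transforms is only a rough heuristic that works for clean data like this):

```python
import math

def fit_line(xs, ys):
    """Least-squares slope, intercept and residual sum of squares."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    rss = sum((y - (ybar + slope * (x - xbar))) ** 2 for x, y in zip(xs, ys))
    return slope, ybar - slope * xbar, rss

def guess_order(times, conc):
    """Order 0 -> [A], order 1 -> ln[A], order 2 -> 1/[A]; most linear wins."""
    transforms = {0: conc,
                  1: [math.log(c) for c in conc],
                  2: [1.0 / c for c in conc]}
    fits = {order: fit_line(times, ys) for order, ys in transforms.items()}
    order = min(fits, key=lambda o: fits[o][2])   # smallest residual
    return order, fits[order][0]                  # order, and slope of best fit

order, slope = guess_order([10, 20, 30], [1.0, 0.50, 0.33])
print(order, slope)   # second order; slope close to 0.1 M^-1 s^-1
```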
The order of the reaction is second, and the value of \(k\) is 0.0269 M\(^{-1}\)s\(^{-1}\). Since the reaction is second order, the half-life formula is \(t_{1/2} = \dfrac{1}{k[A]_o}\). This means the half-life of the reaction is \(1/(0.0269 \times 2.0) \approx 18.6\) seconds. 3. Convert the time (5 minutes) to seconds: the time is 300 seconds. Use the integrated rate law to find the final concentration: \(1/[A] = 1/[A]_o + kt = 1/2.0 + (0.0269)(300) = 8.57\, M^{-1}\), so the final concentration is 0.1167 M. Contributors Lannah Lua, Ciara Murphy, Victoria Blanchard
Physical Review Letters, ISSN 0031-9007, 03/2018, Volume 120, Issue 12, pp. 121801 - 121801. A measurement is reported of the ratio of branching fractions R(J/ψ)=B(B_{c}^{+}→J/ψτ^{+}ν_{τ})/B(B_{c}^{+}→J/ψμ^{+}ν_{μ}), where the τ^{+} lepton is... Luminosity | Standard deviation | Large Hadron Collider | Leptons | Particle collisions | Decay | Física de partícules | Experiments | Particle physics | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article 2. Observation of a peaking structure in the J/psi phi mass spectrum from B-+/- -> J/psi phi K-+/- decays. PHYSICS LETTERS B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281. A peaking structure in the J/psi phi mass spectrum near threshold is observed in B-+/- -> J/psi phi K-+/- decays, produced in pp collisions at root s = 7 TeV...
PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | info:eu-repo/classification/ddc/ddc:530 | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article 3. Observation of J/ψφ Structures Consistent with Exotic States from Amplitude Analysis of B+ → J/ψφK+ Decays. Physical Review Letters, ISSN 0031-9007, 01/2017, Volume 118, Issue 2, pp. 022003 - 022003. Journal Article. Physical Review Letters, ISSN 0031-9007, 06/2019, Volume 122, Issue 23, pp. 232002 - 232002. Journal Article. Physical Review Letters, ISSN 0031-9007, 08/2015, Volume 115, Issue 7. Journal Article 6. Re: Predictive Nomogram for Recurrence following Surgery for Nonmetastatic Renal Cell Cancer with Tumor Thrombus: E. J. Abel, T. A. Masterson, J. A. Karam, V. A.
Master, V. Margulis, R. Hutchinson, C. A. Lorentz, E. Bloom, T. M. Bauman, C. G. Wood and M. L. Blute, Jr. J Urol 2017;198:810–816. The Journal of Urology, ISSN 0022-5347, 03/2018, Volume 199, Issue 3, pp. 853 - 854. Journal Article 7. Measurement of the ratio of the production cross sections times branching fractions of \(B_c^\pm \to J/\psi\pi^\pm\) and \(B^\pm \to J/\psi K^\pm\), and \(\mathcal{B}(B_c^\pm \to J/\psi\pi^\pm\pi^\pm\pi^\mp)/\mathcal{B}(B_c^\pm \to J/\psi\pi^\pm)\), in pp collisions at \(\sqrt{s}=7\) TeV. Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30. The ratio of the production cross sections times branching fractions \(\sigma(B_c^\pm)\,\mathcal{B}(B_c^\pm \to J/\psi\pi^\pm)/\sigma(B^\pm)\,\mathcal{B}(B^\pm \to J/\psi K^\pm)\)... B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory. Journal Article 8. Precise Measurement of the e+e- → π+π- J/ψ Cross Section at Center-of-Mass Energies from 3.77 to 4.60 GeV. Physical Review Letters, ISSN 0031-9007, 03/2017, Volume 118, Issue 9, pp. 092001 - 092001. The cross section for the process e^{+}e^{-}→π^{+}π^{-}J/ψ is measured precisely at center-of-mass energies from 3.77 to 4.60 GeV using 9 fb^{-1} of data... Journal Article. Physical Review D, ISSN 1550-7998, 08/2013, Volume 88, Issue 3. Journal Article. Physical Review D, ISSN 2470-0010, 01/2017, Volume 95, Issue 1. Journal Article 11.
Measurement of the ratio of branching fractions and difference in CP asymmetries of the decays B+ → J/ψπ+ and B+ → J/ψK+. Journal of High Energy Physics, ISSN 1126-6708, 03/2017, Volume 3, Issue 3, pp. 1 - 18. The ratio of branching fractions and the difference in CP asymmetries of the decays B+ -> J/psi pi(+) and B+ -> J/psi K+ are measured using a data sample of pp... B physics | Branching fraction | CP violation | Hadron-Hadron scattering (experiments) | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article
Optimizing Thermal Processes in Carbon Manufacturing with Simulation Guest blogger Bojan Jokanović of SGL Carbon GmbH, one of the world’s leading manufacturers of carbon-based products, discusses the optimization of thermal processes in the carbon industry. Carbon products are used in many industries, including semiconductors, car manufacturing, ceramics, and metallurgy. Properties of graphite including high-temperature stability, good thermal and electric conducting behavior, and high chemical stability make this material unique. However, carbon manufacturing is an energy-intensive industry. We must build digital process chains to optimize processes and minimize costs. The Manufacturing Process of Carbon Products Typically, carbon products such as graphite are made from raw materials such as petroleum or coal tar pitch coke, pitch as a binder, and some additives. To enable a dense material structure and to avoid large defects in the material, it is crucial to optimize the recipe of all fractions, too. After mixing, the next step is forming; e.g., extrusion, isostatic pressing, or similar shaping processes. The graphite manufacturing process. The material is consequently baked at temperatures of 600–1000°C, where the pitch is pyrolyzed. Some gases, like methane, hydrogen, and steam, evolve during this process. As the process begins, the pitch melts and the material becomes viscoplastic and vulnerable. At this process step, the material has very low stiffness and strength, and the gas pressure coming from the pyrolysis can cause deformation and cracks. After baking, the material is hard and contains a porous structure that is, in the next step, typically impregnated either by resin or pitch. The time for the impregnation depends on the product permeability and the autoclave conditions, pressure, and temperature. Such impregnated products are again pyrolyzed and finally graphitized. 
Graphitization is a high-temperature process where temperatures of 3000°C are achieved. During the graphitization process, the material is typically exposed to a high thermal load, mostly through electric heating and commonly through Joule heating. In this step, the graphite crystal structure is obtained and the material becomes electrically and thermally conductive as well as soft and easily machinable. Simulating Thermal Processes for Carbon with COMSOL Multiphysics® For the simulation of most of these processes, we use the COMSOL Multiphysics® software, since it gives us the flexibility to implement our own models, and its scaling and parallelization functionality enables fast calculation of these phenomena. Consider the baking part of the graphite manufacturing process. We use chemical reaction kinetics to calculate the gas evolution rate. The gas needs to be transported through the material subject to Darcy's law: \[ \frac{\partial}{\partial t}(\rho\varepsilon) + \nabla\cdot(\rho\mathbf{u}) = Q_\textrm{m}, \qquad \mathbf{u} = -\frac{\kappa}{\eta}\nabla p \] where \mathbf{u} is the fluid superficial velocity; p is the pressure; \varepsilon is the porosity; {Q_\textrm{m}} is the mass source term (in our case, created gas quantity); \kappa is the material permeability; and \eta and \rho are fluid viscosity and density, respectively. The gases are created in Arrhenius-type kinetics: \[ r_i = k_{0i}\,\exp\!\left(-\frac{E_{\textrm{A}i}}{RT}\right) c_i^{\,n_i} \] where {k_{\textrm{0}i}} is the pre-exponential factor, {E_{\textrm{A}i}} is the activation energy, {n_i} is the reaction order, R is the universal gas constant, T is temperature, and {c_i} is the reactant concentration. As a consequence, the gas evolution in the material is affected by the heating rate: the higher the heating rate, the faster the gas evolution. However, if more gas is created than can be conducted through the material pores, as described by the material permeability, then the pressure will grow and cause stress and potential damage in the material. The properties are temperature dependent and we evaluate them using experiments in our in-house laboratory.
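As an illustration of how strongly temperature couples to gas generation, the Arrhenius expression can be evaluated directly. The sketch below is plain Python, and the parameter values are made up for illustration only (they are not SGL material data):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(k0, Ea, T, c, n):
    """Gas evolution rate r_i = k0 * exp(-Ea / (R T)) * c**n."""
    return k0 * math.exp(-Ea / (R * T)) * c ** n

# Hypothetical pyrolysis step: k0 in 1/s, Ea in J/mol, concentration c, order n = 1
for T in (700.0, 800.0, 900.0):   # temperatures in K
    print(T, arrhenius_rate(k0=1.0e7, Ea=1.5e5, T=T, c=1.0, n=1))
```

Because the rate grows exponentially with temperature, a faster heating rate pushes gas generation into a narrower time window, which is exactly why the permeability-limited pressure build-up described above becomes critical.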
The gases are considered to behave according to the ideal gas law. The material goes through structural changes that are characterized by changes in material properties, such as thermal conductivity, the coefficient of thermal expansion, stiffness, and more. The thermal stress will also exist due to inhomogeneous temperature distribution, causing inhomogeneous structure change and thermal expansion behavior. This thermal stress contributes to the stress coming from the gas and we optimize our baking programs to stay below the limits required for optimal product quality. In the simulation of the graphitization process, we must consider the material changes and the restrictions given by the equipment. Typically, the material’s resistivity decreases tenfold during the process and the rectifier is pushed to its limits. At the beginning of the process, we usually have the maximum voltage limitation. At the end of the process, since the material becomes much more electrically conductive, we achieve the rectifier limit for maximum current. Besides, we must take care of the mechanical stresses, especially since this is the final thermal treatment and the costs of scrap are very high. COMSOL Multiphysics gives us an opportunity to implement these restrictions. In our graphitization model, we use Joule heating physics together with the thermal stress calculation. Besides the anisotropic and temperature-dependent material properties like thermal conductivity, electrical resistivity, and coefficients of the thermal expansion, we can also measure high-temperature stress-strain behavior in our lab, and we implement all of these properties in our model. For the evaluation of stresses, we developed our own stress criterion based on the combination of the principal stresses and the strength of the material along the process. Left: The graphitization furnace (double-mirrored view). Right: The automatic furnace control based on the implemented restrictions. 
In each time step, all of the restrictions are recalculated and, based on the result of a logical expression, one of them is selected. Helping Customers with Simulation Not only do we model our processes using COMSOL Multiphysics, but we also use simulation to support our equipment purchases and to analyze customer processes. This helps to better understand customer product requirements and offer them the best value for their money. We also offer our customers modeling and experimental validation in our in-house lab. The image below shows the variety of SGL Carbon products employed in high-temperature applications. Typically, simulation can help us determine the right combination of the insulation materials, calculate the temperature within the furnace, and minimize the heat losses through joints. Finally, the mechanical stability and thermal load must also be tested and the heaters and geometry of the charging racks must be optimized. SGL Carbon GmbH products for high-temperature applications, CFRC, and graphite heaters: a CFRC charging rack, hard graphite-felt insulation with radiation-minimizing graphite foil, and a CFRC fan. Courtesy SGL Forum Exhibit, Meitingen, Germany. Due to the multiphysics capabilities, we can extend the model in COMSOL Multiphysics to include induction heating, like in the 3D induction heating example given below. We can calculate the eddy currents in the susceptor, objects within the furnace, temperature homogeneity, and energy losses within the insulation and coil. We can also optimize the circuit frequency to enable the most efficient coupling. At the same time, we calculate the heat transfer through conduction and radiation and make sure that the load is heated homogeneously. A bell jar induction heating furnace with a spherical load.
Using Simulation Apps for Technical Sales Support When it comes to customer consulting, simulation is gaining importance and COMSOL apps offer a simplified interface with the possibility to calculate some frequent problems. For instance, graphite heaters can be calculated with a different number of segments, filleted or nonfilleted edges, manipulated furnace power, and hard or soft felts. This gives us a first impression of the temperature and stresses in the graphite heater and felt. Certainly, professional skills are needed to make a robust model that converges for a wide range of boundary conditions and runs in the background of the app. The users, who are not familiar with the modeling aspect, don’t have to bother with it and can use the app without difficulties. Left: An app for the graphite heater insulated with graphite felt. Right: A felt calculation app. In the second presented case, the technical sales team advises customers about proper insulation assembly. The app enables the selection of felts and support, and the user can vary the number of layers and their thickness according to customer limitations. Users can also set the furnace temperature. The app calculates the heat losses through the insulation and the outer temperature of the felt. The proper boundary conditions on the outer side are automatically set. The modeling group in charge of central innovation offers unique expertise within modeling as a consultancy service to external customers in many industries. About the Author Bojan Jokanović works in the modeling group of SGL Carbon GmbH, Germany. He obtained a PhD from the Technical University in Clausthal, working on the modeling of the oxidation of coated carbon composites. His first experience with COMSOL was in 2002, when he started with the simulation of heat transfer during cyclic loading. 
His main professional interest and specialization is in the fields of process modeling and simulation, as well as optimization, both of physical and business processes.
Fit the Matern Cluster Point Process by Minimum Contrast Using Pair Correlation Fits the Matern Cluster point process to a point pattern dataset by the Method of Minimum Contrast using the pair correlation function. Usage matclust.estpcf(X, startpar=c(kappa=1,scale=1), lambda=NULL, q = 1/4, p = 2, rmin = NULL, rmax = NULL, ..., pcfargs=list()) Arguments X Data to which the Matern Cluster model will be fitted. Either a point pattern or a summary statistic. See Details. startpar Vector of starting values for the parameters of the Matern Cluster process. lambda Optional. An estimate of the intensity of the point process. q,p Optional. Exponents for the contrast criterion. rmin, rmax Optional. The interval of \(r\) values for the contrast criterion. … Optional arguments passed to optim to control the optimisation algorithm. See Details. pcfargs Optional list containing arguments passed to pcf.ppp to control the smoothing in the estimation of the pair correlation function. Details This algorithm fits the Matern Cluster point process model to a point pattern dataset by the Method of Minimum Contrast, using the pair correlation function. The argument X can be either a point pattern: An object of class "ppp" representing a point pattern dataset. The pair correlation function of the point pattern will be computed using pcf, and the method of minimum contrast will be applied to this. a summary statistic: An object of class "fv" containing the values of a summary statistic, computed for a point pattern dataset. The summary statistic should be the pair correlation function, and this object should have been obtained by a call to pcf or one of its relatives. The algorithm fits the Matern Cluster point process to X, by finding the parameters of the Matern Cluster model which give the closest match between the theoretical pair correlation function of the Matern Cluster process and the observed pair correlation function.
For a more detailed explanation of the Method of Minimum Contrast, see mincontrast. The Matern Cluster point process is described in Moller and Waagepetersen (2003, p. 62). It is a cluster process formed by taking a pattern of parent points, generated according to a Poisson process with intensity \(\kappa\), and around each parent point, generating a random number of offspring points, such that the number of offspring of each parent is a Poisson random variable with mean \(\mu\), and the locations of the offspring points of one parent are independent and uniformly distributed inside a circle of radius \(R\) centred on the parent point, where \(R\) is equal to the parameter scale. The named vector of starting values can use either R or scale as the name of the second component, but the latter is recommended for consistency with other cluster models. The theoretical pair correlation function of the Matern Cluster process is $$ g(r) = 1 + \frac 1 {4\pi R \kappa r} h\!\left(\frac{r}{2R}\right) $$ where the radius R is the parameter scale and $$ h(z) = \frac {16} \pi \left[ z \arccos(z) - z^2 \sqrt{1 - z^2} \right] $$ for \(z \le 1\), and \(h(z) = 0\) for \(z > 1\). The theoretical intensity of the Matern Cluster process is \(\lambda = \kappa \mu\). In this algorithm, the Method of Minimum Contrast is first used to find optimal values of the parameters \(\kappa\) and \(R\). Then the remaining parameter \(\mu\) is inferred from the estimated intensity \(\lambda\). If the argument lambda is provided, then this is used as the value of \(\lambda\). Otherwise, if X is a point pattern, then \(\lambda\) will be estimated from X. If X is a summary statistic and lambda is missing, then the intensity \(\lambda\) cannot be estimated, and the parameter \(\mu\) will be returned as NA. The remaining arguments rmin, rmax, q, p control the method of minimum contrast; see mincontrast. The Matern Cluster process can be simulated using rMatClust.
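For readers outside R, the theoretical pair correlation function above is easy to transcribe. A minimal Python sketch (not part of spatstat; the parameter values are arbitrary):

```python
import math

def g_matclust(r, kappa, R):
    """Theoretical pair correlation g(r) of the Matern Cluster process,
    with parent intensity kappa and cluster radius R (the 'scale' parameter)."""
    z = r / (2.0 * R)
    if z >= 1.0:
        return 1.0            # h(z) = 0 for z > 1
    h = (16.0 / math.pi) * (z * math.acos(z) - z**2 * math.sqrt(1.0 - z**2))
    return 1.0 + h / (4.0 * math.pi * R * kappa * r)

# Clustering shows up as g(r) > 1 at short range, decaying to 1 beyond r = 2R
for r in (0.01, 0.05, 0.1, 0.25):
    print(r, g_matclust(r, kappa=10.0, R=0.1))
```

This is the curve that minimum contrast matches against the empirical pair correlation function estimated by pcf.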
Homogeneous or inhomogeneous Matern Cluster models can also be fitted using the function kppm. The optimisation algorithm can be controlled through the additional arguments "..." which are passed to the optimisation function optim. For example, to constrain the parameter values to a certain range, use the argument method="L-BFGS-B" to select an optimisation algorithm that respects box constraints, and use the arguments lower and upper to specify (vectors of) minimum and maximum values for each parameter. Value An object of class "minconfit". There are methods for printing and plotting this object. It contains the following main components: a vector of fitted parameter values, and a function value table (object of class "fv") containing the observed values of the summary statistic (observed) and the theoretical values of the summary statistic computed from the fitted model parameters. References Moller, J. and Waagepetersen, R. (2003). Statistical Inference and Simulation for Spatial Point Processes. Chapman and Hall/CRC, Boca Raton. Waagepetersen, R. (2007). An estimating function approach to inference for inhomogeneous Neyman-Scott processes. Biometrics 63, 252--258. See Also mincontrast, pcf, rMatClust, kppm Aliases matclust.estpcf Examples data(redwood) u <- matclust.estpcf(redwood, c(kappa=10, R=0.1)) u plot(u, legendpos="topright") Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
Note by Trishit Chandra, 4 years, 8 months ago My textbook, named Additional Mathematics, stated that a and b must be real numbers for all the conveniences you mentioned. Basically, imaginary or complex numbers possess many possibilities while staying in a function, but ought to be a single value when they are not. Example: for x^2 = -1, x can be +j or -j, but Sqrt(-1), or j itself, cannot own both values. Another example: for e^x = -1, x can be -3 Pi j, -Pi j, Pi j, 3 Pi j, 5 Pi j, etc. Here, we define a principal value for Ln(-1), which is j Pi. Complex numbers have undoubtedly been studied by many people, including mathematicians of course, and they are found to be something important that cannot be denied. Without complex numbers, there are many logical outcomes that cannot be explained, and topics remain incomplete. The most significant example is the solution to x^3 + p x + q = 0. We may refuse the quadratic formula when j appears, but we cannot do the same for the cubic formula.
When we realize that something ought to survive, we shall have thousands of reasons to prove it right; on the other hand, we could have ten thousand reasons instead just to reject it. On the surviving path, we ought to study and understand the features of complex numbers rather than look for ways to find flaws in them. When we work in such a way that j suddenly becomes -j, we ought to recognize its nature and also the meaning of a principal value. In other words, we shall find no trouble at all when we do not deliberately work in a way that tries to expose their many-valued nature. Sincerely, I personally find that complex numbers are as true as they can be. Just try our best to cope with their features; then we shall find that complex numbers are the way towards all truth. Indeed. A lot of times, we memorize rules without recalling the conditions under which they hold. In this case, as you pointed out, the rule only applies to real numbers, and cannot be applied to complex numbers. This is why, in Rules of Exponents, we added a warning about when these rules hold. Another common instance of forgetting conditions is applying AM-GM to negative numbers. Thank you, sir Lu Chee Ket, for helping me to clear my doubt, and thanks also to Calvin Lin. I meant that not all the conveniences are applicable if the numbers are not real, not that they are inapplicable to complex numbers. I concluded from analysis that when there is a fractional index in simplest co-prime form, for example (-1)^(5/3) or (-1)^(7/2), one always takes the effect of the denominator before the numerator. Without listing cases in black and white one by one, we just need to emphasize that we conclude according to whatever is reasonable. I think 0^0 = 1 should be included in the list. As its limit tells, 0^0 = 0^(1 - 1) = 0/0 does not deny the fact that it can be 1; 0/0 includes 1, but is in general indeterminate rather than invalid.
Indeterminate means it is just something, but we cannot know a single definable value in general to satisfy a particular need. Thanks anyway for providing the list for me to do some revision and thinking. Proof by contradiction: I can write \(i^2 = \sqrt{-1}\times\sqrt{-1}\). According to you, \(i^2 = \sqrt{-1}\times\sqrt{-1} = \sqrt{(-1)\times(-1)} = \sqrt{1} = 1\), which is a contradiction, as \(i^2 = -1\). Yes, exactly; I have the confusion with that contradiction. Good for doubting. This is the spirit that we should ask of ourselves. If you truly prove it incorrect, then I will personally feel happy to admit the fact. This should be our correct attitude. @Lu Chee Ket – Thank you, sir Lu Chee Ket, for encouraging me. I will be highly obliged if you kindly give a link to your book. @Trishit Chandra – I didn't get your exact meaning. Anyway, there is no book from me that helps with this topic. I only wrote about general numbers. @Lu Chee Ket – In your first comment you wrote that your textbook, named Additional Mathematics, stated... That's why I thought that you wrote a book about this topic. But if you wrote about general numbers, then you can also send me that link. @Trishit Chandra – The only book published contained mistakes despite the concept. I stopped giving it to people long ago; only upon my own need to work on it did I give it to one or two people. I may produce a new version in the future, but I think this has not been the proper moment to introduce it to you. It is not easy to get help to start developing. Anyway, thanks for your concern. Calvin Lin has posted some immediate introductions to the questions of your most concern. @Lu Chee Ket – OK, thank you sir. And nice to meet you. @Trishit Chandra – Nice to meet you.
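The failure of \(\sqrt{a}\sqrt{b} = \sqrt{ab}\) for negative reals is easy to see with principal square roots, e.g. in Python's cmath module, which returns the principal branch just like the principal value discussed above:

```python
import cmath

a = b = -1
lhs = cmath.sqrt(a) * cmath.sqrt(b)   # i * i
rhs = cmath.sqrt(a * b)               # sqrt(1)
print(lhs, rhs)                        # (-1+0j) (1+0j)
```

The identity only holds when at least one of the factors is a nonnegative real, which is exactly the textbook condition mentioned in the first comment.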
Let $K$ be $\mathbb{Q}_p$ for some prime $p$ (or more generally an unramified extension $W(\mathbb{F}_q)$ of $\mathbb{Q}_p$). If $\xi \in K$, we can write it in a unique way in the form $\sum a_i p^i$ where each $a_i$ is either zero (and must be such for all but finitely many $i<0$) or a root of unity (i.e., a $(q-1)$-th root of unity), collectively known as the Teichmüller representatives. It is now tempting, if $L$ is an arbitrary field containing the $(q-1)$-th roots of unity, to define a (Laurent) power series $f_\xi \in L((t))$ associated to $\xi$ by: $f_\xi = \sum a_i t^i$, assuming we have chosen an isomorphism between the $(q-1)$-th roots of unity in $K$ and those in $L$. My question is basically whether anything intelligent can be said about $f_\xi$, say, when $\xi$ is a rational, apart from the obvious things like the fact that $f_\xi(p) = \xi$ if we take $L=K$ (with the identity map on roots of unity). For definiteness, let me concentrate on the first non-trivial case (because already it seems very hard to say anything at all): take $p=5$ and $\xi=2$. 
Letting $\zeta$ be the $4$-th root of $1$ in $\mathbb{Q}_5$ that is congruent to $2$ mod $5$, we have $$2 = \zeta - 1 \cdot 5 - \zeta \cdot 5^2 + 0 \cdot 5^3 - 1 \cdot 5^4 + \zeta \cdot 5^5 - 1 \cdot 5^6 - 1 \cdot 5^7 + 1 \cdot 5^8 + \zeta \cdot 5^9 + \zeta \cdot 5^{10} - 1 \cdot 5^{11} + 0 \cdot 5^{12} - \zeta \cdot 5^{13} - 1 \cdot 5^{14} + O(5^{15})$$ and I am asking about the formal series $$2 + 4 \cdot t + 3 \cdot t^2 + 0 \cdot t^3 + 4 \cdot t^4 + 2 \cdot t^5 + 4 \cdot t^6 + 4 \cdot t^7 + 1 \cdot t^8 + 2 \cdot t^9 + 2 \cdot t^{10} + 4 \cdot t^{11} + 0 \cdot t^{12} + 3 \cdot t^{13} + 4 \cdot t^{14} + \cdots \in \mathbb{F}_5[[t]]$$ (the sequence of coefficients does not appear in the OEIS, which is mildly surprising) or about $$i - 1 \cdot z - i \cdot z^2 + 0 \cdot z^3 - 1 \cdot z^4 + i \cdot z^5 - 1 \cdot z^6 - 1 \cdot z^7 + 1 \cdot z^8 + i \cdot z^9 + i \cdot z^{10} - 1 \cdot z^{11} + 0 \cdot z^{12} - i \cdot z^{13} - 1 \cdot z^{14} + \cdots \in \mathbb{C}[[z]]$$ Here are some examples of questions which I think are interesting: is the former algebraic/automatic? does the latter satisfy some nontrivial differential equation? can it be extended holomorphically anywhere beyond the unit disk? does it take interesting values at interesting points? On a related line, we could consider the $5$-adic quantity $$- \zeta - 1 \cdot 5 + \zeta \cdot 5^2 + 0 \cdot 5^3 - 1 \cdot 5^4 - \zeta \cdot 5^5 + \cdots$$ obtained by exchanging $\pm\zeta$ in the expansion of $2$ above: in other words, it is the value $f_2(5)$ where $f_2$ has been defined using the unique non-identity automorphism of the $4$-th roots of unity; can anything intelligent be said about that quantity? (e.g., is it rational? experimentally, it doesn't look like it is). Edit: I should have recalled that, for $\xi$ rational, the sequence of coefficients can be computed from Witt polynomials. 
Namely, assuming $p$ does not divide the denominator of $\xi$, we let $A_0 = \xi$ and by induction $A_n = (\xi - \sum_{i=0}^{n-1} (A_i)^{p^{n-i}}\cdot p^i)/p^n$: then the $A_i$ are rationals and $p$-adic integers, and $a_i$ is (the root of unity which coincides with) $A_i$ mod $p$. Unfortunately, when seen as integers (if $\xi$ is an integer), the $A_n$ grow extremely rapidly in absolute value, and I don't see how that can be made useful.
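For what it's worth, the huge $A_n$ can be avoided entirely by computing digit by digit modulo a fixed power of $p$: peel off the Teichmüller representative of the current residue (obtained by iterating the Frobenius $t \mapsto t^p$ until it stabilises), divide by $p$, and repeat. A sketch in Python (the function name is mine; for rational $\xi$ with denominator prime to $p$ one would first replace $\xi$ by a representative integer mod $p^K$):

```python
def teichmuller_digits(xi, p, n_digits):
    """Residues mod p of the Teichmueller digits a_i of the p-adic
    integer xi, i.e. the coefficients of the series f_xi over F_p."""
    K = n_digits + 2          # working p-adic precision
    M = p ** K
    x = xi % M
    digits = []
    for _ in range(n_digits):
        r = x % p
        digits.append(r)
        # Teichmueller lift of r: iterate Frobenius t -> t^p until stable mod M
        t = r
        for _ in range(K):
            t = pow(t, p, M)
        # x = t (mod p), so x - t is divisible by p; shift out one digit
        x = ((x - t) % M) // p
    return digits

# Coefficients of f_2 in F_5[[t]]; the first five are 2, 4, 3, 0, 4,
# matching the series displayed above.
print(teichmuller_digits(2, 5, 15))
```

Each division by $p$ loses one digit of working precision, which is why $K$ is taken a couple of digits larger than the number requested.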
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV (Elsevier, 2014-09). Transverse momentum spectra of $\pi^{\pm}$, $K^{\pm}$ and $p(\bar{p})$ up to $p_T = 20$ GeV/c at mid-rapidity, $|y| \le 0.8$, in pp and Pb-Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV have been measured using the ALICE detector ...
I have to prove that if $\sum_{n=1}^{\infty} a_{n}$ is a convergent series with positive real numbers, then $\sum_{n=1}^{\infty} (a_{n})^{\frac{n}{n+1}}$ converges. I also wonder if the converse is true. Any suggestion or hint will be very welcome. Thanks. If $a_n \geq \frac{1}{2^{n+1}}$, then $a_n^{\frac{1}{n+1}} \geq \frac{1}{2}$, so $a_n^{\frac{n}{n+1}} = \frac{a_n}{a_n^{\frac{1}{n+1}}} \leq 2a_n$. If $a_n \leq \frac{1}{2^{n+1}}$, then $a_n^{\frac{n}{n+1}} \leq \frac{1}{2^n}$. Therefore $a_n^{\frac{n}{n+1}} \leq 2a_n + \frac{1}{2^n}$ in either case, and by the comparison test you are done. (The converse also holds: since $a_n \to 0$, eventually $a_n \leq 1$, and then $a_n \leq a_n^{\frac{n}{n+1}}$, so convergence of the new series implies convergence of the original by comparison.)
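The key pointwise bound $a_n^{n/(n+1)} \leq 2a_n + 2^{-n}$ from the case split can be sanity-checked numerically (a quick sketch, not part of the proof; the tolerance only absorbs floating-point error):

```python
import random

random.seed(0)
# Check a^(n/(n+1)) <= 2a + 2^(-n) for many random positive a and various n.
for n in range(1, 60):
    for _ in range(100):
        a = random.uniform(1e-12, 2.0)   # positive terms, some near zero
        lhs = a ** (n / (n + 1))
        rhs = 2 * a + 2.0 ** (-n)
        assert lhs <= rhs + 1e-9, (n, a)
```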
Algebraic Geometry Seminar Fall 2016 (revision as of 14:15, 23 November 2016). The seminar meets on Fridays at 2:25 pm in Van Vleck B305. Here is the schedule for the previous semester.

Algebraic Geometry Mailing List: please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link).

Fall 2016 Schedule (date, speaker, title, host):
September 16: Alexander Pavlov (Wisconsin), "Betti Tables of MCM Modules over the Cones of Plane Cubics" (host: local)
September 23: PhilSang Yoo (Northwestern), "Classical Field Theories for Quantum Geometric Langlands" (host: Dima)
October 7: Botong Wang (Wisconsin), "Enumeration of points, lines, planes, etc." (host: local)
October 14: Luke Oeding (Auburn), "Border ranks of monomials" (host: Steven)
October 28: Adam Boocher (Utah), "Bounds for Betti Numbers of Graded Algebras" (host: Daniel)
November 4: Lukas Katthaen, "Finding binomials in polynomial ideals" (host: Daniel)
November 11: Daniel Litt (Columbia), "Arithmetic restrictions on geometric monodromy" (host: Jordan)
November 18: David Stapleton (Stony Brook), "Hilbert schemes of points and their tautological bundles" (host: Daniel)
December 2: Rohini Ramadas (Michigan), "Dynamics on the moduli space of pointed rational curves" (hosts: Daniel and Jordan)
December 9: Robert Walker (Michigan), "Uniform Asymptotic Growth on Symbolic Powers of Ideals" (host: Daniel)

Abstracts

Alexander Pavlov: Betti Tables of MCM Modules over the Cones of Plane Cubics. Graded Betti numbers are classical invariants of finitely generated modules over graded rings describing the shape of a minimal free resolution.
We show that for maximal Cohen-Macaulay (MCM) modules over the homogeneous coordinate ring of a smooth Calabi-Yau variety X, the computation of Betti numbers can be reduced to computations of dimensions of certain Hom groups in the bounded derived category D(X). In the simplest case, a smooth elliptic curve embedded into the projective plane as a cubic, we use our formula to get explicit answers for the Betti numbers. In this case we show that there are only four possible shapes of the Betti tables up to a shift in internal degree, and two possible shapes up to a shift in internal degree and taking syzygies. PhilSang Yoo: Classical Field Theories for Quantum Geometric Langlands. One can study a class of classical field theories in a purely algebraic manner, thanks to the recent development of derived symplectic geometry. After reviewing the basics of derived symplectic geometry, I will discuss some interesting examples of classical field theories, including the B-model, Chern-Simons theory, and Kapustin-Witten theory. Time permitting, I will make a proposal to understand quantum geometric Langlands and other related Langlands dualities in a unified way from the perspective of field theory. Botong Wang: Enumeration of points, lines, planes, etc. It is a theorem of de Bruijn and Erdős that $n$ points in the plane determine at least $n$ lines, unless all the points lie on a line. This is one of the earliest results in enumerative combinatorial geometry. We will present a higher-dimensional generalization of this theorem. Let E be a generating subset of a $d$-dimensional vector space. Let $W_k$ be the number of $k$-dimensional subspaces that are generated by a subset of E. We show that $W_k\leq W_{d-k}$ when $k\leq d/2$. This confirms a "top-heavy" conjecture of Dowling and Wilson from 1974 for all matroids realizable over some field. The main ingredients of the proof are the hard Lefschetz theorem and the decomposition theorem.
I will also talk about a proof of Welsh and Mason's log-concavity conjecture on the number of $k$-element independent sets. These are joint works with June Huh. Luke Oeding: Border ranks of monomials. What is the minimal number of terms needed to write a monomial as a sum of powers? What if you allow limits? Here are some minimal examples: $$4xy = (x+y)^2 - (x-y)^2$$ $$24xyz = (x+y+z)^3 + (x-y-z)^3 + (-x-y+z)^3 + (-x+y-z)^3$$ $$192xyzw = (x+y+z+w)^4 - (-x+y+z+w)^4 - (x-y+z+w)^4 - (x+y-z+w)^4 - (x+y+z-w)^4 + (-x-y+z+w)^4 + (-x+y-z+w)^4 + (-x+y+z-w)^4$$ The monomial $x^2y$ has a minimal expression as a sum of 3 cubes: $$6x^2y = (x+y)^3 + (-x+y)^3 - 2y^3$$ But you can use only 2 cubes if you allow a limit: $$6x^2y = \lim_{\epsilon \to 0} \frac{(x+\epsilon y)^3 - (x-\epsilon y)^3}{\epsilon}$$ Can you do something similar with $xyzw$? Previously it wasn't known whether the minimal number of powers in a limiting expression for $xyzw$ was 7 or 8. I will answer this and the analogous question for all monomials. The polynomial Waring problem is to write a polynomial as a linear combination of powers of linear forms in the minimal possible way. The minimal number of summands is called the rank of the polynomial. The solution in the case of monomials was given in 2012 by Carlini-Catalisano-Geramita, and independently shortly thereafter by Buczynska-Buczynski-Teitler. In this talk I will address the problem of finding the border rank of each monomial. Upper bounds on border rank have been known since Landsberg-Teitler, 2010, and earlier. We use symmetry-enhanced linear algebra to provide polynomial certificates of lower bounds (which agree with the upper bounds). This work builds on the idea of Young flattenings, which were introduced by Landsberg and Ottaviani, and which give determinantal equations for secant varieties and provide lower bounds for border ranks of tensors.
We find special monomial-optimal Young flattenings that provide the best possible lower bound for all monomials up to degree 6. For degree 7 and higher these flattenings no longer suffice for all monomials. To overcome this problem, we introduce partial Young flattenings and use them to give a lower bound on the border rank of monomials which agrees with Landsberg and Teitler's upper bound. I will also show how to implement Young flattenings and partial Young flattenings in Macaulay2 using Steven Sam's PieriMaps package. Adam Boocher: Bounds for Betti Numbers of Graded Algebras. Let R be a standard graded algebra over a field. The graded Betti numbers of R provide some measure of the complexity of the defining equations for R and their syzygies. Recent breakthroughs (e.g. Boij-Söderberg theory, the structure of asymptotic syzygies, Stillman's Conjecture) have provided new insights about these numbers, and we have made good progress toward understanding many homological properties of R. However, many basic questions remain. In this talk I'll discuss some conjectured upper and lower bounds for the total Betti numbers for different classes of rings. Surprisingly little is known in even the simplest cases. Lukas Katthaen (Frankfurt): Finding binomials in polynomial ideals. In this talk, I will present an algorithm which, for a given ideal J in the polynomial ring, decides whether J contains a binomial, i.e., a polynomial having only two terms. For this, we use ideas from tropical geometry to reduce the problem to the Artinian case, and then use an algorithm from number theory. This is joint work with Anders Jensen and Thomas Kahle. David Stapleton: Hilbert schemes of points and their tautological bundles. Fogarty showed in the 1970s that the Hilbert scheme of n points on a smooth surface is smooth. Interest in these Hilbert schemes has grown since it has been shown that they arise in hyperkähler geometry, geometric representation theory, and algebraic combinatorics. In this talk we will explore the geometry of certain tautological bundles on the Hilbert scheme of points.
In particular we will show that these tautological bundles are (almost always) stable vector bundles. We will also show that every sufficiently positive vector bundle on a curve C is the pullback of a tautological bundle from an embedding of C into the Hilbert scheme of the projective plane.
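The power-sum identities in Luke Oeding's abstract are polynomial identities, so they can be verified exactly by evaluating both sides at random integer points (a quick sketch using exact integer arithmetic):

```python
import random

random.seed(1)
for _ in range(50):
    x, y, z, w = (random.randint(-50, 50) for _ in range(4))
    # 4xy as a difference of two squares
    assert (x + y) ** 2 - (x - y) ** 2 == 4 * x * y
    # 24xyz as a sum of four cubes
    assert ((x + y + z) ** 3 + (x - y - z) ** 3
            + (-x - y + z) ** 3 + (-x + y - z) ** 3) == 24 * x * y * z
    # 192xyzw as a signed sum of eight fourth powers
    assert ((x + y + z + w) ** 4 - (-x + y + z + w) ** 4
            - (x - y + z + w) ** 4 - (x + y - z + w) ** 4
            - (x + y + z - w) ** 4 + (-x - y + z + w) ** 4
            + (-x + y - z + w) ** 4
            + (-x + y + z - w) ** 4) == 192 * x * y * z * w
    # 6x^2y as a sum of three cubes
    assert (x + y) ** 3 + (-x + y) ** 3 - 2 * y ** 3 == 6 * x ** 2 * y
```

Since both sides are polynomials of low degree, agreement on enough generic points is strong evidence, though of course only the symbolic expansion constitutes a proof.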
The monstrous moonshine picture is a sub-graph of Conway's Big Picture on 218 vertices. These vertices are the classes of lattices needed in the construction of the 171 moonshine groups. That is, moonshine gives us the shape of the picture. (image credit Friendly Monsters) But we can ask to reverse this process. Is the shape of the picture dictated by group-theoretic properties of the monster? That is, can we reconstruct the 218 lattices and their edges starting from, say, the conjugacy classes of the monster and some simple rules? Look at the power maps for the monster. That is, the operation on conjugacy classes sending the class of $g$ to that of $g^k$ for all divisors $k$ of the order of $g$. Or, if you prefer, the $\lambda$-ring structure on the representation ring. Rejoice, die-hard believers in $\mathbb{F}_1$-theory, rejoice! Here's the game to play. Let $g$ be a monster element of order $n$ and take $d=\gcd(n,24)$. (1): If $d=8$ and a power map of $g$ gives class $8C$, add $(n|4)$ to your list. (2): Otherwise, look at the smallest power of $g$ such that the class is one of $12J, 8F, 6F, 4D, 3C, 2B$ or $1A$ and add $(n|e)$, where $e$ is the order of that class, or, if $n > 24$ and $e$ is even, add $(n | \frac{e}{2})$. A few examples: For class 20E, $d=4$ and the power maps give classes 4D and 2B, so we add $(20|2)$. For class 32B, $d=8$ but the power map gives 8E, so we resort to rule (2). Here the power maps give 8E, 4C and 2B. So the best class is 4C, but as $32 > 24$ we add $(32|2)$. For class 93A, $d=3$ and the power map gives 3C, and even though $93 > 24$ we add $(93|3)$. This gives us a list of instances $(n|e)$ with $n$ the order of a monster element. For $N=n \times e$, look at all divisors $h$ of $24$ such that $h^2$ divides $N$ and add to your list of lattices those of the form $M \frac{g}{h}$ with $g$ strictly smaller than $h$ and $(g,h)=1$ and $M$ a divisor of $\frac{N}{h^2}$.
This gives us a list of lattices $M \frac{g}{h}$, each of which is an $h$-th root of unity centered at $L=M \times h$ (see this post). If we do this for all lattices in the list, we can partition the $L$'s into families according to which roots of unity are centered at $L$. This gives us the moonshine picture (modulo mistakes I made). The operations we have to do after we have our list of instances $(n|e)$ are pretty straightforward from the rules we used to determine the lattices needed to describe a moonshine group. Perhaps the oddest part of the construction is rules (1) and (2) and the prescribed conjugacy classes used in them. One way to look at this is that the classes $8C$ and $12J$ (or $24J$) are special. The other classes are just the power-maps of $12J$. Another 'rationale' behind these classes may come from the notion of harmonics (see the original Monstrous Moonshine paper, page 312) of the identity element and the two classes of involutions, 2A (the Fischer involutions) and 2B (the Conway involutions). For 1A these are: 1A, 3C. For 2A these are: 2A, 4B, 8C. For 2B these are: 2B, 4D, 6F, 8F, 12J, 24J. These are exactly the classes that we used in (1) and (2), if we add the power-classes of 8C. Perhaps I should take some time to write all this down more formally.
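The final lattice-listing step is mechanical enough to script. A sketch of just that step in Python (the function name is mine, and the convention that $g=0$ with $h=1$ contributes the plain lattices $M$ is my own reading of the rule):

```python
from math import gcd

def lattices_for(n, e):
    """Lattice labels (M, g, h) produced by an instance (n|e), following the
    rule in the text: h divides 24, h^2 divides N = n*e, 0 <= g < h with
    gcd(g, h) = 1, and M a divisor of N / h^2 (g = 0 gives the plain lattice M)."""
    N = n * e
    out = []
    for h in range(1, 25):
        if 24 % h or N % (h * h):
            continue                      # need h | 24 and h^2 | N
        for g in range(h):
            if gcd(g, h) != 1:
                continue
            for M in range(1, N // (h * h) + 1):
                if (N // (h * h)) % M == 0:
                    out.append((M, g, h))
    return out

# Instance (20|2) from the class 20E example: N = 40, so h can be 1 or 2,
# giving the 8 divisors of 40 plus the labels M 1/2 with M dividing 10.
print(len(lattices_for(20, 2)))
```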
This answer focuses on identifying families of solutions to the problem described in the question. I've made two provisional conjectures in order to make progress with the problem: The result can be stated for three $2n$-gons rather than two $n$-gons and one $2n$-gon. Solutions have mirror symmetry. Or equivalently, in any solution there are two pairs of $2n$-gons which have the same degree of overlap. [This turns out to be false - see 'Solution family 5' below. However, this condition is assumed in Solution families 1-4.] [ Continuation 6: in an overhaul of the notation I've halved $\phi$ and doubled $m$ so that $m$ is always an integer.] If we define the degree of overlap, $j$, between two $2n$-gons $(n>3)$ as the number of edges of one that lie wholly inside the other, then $1 < j < n$. If $$\phi = \frac{\pi}{2n}$$is half the angle subtended at the centre of the $2n$-gon by one of its edges, then the distance between the centres of two overlapping $2n$-gons is $$D_{jn} = 2\cos{j\phi}$$Consider a $2n$-gon P which overlaps a $2n$-gon O with degree $j$. Now bring in a third $2n$-gon, Q, which also overlaps O with degree $j$ but is rotated about the centre of O by an angle $m\phi$ with respect to P, where $m$ is an integer. The distance between the centres of P and Q, which I'll denote by $D_{kn}$ for a reason that will become apparent, is$$D_{kn} = 2D_{jn}\sin{\tfrac{m}{2}\phi} = 4\cos{j\phi} \, \sin{\tfrac{m}{2}\phi}$$ We now demand that P and Q should overlap by an integer degree, $k$, so that$$D_{kn} = 2\cos{k\phi}$$This will ensure that all points of intersection coincide with vertices of the intersecting polygons, and thus provide a configuration satisfying the requirements of the question (with the proviso that the condition does not guarantee that there is a common area of overlap shared by all three polygons). We have omitted mention of the orientation of the polygons, but it is easily shown that this is always such as to achieve the desired overlap. 
Combining the two expressions for $D_{kn}$ gives the condition $$2\cos{j\phi}\, \sin{\tfrac{m}{2}\phi} = \cos{k\phi}$$or (since $n\phi=\pi/2$)$$2\cos{j\phi}\, \cos{(n-\tfrac{m}{2})\phi} = \cos{k\phi} \tag{1}$$ The configurations we seek are solutions of this equation for integer $n$, $j$, $k$ and $m$. In the first example in the question $n = 12, j = 8, k = 6, m = 12$. In the second example $n = 15, j = 6, k = 10, m = 6$. [ Continuation 6: for solutions under the constraint of conjecture 2, $m$ is always even, but in the more general case $m$ may be odd.] I'll now throw this open to see if anyone can provide a general solution. It seems likely that $j$, $k$ and $m/2$ must be divisors of $2n$ [this turns out to be incorrect], and I have a hunch that the solution will involve cyclotomic polynomials [this turns out to be correct]. Continuation (1) I've now identified 3 families of solutions consistent with conjecture 2 (mirror symmetry), all involving angles of 60 degrees. There may be others. Solution family 1 This family is defined by setting $j=2n/3$. This means that half the angle subtended at the centre of O by its overlapping edges is $\tfrac{\pi}{3}$ radians or 60 degrees. Since $\cos{\tfrac{\pi}{3}} = \tfrac{1}{2}$ it reduces equation 1 to$$\cos{(n-\tfrac{m}{2})\phi} = \cos{k\phi}$$so there are solutions with$$n-\tfrac{m}{2} = k$$(where $\tfrac{m}{2}$ is an integer) subject to $2 \le k \le n-1\,\,$, $1 \le \tfrac{m}{2} \le n-2\,\,$ and $3|n$. The first example in the question belongs to this family. The complete set of solutions for $n=12$ combine to make this pleasing diagram: Solution family 2 This family has $m=2n/3$. This makes $\cos{(n-\tfrac{m}{2})\phi}=\cos{(\pi/3)} = \tfrac{1}{2}$, which reduces equation 1 to$$\cos{j\phi} = \cos{k\phi}$$so (given that $j<n$ and $k<n$)$$j = k$$These solutions have threefold rotational symmetry. The only restriction is that $n$ must be divisible by 3. 
Example ($n=9, j=k=4, m=6$): Solution family 3 This family is the most interesting of the three, but yields only one solution. It is defined by setting $k=2n/3$ so that $\cos{k\phi}=\cos{\tfrac{\pi}{3}} = \tfrac{1}{2}$. Equation 1 then becomes $$2\cos{j\phi}\,\cos{(n-\tfrac{m}{2})\phi} = \tfrac{1}{2}$$which may be written in the following equivalent forms:$$\cos{(n+\tfrac{m}{2}-j)\phi} + \cos{(n+\tfrac{m}{2}+j)\phi} = -\tfrac{1}{2} \tag{2}$$$$\cos{(n-\tfrac{m}{2}-j)\phi} + \cos{(n-\tfrac{m}{2}+j)\phi} = \tfrac{1}{2} \tag{3}$$Solutions to these equations can be found using the following theorem relating the roots $z_i(N)$ of the $N$th cyclotomic polynomial to the Möbius function $\mu(N)$: $$\sum_{i=1}^{\varphi(N)} {z_i(N)} = \mu(N)$$where $\varphi(N)$ is the Euler totient function (the number of positive integers less than $N$ that are relatively prime to $N$) and $z_i(N)$ are a subset of the $N$th roots of unity.Taking the real part of both sides and using symmetry this becomes:$$\sum_{i=1}^{\varphi(N)/2} { \cos{(p_i(N) \frac{2\pi}{N})} } = \tfrac{1}{2} \mu(N) \tag{4}$$where $p_i(N)$ is the $i$th integer which is coprime with $N$. The Möbius function $\mu(N)$ takes values as follows: $\mu(N) = 1$ if $N$ is a square-free positive integer with an even number of prime factors. $\mu(N) = −1$ if $N$ is a square-free positive integer with an odd number of prime factors. $\mu(N) = 0$ if $N$ has a squared prime factor. Equation 4 thus provides solutions to equations 2 and 3 if $\varphi(N) = 4$, $\mu(N)$ has the appropriate sign and the cosine arguments are matched. The first two conditions are true for only two integers: $N=5$, with $\mu(5)=-1$, $p_1(5) = 1, p_2(5) = 2$ $N=10$, with $\mu(10)=1$, $p_1(10) = 1, p_2(10) = 3$. We first set $N=5$ and look for solutions to equation 2. 
Matching the cosine arguments requires firstly that$$2j \frac{\pi}{2n} = (p_2(5)-p_1(5))\frac{2\pi}{5}$$from which it follows that$$5j = 2n$$ $n$ must be divisible by 3 to satisfy $k=2n/3$, so the smallest value of $n$ for which solutions are possible is $n=15$, with $k=10$ and $j=6$. All other solutions will be multiples of this one.Matching the cosine arguments also requires that$$(n+\tfrac{m}{2}-j) \frac{\pi}{2n} = p_1(5) \frac{2\pi}{5}$$which implies $m=6$. This is the solution illustrated by the second example in the question. Setting $N=10$ and looking for solutions to equation 3 yields the same solution. Continuation (2) Solution family 4 A fourth family of solutions can be obtained by writing equation 1 as $$\cos{(n+\tfrac{m}{2}-j)\phi} + \cos{(n+\tfrac{m}{2}+j)\phi} + \cos{k\phi} = 0 \tag{5}$$ and viewing this as an instance of equation 4 with $\varphi(N)/2 = 3$ and $\mu(N) = 0$. There are two values of N which satisfy these conditions, $N = 9$ and $N = 18$, which lead to three solutions: For $N = 9$:$$n=9, j=6, k=8, m=2\\n=9, j=4, k=4, m=6$$ For $N=18$:$$n=9, j=2, k=2, m=6$$ However, these are not new solutions. The first is a member of family 1 and the last two are members of family 2. Continuation (3) Solution family 5 Rotating a $2n$-gon about a vertex by an angle $m\phi$ moves its centre by a distance $$2\sin{ \tfrac{m}{2}\phi} = 2\cos{(n-\tfrac{m}{2})\phi} = D_{n-m/2,n}.$$If $m$ is even the rotated $2n$-gon thus overlaps the original $2n$-gon with integer degree $n-\tfrac{m}{2}$, and a third $2n$-gon with a different $m$ may overlap both of these, providing another type of solution to the problem. Solutions of this kind may be constructed for all $n \ge 3$. The diagram below includes the complete set of such solutions for $n=5$. A similar diagram with $n=12$ (but with a centrally placed $2n$-gon of the same size which can only be added when $3|n$) is shown above under Solution family 1. 
This family of solutions provides exceptions to conjecture 2: not all groups of three $2n$-gons overlapping in this way show mirror symmetry. Continuation (4) If we relax the condition set by conjecture 2, allowing solutions without mirror symmetry, we need an additional parameter, $l$, to specify the degree of overlap between O and P (which is now no longer $j$). The distances between the centres of the three $2n$-gons are now related by the cosine rule: $$D_{nk}^2 = D_{nj}^2 + D_{nl}^2 - 2 D_{nj}D_{nl}\cos{m_k\phi},$$where a subscript $k$ has been added to $m$ to acknowledge the fact that $j$, $l$ and $k$ can be cycled to generate three equations of this form. These can be written$$\\ \cos^2{J} + \cos^2{L} - 2 \cos{J} \cos{L} \cos{M_k} = \cos^2{K} \\ \cos^2{K} + \cos^2{J} - 2 \cos{K} \cos{J} \cos{M_l} = \cos^2{L} \\ \cos^2{L} + \cos^2{K} - 2 \cos{L} \cos{K} \cos{M_j} = \cos^2{J} $$where$$J = j\phi,\, L = l\phi,\, K = k\phi,\\M_j = m_j\phi,\, M_l = m_l\phi,\, M_k = m_k\phi$$ The same result in a slightly different form is derived in the answer provided by @marco trevi. $M_j$, $M_l$ and $M_k$ are the angles of the triangle formed by the centres of the three polygons. Since these sum to $\pi$ we have$$m_j + m_l + m_k = 2n$$ The sine rule gives another set of relations:$$\frac{\cos{J}}{\sin{M_j}} = \frac{\cos{L}} {\sin{M_l}} = \frac{\cos{K}}{\sin{M_k}} $$ In general the $m$ parameters are limited to integer values (as can be seen by considering the symmetry of the overlap between a $2n$-gon and each of its two neighbours). But they are now not necessarily even.
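Equation 1 and the example solutions quoted above can be checked numerically (a quick sketch; the tolerance only absorbs floating-point error):

```python
from math import pi, cos

def overlap_condition(n, j, k, m):
    """Residual of equation (1): 2 cos(j*phi) cos((n - m/2)*phi) - cos(k*phi),
    with phi = pi / (2n). A solution makes this zero."""
    phi = pi / (2 * n)
    return 2 * cos(j * phi) * cos((n - m / 2) * phi) - cos(k * phi)

# First example in the question: n=12, j=8, k=6, m=12 (solution family 1).
assert abs(overlap_condition(12, 8, 6, 12)) < 1e-12

# Second example: n=15, j=6, k=10, m=6 (solution family 3).
assert abs(overlap_condition(15, 6, 10, 6)) < 1e-12

# Family 2 example: n=9, j=k=4, m=6.
assert abs(overlap_condition(9, 4, 4, 6)) < 1e-12
```

A brute-force search over integer $(j, k, m)$ for each $n$ with this residual would also be a cheap way to hunt for solution families beyond the ones identified above.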
1. The population of a pod of bottlenose dolphins is modeled by the function [latex]A\left(t\right)=8{\left(1.17\right)}^{t}[/latex], where t is given in years. To the nearest whole number, what will the pod population be after 3 years? 2. Find an exponential equation that passes through the points (0, 4) and (2, 9). 3. Drew wants to save $2,500 to go to the next World Cup. To the nearest dollar, how much will he need to invest in an account now with 6.25% APR, compounding daily, in order to reach his goal in 4 years? 4. An investment account was opened with an initial deposit of $9,600 and earns 7.4% interest, compounded continuously. How much will the account be worth after 15 years? 5. Graph the function [latex]f\left(x\right)=5{\left(0.5\right)}^{-x}[/latex] and its reflection across the y-axis on the same axes, and give the y-intercept. 6. The graph shows transformations of the graph of [latex]f\left(x\right)={\left(\frac{1}{2}\right)}^{x}[/latex]. What is the equation for the transformation? 7. Rewrite [latex]{\mathrm{log}}_{8.5}\left(614.125\right)=a[/latex] as an equivalent exponential equation. 8. Rewrite [latex]{e}^{\frac{1}{2}}=m[/latex] as an equivalent logarithmic equation. 9. Solve for x by converting the logarithmic equation [latex]log_{\frac{1}{7}}\left(x\right)=2[/latex] to exponential form. 10. Evaluate [latex]\mathrm{log}\left(\text{10,000,000}\right)[/latex] without using a calculator. 11. Evaluate [latex]\mathrm{ln}\left(0.716\right)[/latex] using a calculator. Round to the nearest thousandth. 12. Graph the function [latex]g\left(x\right)=\mathrm{log}\left(12 - 6x\right)+3[/latex]. 13. State the domain, vertical asymptote, and end behavior of the function [latex]f\left(x\right)={\mathrm{log}}_{5}\left(39 - 13x\right)+7[/latex]. 14. Rewrite [latex]\mathrm{log}\left(17a\cdot 2b\right)[/latex] as a sum. 15. Rewrite [latex]{\mathrm{log}}_{t}\left(96\right)-{\mathrm{log}}_{t}\left(8\right)[/latex] in compact form. 16. 
Rewrite [latex]{\mathrm{log}}_{8}\left({a}^{\frac{1}{b}}\right)[/latex] as a product. 17. Use properties of logarithm to expand [latex]\mathrm{ln}\left({y}^{3}{z}^{2}\cdot \sqrt[3]{x - 4}\right)[/latex]. 18. Condense the expression [latex]4\mathrm{ln}\left(c\right)+\mathrm{ln}\left(d\right)+\frac{\mathrm{ln}\left(a\right)}{3}+\frac{\mathrm{ln}\left(b+3\right)}{3}[/latex] to a single logarithm. 19. Rewrite [latex]{16}^{3x - 5}=1000[/latex] as a logarithm. Then apply the change of base formula to solve for [latex]x[/latex] using the natural log. Round to the nearest thousandth. 20. Solve [latex]{\left(\frac{1}{81}\right)}^{x}\cdot \frac{1}{243}={\left(\frac{1}{9}\right)}^{-3x - 1}[/latex] by rewriting each side with a common base. 21. Use logarithms to find the exact solution for [latex]-9{e}^{10a - 8}-5=-41[/latex] . If there is no solution, write no solution. 22. Find the exact solution for [latex]10{e}^{4x+2}+5=56[/latex]. If there is no solution, write no solution. 23. Find the exact solution for [latex]-5{e}^{-4x - 1}-4=64[/latex]. If there is no solution, write no solution. 24. Find the exact solution for [latex]{2}^{x - 3}={6}^{2x - 1}[/latex]. If there is no solution, write no solution. 25. Find the exact solution for [latex]{e}^{2x}-{e}^{x}-72=0[/latex]. If there is no solution, write no solution. 26. Use the definition of a logarithm to find the exact solution for [latex]4\mathrm{log}\left(2n\right)-7=-11[/latex] 27. Use the one-to-one property of logarithms to find an exact solution for [latex]\mathrm{log}\left(4{x}^{2}-10\right)+\mathrm{log}\left(3\right)=\mathrm{log}\left(51\right)[/latex] If there is no solution, write no solution. 28. The formula for measuring sound intensity in decibels D is defined by the equation [latex]D=10\mathrm{log}\left(\frac{I}{{I}_{0}}\right)[/latex], where I is the intensity of the sound in watts per square meter and [latex]{I}_{0}={10}^{-12}[/latex] is the lowest level of sound that the average person can hear. 
How many decibels are emitted from a rock concert with a sound intensity of [latex]4.7\cdot {10}^{-1}[/latex] watts per square meter? 29. A radiation safety officer is working with 112 grams of a radioactive substance. After 17 days, the sample has decayed to 80 grams. Rounding to five significant digits, write an exponential equation representing this situation. To the nearest day, what is the half-life of this substance? 30. Write the formula found in the previous exercise as an equivalent equation with base [latex]e[/latex]. Express the exponent to five significant digits. 31. A bottle of soda with a temperature of 71º Fahrenheit was taken off a shelf and placed in a refrigerator with an internal temperature of 35º F. After ten minutes, the internal temperature of the soda was 63º F. Use Newton’s Law of Cooling to write a formula that models this situation. To the nearest degree, what will the temperature of the soda be after one hour? 32. The population of a wildlife habitat is modeled by the equation [latex]P\left(t\right)=\frac{360}{1+6.2{e}^{-0.35t}}[/latex], where t is given in years. How many animals were originally transported to the habitat? How many years will it take before the habitat reaches half its capacity? 33. Enter the data from the table below into a graphing calculator and graph the resulting scatter plot. Determine whether the data from the table would likely represent a function that is linear, exponential, or logarithmic. x f(x) 1 3 2 8.55 3 11.79 4 14.09 5 15.88 6 17.33 7 18.57 8 19.64 9 20.58 10 21.42 34. The population of a lake of fish is modeled by the logistic equation [latex]P\left(t\right)=\frac{16,120}{1+25{e}^{-0.75t}}[/latex], where t is time in years. To the nearest hundredth, how many years will it take the lake to reach 80% of its carrying capacity? For the following exercises, use a graphing utility to create a scatter diagram of the data given in the table. 
Observe the shape of the scatter diagram to determine whether the data is best described by an exponential, logarithmic, or logistic model. Then use the appropriate regression feature to find an equation that models the data. When necessary, round values to five decimal places. 35. x f(x) 1 20 2 21.6 3 29.2 4 36.4 5 46.6 6 55.7 7 72.6 8 87.1 9 107.2 10 138.1 36. x f(x) 3 13.98 4 17.84 5 20.01 6 22.7 7 24.1 8 26.15 9 27.37 10 28.38 11 29.97 12 31.07 13 31.43 37. x f(x) 0 2.2 0.5 2.9 1 3.9 1.5 4.8 2 6.4 3 9.3 4 12.3 5 15 6 16.2 7 17.3 8 17.9
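A few of the exercises above can be sanity-checked with a short script (exercise numbers refer to the list; this is a sketch, not a full answer key):

```python
from math import log10

# Exercise 1: A(t) = 8 * 1.17^t, population after 3 years, nearest whole number.
assert round(8 * 1.17 ** 3) == 13

# Exercise 2: exponential f(x) = a*b^x through (0, 4) and (2, 9):
# a = 4 and b = sqrt(9/4) = 1.5, so f(2) recovers 9.
a, b = 4, (9 / 4) ** 0.5
assert abs(a * b ** 2 - 9) < 1e-9

# Exercise 9: log_{1/7}(x) = 2 converts to x = (1/7)^2 = 1/49.
assert abs((1 / 7) ** 2 - 1 / 49) < 1e-15

# Exercise 10: log(10,000,000) = 7.
assert abs(log10(10_000_000) - 7) < 1e-12

# Exercise 28: D = 10*log10(I / 1e-12) with I = 4.7e-1 gives roughly 117 dB.
D = 10 * log10(4.7e-1 / 1e-12)
assert 116 < D < 118
```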
How to Use Nodes in LaTeX Using PGF/TikZ (YouTube). Symbol tables: Arrows, Operators, Functions, Miscellaneous, Alphabet, Brackets, Dots, Variable Size; Extra: Textcomp, Marvosym, Pifont, Chemarrow. Note about packages: I did not separate the AMS-LaTeX symbols from the standard ones. Do not forget to include \usepackage{amsmath,amssymb,latexsym} before \begin{document} to be able to use these symbols. In the section Extra, I have included the packages ... One of the easiest ways of adding an arrow to your document is to simply type it in. As long as the AutoCorrect feature is active, Word will automatically replace certain strings of characters with a single arrow ... How to write text through an arrow (LaTeX.org, 2017-04-10): In this video, we will learn about 'node' in LaTeX using the TikZ package. A node is typically a rectangle or circle or any other simple shape with some text on it. Beamer arrows (TikZ example): I want to put a left arrow over a letter in math mode. I am looking for exactly the reverse of the vector symbol in \vec{x}. I tried \stackrel{\leftarrow}{x}, but it doesn't look good. Many presentations created using LaTeX beamer include mathematical equations, and these can be easily included in a presentation; in this post we will consider using the tikz package to add various interesting elements to equations, such as lines between text on a slide and part of an equation.
notation What are the correct equilibrium arrows I want to write a few lines of code for converting it from the prior format to the secenond format in Microsoft word or even Latex. I want to put the rate value "K" on the top of --> (arrow) in the second format. Can you help me? how to write a letter apologizing for employee theft Many presentations created using LaTeX beamer included mathematical equations and these can be easily included in a presentation and in this post we will consider using the tikz package to add various interesting elements to equations, such as lines between text on a slide and part of an equation. How long can it take? Is the 'implies' symbol ([math]\implies[/math]) in LaTeX Arrow signs (type ⇅ arrow symbols on your keyboard) fsymbols formatting \Rightarrow with text above it - TeX - LaTeX How to write text through an arrow LaTeX.org How do you do a $\Rightarrow$ with a slash through it in How To Write Arrow In Latex 2017-04-10 · In this video, we will learn about 'node' in LaTeX using TikZ package. A node is typically a rectangle or circle or any other simple shape with some text on it. Latex provides a huge number of different arrow symbols. Arrows would be used within math enviroment. If you want to use them in text just put the arrow command between two $ like this example: $\uparrow$ now you got an up arrow in text. The picture environment allows programming pictures directly in LaTeX. On the one hand, there are rather severe constraints, as the slopes of line segments … The picture environment allows programming pictures directly in LaTeX. On the one hand, there are rather severe constraints, as the slopes of line segments … Please only refer genuine TeX (including, but not limited to, LaTeX) questions to TeX-SX. – Loop Space Apr 21 '12 at 18:59. add a comment 3 Answers active oldest votes. 13 \nRightarrow gives $\nRightarrow$ Added: For a thorough
Once we recognize a need for a linear function to model the data in "Draw and interpret scatter plots," the natural follow-up question is "what is that linear function?" One way to approximate our linear function is to sketch the line that seems to best fit the data. Then we can extend the line until we can verify the y-intercept. We can approximate the slope of the line by extending it until we can estimate the [latex]\frac{\text{rise}}{\text{run}}[/latex].

Example 2: Finding a Line of Best Fit
Find a linear function that fits the data in the table below by "eyeballing" a line that seems to fit.
Chirps: 44, 35, 20.4, 33, 31, 35, 18.5, 37, 26
Temperature: 80.5, 70.5, 57, 66, 68, 72, 52, 73.5, 53

Solution
On a graph, we could try sketching a line. Using the starting and ending points of our hand-drawn line, points (0, 30) and (50, 90), this graph has a slope of [latex]m=\frac{60}{50}=1.2[/latex] and a y-intercept at 30. This gives an equation of [latex]T\left(c\right)=1.2c+30[/latex], where c is the number of chirps in 15 seconds, and T(c) is the temperature in degrees Fahrenheit. The resulting equation is represented in the graph below.

Recognizing Interpolation or Extrapolation
While the data for most examples does not fall perfectly on the line, the equation is our best guess as to how the relationship will behave outside of the values for which we have data. We use a process known as interpolation when we predict a value inside the domain and range of the data. The process of extrapolation is used when we predict a value outside the domain and range of the data. The graph below compares the two processes for the cricket-chirp data addressed in Example 2. We can see that interpolation would occur if we used our model to predict temperature when the values for chirps are between 18.5 and 44. Extrapolation would occur if we used our model to predict temperature when the values for chirps are less than 18.5 or greater than 44.
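The arithmetic behind the eyeballed line in Example 2 can be checked in a few lines of Python (a sketch; the function name T is taken from the example, the rest are our own):

```python
# Two points read off the hand-drawn line in Example 2
p1 = (0, 30)    # (chirps, temperature)
p2 = (50, 90)

# Slope is rise over run between the two chosen points
m = (p2[1] - p1[1]) / (p2[0] - p1[0])   # 60 / 50 = 1.2
b = p1[1] - m * p1[0]                   # y-intercept: 30

def T(c):
    """Temperature (degrees F) predicted from c chirps per 15 seconds."""
    return m * c + b

print(m, b, T(30))  # 1.2 30.0 66.0
```

Note that a different pair of eyeballed points would give a slightly different line; that sensitivity is what motivates the least squares method later in the section.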
There is a difference between making predictions inside the domain and range of values for which we have data and outside that domain and range. Predicting a value outside of the domain and range has its limitations. When our model no longer applies after a certain point, it is sometimes called model breakdown. For example, predicting a cost function for a period of two years may involve examining the data where the input is the time in years and the output is the cost. But if we try to extrapolate a cost when x = 50, that is, in 50 years, the model would not apply because we could not account for factors fifty years in the future.

A General Note: Interpolation and Extrapolation
Different methods of making predictions are used to analyze data. The method of interpolation involves predicting a value inside the domain and/or range of the data. The method of extrapolation involves predicting a value outside the domain and/or range of the data. Model breakdown occurs at the point when the model no longer applies.

Example 3: Understanding Interpolation and Extrapolation
Chirps: 44, 35, 20.4, 33, 31, 35, 18.5, 37, 26
Temperature: 80.5, 70.5, 57, 66, 68, 72, 52, 73.5, 53
Use the cricket data above to answer the following questions:
Would predicting the temperature when crickets are chirping 30 times in 15 seconds be interpolation or extrapolation? Make the prediction, and discuss whether it is reasonable.
Would predicting the number of chirps crickets will make at 40 degrees be interpolation or extrapolation? Make the prediction, and discuss whether it is reasonable.

Solution
The number of chirps in the data provided varied from 18.5 to 44. A prediction at 30 chirps per 15 seconds is inside the domain of our data, so it would be interpolation. Using our model: [latex]\begin{cases}T\left(30\right)=30+1.2\left(30\right)\hfill \\ \text{ }=66\text{ degrees}\hfill \end{cases}[/latex] Based on the data we have, this value seems reasonable. The temperature values varied from 52 to 80.5.
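The interpolation-versus-extrapolation test in Example 3 is just a comparison against the minimum and maximum of the observed data, which can be expressed as a small helper (a sketch; the function name classify is our own):

```python
def classify(value, data):
    """Label a prediction input as interpolation or extrapolation,
    depending on whether it falls inside the span of the observed data."""
    return "interpolation" if min(data) <= value <= max(data) else "extrapolation"

chirps = [44, 35, 20.4, 33, 31, 35, 18.5, 37, 26]    # observed domain: 18.5 to 44
temps = [80.5, 70.5, 57, 66, 68, 72, 52, 73.5, 53]   # observed range: 52 to 80.5

print(classify(30, chirps))  # 30 chirps lies inside [18.5, 44] -> interpolation
print(classify(40, temps))   # 40 degrees lies below 52 -> extrapolation
```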
Predicting the number of chirps at 40 degrees is extrapolation because 40 is outside the range of our data. Using our model: [latex]\begin{cases}40=30+1.2c\hfill \\ 10=1.2c\hfill \\ c\approx 8.33\hfill \end{cases}[/latex] We can compare the regions of interpolation and extrapolation using the graph below.

Try It 1
According to the data from the table in Example 3, what temperature can we predict if we counted 20 chirps in 15 seconds?

Finding the Line of Best Fit Using a Graphing Utility
While eyeballing a line works reasonably well, there are statistical techniques for fitting a line to data that minimize the differences between the line and the data values. [1] One such technique is called least squares regression and can be computed by many graphing calculators, spreadsheet software, statistical software, and many web-based calculators. [2] Least squares regression is one means to determine the line that best fits the data, and here we will refer to this method as linear regression.

How To: Given data of inputs and corresponding outputs from a linear function, find the best fit line using linear regression.
Enter the input in List 1 (L1).
Enter the output in List 2 (L2).
On a graphing utility, select Linear Regression (LinReg).

Example 4: Finding a Least Squares Regression Line
Find the least squares regression line using the cricket-chirp data in the table below.
Chirps: 44, 35, 20.4, 33, 31, 35, 18.5, 37, 26
Temperature: 80.5, 70.5, 57, 66, 68, 72, 52, 73.5, 53

Solution
Enter the input (chirps) in List 1 (L1) and the output (temperature) in List 2 (L2). See the table below.
L1: 44, 35, 20.4, 33, 31, 35, 18.5, 37, 26
L2: 80.5, 70.5, 57, 66, 68, 72, 52, 73.5, 53
On a graphing utility, select Linear Regression (LinReg). Using the cricket-chirp data from earlier, with technology we obtain the equation: [latex]T\left(c\right)=30.281+1.143c[/latex]

Q & A
Will there ever be a case where two different lines will serve as the best fit for the data?
No. There is only one best fit line.
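The same LinReg computation can be reproduced without a calculator. A minimal sketch using NumPy's least-squares polynomial fit on the cricket-chirp data (the variable names are ours):

```python
import numpy as np

# Cricket-chirp data from the table (L1 = chirps, L2 = temperature)
chirps = np.array([44, 35, 20.4, 33, 31, 35, 18.5, 37, 26])
temps = np.array([80.5, 70.5, 57, 66, 68, 72, 52, 73.5, 53])

# Degree-1 least squares fit: returns [slope, intercept]
slope, intercept = np.polyfit(chirps, temps, 1)

print(f"T(c) = {intercept:.3f} + {slope:.3f}c")  # T(c) = 30.281 + 1.143c
```

Because least squares has a unique solution for this data, any correct tool (calculator, spreadsheet, or the code above) recovers the same line, which is why there is only one best fit line.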
Footnotes
1. Technically, the method minimizes the sum of the squared differences in the vertical direction between the line and the data values.
2. For example, http://www.shodor.org/unchem/math/lls/leastsq.html