Damping in Structural Dynamics: Theory and Sources

If you strike a bowl made of glass or metal, you hear a tone whose intensity decays with time. In a world without damping, the tone would linger forever. In reality, there are several physical processes through which the kinetic and elastic energy in the bowl dissipate into other energy forms. In this blog post, we will discuss how damping can be represented, as well as the physical phenomena that cause damping in vibrating structures.

How Is Damping Quantified?

There are several ways in which damping can be described mathematically. Some of the more popular descriptions are summarized below.

One of the most obvious manifestations of damping is the amplitude decay during free vibrations, as in the case of a singing bowl. The rate of the decay depends on how large the damping is. Most commonly, the vibration amplitude decreases exponentially with time. This is the case when the energy lost during a cycle is proportional to the vibration energy of the cycle itself.

Let's start out with the equation of motion for a system with a single degree of freedom (DOF) with viscous damping and no external loads,

m \ddot{u} + c \dot{u} + k u = 0

After division by the mass, m, we get a normalized form, usually written as

\ddot{u} + 2 \zeta \omega_0 \dot{u} + \omega_0^2 u = 0

Here, \omega_0 = \sqrt{k/m} is the undamped natural frequency and \zeta = \frac{c}{2\sqrt{km}} is called the damping ratio. In order for the motion to be periodic, the damping ratio must be limited to the range 0 \le \zeta < 1. The amplitude of the free vibration in this system will decay with the factor

e^{-2 \pi \zeta t / T_0}

where T_0 = 2\pi/\omega_0 is the period of the undamped vibration.

Figure: Decay of a free vibration for three different values of the damping ratio.

Another measure in use is the logarithmic decrement, δ. This is the logarithm of the ratio between the amplitudes of two subsequent peaks,

\delta = \ln \frac{u(t)}{u(t+T)}

where T is the period.
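The decay factor and the logarithmic decrement can be illustrated with a short numerical sketch. All parameter values below are assumed for illustration only:

```python
import numpy as np

# Free vibration of a single-DOF system with viscous damping (values assumed
# for illustration): u'' + 2*zeta*omega0*u' + omega0**2 * u = 0
zeta = 0.05                               # damping ratio (hypothetical)
omega0 = 2 * np.pi                        # undamped natural frequency -> T0 = 1 s
omega_d = omega0 * np.sqrt(1 - zeta**2)   # damped natural frequency
T_d = 2 * np.pi / omega_d                 # damped period

def u(t):
    # Underdamped free response with unit initial amplitude
    return np.exp(-zeta * omega0 * t) * np.cos(omega_d * t)

# Logarithmic decrement: log of the amplitude ratio one damped period apart.
# The cosine factor cancels, so any sampling instant works; use t = 0.
delta = np.log(u(0.0) / u(T_d))

# Invert delta = 2*pi*zeta/sqrt(1 - zeta**2) to recover the damping ratio:
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
```

The last line is how a damping ratio is typically estimated from a measured decay curve: read off the decrement from two peaks, then invert the relation.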
The relation between the logarithmic decrement and the damping ratio is

\delta = \frac{2 \pi \zeta}{\sqrt{1-\zeta^2}}

Another case in which the effect of damping has a prominent role is when a structure is subjected to harmonic excitation at a frequency close to a natural frequency. Exactly at resonance, the vibration amplitude tends to infinity, unless there is some damping in the system. The actual amplitude at resonance is controlled solely by the amount of damping.

Figure: Amplification for a single-DOF system for different frequencies and damping ratios.

In some systems, like resonators, the aim is to get as much amplification as possible. This leads to another popular damping measure: the quality factor, or Q factor. It is defined as the amplification at resonance. The Q factor is related to the damping ratio by

Q = \frac{1}{2\zeta}

Another starting point for the damping description is to assume that there is a certain phase shift between the applied force and the resulting displacement, or between stress and strain. Talking about phase shifts is only meaningful for a steady-state harmonic vibration. If you plot the stress vs. strain for a complete period, you will see an ellipse describing a hysteresis loop.

Figure: Stress-strain history.

You can think of the material properties as being complex-valued. Thus, for uniaxial linear elasticity, the complex-valued stress-strain relation can be written as

\sigma = (E' + iE'')\varepsilon

Here, the real part of Young's modulus, E', is called the storage modulus, and the imaginary part, E'', is called the loss modulus. Often, the loss modulus is described by a loss factor, η, so that

\sigma = E(1 + i\eta)\varepsilon

Here, E can be identified as the storage modulus E'. You may also encounter another definition, in which E is the ratio between the stress amplitude and the strain amplitude,

\sigma = E \, \frac{1 + i\eta}{\sqrt{1+\eta^2}} \, \varepsilon

in which case E = \sqrt{E'^2 + E''^2}. The distinction is important only for high values of the loss factor. An equivalent measure for loss factor damping is the loss tangent, defined as

\tan \delta = \frac{E''}{E'} = \eta

The loss angle δ is the phase shift between stress and strain.
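The relation between the Q factor and the amplification curve can be sketched numerically. The damping ratio here is an assumed example value, and `amplification` is an ad hoc helper, not a library function:

```python
import numpy as np

# Dynamic amplification of a single-DOF system under harmonic forcing,
# as a function of the frequency ratio r = omega/omega0:
def amplification(r, zeta):
    return 1.0 / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)

zeta = 0.02   # assumed damping ratio

# At resonance (r = 1), the amplification equals the Q factor, 1/(2*zeta):
Q = amplification(1.0, zeta)

# Away from resonance, the response is far less sensitive to damping:
low = amplification(0.5, zeta)   # quasi-static regime
```

For zeta = 0.02 the resonant amplification is 25, while at half the natural frequency it is only about 1.3, regardless of damping.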
Damping defined by a loss factor behaves somewhat differently from viscous damping. The loss factor damping force is proportional to the displacement amplitude, whereas the viscous damping force is proportional to the velocity. Thus, it is not possible to directly convert one number into the other. In the figure below, the response of a single-DOF system is compared for the two damping models. It can be seen that viscous damping predicts higher damping than loss factor damping above the resonance, and lower damping below it.

Figure: Comparison of dynamic response for viscous damping (solid lines) and loss factor damping (dashed lines).

Usually, the conversion between the damping ratio and the loss factor is considered at a resonant frequency, and then \eta \approx 2 \zeta. However, this is only true at a single frequency. In the figure below, a two-DOF system is considered. The damping values have been matched at the first resonance, and it is clear that the predictions at the second resonance differ significantly.

Figure: Comparison of dynamic response for viscous damping and loss factor damping for a two-DOF system.

The loss factor concept can be generalized by defining the loss factor in terms of energy. It can be shown that, for the material model described above, the energy dissipated per unit volume during a load cycle is

D = \pi \eta E \varepsilon_a^2

where \varepsilon_a is the strain amplitude. Similarly, the maximum elastic energy during the cycle is

W_s = \frac{1}{2} E \varepsilon_a^2

The loss factor can thus be written in terms of energy as

\eta = \frac{D}{2 \pi W_s}

This definition in terms of dissipated energy can be used irrespective of whether the hysteresis loop actually is a perfect ellipse or not, as long as the two energy quantities can be determined.

Sources of Damping

From the physical point of view, there are many possible sources of damping. Nature has a tendency to always find a way to dissipate energy.

Internal Losses in the Material

All real materials dissipate some energy when strained. You can think of it as a kind of internal friction.
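The energy definition of the loss factor can be verified numerically on the elliptic hysteresis loop of the loss-factor material model: integrate the stress over the strain for one cycle to get the dissipated energy, then divide by 2π times the peak elastic energy. The material values below are assumed for illustration:

```python
import numpy as np

# Hysteresis loop of a loss-factor material under steady-state harmonic strain.
# E, eta, eps_a are hypothetical illustration values.
E, eta, eps_a = 210e9, 0.01, 1e-4
wt = np.linspace(0.0, 2.0 * np.pi, 200_001)   # omega*t over one full cycle

eps = eps_a * np.sin(wt)
sigma = E * eps_a * (np.sin(wt) + eta * np.cos(wt))   # sigma = E*(1 + i*eta)*eps

# Dissipated energy per cycle = area of the loop = integral of sigma d(eps),
# here computed with the trapezoidal rule:
integrand = sigma * eps_a * np.cos(wt)                # sigma * d(eps)/d(wt)
D = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wt)))

W_max = 0.5 * E * eps_a**2             # maximum elastic energy in the cycle
eta_est = D / (2.0 * np.pi * W_max)    # energy-based loss factor
```

The recovered `eta_est` matches the input loss factor, and the same two-energy recipe applies to a measured, non-elliptic loop.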
If you look at a stress-strain curve for a complete load cycle, it will not trace a perfectly straight line. Rather, you will see something more like a thin ellipse.

Often, loss factor damping is considered a suitable representation for material damping, since experience shows that the energy loss per cycle tends to have a rather weak dependence on frequency and amplitude. However, since the mathematical foundation for loss factor damping is based on complex-valued quantities, the underlying assumption is harmonic vibration. Thus, this damping model can only be used for frequency-domain analyses.

The loss factor for a material can vary considerably, depending on its detailed composition and which sources you consult. In the table below, some rough estimates are provided.

Material     Loss Factor, η
Aluminum     0.0001–0.02
Concrete     0.02–0.05
Copper       0.001–0.05
Glass        0.0001–0.005
Rubber       0.05–2
Steel        0.0001–0.01

Loss factors and similar damping descriptions are mainly used when the exact physics of the damping in the material is not known or not important. In several material models, such as viscoelasticity, the dissipation is an inherent property of the model.

Friction in Joints

It is common for structures to be joined by, for example, bolts or rivets. If the joined surfaces slide relative to each other during the vibration, energy is dissipated through friction. As long as the friction force itself does not change during the cycle, the energy lost per cycle is more or less frequency independent. In this sense, friction is similar to internal losses in the material.

Bolted joints are common in mechanical engineering. The amount of dissipation experienced in bolted joints can vary quite a lot, depending on the design. If low damping is important, the bolts should be closely spaced and well tightened, so that macroscopic slip between the joined surfaces is avoided.
Sound Emission

A vibrating surface will displace the surrounding air (or other surrounding medium) so that sound waves are emitted. These sound waves carry away some energy, which results in an energy loss from the point of view of the structure.

Figure: Sound emission in a Tonpilz transducer.

Anchor Losses

Often, a small component is attached to a larger structure that is not part of the simulation. When the component vibrates, some waves will be induced in the supporting structure and carried away. This phenomenon is often called anchor losses, particularly in the context of MEMS.

Thermoelastic Damping

Even for pure elastic deformation without dissipation, straining a material changes its temperature slightly. Local stretching leads to a temperature decrease, while compression implies local heating. Fundamentally, this is a reversible process, so the temperature returns to its original value if the stress is released. Usually, however, there are gradients in the stress field with associated gradients in the temperature distribution. These cause a heat flux from warmer to cooler regions. When the stress is removed during a later part of the load cycle, the temperature distribution is no longer the same as the one caused by the loading. Thus, it is not possible to locally return to the original state, and this becomes a source of dissipation.

The thermoelastic damping effect is mostly important at small length scales and high-frequency vibrations. For MEMS resonators, thermoelastic damping may give a significant decrease of the Q factor.

Dashpots

Sometimes, a structure contains intentional discrete dampers, like the shock absorbers in a wheel suspension. Such components obviously have a large influence on the total damping in a structure, at least with respect to some vibration modes.

Seismic Dampers

A particular case where much effort is spent on damping is in civil engineering structures in seismically active areas.
It is of the utmost importance to reduce the vibration levels in buildings hit by an earthquake. The purpose of such dampers can be both to isolate a structure from its foundation and to provide dissipation.

Further Reading

Read the follow-up to this blog post here: How to Model Different Types of Damping in COMSOL Multiphysics®
Derech V. D.

Ukr. Mat. Zh. - 2018. - 70, № 8. - pp. 1072-1084

Let $S$ be a finite semigroup. By $\mathrm{Sub}(S)$ we denote the lattice of all its subsemigroups. If $A \in \mathrm{Sub}(S)$, then by $h(A)$ we denote the height of the subsemigroup $A$ in the lattice $\mathrm{Sub}(S)$. A semigroup $S$ is called structurally uniform if, for any $A, B \in \mathrm{Sub}(S)$, the condition $h(A) = h(B)$ implies that $A \cong B$. We present a classification of finite structurally uniform groups and commutative nilsemigroups.

Complete classification of finite semigroups for which the inverse monoid of local automorphisms is a permutable semigroup

Ukr. Mat. Zh. - 2016. - 68, № 11. - pp. 1571-1578

A semigroup $S$ is called permutable if $\rho \circ \sigma = \sigma \circ \rho$ for any pair of congruences $\rho, \sigma$ on $S$. A local automorphism of a semigroup $S$ is defined as an isomorphism between two of its subsemigroups. The set of all local automorphisms of the semigroup $S$, with respect to the ordinary operation of composition of binary relations, forms an inverse monoid of local automorphisms. We present a complete classification of finite semigroups for which the inverse monoid of local automorphisms is permutable.

Classification of finite nilsemigroups for which the inverse monoid of local automorphisms is a permutable semigroup

Ukr. Mat. Zh. - 2016. - 68, № 5. - pp. 610-624

A semigroup $S$ is called permutable if $\rho \circ \sigma = \sigma \circ \rho$ for any pair of congruences $\rho$, $\sigma$ on $S$. A local automorphism of the semigroup $S$ is defined as an isomorphism between two subsemigroups of this semigroup. The set of all local automorphisms of a semigroup $S$ with respect to the ordinary operation of composition of binary relations forms an inverse monoid of local automorphisms.
In the proposed paper, we present a classification of all finite nilsemigroups for which the inverse monoid of local automorphisms is permutable.

Classification of Finite Commutative Semigroups for Which the Inverse Monoid of Local Automorphisms is a ∆-Semigroup

Ukr. Mat. Zh. - 2015. - 67, № 7. - pp. 867-873

A semigroup $S$ is called a ∆-semigroup if the lattice of its congruences forms a chain with respect to inclusion. A local automorphism of the semigroup $S$ is an isomorphism between two of its subsemigroups. The set of all local automorphisms of the semigroup $S$ with respect to the ordinary operation of composition of binary relations forms an inverse monoid of local automorphisms. We present a classification of finite commutative semigroups for which the inverse monoid of local automorphisms is a ∆-semigroup.

Ukr. Mat. Zh. - 2014. - 66, № 4. - pp. 445–457

Let $G$ be an arbitrary group of bijections on a finite set. By $I(G)$ we denote the set of all injections each of which is included in a bijection from $G$. The set $I(G)$ forms an inverse monoid with respect to the ordinary operation of composition of binary relations. We study different properties of the semigroup $I(G)$. In particular, we establish necessary and sufficient conditions for the inverse monoid $I(G)$ to be permutable (i.e., $\xi \circ \varphi = \varphi \circ \xi$ for any pair of congruences on $I(G)$). In this case, we describe the structure of each congruence on $I(G)$. We also describe the stable orders on $I(A_n)$.

Ukr. Mat. Zh. - 2013. - 65, № 6. - pp. 780–786

Let $G$ be an arbitrary group of bijections on a finite set, and let $I(G)$ denote the set of all partial injective transformations each of which is included in a bijection from $G$. The set $I(G)$ is a fundamental factorizable inverse semigroup. We study various properties of the semigroup $I(G)$.
In particular, we describe the automorphisms of $I(G)$ and obtain necessary and sufficient conditions for each stable order on $I(G)$ to be fundamental or antifundamental.

Classification of finite commutative semigroups for which the inverse monoid of local automorphisms is permutable

Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 176-184

We give a classification of finite commutative semigroups for which the inverse monoid of local automorphisms is permutable.

Structure of a finite commutative inverse semigroup and a finite band for which the inverse monoid of local automorphisms is permutable

Ukr. Mat. Zh. - 2011. - 63, № 9. - pp. 1218-1226

For a semigroup $S$, the set of all isomorphisms between subsemigroups of $S$ is an inverse monoid with respect to composition, which is denoted by $PA(S)$ and is called the monoid of local automorphisms of $S$. A semigroup $S$ is called permutable if, for any pair of congruences $\rho, \sigma$ on $S$, one has $\rho \circ \sigma = \sigma \circ \rho$. We describe the structure of a finite commutative inverse semigroup and a finite band whose monoids of local automorphisms are permutable.

Structure of a finite inverse semigroup with zero in which every stable order is fundamental or antifundamental

Ukr. Mat. Zh. - 2010. - 62, № 1. - pp. 29-39

We find necessary and sufficient conditions for any stable order on a finite inverse semigroup with zero to be fundamental or antifundamental.

Structure of a Munn semigroup of finite rank every stable order of which is fundamental or antifundamental

Ukr. Mat. Zh. - 2009. - 61, № 1. - pp. 52-60

We describe the structure of a Munn semigroup of finite rank every stable order of which is fundamental or antifundamental.

Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1035–1041

We consider maximal stable orders on semigroups that belong to a certain class of inverse semigroups of finite rank.

Characterization of the semilattice of idempotents of a finite-rank permutable inverse semigroup with zero

Ukr. Mat. Zh. - 2007.
- 59, № 10. - pp. 1353–1362

We give a characterization of the semilattice of idempotents of a finite-rank permutable inverse semigroup with zero.

Ukr. Mat. Zh. - 2006. - 58, № 6. - pp. 742–746

A semigroup in which any two congruences commute as binary relations is called a permutable semigroup. We describe the structure of a permutable Munn semigroup of finite rank.

Ukr. Mat. Zh. - 2005. - 57, № 4. - pp. 469–473

We describe the structure of any congruence of a permutable inverse semigroup of finite rank.

Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 346-351

We find necessary and sufficient conditions for any two congruences on an antigroup of finite rank to be permutable.
Group cohomology of elementary abelian group of prime-square order

Latest revision as of 21:34, 24 October 2011

This article gives specific information, namely, group cohomology, about a family of groups, namely: elementary abelian group of prime-square order.

Suppose $p$ is a prime number. We are interested in the elementary abelian group of prime-square order $E_{p^2} = (\mathbb{Z}/p\mathbb{Z})^2 = \mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z}$.
Homology groups for trivial group action

Over the integers

The homology groups below can be computed from the homology groups of the group of prime order (see group cohomology of finite cyclic groups), combined with the Künneth formula for group homology:

$H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \begin{cases} (\mathbb{Z}/p\mathbb{Z})^{(q+3)/2}, & q = 1,3,5,\dots \\ (\mathbb{Z}/p\mathbb{Z})^{q/2}, & q = 2,4,6,\dots \\ \mathbb{Z}, & q = 0 \end{cases}$

The first few homology groups are given below:

$q$: $0$, $1$, $2$, $3$, $4$, $5$
$H_q$: $\mathbb{Z}$, $(\mathbb{Z}/p\mathbb{Z})^2 = E_{p^2}$, $\mathbb{Z}/p\mathbb{Z}$, $(\mathbb{Z}/p\mathbb{Z})^3 = E_{p^3}$, $(\mathbb{Z}/p\mathbb{Z})^2 = E_{p^2}$, $(\mathbb{Z}/p\mathbb{Z})^4 = E_{p^4}$
rank of $H_q$ as an elementary abelian $p$-group: --, 2, 1, 3, 2, 4

Over an abelian group

The homology groups with coefficients in an abelian group $M$ are given as follows:

$H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \begin{cases} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & q = 1,3,5,\dots \\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & q = 2,4,6,\dots \\ M, & q = 0 \end{cases}$

Here, $M/pM$ is the quotient of $M$ by $pM = \{ px \mid x \in M \}$ and $\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}$. These homology groups can be computed in terms of the homology groups over the integers using the universal coefficients theorem for group homology.

Important case types for abelian groups:

- $M$ uniquely $p$-divisible (every element can be divided uniquely by $p$; this includes fields of characteristic not $p$): all higher homology groups are zero.
- $M$ $p$-torsion-free: $\operatorname{Ann}_M(p) = 0$, so only the $M/pM$ factors survive.
- $M$ $p$-divisible, but not necessarily uniquely so: $M/pM = 0$, so only the $\operatorname{Ann}_M(p)$ factors survive.
- $M$ a finite abelian group whose $p$-Sylow subgroup has rank (minimum number of generators) $r$: both $M/pM$ and $\operatorname{Ann}_M(p)$ are isomorphic to $(\mathbb{Z}/p\mathbb{Z})^r$, so all higher homology groups are elementary abelian $p$-groups.
- $M$ a finitely generated abelian group with torsion-part $p$-Sylow rank $r$ and free rank $s$: $M/pM \cong (\mathbb{Z}/p\mathbb{Z})^{r+s}$ and $\operatorname{Ann}_M(p) \cong (\mathbb{Z}/p\mathbb{Z})^r$.

Cohomology groups for trivial group action

Over the integers

The cohomology groups with coefficients in the integers are given as below:

$H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \begin{cases} (\mathbb{Z}/p\mathbb{Z})^{(q-1)/2}, & q = 1,3,5,\dots \\ (\mathbb{Z}/p\mathbb{Z})^{(q+2)/2}, & q = 2,4,6,\dots \\ \mathbb{Z}, & q = 0 \end{cases}$

The first few cohomology groups have ranks, as elementary abelian $p$-groups: --, 0, 2, 1, 3, 2 for $q = 0, 1, 2, 3, 4, 5$.

Over an abelian group

By the dual universal coefficients theorem for group cohomology, the cohomology groups with coefficients in an abelian group $M$ are:

$H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \begin{cases} (\operatorname{Ann}_M(p))^{(q+3)/2} \oplus (M/pM)^{(q-1)/2}, & q = 1,3,5,\dots \\ (\operatorname{Ann}_M(p))^{q/2} \oplus (M/pM)^{(q+2)/2}, & q = 2,4,6,\dots \\ M, & q = 0 \end{cases}$

These can be deduced from the homology groups with coefficients in the integers using the dual universal coefficients theorem for group cohomology.
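As a small sanity check, the rank formulas for the integer homology and cohomology groups can be tabulated and compared with the tables above. The function names here are ad hoc, not from any library:

```python
# Ranks of H_q((Z/p)^2; Z) and H^q((Z/p)^2; Z) as elementary abelian
# p-groups, following the odd/even case formulas (valid for q >= 1):
def homology_rank(q):
    return (q + 3) // 2 if q % 2 == 1 else q // 2

def cohomology_rank(q):
    return (q - 1) // 2 if q % 2 == 1 else (q + 2) // 2

hom_ranks = [homology_rank(q) for q in range(1, 6)]    # table says 2, 1, 3, 2, 4
coh_ranks = [cohomology_rank(q) for q in range(1, 6)]  # table says 0, 2, 1, 3, 2
```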
Important case types for abelian groups: the same case analysis as for homology applies (uniquely $p$-divisible, $p$-torsion-free, $p$-divisible, finite abelian, finitely generated abelian), with the conclusions for $M/pM$ and $\operatorname{Ann}_M(p)$ interchanged in accordance with the formula above.

Tate cohomology groups for trivial group action: placeholder for information to be filled in.

Growth of ranks of cohomology groups

Over the integers

With the exception of the zeroth homology and cohomology groups, the homology and cohomology groups over the integers are all elementary abelian $p$-groups. For the homology groups, the rank (i.e., dimension as a vector space over the field of $p$ elements) is a function of $q$ that is a sum of a linear function (of slope 1/2) and a periodic function (of period 2). The same is true for the cohomology groups, although the precise description of the periodic function differs.

For the homology groups, choosing the periodic function to have mean zero, the linear function is $q/2 + 3/4$ and the periodic function is $+3/4$ for odd $q$ and $-3/4$ for even $q$. For the cohomology groups, with the same convention, the linear function is $q/2 + 1/4$ and the periodic function is $-3/4$ for odd $q$ and $+3/4$ for even $q$.

Note that:

- The intercept for the cohomology groups is 1/4, as opposed to the intercept of 3/4 for the homology groups. This is explained by the somewhat slower start of the cohomology groups on account of $H^1(E_{p^2};\mathbb{Z}) = \operatorname{Hom}(E_{p^2},\mathbb{Z})$ being torsion-free, hence zero.
- The periodic parts for the homology and cohomology groups are negatives of each other, an opposing pattern that is explained by looking at the dual universal coefficients theorem for group cohomology.

Over the prime field

If we take coefficients in the prime field $\mathbb{F}_p$, then the ranks of the homology and cohomology groups both grow as linear functions of $q$; in both cases the rank in degree $q$ is $q + 1$. In this case, the homology and cohomology groups are vector spaces over $\mathbb{F}_p$, and the cohomology group is the vector space dual of the homology group. There is no periodic part when working over the prime field.
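The decomposition of the rank into a linear part of slope 1/2 plus a mean-zero periodic part of period 2 can be checked exactly with rational arithmetic. The helper names below are ad hoc:

```python
from fractions import Fraction

# Rank of H_q((Z/p)^2; Z) as an elementary abelian p-group, q >= 1:
def homology_rank(q):
    return (q + 3) // 2 if q % 2 == 1 else q // 2

# Proposed decomposition: linear part q/2 + 3/4, periodic part +-3/4
# (mean zero over a period of 2):
def linear_part(q):
    return Fraction(q, 2) + Fraction(3, 4)

def periodic_part(q):
    return Fraction(3, 4) if q % 2 == 1 else Fraction(-3, 4)

decomposition_ok = all(
    homology_rank(q) == linear_part(q) + periodic_part(q)
    for q in range(1, 100)
)
```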
Difference between revisions of "Group cohomology of elementary abelian group of prime-square order" (→Over the integers) (→Over the integers) (32 intermediate revisions by the same user not shown) Line 1: Line 1: + + + + + Suppose <math>p</math> is a [[prime number]]. We are interested in the [[elementary abelian group of prime-square order]] <math>E_{p^2} = (\mathbb{Z}/p\mathbb{Z})^2 = \mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z}</math>. Suppose <math>p</math> is a [[prime number]]. We are interested in the [[elementary abelian group of prime-square order]] <math>E_{p^2} = (\mathbb{Z}/p\mathbb{Z})^2 = \mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z}</math>. − ==Homology groups== + + + + + + + + + + + + + ==Homology groups == + + ===Over the integers=== ===Over the integers=== − <math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \left\lbrace \begin{array}{rl} (\mathbb{Z}/p\mathbb{Z})^{(q + 3)/2} & \qquad q = 1,3,5,\dots \\ (\mathbb{Z}/p\mathbb{Z})^{q/2}, & q = 2,4,6,\dots \\ \mathbb{Z}, & \qquad q = 0 \\\end{array}\right.</math> + + + <math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \left\lbrace \begin{array}{rl} (\mathbb{Z}/p\mathbb{Z})^{(q + 3)/2} & \qquad q = 1,3,5,\dots \\ (\mathbb{Z}/p\mathbb{Z})^{q/2}, & q = 2,4,6,\dots + + + + \\ \mathbb{Z}, & \qquad q = 0 \\\end{array}\right.</math> The first few homology groups are given below: The first few homology groups are given below: Line 21: Line 46: The homology groups with coefficients in an abelian group <math>M</math> are given as follows: The homology groups with coefficients in an abelian group <math>M</math> are given as follows: − <math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/ + <math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus 
(\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math> − Here, <math>M/pM</math> is the quotient of <math>M</math> by <math> + Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>= \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>. These homology groups can be computed in terms of the homology groups over integers using the [[universal coefficients theorem for group homology]]. These homology groups can be computed in terms of the homology groups over integers using the [[universal coefficients theorem for group homology]]. + + + + + + + + + + + + + + + + + + ==Cohomology groups for trivial group action== ==Cohomology groups for trivial group action== + + ===Over the integers=== ===Over the integers=== Line 34: Line 79: <math>H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \left\lbrace \begin{array}{rl} (\mathbb{Z}/p\mathbb{Z})^{(q-1)/2}, & q = 1,3,5,\dots \\ (\mathbb{Z}/p\mathbb{Z})^{(q+2)/2}, & q = 2,4,6,\dots \\ \mathbb{Z}, & q = 0 \\\end{array}\right.</math> <math>H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \left\lbrace \begin{array}{rl} (\mathbb{Z}/p\mathbb{Z})^{(q-1)/2}, & q = 1,3,5,\dots \\ (\mathbb{Z}/p\mathbb{Z})^{(q+2)/2}, & q = 2,4,6,\dots \\ \mathbb{Z}, & q = 0 \\\end{array}\right.</math> + + + + The first few cohomology groups are given below: The first few cohomology groups are given below: Line 54: Line 103: These can be deduced from the homology groups with coefficients in the integers using the [[dual universal coefficients theorem for group cohomology]]. These can be deduced from the homology groups with coefficients in the integers using the [[dual universal coefficients theorem for group cohomology]]. 
Latest revision as of 21:34, 24 October 2011

This article gives specific information, namely, group cohomology, about a family of groups, namely: elementary abelian group of prime-square order.
View group cohomology of group families | View other specific information about elementary abelian group of prime-square order

Homology groups for trivial group action

FACTS TO CHECK AGAINST (homology group for trivial group action):
First homology group: first homology group for trivial group action equals tensor product with abelianization
Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization (Hopf's formula for Schur multiplier)
General: universal coefficients theorem for group homology | homology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group homology

Over the integers

The homology groups below can be computed using the homology groups for the group of prime order (see group cohomology of finite cyclic groups) and combining them with the Kunneth formula for group homology. The even and odd cases can be combined, giving a single alternative description of the formula above.

The first few homology groups are given below:

q: 0 1 2 3 4 5
rank of H_q as an elementary abelian p-group: -- 2 1 3 2 4

Over an abelian group

The homology groups with coefficients in an abelian group M are given by the formula above, where M/pM is the quotient of M by pM and Ann_M(p) is the set of elements of M annihilated by p. These homology groups can be computed in terms of the homology groups over integers using the universal coefficients theorem for group homology.

Important case types for abelian groups

Case on M | Conclusion about odd-indexed homology groups (q = 1, 3, 5, ...) | Conclusion about even-indexed homology groups (q = 2, 4, 6, ...)
M is uniquely p-divisible, i.e., every element of M can be divided uniquely by p; this includes the case that M is a field of characteristic not p |
all zero groups | all zero groups
M is p-torsion-free, i.e., no nonzero element of M multiplies by p to give zero | (M/pM)^{(q+3)/2} | (M/pM)^{q/2}
M is p-divisible, but not necessarily uniquely so | (Ann_M(p))^{(q-1)/2} | (Ann_M(p))^{(q+2)/2}
M is a finite abelian group | isomorphic to (Z/pZ)^{s(q+1)} where s is the rank (i.e., minimum number of generators) for the p-Sylow subgroup of M | isomorphic to (Z/pZ)^{s(q+1)} where s is the rank (i.e., minimum number of generators) for the p-Sylow subgroup of M
M is a finitely generated abelian group | all isomorphic to (Z/pZ)^{s(q+1)+f(q+3)/2} where s is the rank for the p-Sylow subgroup of the torsion part of M and f is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of M | all isomorphic to (Z/pZ)^{s(q+1)+fq/2} where s is the rank for the p-Sylow subgroup of the torsion part of M and f is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of M

Cohomology groups for trivial group action

FACTS TO CHECK AGAINST (cohomology group for trivial group action):
First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms
Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization
In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers | cohomology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group cohomology

Over the integers

The cohomology groups with coefficients in the integers are given by the formula above; the odd and even cases can be combined into a single description.

The first few cohomology groups are given below:

q: 0 1 2 3 4 5
rank of H^q as an elementary abelian p-group: -- 0 2 1 3 2

Over an abelian group

The cohomology groups with coefficients in an abelian group M are given as follows:

<math>H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q-1)/2} \oplus (\operatorname{Ann}_M(p))^{(q+3)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{(q+2)/2} \oplus (\operatorname{Ann}_M(p))^{q/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\ \end{array}\right.</math>

Here, M/pM is the quotient of M by pM and Ann_M(p) is the set of elements of M annihilated by p. These can be deduced from the homology groups with coefficients in the integers using the dual universal coefficients theorem for group cohomology.
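The integer homology and cohomology rank formulas can be checked against the tables of the first few groups with a short script (the function names are ours, not from the article):

```python
def homology_rank(q):
    """Rank of H_q((Z/pZ)^2; Z) as an elementary abelian p-group, for q >= 1."""
    return (q + 3) // 2 if q % 2 == 1 else q // 2

def cohomology_rank(q):
    """Rank of H^q((Z/pZ)^2; Z) as an elementary abelian p-group, for q >= 1."""
    return (q - 1) // 2 if q % 2 == 1 else (q + 2) // 2

print([homology_rank(q) for q in range(1, 6)])    # [2, 1, 3, 2, 4]
print([cohomology_rank(q) for q in range(1, 6)])  # [0, 2, 1, 3, 2]
```

Note how the cohomology ranks are the homology ranks shifted by one degree, reflecting the dual universal coefficients theorem.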
Important case types for abelian groups

Case on M | Conclusion about odd-indexed cohomology groups (q = 1, 3, 5, ...) | Conclusion about even-indexed cohomology groups (q = 2, 4, 6, ...)
M is uniquely p-divisible, i.e., every element of M can be divided by p uniquely; this includes the case that M is a field of characteristic not p | all zero groups | all zero groups
M is p-torsion-free, i.e., no nonzero element of M multiplies by p to give zero | (M/pM)^{(q-1)/2} | (M/pM)^{(q+2)/2}
M is p-divisible, but not necessarily uniquely so | (Ann_M(p))^{(q+3)/2} | (Ann_M(p))^{q/2}
M is a finite abelian group | isomorphic to (Z/pZ)^{s(q+1)} where s is the rank (i.e., minimum number of generators) for the p-Sylow subgroup of M | isomorphic to (Z/pZ)^{s(q+1)} where s is the rank (i.e., minimum number of generators) for the p-Sylow subgroup of M
M is a finitely generated abelian group | all isomorphic to (Z/pZ)^{s(q+1)+f(q-1)/2} where s is the rank for the p-Sylow subgroup of the torsion part of M and f is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of M | all isomorphic to (Z/pZ)^{s(q+1)+f(q+2)/2} where s is the rank for the p-Sylow subgroup of the torsion part of M and f is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of M

Tate cohomology groups for trivial group action

PLACEHOLDER FOR INFORMATION TO BE FILLED IN

Growth of ranks of cohomology groups

Over the integers

With the exception of the zeroth homology group and cohomology group, the homology groups and cohomology groups over the integers are all elementary abelian p-groups. For the homology groups, the rank (i.e., dimension as a vector space over the field of p elements) is a function of q that is a sum of a linear function (of slope 1/2) and a periodic function (of period 2). The same is true for the cohomology groups, although the precise description of the periodic function differs.

For homology groups, choosing the periodic function so as to have mean zero, we get that the linear function is q/2 + 3/4 and the periodic function takes the value +3/4 at odd q and -3/4 at even q. For cohomology groups, choosing the periodic function so as to have mean zero, we get that the linear function is q/2 + 1/4 and the periodic function takes the value -3/4 at odd q and +3/4 at even q.
Note that:
The intercept for the cohomology groups is 1/4, as opposed to the intercept of 3/4 for the homology groups. This is explained by the somewhat slower start of cohomology groups, on account of the integers being torsion-free.
The periodic parts for homology groups and cohomology groups are negatives of each other, indicating an opposing pattern that is explained by looking at the dual universal coefficients theorem for group cohomology.

Over the prime field

If we take coefficients in the prime field F_p, then the ranks of the homology and cohomology groups both grow as linear functions of q; the linear function in both cases is q + 1. Note that in this case, the homology groups and cohomology groups are vector spaces over F_p, and the cohomology group is the vector space dual of the homology group. Note that there is no periodic part when we are working over the prime field.
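The growth claims can be verified numerically from the integral rank formula: the linear-plus-periodic decomposition for homology, and the dimension q + 1 over the prime field obtained via universal coefficients. The helper names below are ours:

```python
def homology_rank(q):
    """Rank of H_q((Z/pZ)^2; Z) as an elementary abelian p-group, q >= 1."""
    return (q + 3) // 2 if q % 2 == 1 else q // 2

# Linear part q/2 + 3/4 plus mean-zero periodic part +-3/4:
for q in range(1, 101):
    periodic = 0.75 if q % 2 == 1 else -0.75
    assert homology_rank(q) == q / 2 + 0.75 + periodic

def fp_dimension(q):
    """dim of H_q((Z/pZ)^2; F_p): H_q tensor F_p plus Tor(H_{q-1}, F_p).
    H_0 = Z is torsion-free, so it contributes no Tor term."""
    if q == 0:
        return 1
    return homology_rank(q) + (homology_rank(q - 1) if q >= 2 else 0)

print(all(fp_dimension(q) == q + 1 for q in range(50)))  # linear, no periodic part
```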
I’m frequently told that probabilities are the limit of relative frequencies for an infinite number of repetitions. It sounds nice: it defines a difficult concept – probabilities – in terms of a simple one – frequencies – and even gives us a way to measure probabilities, if we fudge the “infinite” part a bit. The problem with this definition? It is not true. First of all, this limit does not exist. If one makes an infinite sequence of zeroes and ones by throwing a fair coin (fudging away this pesky infinity again), calling the result of the $i$th throw $s_i$, the relative frequency after $n$ throws is \[ f_n = \frac1n\sum_{i=1}^{n}s_i.\] What should then $\lim_{n\to\infty}f_n$ be? $1/2$? Why? All sequences of zeros and ones are equally possible – they are even equally probable! What is wrong with choosing the sequence $s = (0,0,0,\ldots)$? Or even the sequence $(0,1,1,0,0,0,0,1,1,1,1,1,1,1,1,\ldots)$, whose frequencies do not converge to any number, but eternally oscillate between $0$ and $1$? If for some reason one chooses a nice sequence like $s=(0,1,0,1,0,1,\ldots)$, for which the limit does converge to $1/2$, what is wrong with reordering it to obtain $s' = (s_1,s_3,s_2,s_5,s_7,s_4,\ldots)$ instead, with limit $1/3$? No, no, no, you complain. It is true that all sequences are equiprobable, but most of them have limiting frequency $1/2$. Moreover, it is a theorem that the frequencies converge – it is the law of large numbers! How can you argue against a theorem? Well, what do you mean by “most”? This is already a probabilistic concept! And according to which measure? It cannot be a fixed measure, otherwise it would say that the limiting frequency is always $1/2$, independently of the single-throw probability $p$. On the other hand, if one allows it to depend on $p$, one can indeed define a measure on the set of infinite sequences such that “most” sequences have limiting frequency $p$. A probability measure.
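The reordering trick above can be made concrete with a few lines of code: the same terms, read in a different order, converge to a different frequency (the construction below is ours):

```python
# Take the alternating sequence s = (0, 1, 0, 1, ...), whose relative
# frequency of ones converges to 1/2, and re-read it as two 0-entries
# for every 1-entry. The limiting frequency of ones becomes 1/3.
n = 300000
s = [i % 2 for i in range(n)]
zeros = iter(s[0::2])            # the entries equal to 0
ones = iter(s[1::2])             # the entries equal to 1
reordered = []
try:
    while True:
        reordered.append(next(zeros))
        reordered.append(next(zeros))
        reordered.append(next(ones))
except StopIteration:
    pass                         # zeros run out first; stop there

f = sum(reordered) / len(reordered)
print(f)  # 1/3: same terms, different order, different limiting frequency
```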
So you’re not explaining the single-throw probability in terms of the limiting frequencies, but rather in terms of the probabilities of the limiting frequencies. Which is kind of a problem, if “probability” is what you wanted to explain in the first place. The same problem happens with the law of large numbers. Its statement is that \[\forall \epsilon >0 \quad \lim_{n\to\infty}\text{Pr}(|f_n -p|\ge \epsilon) = 0,\] so it only says that the probability of observing a frequency different from $p$ goes to $0$ as the number of trials goes to infinity. But enough with mocking frequentism. Much more eloquent dismissals have already been written, several times over, and as the Brazilian saying goes, one shouldn’t kick a dead dog. Rather, I want to imagine a world where frequentism is true. What would it take? Well, the most important thing is to make the frequencies converge to the probability in the infinite limit. One also needs, though, the frequencies to be a good approximation to the probability even for a finite number of trials, otherwise empiricism goes out of the window. My idea, then, is to allow the frequencies to fluctuate within some error bars, but never beyond. One could, for example, take the $5\sigma$ standard for scientific discoveries that particle physics uses, and declare it to be a fundamental law of Nature: it is only possible to observe a frequency $f_n$ if \[f_n \in \left(p-5\frac{\sigma}{\sqrt{n}},p+5\frac{\sigma}{\sqrt{n}}\right).\] Trivially, then, $\lim_{n\to\infty}f_n = p$, and even better, if we want to measure some probability within error $\epsilon$, we only need $n > 25\sigma^2/\epsilon^2$ trials, so for example 62500 throws are enough to tomograph any coin within error $10^{-2}$. In this world, the gambler’s fallacy is not a fallacy, but a law of Nature. If one starts throwing a fair coin and observes 24 heads in a row, it is literally impossible to observe another heads in the next throw.
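In our actual world, of course, ordinary chance alone keeps the observed frequency of a fair coin inside that $5\sigma$ band essentially always; a quick seeded simulation illustrates the band (the code and parameter choices are ours):

```python
import random

# Seeded fair-coin flips, checked against the hypothetical hard bound
# f_n in (p - 5*sigma/sqrt(n), p + 5*sigma/sqrt(n)).
random.seed(12345)
p = 0.5
sigma = (p * (1 - p)) ** 0.5         # 0.5 for a fair coin
flips = [random.random() < p for _ in range(100000)]

for n in (2500, 10000, 100000):
    f_n = sum(flips[:n]) / n
    half_width = 5 * sigma / n ** 0.5
    print(n, abs(f_n - p) < half_width)
```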
It’s as if there is a purpose pushing the frequencies towards the mean. It captures well our intuition about randomness. It is also completely insane: 25 heads are impossible only at the start of a sequence. If before them one had obtained 24 tails, 25 heads are perfectly fine. Also, it’s not as if 25 heads are impossible because their probability is too low. The probability of 24 heads, one tail, and another heads is even lower. Even worse, if the probability you’re trying to tomograph is the one of obtaining 24 heads followed by one tail, then the frequency $f_1$ must be inside the interval \[[0,2^{-25}+5\sqrt{2^{-25}(1-2^{-25})}]\approx [0,2^{-10.2}],\]which is only possible if $f_1 = 0$. That is, it is impossible to observe tails after observing 24 heads, as it would make $f_1=1$, but it is also impossible to observe heads. So in this world Nature would need to keep track not only of all the coin throws, but also of which statistics you are calculating about them, and also find a way to keep you from observing contradictions, presumably by not allowing any coin to be thrown at all.
Catherine Therese J. Quiñones The virial theorem is a general theorem relating the potential energy (V) and the kinetic energy (T) in a bound system. A simple physical example is a small object orbiting around another object, bound by a force, as in the case of a hydrogen atom. The average kinetic and potential energies of a system of particles that interact by Coulomb forces are related by [eq] \langle T \rangle = -\frac{1}{2} \langle V \rangle [/eq] (1) Since the Hamiltonian H of the given system satisfies [eq] \langle H \rangle = \langle T \rangle + \langle V \rangle = E_n [/eq] (2) substituting Eqn (1) into Eqn (2) yields [eq] -\frac{1}{2}\langle V \rangle + \langle V \rangle = E_n [/eq] (3) [eq] \frac{\langle V \rangle}{2} = E_n [/eq] (4) Now, we will derive the expectation value of [eq]\frac{1}{r}[/eq] in the unperturbed state of a hydrogen atom. We can use the virial theorem to easily find the expectation value, since the system can be considered a bound system with the electron orbiting around the proton, bound by the Coulomb force. For a hydrogen atom, the potential energy is expressed as [eq] V = -\frac{e^2}{4\pi\epsilon_0} \frac{1}{r} [/eq] (5) where [eq]e[/eq] is the charge of the electron and the proton, [eq]r[/eq] represents the separation distance between the two charges, and [eq]\epsilon_0[/eq] is the permittivity of free space. The negative sign indicates that the force is attractive. The allowed energies [eq]E_n[/eq] are given by [eq]E_n = - \Bigg[\frac{m}{2\hbar^2}\Bigg(\frac{e^2}{4\pi\epsilon_0}\Bigg)^2\Bigg]\frac{1}{n^2} [/eq] (6) where [eq]m[/eq] is the mass of the particle, [eq]\hbar[/eq] is Planck’s constant over [eq]2\pi[/eq], and [eq]n = 1,2,3,\dots[/eq] indicates the quantization of the energy levels. The solution is very straightforward. All we need is to plug eqns (5) and (6) into eqn (4).
Hence, [eq]-\frac{e^2}{4\pi\epsilon_0}\Big\langle\frac{1}{r}\Big\rangle = -2\Bigg[ \frac{m}{2\hbar^2}\Bigg(\frac{e^2}{4\pi\epsilon_0}\Bigg)^2\Bigg] \frac{1}{n^2} [/eq] (7) [eq]\Big\langle\frac{1}{r}\Big\rangle=\Bigg(\frac{me^2}{4\pi\epsilon_0\hbar^2}\Bigg)\frac{1}{n^2}[/eq] (8) Note that the term inside the parentheses is just [eq]\frac{1}{a_0}[/eq], where [eq]a_0[/eq] is the Bohr radius. Hence we can write the expectation value of [eq]\frac{1}{r}[/eq] as [eq]\Big\langle\frac{1}{r}\Big\rangle = \frac{1}{a_0n^2}[/eq] (9) Thus we have derived the expectation value, [eq]\Big\langle\frac{1}{r}\Big\rangle[/eq], of the hydrogen atom in the unperturbed state using the virial theorem.
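The derivation can be checked numerically in SI units with CODATA constants; the variable and function names below are ours:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e  = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

k  = e**2 / (4 * math.pi * eps0)   # the factor e^2/(4 pi eps0) in eqn (5)
a0 = hbar**2 / (m_e * k)           # Bohr radius, 4 pi eps0 hbar^2 / (m e^2)

def energy(n):
    """Allowed energies E_n of eqn (6), in joules."""
    return -(m_e / (2 * hbar**2)) * k**2 / n**2

def inv_r_expectation(n):
    """<1/r> from eqn (7): -(e^2/(4 pi eps0)) <1/r> = 2 E_n."""
    return -2 * energy(n) / k

print(a0)  # ~5.29e-11 m, the Bohr radius
for n in (1, 2, 3):
    assert math.isclose(inv_r_expectation(n), 1 / (a0 * n**2))
```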
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only). Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine. This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon, but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $\operatorname{rank}(A) = \operatorname{rank}(\mathbb{Z}^k)$, right? For four proper fractions $a, b, c, d$, X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if $a, b, c$ are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There, the order of $H$ should be $qr$, and it is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ from the given data. Like, can we think $H=C_q \times C_r$ or something like that? When we say it embeds into $GL(2,p)$, does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$.
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$, so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. Now take the case $|F|=pr$. In this case $\phi(F)=1$ and $\operatorname{Aut}(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write $G$ using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities for $F$ of order $pr$ and write $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$, etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how do we write $G$ using notations? There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$, and for every word $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group, one of the $u_i$ is a subword of $w$. If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check if it has some $u_i$ as a subword, in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword. There is good motivation for such a definition here. So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$. It has to be an isometry fixing a geodesic $\gamma$ whose endpoints at the boundary are the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by the straight-line homotopy, with the straight lines being hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy between $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative. I don't know how to interpret this coarsely in $\pi_1(S)$. @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
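The rewriting algorithm for a Dehn presentation described above can be sketched in a few lines. The toy rule set below (for $\mathbb{Z}/3 = \langle a \mid a^3 \rangle$, with uppercase letters denoting inverses) is ours and is only meant to exercise the loop; a genuine Dehn presentation of a hyperbolic group would supply the actual $u_i \to v_i$ rules:

```python
def free_reduce(w):
    """Cancel adjacent inverse pairs such as 'aA' or 'Aa'."""
    out = []
    for c in w:
        if out and out[-1] != c and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def dehn_reduce(w, rules):
    """Greedily replace a subword u_i by the strictly shorter v_i.
    For an actual Dehn presentation, w represents the trivial element
    iff this terminates at the empty word."""
    w = free_reduce(w)
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            if u in w:
                w = free_reduce(w.replace(u, v, 1))
                changed = True
                break
    return w

rules = [("aaa", ""), ("AAA", "")]
print(dehn_reduce("aAaaa", rules))   # '' -> represents the identity
print(dehn_reduce("aaaa", rules))    # 'a' -> not the identity
```

Termination is automatic because every replacement strictly shortens the word, which is the point of requiring $|u_i| > |v_i|$.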
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. == Branches of algebraic graph theory == === Using linear algebra === The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$, since only polynomials of degree at most 1 give the zero polynomial in this case. @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$. Could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
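The kernel claim in the exchange above can be verified directly by applying $F$ to the monomial basis of $\mathbb{R}_3[x]$; the coefficient-list representation and helper names below are ours:

```python
# Verifying ker F for F(P) = x P'' + (x+1) P''' on R_3[x].
# Polynomials are coefficient lists [c0, c1, c2, c3], lowest degree first.

def deriv(p):
    return [i * p[i] for i in range(1, len(p))]

def mul_x(p):
    return [0] + p

def add(*polys):
    n = max(len(p) for p in polys)
    return [sum(p[i] if i < len(p) else 0 for p in polys) for i in range(n)]

def F(p):
    d2 = deriv(deriv(p))                  # P''
    d3 = deriv(d2)                        # P'''
    return add(mul_x(d2), mul_x(d3), d3)  # x P'' + x P''' + P'''

def is_zero(p):
    return all(c == 0 for c in p)

# 1 and x are killed; x^2 and x^3 are not, so ker(F) = {ax + b}.
print([is_zero(F(p)) for p in ([1], [0, 1], [0, 0, 1], [0, 0, 0, 1])])
```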
A solution is defined as a homogeneous mixture of two or more components existing in a single phase. In this description, the focus will be on liquid solutions, because within the realm of biology and chemistry, liquid solutions play an important role in multiple processes. Without the existence of solutions, a cell would not be able to carry out glycolysis and other signaling cascades necessary for cell growth and development. Chemists, therefore, have studied the processes involved in solution chemistry in order to further the understanding of the solution chemistry in nature. Introduction The mixing of solutions is driven by entropy, as opposed to being driven by enthalpy. While an ideal gas by definition does not have interactions between particles, an ideal solution assumes there are interactions. Without the interactions, the solution would not be in a liquid phase. Rather, ideal solutions are defined as having an enthalpy of mixing or enthalpy of solution equal to zero (\(\Delta H_{mixing}\) or \(\Delta H_{solution} = 0\)). This is because the A-B interaction between two liquids is the average of the A-A interactions and the B-B interactions. In an ideal solution the average A-A and B-B interactions are identical, so there is no difference between the average A-B interactions and the A-A/B-B interactions. Since in biology and chemistry the average interactions between A and B are not always equivalent to the interactions of A or B alone, the enthalpy of mixing is not zero. Consequently, a new term is used to describe the concentration of molecules in solution. Activity, \(a_1\), is the effective concentration that takes into account the deviation from ideal behavior, with the activity coefficient in an ideal solution equal to one. An activity coefficient, \( \gamma_1\), is utilized to convert from the solute’s mole fraction, \(x_1\), (as a unit of concentration, mole fraction can be calculated from other concentration units like molarity, molality, or percent by weight) to activity, \(a_1\).
\[ a_1=\gamma_1x_1 \tag{1}\]

Debye-Hückel Formula

The Debye-Hückel formula is used to calculate the activity coefficient:

\[ \log \gamma_\pm = - \dfrac{1.824 \times 10^6} { \left( \epsilon T \right)^{3/2}} | z_+ z_- | \sqrt I \tag{2}\]

For the solvent water at 298 K, this reduces to:

\[ \log \gamma_\pm = - 0.509 | z_+ z_- | \sqrt I \tag{3}\]

\(\gamma_\pm\): mean ionic activity coefficient
\(z_+\): cationic charge of the electrolyte
\(z_-\): anionic charge of the electrolyte
\(I\): ionic strength
\(\epsilon\): relative dielectric constant for the solution
\(T\): temperature of the electrolyte solution

Example 1

Consider a solution of 0.01 M MgCl2(aq) with an ionic strength of 0.030 M. What is the mean activity coefficient?

SOLUTION

Using the general form (2) with \(\epsilon = 78.54\) for water at \(T = 298\ \mathrm{K}\):

\( \log \gamma_\pm = - \dfrac{1.824 \times 10^6} { \left( 78.54 \cdot 298 \right)^{3/2}} | 2 \cdot 1 | \sqrt {0.030} \)

or, equivalently, using the water-at-298-K form (3):

\( \log \gamma_\pm = - 0.509 | 2 \cdot 1 | \sqrt {0.030} \)

Either way,

\( \gamma_\pm = 0.67 \)

Contributors Shirley Bradley (Hope College), Kent Kammermeier (Hope College)
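Example 1 can be reproduced in a few lines; the function name is ours, and the constant 0.509 is the water-at-298-K prefactor from equation (3):

```python
import math

def mean_activity_coefficient(z_plus, z_minus, ionic_strength):
    """Debye-Huckel limiting law for water at 298 K, eqn (3)."""
    log_gamma = -0.509 * abs(z_plus * z_minus) * math.sqrt(ionic_strength)
    return 10 ** log_gamma

# The general prefactor of eqn (2) with eps = 78.54, T = 298 K gives ~0.509:
A = 1.824e6 / (78.54 * 298) ** 1.5
print(round(A, 3))  # 0.509

# Example 1: 0.01 M MgCl2, I = 0.030 M
gamma = mean_activity_coefficient(2, -1, 0.030)
print(round(gamma, 2))  # 0.67
```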
Creating Symbolic Links

The function CreateSymbolicLink allows you to create symbolic links using either an absolute or relative path. Symbolic links can either be absolute or relative links. Absolute links are links that specify each portion of the path name; relative links are determined relative to where relative-link specifiers are in a specified path. Relative links are specified using the following conventions:

Dot (. and ..) conventions—for example, "..\" resolves the path relative to the parent directory.
Names with no slashes—for example, "tmp" resolves the path relative to the current directory.
Root relative—for example, "\Windows\System32" resolves to "<current drive>:\Windows\System32".
Current working directory-relative—for example, if the current working directory is "C:\Windows\System32", "C:File.txt" resolves to "C:\Windows\System32\File.txt".

Note: If you specify a current working directory-relative link, it is created as an absolute link, due to the way the current working directory is processed based on the user and the thread.

A symbolic link can also contain both junction points and mounted folders as a part of the path name. Symbolic links can point directly to a remote file or directory using the UNC path. Relative symbolic links are restricted to a single volume.

Example of an Absolute Symbolic Link

In this example, the original path contains a component, 'x', which is an absolute symbolic link. When 'x' is encountered, the fragment of the original path up to and including 'x' is completely replaced by the path that is pointed to by 'x'. The remainder of the path after 'x' is appended to this new path. This now becomes the modified path.

X: "C:\alpha\beta\absLink\gamma\file"
Link: "absLink" maps to "\\machineB\share"
Modified Path: "\\machineB\share\gamma\file"

Example of a Relative Symbolic Link

In this example, the original path contains a component 'x', which is a relative symbolic link.
When 'x' is encountered, 'x' is completely replaced by the fragment pointed to by 'x'. The remainder of the path after 'x' is appended to the new path. Each set of dots (..) in this new path then removes the path component that precedes it. If the number of dot (..) components exceeds the number of preceding components, an error is returned. Otherwise, when all component replacement has finished, the final, modified path remains.

X: C:\alpha\beta\link\gamma\file
Link: "link" maps to "..\..\theta"
Modified Path: "C:\alpha\beta\..\..\theta\gamma\file"
Final Path: "C:\theta\gamma\file"
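The substitution-then-collapse procedure above can be sketched as follows. This is an illustrative model only, not the Windows implementation; it ignores drive-root subtleties, and the function name is ours:

```python
def resolve(path, link_name, target):
    """Substitute a relative symbolic link into a path, then collapse '..'."""
    parts = path.split("\\")
    i = parts.index(link_name)
    # Replace the link component by the target fragment:
    modified = parts[:i] + target.split("\\") + parts[i + 1:]
    resolved = []
    for comp in modified:
        if comp == "..":
            if not resolved:
                # more '..' components than preceding components
                raise ValueError("too many '..' components")
            resolved.pop()       # each '..' removes the preceding component
        else:
            resolved.append(comp)
    return "\\".join(resolved)

print(resolve(r"C:\alpha\beta\link\gamma\file", "link", r"..\..\theta"))
# C:\theta\gamma\file
```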
This post is quite a bit shorter than previous ones, as it just states the optimization techniques required to train a neural network. I thought of including an example, but many resources are available free online, and therefore it is not necessary to reinvent the wheel. So coming straight to the point: optimization techniques are those which help our model converge faster, and they make our model more efficient and more accurate. Why Optimize? It's very simple to explain. Imagine you are on a mountain and all you need is to get down. It's a very easy task, right? Of course yes, if you know the correct direction to move in and how fast your steps should be. And this is exactly what Optimization is. To be more concise, there are two types of optimization methods: First Order methods (Jacobian matrix) and Second Order methods (Hessian matrix). First-order methods calculate the Jacobian matrix, the matrix of first derivatives of our function $f$ at a given point $x$ in the parameter vector; it is used to tell us which direction we should move in. Second-order algorithms calculate the derivative of the Jacobian (i.e., the derivative of the matrix of derivatives), which is called the Hessian. Second-order methods are used for controlling the step size, which is how much an algorithm adjusts the weights as it tries to learn. They're also useful in determining which way your gradient is sloping. Momentum Momentum is a first-order method used to speed up training by moving the gradients in the direction we were already going. It can be achieved by two methods, namely regular momentum and Nesterov momentum. Regular momentum is inspired by classical mechanics, where momentum is the product of mass and velocity; here we define $\mu$, the momentum coefficient that helps us keep moving in the direction we were going.
So it can be defined as $$v(t-1) = \Delta w(t-1)$$ $$\Delta w(t) = \mu v(t-1) - \alpha \nabla J(t)$$ Nesterov momentum is an advancement over regular momentum where we move by looking ahead and then correct ourselves if we made a mistake. The mathematics involved is a bit more complicated, but take it for granted that it is always at least as good as regular momentum, since we correct ourselves by looking a step ahead. $$v(t) = \mu v(t-1) - \alpha \nabla J(t)$$ $$\Delta w(t) = -\mu v(t-1) + (1+\mu) v(t)$$ The image below gives a better intuition on momentum updates. Adaptive learning rate This comes under the second-order methods, where the learning rate is adjusted automatically rather than defined manually by ourselves. If the learning rate is constant, the gradient may end up fluctuating rather than converging, so all we need is to decrease the learning rate as we go, so that the model converges better. Several techniques are available to help achieve an adaptive learning rate: Step Decay, Exponential Decay, $\frac{1}{t}$ decay, AdaGrad, AdaDelta, RMSprop, Adam. Step decay: In step decay we generally reduce the learning rate by a factor after a certain number of steps. So for example, for every 10 steps, learning_rate /= k, where $k$ is the scaling factor. Exponential decay: In this method we reduce the learning rate exponentially, so that $learning\_rate = learning\_rate \cdot e^{-\lambda t}$, where $\lambda$ is the scaling factor and $t$ is the iteration number. $\frac{1}{t}$ decay: Here the learning rate is adjusted as $learning\_rate = learning\_rate/(1+kt)$. AdaGrad: It is a better adaptive method than the above, as it is based on the weight changes so far. It keeps track of a cache, the accumulated square of the gradients, which allows it to adapt to the weight changes.
Therefore $$cache = cache + gradient^2$$ $$w = w - learning\_rate \frac{gradient}{\sqrt{cache}+\epsilon}$$ $\epsilon$ is introduced so that we do not divide by zero. AdaDelta AdaDelta is an extension of AdaGrad that seeks to reduce its aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, AdaDelta restricts the window of accumulated past gradients to some fixed size. Therefore the only change is that the gradients involved are the $k$ latest gradients instead of all the gradients so far. RMSprop This technique, introduced by Geoff Hinton, adds decay to the cache itself. It is similar to AdaDelta, but the change is that the cache itself decays, which increases the chances of converging to the global minimum faster and more efficiently. $$cache = decay\_rate \cdot cache + (1-decay\_rate) \cdot gradient^2$$ $$w = w - learning\_rate \frac{gradient}{\sqrt{cache}+\epsilon}$$ Adam It is similar to RMSprop and AdaDelta; the advancement here is that it keeps track of a decaying average of past squared gradients, as AdaDelta and RMSprop do, and additionally tracks a decaying average of the past gradients themselves, which makes the gradient descent updates more accurate. The decaying averages of past gradients are tuned by the parameter $\beta$. Where to move on now? As I said before, the main objective of this blog is to give an intuitive idea of things that are commonly difficult to understand in the fields of machine learning, deep learning, and computer vision. I do not prefer to reinvent the wheel. You had better move on to the following links for getting started in the field of deep learning and computer vision and then come back here, as the posts from now on will be direct projects that a beginner may find difficult to start on. I'll try my best to help you understand things, but it is better to have a look at these links before coming back to this blog.
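Before moving on, the update rules above can be sketched on a toy one-dimensional objective $J(w) = w^2$ (gradient $2w$). The hyperparameters and function names here are illustrative choices of ours, not canonical values:

```python
def grad(w):
    return 2.0 * w                         # gradient of J(w) = w^2

def momentum_descent(w, lr=0.1, mu=0.9, steps=200):
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w)          # v(t) = mu*v(t-1) - alpha*grad J(t)
        w += v
    return w

def adagrad_descent(w, lr=0.5, eps=1e-8, steps=500):
    cache = 0.0
    for _ in range(steps):
        g = grad(w)
        cache += g * g                     # cache = cache + gradient^2
        w -= lr * g / (cache ** 0.5 + eps)
    return w

def adam_descent(w, lr=0.02, b1=0.9, b2=0.999, eps=1e-8, steps=2000):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g          # decaying average of gradients
        v = b2 * v + (1 - b2) * g * g      # decaying average of squared gradients
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (v_hat ** 0.5 + eps)
    return w

for f in (momentum_descent, adagrad_descent, adam_descent):
    print(f.__name__, abs(f(5.0)) < 0.05)  # all three end near the minimum at 0
```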
Tensorflow tutorials (Hvass Labs)
Official Theano tutorials
Keras documentation
Convolutional Neural Networks (CS231n, Stanford)
FastAI course
Natural Language Processing (University of Michigan)
PyImageSearch (highly recommended for computer vision beginners)

What will be covered in the next blog posts? I'm glad you asked! As said before, I will start writing project posts (in parts) that use deep learning, computer vision, and sometimes natural language processing. You may have noticed that this post comes long after the previous one; all I've done in between is code up projects to post on this blog. These are the few I've completed coding and that are ready to post:

Handwritten digit recognition – takes an image and recognizes the digits in it.
Sudoku solver (9x9) – an extension of the above: captures the puzzle and solves it, using computer vision and deep learning for digit recognition.
Facial expression detection – in images and in live streaming video.
Custom object detection – from data collection and annotation to detection.
Object tracker with occlusions reduced – Mean Shift, CamShift, and Lucas-Kanade.
A fashion image search engine – similar to what we see in the fashion category on e-commerce sites.
Object recognition – we use Caltech-10 and Caltech-101 for this purpose, and I will also show you how to use pre-trained models.
Aspect-based sentiment analysis – from collecting feature words to detecting aspects and extracting sentiments (NLP).
Fake review detector – from data collection to detection (NLP).

These are a few for which I've completed the coding, and there will be more. Remember that I'll be posting simple one-page posts in between, which can be considered tweaks or utilities – short and exciting. I'm planning to implement R-CNN and Deep Ranking (http://users.eecs.northwestern.edu/%7Ejwa368/pdfs/deep_ranking.pdf) completely from scratch in caffe or keras. Also many more to come. Thank you, have a nice day…
The moment magnitude scale was introduced in 1979 by Thomas C. Hanks and Hiroo Kanamori as a successor to the Richter scale and is used by seismologists to compare the energy released by earthquakes. [1] The moment magnitude $ M_\mathrm{w} $ is a dimensionless number defined by $ M_\mathrm{w} = {2 \over 3}\left(\log_{10} \frac{M_0}{\mathrm{N}\cdot \mathrm{m}} - 9.1\right) = {2 \over 3}\left(\log_{10} \frac{M_0}{\mathrm{dyn}\cdot \mathrm{cm}} - 16.1\right) $ where $ M_0 $ is the seismic moment. The division by N·m indicates that the seismic moment is to be expressed in newton meters before the logarithm is taken; see ISO 31-0. An increase of 1 step on this logarithmic scale corresponds to a $10^{1.5} \approx 31.6$ times increase in the amount of energy released, and an increase of 2 steps corresponds to a $10^{3} = 1000$ times increase in energy. The constants in the equation are chosen so that estimates of moment magnitude roughly agree with estimates using other scales, such as the local magnitude scale, $M_\mathrm{L}$, commonly called the Richter magnitude scale. One advantage of the moment magnitude scale is that, unlike other magnitude scales, it does not saturate at the upper end. That is, there is no particular value beyond which all large earthquakes have about the same magnitude. For this reason, moment magnitude is now the most often used estimate of large earthquake magnitudes. [2] The symbol for the moment magnitude scale is $ M_\mathrm{w} $, with the subscript w meaning mechanical work accomplished. The United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5. References ↑ Hanks, Thomas C.; Kanamori, Hiroo (May 1979). "Moment magnitude scale". Journal of Geophysical Research 84 (B5): 2348–2350. doi:10.1029/JB084iB05p02348. Retrieved 2007-10-06. ↑ Boyle, Alan (May 12, 2008). "Quakes by the numbers". MSNBC. Retrieved 2008-05-12.
“That original scale has been tweaked through the decades, and nowadays calling it the "Richter scale" is an anachronism. The most common measure is known simply as the moment magnitude scale.”
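The defining formula and the energy-step relation above can be written directly as a short sketch; the function names here are ours, for illustration only.

```python
import math

def moment_magnitude(m0):
    """Moment magnitude M_w from the seismic moment M_0, given in newton meters."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def energy_ratio(delta_mw):
    """Factor by which released energy grows over `delta_mw` magnitude steps."""
    return 10.0 ** (1.5 * delta_mw)

# one magnitude step corresponds to a 10**1.5 (about 31.6x) energy increase;
# two steps correspond to a factor of 1000
```

For example, `energy_ratio(2)` returns exactly 1000, matching the statement in the article.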
Berry-Esseen's Central Limit Theorem for Non-causal Linear Processes in Hilbert Space Abstract Let $H$ be a real separable Hilbert space and $(a_k)_{k\in\mathbb{Z}}$ a sequence of bounded linear operators from $H$ to $H$. We consider the linear process $X$ defined for any $k$ in $\mathbb{Z}$ by $X_k=\sum_{j\in\mathbb{Z}}a_j(\varepsilon_{k-j})$ where $(\varepsilon_k)_{k\in\mathbb{Z}}$ is a sequence of i.i.d. centered $H$-valued random variables. We investigate the rate of convergence in the CLT for $X$ and in particular we obtain the usual Berry-Esseen bound provided that $\sum_{j\in\mathbb{Z}}\vert j\vert\|a_j\|_{{\mathcal L}(H)}< +\infty$ and $\varepsilon_0$ belongs to $L_H^{\infty}$.
Mathematics > Probability Title: Projections of planar Mandelbrot measures (Submitted on 30 May 2016) Abstract: Let $\mu$ be a planar Mandelbrot measure and $\pi_*\mu$ its orthogonal projection on one of the main axes. We study the thermodynamic and geometric properties of $\pi_*\mu$. We first show that $\pi_*\mu$ is exactly dimensional, with $\dim(\pi_*\mu)=\min(\dim(\mu),\dim(\nu))$, where~$\nu$ is the Bernoulli product measure obtained as the expectation of $\pi_*\mu$. We also prove that $\pi_*\mu$ is absolutely continuous with respect to $\nu$ if and only if $\dim(\mu)>\dim(\nu)$, and find sufficient conditions for the equivalence of these measures. Our results provide a new proof of the Dekking-Grimmett-Falconer formula for the Hausdorff and box dimensions of the topological support of $\pi_*\mu$, as well as a new variational interpretation of it. We obtain the free energy function $\tau_{\pi_*\mu}$ of $\pi_*\mu$ on a wide subinterval $[0,q_c)$ of $\mathbb{R}_+$. For $q\in[0,1]$, it is given by a variational formula which sometimes yields phase transitions of order larger than~1. For $q>1$, it is given by $\min(\tau_\nu,\tau_\mu)$, which can exhibit first order phase transitions. This is in contrast with the analyticity of $\tau_\mu$ over $[0,q_c)$. Also, we prove the validity of the multifractal formalism for $\pi_*\mu$ at each $\alpha\in (\tau_{\pi_*\mu}'(q_c-),\tau_{\pi_*\mu}'(0+)]$. Submission history: From: Julien Barral [v1] Mon, 30 May 2016 01:27:08 GMT (637kb)
Nowadays we can associate to a topological space $X$ a category called the fundamental (or Poincaré) $\infty$-groupoid, given by taking $Sing(X)$. There are many different categories that one can associate to a space $X$. For example, one could build the small category whose object set is the set of points, with only the identity morphisms from a point to itself. It is claimed that the classifying space of this category returns the space: $BX=X$ The inspiration for these examples comes from three primary sources: Graeme Segal's famous 1968 paper Classifying Spaces and Spectral Sequences, Raoul Bott's Mexico notes (taken by Lawrence Conlon) Lectures on characteristic classes and foliations, and a 1995 pre-print called Morse Theory and Classifying Spaces by Ralph Cohen, G. Segal and John Jones. In each of these papers there is a notion of a topological category. It is not just a category enriched in Top, since the set of objects can have a non-discrete topology. Here is the definition that I can glean from these articles: A topological category consists of a pair of spaces $(Obj,Mor)$ with four continuous structure maps: $i:Obj\to Mor$, which sends an object to the identity morphism; $s:Mor\to Obj$, which gives the source of an arrow; $t:Mor\to Obj$, which gives the target of an arrow; $\circ:Mor\times_{t,s}Mor\to Mor$, which is composition. Here $i$ is a section of both $s$ and $t$, and all the axioms of a small category hold. Is the appropriate modern terminology to describe this a Segal space? What would Lurie call it? Based on reading Chris Schommer-Pries's MO post and elsewhere, this seems to be true. Would the modern definition of the above be a Segal space where the Segal maps are identities? Also, why do we demand that the topology on objects be discrete for Segal categories? Is there something wrong with allowing the object sets to have topologies?
With \(U\), \(A\), \(H\) and \(G\) in hand, we have the potentials as functions of whichever variable pair we want, from \(S\) and \(V\) to \(T\) and \(P\). Additional Legendre transforms will provide us with further potentials in case we have other variables (such as surface area \(A\), length \(L\), magnetic moment \(M\), etc.). Thermodynamic problems always involve computing a variable of interest. It may be a first derivative if it is an intensive variable, or even a second derivative (higher derivatives are rarely of interest). Example 1 \[\left( \dfrac{\partial G}{\partial P} \right)_T=V\] or 2 \[ \left( \dfrac{\partial^2 G}{\partial P^2} \right)_T= \left( \dfrac{\partial V}{\partial P} \right)_T = -\kappa V\] The solution procedure is thus: Select the derivative or variable to be computed; Select the potential representation that makes it easiest, or corresponds to variables you already have in hand; Manipulate the thermodynamic derivatives you know to get the one you want. Easy as 1-2-3! We now turn to two tools for manipulating thermodynamic derivatives: 6.1 Tool 1: Maxwell relations This tool is useful if two variables are NOT conjugate (e.g. \(T\) and \(V\), as opposed to \(T\) and \(S\)). If \(z = z(x,y)\), then its total differential is: \[ dz = \left( \dfrac{\partial z}{\partial x} \right)_y dx + \left( \dfrac{\partial z}{\partial y} \right)_x dy\] but \[ \dfrac{\partial^2 z}{\partial x \partial y} = \dfrac{\partial^2 z}{\partial y \partial x}\] so \[ \dfrac{\partial}{\partial y} \left( \dfrac{\partial z}{\partial x} \right)_y = \dfrac{\partial}{\partial x} \left( \dfrac{\partial z}{\partial y} \right)_x\] Maxwell's relations must hold true if \(z\) is a state function, as we saw in chapter 1. They basically state that the mixed second derivatives of a function must be identical.
Example 6.1 \[ dG = -SdT + VdP + \mu dn \] \[ \left(\dfrac{\partial S}{\partial P}\right)_{T,n} = - \left(\dfrac{\partial V}{\partial T}\right)_{P,n}= -\alpha V \] or \[ \left(\dfrac{\partial V}{\partial n}\right)_{P,T} = \left(\dfrac{\partial \mu}{\partial P}\right)_{n,T}\] All four of the quantities in parentheses are second derivatives of \(G\). We can verify the second line above for an ideal gas: \[ \left(\dfrac{\partial \mu}{\partial P}\right)_{n,T} = \dfrac{\partial}{\partial P} \left(\mu^0 + RT\ln P\right) = \dfrac{RT}{P} \] and \[ \left(\dfrac{\partial V}{\partial n}\right)_{P,T} = \dfrac{\partial}{\partial n} \dfrac{nRT}{P} = \dfrac{RT}{P} \] Note: As discussed in chapter 7, only three second derivatives not involving \(n_i\) are independent; many sets can be chosen; the conventional one is \(\alpha\), \(\kappa\), and \(c_P\), all defined at constant \(T\) or \(P\) (Gibbs ensemble). You can preview them at the beginning of chapter 7.
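The ideal-gas verification above can also be carried out symbolically. The following sketch uses sympy for illustration; the symbol `mu0` stands for the pressure-independent part of the chemical potential, and the symbol names are ours.

```python
import sympy as sp

n, R, T, P, mu0 = sp.symbols('n R T P mu0', positive=True)

mu = mu0 + R * T * sp.log(P)    # ideal-gas chemical potential
V = n * R * T / P               # ideal-gas equation of state

lhs = sp.diff(V, n)             # (dV/dn)_{P,T}
rhs = sp.diff(mu, P)            # (dmu/dP)_{n,T}

assert sp.simplify(lhs - rhs) == 0        # the Maxwell relation holds
assert sp.simplify(lhs - R * T / P) == 0  # both sides equal RT/P
```

The same pattern (differentiate a potential twice, in either order) can be used to check any of the other Maxwell relations.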
The Jacobi determinant is defined as \[ \dfrac{\partial (x,y,\ldots,z)}{\partial (u,v,\ldots,w)} \equiv \det \begin{vmatrix} \left(\dfrac{\partial x}{\partial u}\right) & \left(\dfrac{\partial x}{\partial v}\right) & \left(\dfrac{\partial x}{\partial w}\right) \\ \left(\dfrac{\partial y}{\partial u}\right) & \left(\dfrac{\partial y}{\partial v}\right) & \left(\dfrac{\partial y}{\partial w}\right) \\ \left(\dfrac{\partial z}{\partial u}\right) & \left(\dfrac{\partial z}{\partial v}\right) & \left(\dfrac{\partial z}{\partial w}\right) \\ \end{vmatrix}\] Jacobians are useful for any variable transformation \((u, v, \ldots, w) \rightarrow (x, y, \ldots, z)\). Any thermodynamic derivative can be expressed as follows: \[ \left( \dfrac{\partial x}{\partial u}\right)_{v...w} = \dfrac{\partial (x,v,\ldots,w)}{\partial (u,v,\ldots,w)}\] because \( \dfrac{\partial v}{\partial u} = 0\), \(\dfrac{\partial v}{\partial v}=1\), etc. in the above determinant leave only the desired derivative when the determinant is multiplied out. The derivatives can then be manipulated by using the Jacobian identities: i) \[\dfrac{\partial (x,y,\ldots,z)}{\partial (u,v,\ldots,w)} = - \dfrac{\partial (y,x,\ldots,z)}{\partial (u,v,\ldots,w)} = - \dfrac{\partial (x,y,\ldots,z)}{\partial (v,u,\ldots,w)} \] via the properties of row or column permutation of determinants. ii) \[\dfrac{\partial (x,y,\ldots,z)}{\partial (u,v,\ldots,w)} = \dfrac{\partial (x,y,\ldots,z)}{\partial (r,s,\ldots,t)} \cdot \dfrac{\partial (r,s,\ldots,t)}{\partial (u,v,\ldots,w)} \] via the chain rule. iii) \[ \dfrac{\partial(x,y,\ldots,z)}{\partial(u,v,\ldots,w)} =\dfrac{1}{\dfrac{\partial(u,v,\ldots,w)}{\partial(x,y,\ldots,z)}}\] Proof: \[\det(M^{-1})= \dfrac{1}{\det{M}}\] This concludes all the formal thermodynamic tools: from \(\Delta S > 0\) (from Postulate 2), to the Euler formula, Legendre transforms, and finally Maxwell relations and Jacobians, you have all the problem-solving tools thermodynamics has to offer to tackle equilibrium problems.
Armed with the Maxwell and Jacobian relations, we can reduce any expression to \(\alpha\), \(\kappa\), and \(c_P\) (plus concentration or other derivatives, depending on the extensive variables of the system). Let us do a few examples: 6.3 Applications Example 6.1: Heat Capacity at Constant Volume Evaluate the heat capacity at constant volume in terms of the heat capacity at constant pressure. \[c_v \equiv \left(\dfrac{dq}{dT}\right)_V = \dfrac{T}{n}\left(\dfrac{\partial S}{\partial T}\right)_V \tag{1}\] Step 1: Is there a Maxwell relation? Not directly, so write the derivative as a Jacobian: \[ = \dfrac{T}{n} \dfrac{\partial(S,V)}{\partial(T,V)} \tag{2}\] Step 2: We want \(P\) held constant in the Jacobian, not \(V\), so split the Jacobian by the chain rule: \[ = \dfrac{T}{n} \dfrac{\partial(S,V)}{\partial(T,P)} \dfrac{\partial(T,P)}{\partial(T,V)} \tag{3}\] Step 3: Evaluate the second Jacobian, \(\partial(T,P)/\partial(T,V) = (\partial P/\partial V)_T = -1/(\kappa V)\), using the isothermal compressibility \(\kappa\): \[= \dfrac{-T}{nV\kappa}\dfrac{\partial(S,V)}{\partial(T,P)}\tag{4}\] Step 4: Multiply out the remaining determinant: \[ = -\dfrac{T}{nV\kappa} \left\{ \left( \dfrac{\partial S}{\partial T} \right)_P \left(\dfrac{\partial V}{\partial P} \right)_T - \left( \dfrac{\partial S}{\partial P} \right)_T \left(\dfrac{\partial V}{\partial T} \right)_P \right\} \tag{5}\] Step 5: We know three of the derivatives in terms of second derivatives of the Gibbs free energy. The missing one, \((\partial S/\partial P)_T\), can be obtained using a Maxwell relation: \((\partial S/\partial P)_T = -(\partial V/\partial T)_P = -\alpha V\). Therefore \[= -\dfrac{T}{nV\kappa} \left\{ \left( \dfrac{nc_P}{T} \right) \left(- \kappa V \right) - (-\alpha V)(\alpha V) \right\}\] Step 6: Collect everything, and note that \(V\), \(T\), \(n\) and \(\kappa\) are all positive, as is \(\alpha^2\), so \(c_v\) must always be smaller than \(c_P\). This should make intuitive sense: if I keep a system at constant pressure and put heat into it, it will also expand, 'losing' some of the heat as work.
Thus the temperature will not go up as much, so the heat capacity is bigger at constant pressure. \[ c_v = c_P -\dfrac{VT\alpha^2}{n\kappa}\] \[\Rightarrow c_P \gt c_v\] Example 6.2: Change in Enthalpy at Constant P Calculate the change in enthalpy at constant pressure. \[ \left( \dfrac{\partial H}{\partial T}\right)_{P,n} = \left( \dfrac{\partial H}{\partial S}\right)_{P,n} \left( \dfrac{\partial S}{\partial T}\right)_{P,n} = T \left( \dfrac{\partial S}{\partial T}\right)_{P,n} = T \dfrac{nc_P}{T} = nc_P\] \[ \Rightarrow q = \int _{T_0}^{T} dH = \int _{T_0}^{T} nc_P (T') dT' = H(T)-H(T_0)\] Thus \(dH = n c_P dT\) at constant pressure. We derived this earlier by waving hands and using \(dq_{rev}\), but here it is, formally. Example 6.3: Isentropic Quasistatic Expansion In an isentropic quasistatic expansion, \(S\) = constant, so \(dS = dq_{rev}/T = 0\). Do the homework problem before you check out the solution below! This describes the pressure-temperature relationship in a molecular beam, where no heat can flow into the expanding gas, so its temperature must drop as the pressure drops. What happens is that at the nozzle of the molecular beam, collisions preferentially eject particles forward, preserving their average kinetic energy, but making a stream of particles with a narrower relative velocity distribution.
The relevant derivative is \((\partial T/\partial P)_S\), which looks a bit odd at first, but we are fully equipped to handle derivatives at constant entropy: \[ 0 =dS = \dfrac{dq_{rev}}{T}\] \[ dT= \left ( \dfrac{\partial T}{\partial P} \right)_S dP \] \[\left ( \dfrac{\partial T}{\partial P} \right)_S = \dfrac{\partial(T,S)}{\partial(P,S)} = \dfrac{\partial(T,S)}{\partial(P,T)} \cdot \dfrac{\partial(P,T)}{\partial(P,S)} = - \dfrac{\partial(S,T)}{\partial(P,T)} \cdot \dfrac{1}{\dfrac{\partial(S,P)}{\partial(T,P)}}=\dfrac{-\left(\dfrac{\partial S}{\partial P}\right)_T}{\left(\dfrac{\partial S}{\partial T}\right)_P}\] For the numerator, \( \left ( \dfrac{\partial S}{\partial P} \right)_T = -\left ( \dfrac{\partial V}{\partial T} \right)_P = -\alpha V\) by the Maxwell relations, so the numerator is \(+\alpha V\). Note that just because \(S\) is constrained to be held constant in \((\partial T/ \partial P)_S\) does not mean that \(\partial S/ \partial P=0\)! For the denominator \[ \left( \dfrac{\partial S}{\partial T} \right)_P = \dfrac{nc_P}{T} \] \[\Rightarrow dT = \dfrac{\alpha VT}{nc_P} dP \] For an ideal monatomic gas, \( \alpha = \dfrac{1}{T}=\dfrac{nR}{PV}\), and \(c_P=\dfrac{5}{2} R\), so we can write explicitly \[\dfrac{dT}{T}=\dfrac{2}{5}\dfrac{dP}{P}\] or \[ \left(\dfrac{T}{T_0}\right) =\left(\dfrac{P}{P_0}\right)^{2/5}\] after integrating. The temperature drops in a molecular beam, but not as dramatically as the pressure.
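The ideal-gas limits of Examples 6.1 and 6.3 can be sanity-checked symbolically: for an ideal gas \(\alpha = 1/T\) and \(\kappa = 1/P\), so \(c_P - c_v = VT\alpha^2/(n\kappa)\) should reduce to \(R\), and the isentropic coefficient should reduce to \((2/5)(T/P)\). This sketch uses sympy for illustration; the symbol names are ours.

```python
import sympy as sp

n, R, T, P = sp.symbols('n R T P', positive=True)
V = n * R * T / P                 # ideal-gas equation of state

alpha = sp.diff(V, T) / V         # thermal expansion coefficient -> 1/T
kappa = -sp.diff(V, P) / V        # isothermal compressibility -> 1/P

# Example 6.1: c_P - c_v = V*T*alpha^2/(n*kappa) reduces to R
assert sp.simplify(V * T * alpha**2 / (n * kappa) - R) == 0

# Example 6.3: (dT/dP)_S = alpha*V*T/(n*c_P) reduces to (2/5)*(T/P)
c_P = sp.Rational(5, 2) * R       # monatomic ideal gas
assert sp.simplify(alpha * V * T / (n * c_P) - sp.Rational(2, 5) * T / P) == 0
```

Both assertions pass, confirming the algebra in the two worked examples for the ideal-gas case.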
Example 6.4: Heat Capacity in terms of Energy \[ c_v = \left(\dfrac{dq}{dT}\right)_V = \dfrac{T}{n} \left(\dfrac{\partial S}{\partial T}\right)_V\] with \(T>0\) and \(n>0\) \[ \left(\dfrac{\partial S}{\partial T}\right)_V = \dfrac{\partial(S,V)}{\partial(T,V)}=\dfrac{\partial(S,V)}{\partial(U,V)} \dfrac{\partial(U,V)}{\partial(T,V)} = \left( \dfrac{\partial S}{\partial U} \right)_V \left( \dfrac{\partial U}{\partial T} \right)_V = \dfrac{1}{T} \left( \dfrac{\partial U}{\partial T} \right)_V\] \[ c_v = \dfrac{1}{n} \left( \dfrac{\partial U}{\partial T} \right)_V =\dfrac{T}{n} \left( \dfrac{\partial S}{\partial T} \right)_V \] Again, we showed this before by invoking \(dq_{rev}\) when discussing Legendre Transforms from energy to enthalpy. Examples 6.1 and 6.3 are just the formal way of deriving it using Maxwell’s relations and Jacobians.
Update, trying to explain this in a better way: I mean how to find the result without a calculator. Base 2: $\log_2 16 = 4$ is simple to figure out: $2 \cdot 2 \cdot 2 \cdot 2$. What about $\log_2 18$? First, you extract the integer part of the answer; that is, for $\log_2 10$, we have $$ \log_2 10 = 3+\log_2 \frac{10}8 = 3+\log_2 \left(1+\frac14\right) $$ Now, we need a good approximation for the remaining logarithm. A straightforward method to calculate this is to use the Taylor expansion of $\log(1+x)$, but we can do somewhat better using a "Padé approximant". A neat approximant that's fairly accurate is $$ \log(1+x) \approx \frac{x(6+x)}{6+4x} $$ We still have to adjust for the base, because the logarithm I've just approximated is the natural logarithm, base $e$... so we end up with $$ \log_2(1+x)\approx\frac{x(6+x)}{(6+4x)\log(2)} $$ where $\log(2)\approx 0.693$. So we work out that $$\begin{align} \log_2 10 &\approx 3+\frac{\frac14(6+\frac14)}{(6+4\cdot\frac14)\times 0.693}\\ &=3+\frac{\frac{25}{16}}{7\times 0.693}\\ &=3+\frac{25}{77.616}\\ &\approx 3+0.322098536 \approx 3.3221 \end{align}$$ As you can see, it's quite close to the right answer. Let's calculate by hand $L = \log_2(18)$. Let $L_0 = c_0 \in \mathbb{N}$, the biggest integer that won't exceed $L$, be our first guess. It is obvious that: $$c_0 = \lfloor \log_2(18) \rfloor = 4$$ Let $L_1 = L_0 + 2^{-1}c_{-1}$, where $c_{-1} \in \mathbb{N}$ is our second guess: we're splitting the interval of interest $[L_0, L_0 + 2^0)$ in half and trying to guess where the answer lies (we know for sure that it must be between $4$ and $5$). The previous relation implies that: $$c_{-1} = 2(L_1 - L_0)$$ Recalling that any $c_{i}$ should be an integer, we should not expect that letting $L_1 = L$ will be enough.
If we are lucky, then $c_{-1}$ resolves to an integer and we're done. If not, then we want the best $c_{-1}$, assigning to it the largest integer not exceeding the actual real value of that computation, which is: $$c_{-1} = \lfloor 2(\log_2(18) - c_0) \rfloor = \left\lfloor 2\log_2\left(\frac{18}{16}\right) \right\rfloor = \left\lfloor \log_2\left(\frac{81}{64} \right) \right\rfloor$$ Since $1 < 81\,/\,64 < 2$, then $0 < \log_2\left(81\,/\,64\right) < 1$, giving $c_{-1} = 0$ and $L_{1} = L_0 = 4$. Let $L_2 = L_1 + 2^{-2}c_{-2}$; then: $$c_{-2} = 4(L_2 - L_1)$$ Going on as before: $$c_{-2}= \lfloor 4(\log_2(18) - 4) \rfloor = \left\lfloor 4\log_2\left(\frac{18}{16}\right) \right\rfloor = \left\lfloor \log_2\left(\frac{6561}{4096} \right) \right\rfloor = 0$$ Evaluate $c_{-3}$ as: $$c_{-3}= \lfloor 8(\log_2(18) - 4) \rfloor = \left\lfloor 8\log_2\left(\frac{9}{8}\right) \right\rfloor = \left\lfloor \log_2\left(\frac{9^8}{8^8}\right) \right\rfloor$$ If $9^8 > 2\cdot 8^8$ then $c_{-3}$ won't be $0$. Now: $$9^8={9^2}^4={81^2}^2 = \cdots = 43046721$$ $$2\cdot 8^8=2 \cdot {8^2}^4=2 \cdot {64^2}^2 = \cdots = 33554432$$ We were right, $9^8 > 2\cdot 8^8$ is true; it is also true that $9^8 < 4\cdot 8^8$, just looking at the numbers. This means: $$c_{-3} = 1$$ Finally $L_3$ is: $$L_3 = c_0 + 2^{-1}c_{-1} + 2^{-2}c_{-2} + 2^{-3}c_{-3} = 4 + 0.125 = 4.125$$ This approach guarantees that $L_3 < L < L_3 + 2^{-3}$, giving an absolute error of at most $2^{-4} = 0.0625$ and a percent error of $1.47\,\%$ (in the worst case). Our best guess $L^\star$ is $$L_3 + 2^{-4} = 4.15625$$ while the actual value $L$ was $\simeq 4.16992$. The actual percent error is: $$\frac{L - L^\star}{L} \simeq 0.33\,\%$$ Here is one way for your specific problem. As you noted, $5^2=25$, and now keep squaring, instead of just multiplying by $5$. There is also a known trick to quickly square numbers ending in $5$, i.e.
if $x$ ends in the digit $5$, say $x = \overline{a5}$ (where $a$ stands for the leading digits), then $x^2 = \overline{a(a+1)}\,25$; for example, if $5^2=25$ then $a=2$ and $a(a+1)=2\cdot 3=6$, so $$5^4=25^2 = 625$$ and $62\cdot 63 = 3906$, so $$5^8 = 625^2 = 390625,$$ so the answer will be between $8$ and $9$. Your mistake was that $5^3=125 \ne 75$... Generally people don't calculate logs "by hand." People use a computer or a calculator or a table or (gasp) a slide rule. If you are programming the computer or calculator, there are infinite series that converge to these functions. If you want a reasonable approximation, you play some games. $10^3 = 1000\\ 2^{10} = 1024\\ \frac {\log 10}{\log 2} \approx \frac {10}3$ Your other one: $\log_2 18 = 2\log_2 3 + 1\\ 3^4 = 81 \\ 4 \log_2 3 \approx 3+\log_2 10\\ 2 \log_2 3 + 1 \approx 4 \frac 16$ which compares to $4.17$ from my calculator. One way you can get a good approximation is to use the Taylor series expansion of the $\log$ function: $$\log(1+x)=\sum\limits_{n=1}^\infty(-1)^{n+1}\frac{x^n}{n}$$ For an alternating series you can use the fact that $|s-s_n|<|a_{n+1}|$ to get an idea of how accurate your answer is. You might want to memorize the following logarithms, which can then allow you to approximate most others quite well (in base $10$): $\log2=0.30$ $\log3=0.48$ $\log5=0.70$ $\log7=0.85$ Now, just remember the following logarithm properties: $\log{ab}=\log{a}+\log{b}$, $\log{\frac{a}{b}}=\log{a}-\log{b}$, and $\log_a{b}=\frac{\log_{10}{b}}{\log_{10}{a}}$. Using these identities, you can approximate most logarithms relatively precisely. Example 1: $\log_2{36}=\log_2{3}+\log_2{3}+\log_2{2}+\log_2{2}=\frac{\log3}{\log2}+\frac{\log3}{\log2}+1+1=\frac{0.48}{0.30}+\frac{0.48}{0.30}+1+1=1.60+1.60+1+1=5.20$ The actual answer is around $5.17$, but this is just an approximation.
Example 2: $\log_5{100}=\log_5{5}+\log_5{5}+\log_5{2}+\log_5{2}=1+1+\frac{\log2}{\log5}+\frac{\log2}{\log5}=1+1+\frac{0.30}{0.70}+\frac{0.30}{0.70}=1+1+0.43+0.43=2.86$ The actual answer is, when rounded to $2$ decimal places, $2.86$. Hopefully you can see, because of the logarithm properties, why $\log_{10}4$, $\log_{10}6$, $\log_{10}8$, and $\log_{10}9$ do not need to be memorized.
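Two of the approaches above, the digit-by-digit scheme (squaring the residual doubles its log) and the memorized base-10 values, can be sketched in a few lines. This is our own illustration, not code from the answers.

```python
import math

def log2_digits(x, digits=20):
    """Binary digit-by-digit computation of log2(x) for x > 0."""
    result = 0.0
    while x >= 2.0:          # extract the integer part
        x /= 2.0
        result += 1.0
    while x < 1.0:
        x *= 2.0
        result -= 1.0
    bit = 0.5
    for _ in range(digits):  # squaring x doubles its log
        x *= x
        if x >= 2.0:
            x /= 2.0
            result += bit
        bit /= 2.0
    return result

# memorized two-decimal values of log10 for small primes
LOG10 = {2: 0.30, 3: 0.48, 5: 0.70, 7: 0.85}

def log2_from_table(prime_factors):
    """Approximate log2 of a product of small primes, e.g. 36 = 2*2*3*3."""
    return sum(LOG10[p] for p in prime_factors) / LOG10[2]
```

With 20 bits, `log2_digits(18)` agrees with $\log_2 18 \approx 4.1699$ to about five decimals, while `log2_from_table([2, 2, 3, 3])` reproduces the hand estimate $5.20$ for $\log_2 36$.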
It’s been a while, so let’s include a recap: a (transitive) permutation representation of the modular group $\Gamma = PSL_2(\mathbb{Z}) $ is determined by the conjugacy class of a cofinite subgroup $\Lambda \subset \Gamma $, or equivalently, by a dessin d’enfant. We have introduced a quiver (aka an oriented graph) which comes from a triangulation of the compactification of $\mathbb{H} / \Lambda $, where $\mathbb{H} $ is the hyperbolic upper half-plane. This quiver is independent of the chosen embedding of the dessin in the Dedekind tessellation. (For more on these terms and constructions, please consult the series Modular subgroups and Dessins d’enfants). Why are quivers useful? To start, any quiver $Q $ defines a noncommutative algebra, the path algebra $\mathbb{C} Q $, which has as a $\mathbb{C} $-basis all oriented paths in the quiver, and multiplication is induced by concatenation of paths (when possible, and zero otherwise). Usually it is quite hard to make actual computations in noncommutative algebras, but in the case of path algebras you can just see what happens. Moreover, we can also describe the finite dimensional representations of this algebra $\mathbb{C} Q $. Up to isomorphism they are all of the following form: at each vertex $v_i $ of the quiver one places a finite dimensional vector space $\mathbb{C}^{d_i} $, and any arrow in the quiver [tex]\xymatrix{\vtx{v_i} \ar[r]^a & \vtx{v_j}}[/tex] determines a linear map between these vertex spaces; that is, to $a $ corresponds a matrix in $M_{d_j \times d_i}(\mathbb{C}) $. These matrices determine how the paths of length one act on the representation; longer paths act via multiplication of matrices along the oriented path. A necklace in the quiver is a closed oriented path in the quiver up to cyclic permutation of the arrows making up the cycle. That is, we are free to choose the start (and end) point of the cycle.
For example, in the one-cycle quiver [tex]\xymatrix{\vtx{} \ar[rr]^a & & \vtx{} \ar[ld]^b \\ & \vtx{} \ar[lu]^c &}[/tex] the basic necklace can be represented as $abc $ or $bca $ or $cab $. How does a necklace act on a representation? Well, the matrix-multiplication of the matrices corresponding to the arrows gives a square matrix in each of the vertices in the cycle. Though the dimensions of this matrix may vary from vertex to vertex, what does not change (and hence is a property of the necklace rather than of the particular choice of cycle) is the trace of this matrix. That is, necklaces give complex-valued functions on representations of $\mathbb{C} Q $, and by a result of Artin and Procesi there are enough of them to distinguish isoclasses of (semi)simple representations! That is, linear combinations of necklaces (aka super-potentials) can be viewed, after taking traces, as complex-valued functions on all representations (similar to character functions). In physics, one views these functions as potentials and is then interested in the points (representations) where this function is extremal (minimal): the vacua. Clearly, this does not make much sense in the complex case but is relevant when we look at the real case (where we look at skew-Hermitian matrices rather than all matrices). A motivating example (the Yang-Mills potential) is given in Example 2.3.2 of Victor Ginzburg’s paper Calabi-Yau algebras. Let $\Phi $ be a super-potential (again, a linear combination of necklaces); then our commutative intuition tells us that extrema correspond to zeroes of all partial differentials $\frac{\partial \Phi}{\partial a} $ where $a $ runs over all coordinates (in our case, the arrows of the quiver).
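The cyclic invariance of necklace traces can be checked numerically. The following toy example (our own, with an arbitrary dimension vector) puts random matrices on the arrows of the one-cycle quiver and compares the traces of the three cyclic readings of the basic necklace.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, d3 = 2, 3, 4                  # dimension vector at the three vertices
A = rng.standard_normal((d2, d1))     # arrow a : vertex 1 -> vertex 2
B = rng.standard_normal((d3, d2))     # arrow b : vertex 2 -> vertex 3
C = rng.standard_normal((d1, d3))     # arrow c : vertex 3 -> vertex 1

t1 = np.trace(C @ B @ A)   # cycle read at vertex 1 (a 2x2 matrix)
t2 = np.trace(A @ C @ B)   # same necklace read at vertex 2 (a 3x3 matrix)
t3 = np.trace(B @ A @ C)   # same necklace read at vertex 3 (a 4x4 matrix)
```

The three matrices have different sizes, yet `t1`, `t2`, and `t3` coincide: this is just the cyclic property of the trace, which is what makes the trace a function of the necklace rather than of a chosen cycle.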
One can make sense of differentials of necklaces (and super-potentials) as follows: the partial differential with respect to an arrow $a $ occurring in a term of $\Phi $ is defined to be the path in the quiver one obtains by removing each occurrence of $a $ in the necklaces (defining $\Phi $) and rearranging terms to get a maximal broken necklace (using the cyclic property of necklaces). An example: for the cyclic quiver above, let us take as super-potential $abcabc $ (2 cyclic turns); then, for instance, $\frac{\partial \Phi}{\partial b} = cabca+cabca = 2 cabca $ (the first term corresponds to the first occurrence of $b $, the second to the second). Okay, but then the vacua-representations will be the representations of the quotient-algebra (which I like to call the vacualgebra) $\mathcal{U}(Q,\Phi) = \frac{\mathbb{C} Q}{(\partial \Phi/\partial a, \forall a)} $ which in ‘physically relevant settings’ (whatever that means…) turn out to be Calabi-Yau algebras. But let us return to the case of subgroups of the modular group and their quivers. Do we have a natural super-potential in this case? Well yes: the quiver encodes a triangulation of the compactification of $\mathbb{H}/\Lambda $, and if we choose an orientation it turns out that all ‘black’ triangles (with respect to the Dedekind tessellation) have their arrow-sides defining a necklace, whereas for the ‘white’ triangles the reverse orientation makes the arrow-sides into a necklace. Hence, it makes sense to look at the cubic super-potential $\Phi $, the sum over all triangle-sides-necklaces with a +1-coefficient for the black triangles and a -1-coefficient for the white ones.
Let’s consider an index three example from a previous post [tex]\xymatrix{& & \rho \ar[lld]_d \ar[ld]^f \ar[rd]^e & \\ i \ar[rrd]_a & i+1 \ar[rd]^b & & \omega \ar[ld]^c \\ & & 0 \ar[uu]^h \ar@/^/[uu]^g \ar@/_/[uu]_i &}[/tex] In this case the super-potential coming from the triangulation is $\Phi = -aid+agd-cge+che-bhf+bif $ and therefore we have a noncommutative algebra $\mathcal{U}(Q,\Phi) $ associated to this index 3 subgroup. Contrary to what I believed at the start of this series, the algebras one obtains in this way from dessins d’enfants are far from being Calabi-Yau (in whatever definition). For example, using a GAP program written by Raf Bocklandt, I’ve checked that the growth rate of the above algebra is similar to that of $\mathbb{C}[x] $, so in this case $\mathcal{U}(Q,\Phi) $ can be viewed as a noncommutative curve (with singularities). However, this is not the case for all such algebras. For example, the vacualgebra associated to the second index three subgroup (whose fundamental domain and quiver were depicted at the end of this post) has growth rate similar to that of $\mathbb{C} \langle x,y \rangle $… I have an outlandish conjecture about the growth behavior of all algebras $\mathcal{U}(Q,\Phi) $ coming from dessins d’enfants: the algebra sees what the monodromy representation of the dessin sees of the modular group (or of the third braid group). I can make this more precise, but perhaps it is wiser to calculate one or two further examples…
This vignette illustrates the basic functionality of the SuperGauss package by simulating a few stochastic processes and estimating their parameters from regularly spaced data. A one-dimensional fractional Brownian motion (fBM) \(X_t = X(t)\) is a continuous Gaussian process with \(E[X_t] = 0\) and \(\mathrm{cov}(X_t, X_s) = \tfrac 1 2 (|t|^{2H} + |s|^{2H} - |t-s|^{2H})\), for \(0 < H < 1\). fBM is not stationary but has stationary increments, such that \((X_{t+\Delta t} - X_t) \stackrel{D}{=} (X_{s+\Delta t} - X_s)\) for any \(s,t\). As such, its covariance function is completely determined by its mean squared displacement (MSD) \[ \mathrm{\scriptsize MSD}_X(t) = E[(X_t - X_0)^2] = |t|^{2H}. \] When the Hurst parameter \(H = \tfrac 1 2\), fBM reduces to ordinary Brownian motion. The following R code generates 5 independent fBM realizations of length \(N = 3000\) with \(H = 0.3\). The timing of the “superfast” method (Wood and Chan 1994) provided in this package is compared to that of a “fast” method (e.g., Brockwell and Davis 1991) and to the usual method (Cholesky decomposition of an unstructured variance matrix).
require(SuperGauss)
N <- 3000        # number of observations
dT <- 1/60       # time between observations (seconds)
H <- .3          # Hurst parameter
tseq <- (0:N)*dT # times at which to sample fBM
npaths <- 5      # number of fBM paths to generate

# to generate fBM, generate its increments, which are stationary
msd <- fbm.msd(tseq = tseq[-1], H = H)
acf <- msd2acf(msd = msd) # convert msd to acf

# superfast method
system.time({
  dX <- rSnorm(n = npaths, acf = acf, fft = TRUE)
})

##    user  system elapsed
##   0.006   0.000   0.007

# timings of the fast and Cholesky methods (code not shown in this excerpt):

##    user  system elapsed
##    0.02    0.00    0.02
##    user  system elapsed
##   3.810   0.026   3.840

# convert increments to position measurements
Xt <- apply(rbind(0, dX), 2, cumsum)

# plot
clrs <- c("black", "red", "blue", "orange", "green2")
par(mar = c(4.1, 4.1, .5, .5))
plot(0, type = "n", xlim = range(tseq), ylim = range(Xt),
     xlab = "Time (s)", ylab = "Position (m)")
for(ii in 1:npaths) {
  lines(tseq, Xt[,ii], col = clrs[ii], lwd = 2)
}

Suppose that \(\boldsymbol{X}= (X_{0},\ldots,X_{N})\) are equally spaced observations of an fBM process with \(X_i = X(i \Delta t)\), and let \(\Delta\boldsymbol{X}= (\Delta X_{0},\ldots,\Delta X_{N-1})\) denote the corresponding increments, \(\Delta X_i = X_{i+1} - X_i\). Then the loglikelihood function for \(H\) is \[\ell(H \mid \Delta\boldsymbol{X}) = -\tfrac 1 2 \big(\Delta\boldsymbol{X}' \boldsymbol{V}_H^{-1} \Delta\boldsymbol{X}+ \log |\boldsymbol{V}_H|\big),\] where \(\boldsymbol{V}_H\) is a Toeplitz matrix, \[\boldsymbol{V}_H= [\mathrm{cov}(\Delta X_i, \Delta X_j)]_{0 \le i,j < N} = \begin{bmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_{N-1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{N-2} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{N-1} & \gamma_{N-2} & \cdots & \gamma_0 \end{bmatrix}.\] Thus, each evaluation of the loglikelihood requires the inverse and log-determinant of a Toeplitz matrix, which scales as \(\mathcal O(N^2)\) with the Durbin-Levinson algorithm.
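As a concrete cross-check of this formula, here is a short Python sketch of the \(\mathcal O(N^2)\) Durbin-Levinson evaluation of the loglikelihood for a Toeplitz covariance matrix (an illustration only; the package itself is written in R and C++):

```python
import numpy as np

def dl_loglik(x, gamma):
    """-(1/2) (x' V^{-1} x + log|V|) for the Toeplitz matrix V with first
    row gamma (a NumPy array), via the Durbin-Levinson recursion:
    O(N^2) time, O(N) memory."""
    N = len(x)
    phi = np.zeros(N)              # phi[j-1] = prediction coefficient phi_{n,j}
    v = gamma[0]                   # innovation variance v_0
    ll = -0.5 * (x[0] ** 2 / v + np.log(v))
    for n in range(1, N):
        # reflection coefficient phi_{n,n}
        num = gamma[n] - (phi[:n - 1] @ gamma[n - 1:0:-1] if n > 1 else 0.0)
        k = num / v
        # phi_{n,j} = phi_{n-1,j} - k * phi_{n-1,n-j}
        if n > 1:
            phi[:n - 1] = phi[:n - 1] - k * phi[n - 2::-1]
        phi[n - 1] = k
        v *= 1.0 - k * k                    # v_n = v_{n-1} (1 - phi_{n,n}^2)
        pred = phi[:n] @ x[n - 1::-1]       # one-step prediction of x[n]
        ll -= 0.5 * ((x[n] - pred) ** 2 / v + np.log(v))
    return ll
```

The recursion exposes \(\log|\boldsymbol{V}|\) as the sum of the log innovation variances, which is why no explicit determinant is ever formed.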
The SuperGauss package implements an extended version of the Generalized Schur algorithm of Ammar and Gragg (1988), which scales these computations as \(\mathcal O(N \log^2 N)\), with careful memory management and extensive use of the FFTW library (Frigo and Johnson 2005). The bulk of the likelihood calculations in SuperGauss are handled by the Toeplitz matrix class. A Toeplitz object is created as follows:

## Toeplitz matrix of size 3000
## acf:  0.0857 -0.0208 -0.00421 -0.00228 -0.0015 -0.00109 ...
## Toeplitz matrix of size 3000
## acf:  NULL

Its primary methods are illustrated below:

## [1] TRUE
## [1] -1.942890e-15  1.221245e-15
## [1] -2.479794e-12  2.799538e-12
## [1] -7642.578 -7642.578

The following code shows how to obtain the maximum likelihood of \(H\) and its standard error for a given fBM path. For speed comparisons, the optimization is done both using the superfast Generalized Schur algorithm and the fast Durbin-Levinson algorithm.

dX <- diff(Xt[,1]) # obtain the increments of a given path
N <- length(dX)

# autocorrelation of fBM increments
fbm.acf <- function(H) {
  msd <- fbm.msd(1:N*dT, H = H)
  msd2acf(msd)
}

# loglikelihood using generalized Schur algorithm
Toep <- Toeplitz(n = N) # pre-allocate memory
loglik.GS <- function(H) {
  Toep$setAcf(acf = fbm.acf(H))
  dSnorm(X = dX, acf = Toep, log = TRUE)
}

# loglikelihood using Durbin-Levinson algorithm
loglik.DL <- function(H) {
  dSnormDL(X = dX, acf = fbm.acf(H), log = TRUE)
}

# superfast method
system.time({
  GS.mle <- optimize(loglik.GS, interval = c(.01, .99), maximum = TRUE)
})

##    user  system elapsed
##   0.028   0.001   0.029
##    user  system elapsed
##   0.316   0.000   0.316
##        GS        DL
## 0.3008318 0.3008318
## Loading required package: numDeriv
##         mle          se
## 0.300831841 0.003323498

Reference Classes

In order to effectively manage memory in the underlying C++ code, the Toeplitz class is implemented using R’s “Reference Classes”.
Among other things, this means that when a Toeplitz object is passed to a function, the function does not make a copy of it: all modifications to the object inside the function are reflected on the object outside the function as well, as in the following example:

## Toeplitz matrix of size 3000
## acf:  0.0167 0 1.73e-18 -3.47e-18 0 6.94e-18 ...
## Toeplitz matrix of size 3000
## acf:  0.0167 0 1.73e-18 -3.47e-18 0 6.94e-18 ...
## [1] -422.5354
## Toeplitz matrix of size 3000
## acf:  0.0857 -0.0208 -0.00421 -0.00228 -0.0015 -0.00109 ...
## Toeplitz matrix of size 3000
## acf:  0.0857 -0.0208 -0.00421 -0.00228 -0.0015 -0.00109 ...

In addition to the superfast algorithm for Gaussian likelihood evaluations, SuperGauss provides such algorithms for the loglikelihood gradient and Hessian functions, leading to superfast versions of many inference algorithms such as Newton-Raphson and Hamiltonian Monte Carlo. An example of the former is given below using the two-parameter exponential autocorrelation model \[\mathrm{\scriptsize ACF}_X(t \mid \lambda, \sigma) = \sigma^2 \exp(- |t/\lambda|).\]

# autocorrelation function
exp.acf <- function(t, lambda, sigma) sigma^2 * exp(-abs(t/lambda))

# gradient, returned as a 2-column matrix
exp.acf.grad <- function(t, lambda, sigma) {
  ea <- exp.acf(t, lambda, 1)
  cbind(abs(t)*(sigma/lambda)^2 * ea, # d_acf/d_lambda
        2*sigma * ea)                 # d_acf/d_sigma
}

# Hessian, returned as an array of size length(t) x 2 x 2
exp.acf.hess <- function(t, lambda, sigma) {
  ea <- exp.acf(t, lambda, 1)
  sl2 <- sigma/lambda^2
  hess <- array(NA, dim = c(length(t), 2, 2))
  hess[,1,1] <- sl2^2*(t^2 - 2*abs(t)*lambda) * ea # d2_acf/d_lambda^2
  hess[,1,2] <- 2*sl2 * abs(t) * ea                # d2_acf/(d_lambda d_sigma)
  hess[,2,1] <- hess[,1,2]                         # d2_acf/(d_sigma d_lambda)
  hess[,2,2] <- 2 * ea                             # d2_acf/d_sigma^2
  hess
}

# simulate data
lambda <- runif(1, .5, 2)
sigma <- runif(1, .5, 2)
tseq <- (1:N-1)*dT
acf <- exp.acf(t = tseq, lambda = lambda, sigma = sigma)
Xt <- rSnorm(acf = acf)
Toep <- Toeplitz(n = N) # pre-allocate storage space

# negative loglikelihood function of theta = (lambda, sigma)
# include attributes for gradient and Hessian
exp.negloglik <- function(theta) {
  lambda <- theta[1]
  sigma <- theta[2]
  # acf, its gradient, and Hessian
  Toep$setAcf(acf = exp.acf(tseq, lambda, sigma)) # use the Toeplitz class
  dacf <- exp.acf.grad(tseq, lambda, sigma)
  d2acf <- exp.acf.hess(tseq, lambda, sigma)
  nll <- -1 * dSnorm(X = Xt, acf = Toep, log = TRUE)
  attr(nll, "gradient") <- -1 * Snorm.grad(X = Xt, acf = Toep, dacf = dacf)
  attr(nll, "hessian") <- -1 * Snorm.hess(X = Xt, acf = Toep, dacf = dacf, d2acf = d2acf)
  nll
}

# optimization
system.time({
  mle.fit <- nlm(f = exp.negloglik, p = c(1,1), hessian = TRUE)
})

##    user  system elapsed
##   0.628   0.062   0.690
##         lambda     sigma
## true 1.5923286 0.6122976
## est  1.6487305 0.6290641
## se   0.4323472 0.0820681

Ammar, G.S. and Gragg, W.B., 1988. Superfast solution of real positive definite Toeplitz systems. SIAM Journal on Matrix Analysis and Applications, 9(1), 61–76.
Brockwell, P.J. and Davis, R.A., 1991. Time Series: Theory and Methods. New York: Springer.
Frigo, M. and Johnson, S.G., 2005. The design and implementation of FFTW3. Proceedings of the IEEE, 93(2), 216–231.
Wood, A.T.A. and Chan, G., 1994. Simulation of stationary Gaussian processes in \([0, 1]^d\). Journal of Computational and Graphical Statistics, 3(4), 409–432.
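Both the simulation and the likelihood algorithms above ultimately rest on one building block: a Toeplitz matrix-vector product can be computed in \(\mathcal O(N \log N)\) by embedding the matrix in a circulant one and applying the FFT. A minimal NumPy illustration of that trick (not the package's C++/FFTW implementation):

```python
import numpy as np

def toeplitz_matvec(gamma, x):
    """Product of the symmetric Toeplitz matrix with first column gamma and
    the vector x, in O(N log N) via a 2N x 2N circulant embedding."""
    N = len(gamma)
    # first column of the circulant embedding:
    # gamma_0, ..., gamma_{N-1}, 0, gamma_{N-1}, ..., gamma_1
    c = np.concatenate([gamma, [0.0], gamma[:0:-1]])
    xz = np.concatenate([x, np.zeros(N)])      # zero-pad x to length 2N
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xz))
    return y[:N].real                          # top N entries give T @ x
```

A circulant matrix is diagonalized by the discrete Fourier transform, so the product reduces to three FFTs and one elementwise multiplication.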
The Monster is the largest of the 26 sporadic simple groups and has order 808 017 424 794 512 875 886 459 904 961 710 757 005 754 368 000 000 000 = 2^46 3^20 5^9 7^6 11^2 13^3 17 19 23 29 31 41 47 59 71. It is not so much the size of its order that makes it hard to do actual calculations in the monster, but rather the dimensions of its smallest non-trivial irreducible representations (196 883 for the smallest, 21 296 876 for the next one, and so on). In characteristic two there is an irreducible representation of one dimension less (196 882) which appears to be of great use to obtain information. For example, Robert Wilson used it to prove that The Monster is a Hurwitz group. This means that the Monster is generated by two elements g and h satisfying the relations $g^2 = h^3 = (gh)^7 = 1 $ Geometrically, this implies that the Monster is the automorphism group of a Riemann surface of genus g attaining the Hurwitz bound 84(g-1)=#Monster. That is, g=9619255057077534236743570297163223297687552000000001=42151199 * 293998543 * 776222682603828537142813968452830193 Or, in analogy with the Klein quartic which can be constructed from 24 heptagons in the tiling of the hyperbolic plane, there is a finite region of the hyperbolic plane, tiled with heptagons, from which we can construct this monster curve by gluing the boundary in a specific way so that we get a Riemann surface with exactly 9619255057077534236743570297163223297687552000000001 holes. This finite part of the hyperbolic tiling (consisting of #Monster/7 heptagons) we’ll call the empire of the monster and we’d love to describe it in more detail. Look at the half-edges of all the heptagons in the empire (the picture above shows that every edge is cut in two by a blue geodesic). There are exactly #Monster such half-edges and they form a dessin d’enfant for the monster-curve.
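The order and the genus quoted above can be checked with exact integer arithmetic; a quick sketch:

```python
# order of the Monster from its prime factorization
order = (2**46 * 3**20 * 5**9 * 7**6 * 11**2 * 13**3
         * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71)
assert order == 808017424794512875886459904961710757005754368000000000

# the Hurwitz bound 84(g - 1) = #Monster is attained, so
assert order % 84 == 0
g = order // 84 + 1
assert g == 9619255057077534236743570297163223297687552000000001

# the empire is tiled by #Monster / 7 heptagons
heptagons = order // 7
```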
If we label these half-edges by the elements of the Monster, then multiplication by g in the monster interchanges the two half-edges making up a heptagonal edge in the empire and multiplication by h in the monster takes a half-edge to the one encountered first by going counter-clockwise in the vertex of the heptagonal tiling. Because g and h generate the Monster, the dessin of the empire is just a concrete realization of the monster. Because g is of order two and h is of order three, the two permutations they determine on the dessin give a group epimorphism $C_2 \ast C_3 = PSL_2(\mathbb{Z}) \rightarrow \mathbb{M} $ from the modular group $PSL_2(\mathbb{Z}) $ onto the Monster-group. In noncommutative geometry, the group-algebra of the modular group $\mathbb{C} PSL_2 $ can be interpreted as the coordinate ring of a noncommutative manifold (because it is formally smooth in the sense of Kontsevich-Rosenberg or Cuntz-Quillen) and the group-algebra of the Monster $\mathbb{C} \mathbb{M} $ itself corresponds in this picture to a finite collection of ‘points’ on the manifold. Using this geometric viewpoint we can now ask the question What does the Monster see of the modular group? To make sense of this question, let us first consider the commutative equivalent: what does a point P see of a commutative variety X? Evaluation of polynomial functions in P gives us an algebra epimorphism $\mathbb{C}[X] \rightarrow \mathbb{C} $ from the coordinate ring of the variety $\mathbb{C}[X] $ onto $\mathbb{C} $ and the kernel of this map is the maximal ideal $\mathfrak{m}_P $ of $\mathbb{C}[X] $ consisting of all functions vanishing in P. Equivalently, we can view the point $P= \mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P $ as the scheme corresponding to the quotient $\mathbb{C}[X]/\mathfrak{m}_P $. Call this the 0-th formal neighborhood of the point P. This sounds pretty useless, but let us now consider higher-order formal neighborhoods.
Call the affine scheme $\mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P^{n+1} $ the n-th formal neighborhood of P, then the first neighborhood, that is with coordinate ring $\mathbb{C}[X]/\mathfrak{m}_P^2 $ gives us tangent-information. Alternatively, it gives the best linear approximation of functions near P. The second neighborhood $\mathbb{C}[X]/\mathfrak{m}_P^3 $ gives us the best quadratic approximation of functions near P, etc. etc. These successive quotients by powers of the maximal ideal $\mathfrak{m}_P $ form a system of algebra epimorphisms $\ldots \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n+1}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} \rightarrow \ldots \ldots \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{2}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P} = \mathbb{C} $ and its inverse limit $\underset{\leftarrow}{lim}~\frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} = \hat{\mathcal{O}}_{X,P} $ is the completion of the local ring in P and contains all the infinitesimal information (to any order) of the variety X in a neighborhood of P. That is, this completion $\hat{\mathcal{O}}_{X,P} $ contains all information that P can see of the variety X. In case P is a smooth point of X, then X is a manifold in a neighborhood of P and then this completion $\hat{\mathcal{O}}_{X,P} $ is isomorphic to the algebra of formal power series $\mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ where the $x_i $ form a local system of coordinates for the manifold X near P. Right, after this lengthy recollection, back to our question what does the monster see of the modular group?
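(To see this machinery in the simplest possible case, take $X = \mathbb{A}^1 $ with coordinate ring $\mathbb{C}[x] $ and let P be the origin, so $\mathfrak{m}_P = (x) $. Then $\frac{\mathbb{C}[x]}{\mathfrak{m}_P^{n+1}} = \{ a_0 + a_1 x + \ldots + a_n x^n \} $ consists of the degree-n Taylor polynomials at P, and the inverse limit $\underset{\leftarrow}{lim}~\frac{\mathbb{C}[x]}{\mathfrak{m}_P^{n}} = \mathbb{C}[[ x ]] = \hat{\mathcal{O}}_{\mathbb{A}^1,0} $ is the ring of formal power series, recording the full Taylor expansion.)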
Well, we have an algebra epimorphism $\pi~:~\mathbb{C} PSL_2(\mathbb{Z}) \rightarrow \mathbb{C} \mathbb{M} $ and in analogy with the commutative case, all information the Monster can gain from the modular group is contained in the $\mathfrak{m} $-adic completion $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} = \underset{\leftarrow}{lim}~\frac{\mathbb{C} PSL_2(\mathbb{Z})}{\mathfrak{m}^n} $ where $\mathfrak{m} $ is the kernel of the epimorphism $\pi $ sending the two free generators of the modular group $PSL_2(\mathbb{Z}) = C_2 \ast C_3 $ to the permutations g and h determined by the dessin of the heptagonal tiling of the Monster’s empire. As it is a hopeless task to determine the Monster-empire explicitly, it seems even more hopeless to determine the kernel $\mathfrak{m} $ let alone the completed algebra… But, (surprise) we can compute $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} $ as explicitly as in the commutative case we have $\hat{\mathcal{O}}_{X,P} \simeq \mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ for a point P on a manifold X. Here are the details: the quotient $\mathfrak{m}/\mathfrak{m}^2 $ has a natural structure of $\mathbb{C} \mathbb{M} $-bimodule. The group-algebra of the monster is a semi-simple algebra, that is, a direct sum of full matrix-algebras of sizes corresponding to the dimensions of the irreducible monster-representations. That is, $\mathbb{C} \mathbb{M} \simeq \mathbb{C} \oplus M_{196883}(\mathbb{C}) \oplus M_{21296876}(\mathbb{C}) \oplus \ldots \ldots \oplus M_{258823477531055064045234375}(\mathbb{C}) $ with exactly 194 components (the number of irreducible Monster-representations).
For any $\mathbb{C} \mathbb{M} $-bimodule $M $ one can form the tensor-algebra $T_{\mathbb{C} \mathbb{M}}(M) = \mathbb{C} \mathbb{M} \oplus M \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus \ldots \ldots $ and applying the formal neighborhood theorem for formally smooth algebras (such as $\mathbb{C} PSL_2(\mathbb{Z}) $) due to Joachim Cuntz and Daniel Quillen we have an isomorphism of algebras $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} \simeq \widehat{T_{\mathbb{C} \mathbb{M}}(\mathfrak{m}/\mathfrak{m}^2)} $ where the right-hand side is the completion of the tensor-algebra (at the unique graded maximal ideal) of the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $, so we’d better describe this bimodule explicitly. Okay, so what’s a bimodule over a semisimple algebra of the form $S=M_{n_1}(\mathbb{C}) \oplus \ldots \oplus M_{n_k}(\mathbb{C}) $? Well, a simple S-bimodule must be either (1) a factor $M_{n_i}(\mathbb{C}) $ with all other factors acting trivially or (2) the full space of rectangular matrices $M_{n_i \times n_j}(\mathbb{C}) $ with the factor $M_{n_i}(\mathbb{C}) $ acting on the left, $M_{n_j}(\mathbb{C}) $ acting on the right and all other factors acting trivially. That is, any S-bimodule can be represented by a quiver (that is a directed graph) on k vertices (the number of matrix components) with a loop in vertex i corresponding to each simple factor of type (1) and a directed arrow from i to j corresponding to every simple factor of type (2). That is, for the Monster, the bimodule $\mathfrak{m}/\mathfrak{m}^2 $ is represented by a quiver on 194 vertices and now we only have to determine how many loops and arrows there are at or between vertices.
Using Morita equivalences and standard representation theory of quivers it isn’t exactly rocket science to determine that the number of arrows between the vertices corresponding to the irreducible Monster-representations $S_i $ and $S_j $ is equal to $dim_{\mathbb{C}}~Ext^1_{\mathbb{C} PSL_2(\mathbb{Z})}(S_i,S_j)-\delta_{ij} $ Now, I’ve been wasting a lot of time already here explaining what representations of the modular group have to do with quivers (see for example here or some other posts in the same series) and for quiver-representations we all know how to compute Ext-dimensions in terms of the Euler-form applied to the dimension vectors. Right, so for every Monster-irreducible $S_i $ we have to determine the corresponding dimension-vector $~(a_1,a_2;b_1,b_2,b_3) $ for the quiver $\xymatrix{ & & & & \vtx{b_1} \\ \vtx{a_1} \ar[rrrru]^(.3){B_{11}} \ar[rrrrd]^(.3){B_{21}} \ar[rrrrddd]_(.2){B_{31}} & & & & \\ & & & & \vtx{b_2} \\ \vtx{a_2} \ar[rrrruuu]_(.7){B_{12}} \ar[rrrru]_(.7){B_{22}} \ar[rrrrd]_(.7){B_{23}} & & & & \\ & & & & \vtx{b_3}} $ Now the dimensions $a_i $ are the dimensions of the +/-1 eigenspaces for the order 2 element g in the representation and the $b_i $ are the dimensions of the eigenspaces for the order 3 element h. So, we have to determine to which conjugacy classes g and h belong, and from Wilson’s paper mentioned above these are classes 2B and 3B in standard Atlas notation. So, for each of the 194 irreducible Monster-representations we look up the character values at 2B and 3B (see below for the first batch of those) and these together with the dimensions determine the dimension vector $~(a_1,a_2;b_1,b_2,b_3) $. For example take the 196883-dimensional irreducible. Its 2B-character is 275 and the 3B-character is 53. 
So we are looking for a dimension vector such that $a_1+a_2=196883, a_1-275=a_2 $ and $b_1+b_2+b_3=196883, b_1-53=b_2=b_3 $ giving us for that representation the dimension vector of the quiver above $~(98579,98304,65663,65610,65610) $. Okay, so for each of the 194 irreducibles $S_i $ we have determined a dimension vector $~(a_1(i),a_2(i);b_1(i),b_2(i),b_3(i)) $, then standard quiver-representation theory asserts that the number of loops in the vertex corresponding to $S_i $ is equal to $dim(S_i)^2 + 1 - a_1(i)^2-a_2(i)^2-b_1(i)^2-b_2(i)^2-b_3(i)^2 $ and that the number of arrows from vertex $S_i $ to vertex $S_j $ is equal to $dim(S_i)dim(S_j) - a_1(i)a_1(j)-a_2(i)a_2(j)-b_1(i)b_1(j)-b_2(i)b_2(j)-b_3(i)b_3(j) $ This data then determines completely the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $ and hence the structure of the completion $\widehat{\mathbb{C} PSL_2}_{\mathfrak{m}} $ containing all information the Monster can gain from the modular group. But then, one doesn’t have to go for the full regular representation of the Monster. Any faithful permutation representation will do, so we might as well go for the one of minimal dimension. That one is known to correspond to the largest maximal subgroup of the Monster which is known to be a two-fold extension $2.\mathbb{B} $ of the Baby-Monster. The corresponding permutation representation is of dimension 97239461142009186000 and decomposes into Monster-irreducibles $S_1 \oplus S_2 \oplus S_4 \oplus S_5 \oplus S_9 \oplus S_{14} \oplus S_{21} \oplus S_{34} \oplus S_{35} $ (in standard Atlas-ordering) and hence repeating the arguments above we get a quiver on just 9 vertices! The actual numbers of loops and arrows (I forgot to mention this, but the quivers obtained are actually symmetric) obtained were found after laborious computations mentioned in this post and the details I’ll make available here.
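The counting in the last few paragraphs is easy to script. A sketch (assuming, as in the worked example, rational character values at 2B and 3B, so the two nontrivial eigenspaces of the order three element have equal dimension; the function names are mine):

```python
def dim_vector(d, chi2B, chi3B):
    """Dimension vector (a1, a2; b1, b2, b3) of a Monster irreducible of
    dimension d: a1 + a2 = d with a1 - a2 = chi(2B), and b1 + b2 + b3 = d
    with b2 = b3 and b1 - b2 = chi(3B)."""
    a1 = (d + chi2B) // 2
    b2 = (d - chi3B) // 3
    return (a1, d - a1, b2 + chi3B, b2, b2)

def loops(d, dv):
    """Number of loops at the vertex of an irreducible with dim-vector dv."""
    a1, a2, b1, b2, b3 = dv
    return d * d + 1 - a1**2 - a2**2 - b1**2 - b2**2 - b3**2

def arrows(di, dvi, dj, dvj):
    """Number of arrows between the vertices of two distinct irreducibles."""
    return di * dj - sum(p * q for p, q in zip(dvi, dvj))

# the 196883-dimensional irreducible: chi(2B) = 275, chi(3B) = 53
print(dim_vector(196883, 275, 53))  # (98579, 98304, 65663, 65610, 65610)
```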
Anyone who can spot a relation between the numbers obtained and any other part of mathematics will obtain quantities of genuine (i.e. non-Inbev) Belgian beer…
I am reading a book and I'm trying to understand the concept of quasi Fermi levels. For example, A steady state of electron-hole pairs are created at the rate of $10^{13}\ \mathrm{cm}^{-3}$ per $\mu$s in a sample of silicon. The equilibrium concentration of electrons in the sample is $n_0 = 10^{14}\ \mathrm{cm}^{-3}$. Also, it gives $\tau_n = \tau_p = 2\ \mu\mathrm{s}$. I am not sure what this is but I think this is the average recombination time. The result is that the new levels of carrier concentrations (under the described steady state) are $n = 2.0 \times 10^{14}$ ($n_0 = 1.0 \times 10^{14}$) $p = 2.0 \times 10^{14}$ ($p_0 = 2.25 \times 10^6$) I follow until here but I get a bit confused after this. The book goes on to say that this results in two different virtual Fermi levels, which are at: $F_n-E_i = 0.233\ \mathrm{eV}$ $E_i-F_p = 0.186\ \mathrm{eV}$ The equilibrium Fermi level ($E_F$) being at $E_F-E_i=0.228\ \mathrm{eV}$ My question: Why are there two different quasi Fermi levels now created? Why do we not consider two different ones at equilibrium conditions? Why is it that due to a steady state input of electron-hole pairs we now consider two quasi Fermi levels? What is the relevance of these new quasi Fermi levels?
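The numbers can be sanity-checked. Assuming silicon at room temperature (values not stated above: $n_i = 1.5\times10^{10}\ \mathrm{cm}^{-3}$ and $kT = 0.0259\ \mathrm{eV}$), the steady-state excess concentration is $\delta n = \delta p = g\tau = 2\times10^{13}\ \mathrm{cm}^{-3}$, and the definitions $F_n-E_i = kT\ln(n/n_i)$ and $E_i-F_p = kT\ln(p/n_i)$ reproduce all three quoted energies. Note this gives $n = 1.2\times10^{14}$ and $p = 2\times10^{13}$, which are the concentrations the 0.233 and 0.186 eV values actually correspond to:

```python
import math

kT = 0.0259    # eV, thermal energy at room temperature (assumed)
ni = 1.5e10    # cm^-3, intrinsic concentration of Si at 300 K (assumed)
n0 = 1e14      # cm^-3, equilibrium electron concentration
g_op = 1e13    # cm^-3 per microsecond, generation rate
tau = 2.0      # microseconds, recombination lifetime

dn = g_op * tau   # steady-state excess carriers: 2e13 cm^-3
n = n0 + dn       # electron concentration: 1.2e14 cm^-3
p = dn            # hole concentration (p0 ~ 2.25e6 is negligible)

Fn_minus_Ei = kT * math.log(n / ni)   # electron quasi-Fermi level
Ei_minus_Fp = kT * math.log(p / ni)   # hole quasi-Fermi level
EF_minus_Ei = kT * math.log(n0 / ni)  # equilibrium Fermi level

print(round(Fn_minus_Ei, 3), round(Ei_minus_Fp, 3), round(EF_minus_Ei, 3))
# 0.233 0.186 0.228
```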
The angle between two radii of a circle is known as the central angle of the circle. The two points where the radii meet the circle (their other ends meet at the center of the circle) bound a portion of the circle called the arc, whose measure is the arc length. Formula to Find the Central Angle of a Circle – Solved Examples Example 1: Find the central angle, where the arc length measures 20 cm and the radius measures 10 cm. Solution: Given r = 10 cm, arc length = 20 cm. The formula for the central angle is Central Angle $\theta$ = $\frac{Arc\;Length \times 360}{2 \times\pi \times r}$ Central Angle $\theta$ = $\frac{20 \times 360}{2 \times 3.14 \times 10}$ Central Angle $\theta$ = $\frac{7200}{62.8}$ = 114.65° Example 2: If the central angle of a circle is 82.4° and the arc length formed is 23 cm, then find the radius of the circle. Solution: Given arc length = 23 cm. The formula for the central angle is Central Angle $\theta$ = $\frac{Arc\;Length \times 360}{2\times\pi \times r}$ 82.4° = $\frac{23 \times 360}{2\times\pi \times r}$ 82.4° = $\frac{8280}{6.28\times r}$ r = $\frac{8280}{6.28\times 82.4}$ r = 16 cm
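The two worked examples can be checked with a few lines of code (using the full value of $\pi$ rather than 3.14, which shifts the first answer slightly):

```python
import math

def central_angle_deg(arc_length, radius):
    """Central angle in degrees: theta = arc_length * 360 / (2 * pi * r)."""
    return arc_length * 360 / (2 * math.pi * radius)

def radius_from_angle(arc_length, theta_deg):
    """Invert the formula to recover the radius from the central angle."""
    return arc_length * 360 / (2 * math.pi * theta_deg)

print(round(central_angle_deg(20, 10), 2))   # Example 1: 114.59 (114.65 with pi = 3.14)
print(round(radius_from_angle(23, 82.4), 1)) # Example 2: 16.0
```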
Romanyuk A. S. Estimates of some approximating characteristics of the classes of periodic functions of one and many variables Ukr. Mat. Zh. - 2019. - 71, № 8. - pp. 1102-1115 UDC 517.5 We obtain the exact-order estimates for some approximating characteristics of the classes $\mathbb{W}^{\boldsymbol{r}}_{p,\boldsymbol{\alpha}}$ and $\mathbb{B}^{\boldsymbol{r}}_{p,\theta}$ of periodic functions of one and many variables in the norm of the space $B_{\infty, 1}.$ Ukr. Mat. Zh. - 2019. - 71, № 2. - pp. 147-150 Approximating characteristics of the classes of periodic multivariate functions in the space $B_{∞,1}$ Ukr. Mat. Zh. - 2019. - 71, № 2. - pp. 271-282 We obtain the exact-order estimates of the Kolmogorov widths and entropy numbers for the classes $W^{r}_{p,\alpha}$ and $B^r_{p,\theta}$ in the norm of the $B_{\infty,1}$-space. Kolmogorov widths and bilinear approximations of the classes of periodic functions of one and many variables Ukr. Mat. Zh. - 2018. - 70, № 2. - pp. 224-235 We obtain the exact order estimates for the Kolmogorov widths of the classes $W^g_p$ of periodic functions of one variable generated by the integral operators with kernels $g(x, y)$ from the Nikol’skii – Besov classes $B^r_{p,\theta}$. We also study the behavior of bilinear approximations to the classes $W^r_{p,\alpha}$ of periodic multivariate functions with bounded mixed derivative in the spaces $L_{q_1,q_2}$ for some relations between the parameters $r_1, p, q_1, q_2$. Ukr. Mat. Zh. - 2017. - 69, № 5. - pp. 579 Ukr. Mat. Zh. - 2017. - 69, № 5. - pp. 670-681 We establish the exact-order estimates for the trigonometric widths of Nikol’skii – Besov $B^r_{\infty ,\theta}$ and Sobolev $W^r_{\infty, \alpha} $ classes of periodic multivariable functions in the space $L_q,\; 1 < q < \infty$. The behavior of the linear widths of Nikol’skii – Besov $B^r_{\infty ,\theta}$ classes in the space $L_q$ is investigated for certain relations between the parameters $p$ and $q$. Ukr. Mat.
Zh. - 2016. - 68, № 10. - pp. 1403-1417 We establish the exact order estimates for the entropy numbers of the Nikol’skii – Besov classes of periodic functions of two variables in the space $L_\infty$ and use the obtained results in estimating the lower bounds for Kolmogorov, linear, and trigonometric widths. We also study the behavior of similar approximating characteristics of the classes $B_{p,θ}^r$ of periodic functions of many variables in the spaces $L_1$ and $B_{1,1}$. Estimates for the best bilinear approximations of the classes $B^r_{p,\theta}$ and singular numbers of integral operators Ukr. Mat. Zh. - 2016. - 68, № 9. - pp. 1240-1250 We obtain the exact-order estimates for the best bilinear approximations of the Nikol’skii–Besov classes $B^r_{p,\theta}$ of periodic functions of several variables. We also find the orders for singular numbers of the integral operators with kernels from the classes $B^r_{p,\theta}$. Estimation of the Entropy Numbers and Kolmogorov Widths for the Nikol’skii–Besov Classes of Periodic Functions of Many Variables Ukr. Mat. Zh. - 2015. - 67, № 11. - pp. 1540-1556 We establish order estimates for the entropy numbers of the Nikol’skii–Besov classes $B_{p,θ}^r$ of periodic functions of many variables in the space $L_q$ with certain relations between the parameters $p$ and $q$. By using the obtained lower estimates of the entropy numbers, we establish the exact-order estimates for the Kolmogorov widths of the same classes of functions in the space $L_1$. Babenko V. F., Davydov O. V., Kofanov V. A., Parfinovych N. V., Pas'ko A. N., Romanyuk A. S., Ruban V. I., Samoilenko A. M., Shevchuk I. A., Shumeiko A. A., Timan M. P., Trigub R. M., Vakarchuk S. B., Velikin V. L. Ukr. Mat. Zh. - 2015. - 67, № 7. - pp. 995-999 Ukr. Mat. Zh. - 2014. - 66, № 7. - pp.
970–982 We establish order estimates for the linear widths of the classes $B_{p,θ}^r$ of periodic functions of many variables in the space $L_q$ for some relationships between the parameters $p, q$, and $θ$. Ukr. Mat. Zh. - 2013. - 65, № 12. - pp. 1681–1699 We obtain upper bounds for the values of the best bilinear approximations in the Lebesgue spaces of periodic functions of many variables from the Besov-type classes. In special cases, it is shown that these bounds are order exact. Ukr. Mat. Zh. - 2013. - 65, № 8. - pp. 1141-1144 International conference "Theory of approximation of functions and its applications" dedicated to the 70th birthday of the corresponding member of NASU Professor O. I. Stepanets (1942–2007) Ukr. Mat. Zh. - 2012. - 64, № 10. - pp. 1438-1440 Ukr. Mat. Zh. - 2012. - 64, № 5. - pp. 579-581 Ukr. Mat. Zh. - 2012. - 64, № 5. - pp. 685-697 We obtain exact-order estimates for the best bilinear approximations of Nikol'skii-Besov classes in the spaces of functions $L_q (\pi_{2d})$. Asymptotic estimates for the best trigonometric and bilinear approximations of classes of functions of several variables Ukr. Mat. Zh. - 2010. - 62, № 4. - pp. 536–551 We obtain exact order estimates for the best $M$-term trigonometric approximations of the Besov classes $B_{∞,θ}^r$ in the space $L_q$. We also determine the exact orders of the best bilinear approximations of the classes of functions of $2d$ variables generated by functions of $d$ variables from the classes $B_{∞,θ}^r$ with the use of translation of arguments. Ukr. Mat. Zh. - 2009. - 61, № 10. - pp. 1348-1366 We obtain exact order estimates for trigonometric and orthoprojection widths of the Besov classes $B^r_{p,θ}$ and Nikol’skii classes $H^r_p$ of periodic functions of many variables in the space $L_q$ for certain relations between the parameters $p$ and $q$. Ukr. Mat. Zh. - 2009. - 61, № 4. - pp.
513-523 Exact-order estimates are obtained for the best orthogonal trigonometric approximations of the Besov $(B_{p,θ}^r)$ and Nikol’skii $(H_p^r)$ classes of periodic functions of many variables in the metric of $L_q , 1 ≤ p, q ≤ ∞$. We also establish the orders of the best approximations of functions from the same classes in the spaces $L_1$ and $L_{∞}$ by trigonometric polynomials with the corresponding spectrum. Bogolyubov Readings-2008. International Conference "Differential Equations, Theory of Functions and Applications" (on the occasion of the 70th anniversary of academician A. M. Samoilenko) Ukr. Mat. Zh. - 2008. - 60, № 12. - pp. 1722 Ukr. Mat. Zh. - 2007. - 59, № 12. - pp. 1722-1724 Best approximations of the classes $B_{p,\,\theta}^{r}$ of periodic functions of many variables in uniform metric Ukr. Mat. Zh. - 2006. - 58, № 10. - pp. 1395–1406 We obtain estimates exact in order for the best approximations of the classes $B_{\infty,\,\theta}^{r}$ of periodic functions of two variables in the metric of $L_{\infty}$ by trigonometric polynomials whose spectrum belongs to a hyperbolic cross. We also investigate the best approximations of the classes $B_{p,\,\theta}^{r},\quad 1 \leq p < \infty$, of periodic functions of many variables in the metric of $L_{\infty}$ by trigonometric polynomials whose spectrum belongs to a graded hyperbolic cross. Ukr. Mat. Zh. - 2002. - 54, № 5. - pp. 579-580 Ukr. Mat. Zh. - 2002. - 54, № 5. - pp. 670-680 We investigate problems related to the approximation by linear methods and the best approximations of the classes $B_{p,{\theta }}^r ,\; 1 ≤ p ≤ ∞$ in the space $L_{∞}$. Estimates for Approximation Characteristics of the Besov Classes $B_{p,θ}^r$ of Periodic Functions of Many Variables in the Space $L_q$. I Ukr. Mat. Zh. - 2001. - 53, № 9. - pp.
1224-1231 We obtain order estimates for the approximation of the classes $B_{p,θ}^r$ of periodic functions of many variables in the space $L_q$ by using operators of orthogonal projection and linear operators satisfying certain conditions. Ukr. Mat. Zh. - 2001. - 53, № 7. - pp. 996-1001 We obtain an order estimate for the Kolmogorov width of the Besov classes $B_{p,{\theta }}^r$ of periodic functions of many variables in the space $L_q$ for $2 < p < q < ∞$, which complements the result obtained earlier by the author. Ukr. Mat. Zh. - 2001. - 53, № 6. - pp. 820-829 We obtain order estimates for linear widths of the Besov classes \(B_{p,{\theta }}^r\) of periodic functions of many variables in the space $L_q$ for certain values of the parameters $p$ and $q$ different from those considered in the first part of the work. Ukr. Mat. Zh. - 2001. - 53, № 5. - pp. 647-661 We obtain order estimates for linear widths of the Besov classes \(B_{p,\theta}^r\) of periodic functions of many variables in the space $L_q$ for certain values of the parameters $p$ and $q$. International conference on the theory of approximation of functions and its applications dedicated to the memory of V. K. Dzyadyk Ukr. Mat. Zh. - 1999. - 51, № 9. - pp. 1296–1297 Ukr. Mat. Zh. - 1997. - 49, № 11. - pp. 1584 On estimates of approximation characteristics of the Besov classes of periodic functions of many variables Ukr. Mat. Zh. - 1997. - 49, № 9. - pp. 1250–1261 We obtain order estimates for some approximation characteristics of the Besov classes $B_{p,\vartheta}^r$. Approximation of classes of functions of many variables by their orthogonal projections onto subspaces of trigonometric polynomials Ukr. Mat. Zh. - 1996. - 48, № 1. - pp. 80-89 In the space $L_q$, $1 < q < ∞$, we establish estimates for the orders of the best approximations of the classes of functions of many variables $B_{1,\theta}^r$ and $B_{p,\alpha}^r$. Ukr. Mat. Zh. - 1995. - 47, № 8. - pp.
1097–1111 We obtain order estimates for the best trigonometric and bilinear approximations for the classes $B_{p,\theta}^r$. On the best approximations and Kolmogorov widths of Besov classes of periodic functions of many variables Ukr. Mat. Zh. - 1995. - 47, № 1. - pp. 79–92 Order estimates are obtained for the best approximations of the classes $B_{1,\theta}^r$ in the metric of $L_q$, $q < ∞$, and of the classes $B_{\infty,\theta}^r$ in the metric of $L_{∞}$; the best approximation of the classes $B_{p,\theta}^r$, $p ≤ ∞$, in the metric of $L_{∞}$ is also studied. On Kolmogorov widths of classes $B^r_{p, \theta}$ of periodic functions of many variables with low smoothness in the space $L_q$ Ukr. Mat. Zh. - 1994. - 46, № 7. - pp. 915–926 The best trigonometric and bilinear approximations for functions of many variables from the classes $B^r_{p, \theta}$. II Ukr. Mat. Zh. - 1993. - 45, № 10. - pp. 1411–1423 The best trigonometric approximations and the Kolmogorov diameters of the Besov classes of functions of many variables Ukr. Mat. Zh. - 1993. - 45, № 5. - pp. 663–675 Ukr. Mat. Zh. - 1991. - 43, № 10. - pp. 1398–1408
It’s been a while, so let’s include a recap: a (transitive) permutation representation of the modular group $\Gamma = PSL_2(\mathbb{Z}) $ is determined by the conjugacy class of a cofinite subgroup $\Lambda \subset \Gamma $, or, equivalently, by a dessin d’enfant. We have introduced a quiver (aka an oriented graph) which comes from a triangulation of the compactification of $\mathbb{H} / \Lambda $ where $\mathbb{H} $ is the hyperbolic upper half-plane. This quiver is independent of the chosen embedding of the dessin in the Dedekind tessellation. (For more on these terms and constructions, please consult the series Modular subgroups and Dessins d’enfants). Why are quivers useful? To start, any quiver $Q $ defines a noncommutative algebra, the path algebra $\mathbb{C} Q $, which has as a $\mathbb{C} $-basis all oriented paths in the quiver, and multiplication is induced by concatenation of paths (when possible, or zero otherwise). Usually, it is quite hard to make actual computations in noncommutative algebras, but in the case of path algebras you can just see what happens. Moreover, we can also see the finite dimensional representations of this algebra $\mathbb{C} Q $. Up to isomorphism they are all of the following form: at each vertex $v_i $ of the quiver one places a finite dimensional vector space $\mathbb{C}^{d_i} $ and any arrow in the quiver [tex]\xymatrix{\vtx{v_i} \ar[r]^a & \vtx{v_j}}[/tex] determines a linear map between these vertex spaces; that is, to $a $ corresponds a matrix in $M_{d_j \times d_i}(\mathbb{C}) $. These matrices determine how the paths of length one act on the representation; longer paths act via multiplication of the matrices along the oriented path. A necklace in the quiver is a closed oriented path in the quiver, up to cyclic permutation of the arrows making up the cycle. That is, we are free to choose the start (and end) point of the cycle.
For example, in the one-cycle quiver [tex]\xymatrix{\vtx{} \ar[rr]^a & & \vtx{} \ar[ld]^b \\ & \vtx{} \ar[lu]^c &}[/tex] the basic necklace can be represented as $abc $ or $bca $ or $cab $. How does a necklace act on a representation? Well, the matrix-multiplication of the matrices corresponding to the arrows gives a square matrix in each of the vertices in the cycle. Though the dimensions of this matrix may vary from vertex to vertex, what does not change (and hence is a property of the necklace rather than of the particular choice of cycle) is the trace of this matrix. That is, necklaces give complex-valued functions on representations of $\mathbb{C} Q $ and by a result of Artin and Procesi there are enough of them to distinguish isoclasses of (semi)simple representations! That is, linear combinations of necklaces (aka super-potentials) can be viewed, after taking traces, as complex-valued functions on all representations (similar to character functions). In physics, one views these functions as potentials and one is then interested in the points (representations) where this function is extremal (minimal): the vacua. Clearly, this does not make much sense in the complex case, but it is relevant when we look at the real case (where we look at skew-Hermitian matrices rather than all matrices). A motivating example (the Yang-Mills potential) is given in Example 2.3.2 of Victor Ginzburg’s paper Calabi-Yau algebras. Let $\Phi $ be a super-potential (again, a linear combination of necklaces); then our commutative intuition tells us that extrema correspond to zeroes of all partial differentials $\frac{\partial \Phi}{\partial a} $ where $a $ runs over all coordinates (in our case, the arrows of the quiver).
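As a quick sanity check on the trace claim, here is a numerical sketch of my own (not from the post; the vertex dimensions and matrices are arbitrary random choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, d3 = 2, 3, 4                  # vertex dimensions (arbitrary)
A = rng.standard_normal((d2, d1))     # arrow a : v1 -> v2
B = rng.standard_normal((d3, d2))     # arrow b : v2 -> v3
C = rng.standard_normal((d1, d3))     # arrow c : v3 -> v1

# The necklace abc yields a square matrix at each vertex of the cycle;
# the matrices have different sizes, but tr(XY) = tr(YX) makes all
# three traces agree, so the trace is a function of the necklace itself.
t1 = np.trace(C @ B @ A)   # 2x2 matrix at v1
t2 = np.trace(A @ C @ B)   # 3x3 matrix at v2
t3 = np.trace(B @ A @ C)   # 4x4 matrix at v3
assert np.allclose([t1, t2], [t2, t3])
print(t1, t2, t3)
```

The same computation works for any cycle length and any choice of vertex dimensions.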
One can make sense of differentials of necklaces (and super-potentials) as follows: the partial differential with respect to an arrow $a $ occurring in a term of $\Phi $ is defined by deleting each occurrence of $a $ in the necklaces (defining $\Phi $) in turn, and rearranging terms, using the cyclic property of necklaces, to obtain the broken necklace starting just after the deleted arrow. An example: for the cyclic quiver above let us take as super-potential $abcabc $ (2 cyclic turns); then, for example, $\frac{\partial \Phi}{\partial b} = cabca+cabca = 2 cabca $ (the first term corresponds to the first occurrence of $b $, the second to the second). Okay, but then the vacua-representations will be the representations of the quotient-algebra (which I like to call the vacualgebra) $\mathcal{U}(Q,\Phi) = \frac{\mathbb{C} Q}{(\partial \Phi/\partial a, \forall a)} $ which in ‘physically relevant settings’ (whatever that means…) turn out to be Calabi-Yau algebras. But, let us return to the case of subgroups of the modular group and their quivers. Do we have a natural super-potential in this case? Well yes: the quiver encodes a triangulation of the compactification of $\mathbb{H}/\Lambda $, and if we choose an orientation it turns out that all ‘black’ triangles (with respect to the Dedekind tessellation) have their arrow-sides defining a necklace, whereas for the ‘white’ triangles the reverse orientation makes the arrow-sides into a necklace. Hence, it makes sense to look at the cubic super-potential $\Phi $ given by the sum over all triangle-sides-necklaces, with a +1-coefficient for the black triangles and a -1-coefficient for the white ones.
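The cyclic differentiation rule is easy to implement for a single necklace written as a word, one letter per arrow; this is a toy sketch of mine, not code from the post:

```python
def cyclic_derivative(necklace: str, arrow: str) -> list[str]:
    """Formal partial derivative of a necklace word with respect to an
    arrow: for each occurrence of the arrow, delete it and read off the
    remaining arrows cyclically, starting just after the deleted one."""
    terms = []
    for i, ch in enumerate(necklace):
        if ch == arrow:
            terms.append(necklace[i + 1:] + necklace[:i])
    return terms

# The example from the text: for Phi = abcabc, dPhi/db = cabca + cabca.
print(cyclic_derivative("abcabc", "b"))   # ['cabca', 'cabca']
```

Both occurrences of $b $ contribute the same broken necklace $cabca $, reproducing $\frac{\partial \Phi}{\partial b} = 2cabca $.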
Let’s consider an index three example from a previous post [tex]\xymatrix{& & \rho \ar[lld]_d \ar[ld]^f \ar[rd]^e & \\ i \ar[rrd]_a & i+1 \ar[rd]^b & & \omega \ar[ld]^c \\ & & 0 \ar[uu]^h \ar@/^/[uu]^g \ar@/_/[uu]_i &}[/tex] In this case the super-potential coming from the triangulation is $\Phi = -aid+agd-cge+che-bhf+bif $ and therefore we have a noncommutative algebra $\mathcal{U}(Q,\Phi) $ associated to this index 3 subgroup. Contrary to what I believed at the start of this series, the algebras one obtains in this way from dessins d’enfants are far from being Calabi-Yau (in whatever definition). For example, using a GAP program written by Raf Bocklandt, I’ve checked that the growth rate of the above algebra is similar to that of $\mathbb{C}[x] $, so in this case $\mathcal{U}(Q,\Phi) $ can be viewed as a noncommutative curve (with singularities). However, this is not the case for all such algebras. For example, the vacualgebra associated to the second index three subgroup (whose fundamental domain and quiver were depicted at the end of this post) has growth rate similar to that of $\mathbb{C} \langle x,y \rangle $… I have an outlandish conjecture about the growth behavior of all algebras $\mathcal{U}(Q,\Phi) $ coming from dessins d’enfants: the algebra sees what the monodromy representation of the dessin sees of the modular group (or of the third braid group). I can make this more precise, but perhaps it is wiser to calculate one or two further examples…
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
The usual argument to show that the group of all orientation-preserving symmetries of the Klein quartic is the simple group $L_2(7)$ of order $168$ goes like this: There are two families of $7$ truncated cubes on the Klein quartic. The triangles of one of the seven truncated cubes in the first family have as centers the dots, all of the same colour. The triangles of one of the truncated cubes in the second family correspond to the squares, all of the same colour. If you compare the two colour schemes, you’ll see that every truncated cube in the first family is disjoint from precisely $3$ truncated cubes in the second family. That is, we can identify the truncated cubes of the first family with the points in the Fano plane $\mathbb{P}^2(\mathbb{F}_2)$, and those of the second family with the lines in that plane. The Klein quartic consists of $24$ regular heptagons, so its rotation symmetry group consists of $24 \times 7 = 168$ rotations, each preserving the two families of truncated cubes. This is exactly the number of isomorphisms of the Fano plane, $PGL_3(\mathbb{F}_2) = L_2(7)$. Done! For more details, check John Baez’ excellent page on the Klein quartic, or the Buckyball curve post. Here’s another ‘look-and-see’ proof, starting from Klein’s own description of his quartic. Look at the rotation $g$, counter-clockwise with angle $2\pi / 7$, fixing the center of the central blue heptagon, and a similar rotation $h$ fixing the center of one of the neighbouring red heptagons. The two vertices of the edge shared by the blue and red heptagon are fixed by $g.h$ and $h.g$, respectively, so these rotations must have order three (there are $3$ heptagons meeting in each vertex). That is, the rotation symmetry group $G$ of the Klein quartic has order $168$, and contains two elements $g$ and $h$ of order $7$, such that the subgroup generated by them contains elements of order $3$.
This is enough to prove that $G$ must be simple and therefore isomorphic to $L_2(7)$! The following elegant proof is often attributed to Igor Dolgachev. If $G$ isn’t simple, there is a maximal normal subgroup $N$ with $G/N$ simple. The only non-cyclic simple group having fewer elements than $168$ is $A_5$, but this cannot be $G/N$ as $60$ does not divide $168$. So, $G/N$ must be cyclic of order $2,3$ or $7$ (the only prime divisors of $168=2^3.3.7$). Order $2$ is not possible as any group $N$ of order $84=2^2.3.7$ can have just one Sylow $7$-subgroup. Remember that the number of $7$-Sylows of $N$ must divide $2^2.3=12$ and must be equal to $1$ modulo $7$. And $G$ (and therefore $N$) has at least two different cyclic subgroups of order $7$. Order $3$ is impossible as this would imply that the normal subgroup $N$ of order $2^3.7=56$ must contain all $7$-Sylows of $G$, and thus also an element of order $3$. But, $3$ does not divide $56$. Order $7$ is a bit more difficult to exclude. This would mean that there is a normal subgroup $N$ of order $2^3.3=24$. $N$ (being normal) must contain all Sylow $2$-subgroups of $G$, of which there are either $1$ or $3$ (the number of $2$-Sylows divides $3$ and is odd). If there is just one $S$, it would be a normal subgroup of $G$ with $G/S$ (of order $21$) having a unique (normal) Sylow $7$-subgroup, but then $G$ would have a normal subgroup of index $3$, which we have excluded. The three $2$-Sylows are conjugated and so the conjugation morphism \[ G \rightarrow S_3 \] is non-trivial and must have image strictly larger than $C_3$ (otherwise, $G$ would have a normal subgroup of index $3$), so must be surjective. But, $S_3$ has a normal subgroup of index $2$ and, pulling this back, $G$ must also have a normal subgroup of index two, which we have excluded. Done!
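The counting steps above rest on elementary Sylow constraints, which a short script of mine (not part of the proof) can confirm:

```python
def sylow_counts(group_order: int, p: int) -> list[int]:
    """Possible numbers of Sylow p-subgroups: divisors of the p'-part
    of the group order that are congruent to 1 mod p."""
    m = group_order
    while m % p == 0:       # strip off the p-part, leaving |G| / p^k
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

assert sylow_counts(84, 7) == [1]      # order 84 forces a unique 7-Sylow
assert sylow_counts(168, 7) == [1, 8]  # in G itself: one or eight 7-Sylows
assert sylow_counts(24, 2) == [1, 3]   # order 24: one or three 2-Sylows
print("Sylow counts match the argument")
```
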
Consider two systems (1 and 2) in thermal contact, such that \[N_2 \gg N_1, \qquad E_2 \gg E_1, \qquad N = N_1 + N_2, \qquad E = E_1 + E_2, \qquad \text{dim}(x_2) \gg \text{dim}(x_1)\] and the total Hamiltonian is just \(H (x) = H_1 (x_1) + H_2 (x_2) \). Since system 2 is infinitely large compared to system 1, it acts as an infinite heat reservoir that keeps system 1 at a constant temperature \(T\) without gaining or losing an appreciable amount of heat itself. Thus, system 1 is maintained at canonical conditions, \(N, V, T\). The full partition function \(\Omega (N, V, E )\) for the combined system is the microcanonical partition function \[\Omega(N,V,E) = \int dx \delta(H(x)-E) = \int dx_1 dx_2 \delta (H_1(x_1) + H_2(x_2)-E)\] Integrating over the reservoir variables defines the distribution function of system 1: \[ f(x_1) = \int dx_2 \delta (H_1(x_1)+ H_2(x_2)-E)\] Taking the logarithm and expanding to first order about \(H_1(x_1) = 0\) (legitimate since \(E_1 \ll E_2\)) gives \[ \ln f(x_1) = \ln \int dx_2 \delta (H_1(x_1) + H_2(x_2) - E)\] \[\ln f (x_1) \approx \ln \int dx_2 \delta (H_2(x_2)-E) + H_1(x_1) \frac {\partial }{ \partial H_1 (x_1)} \ln \int dx_2 \delta (H_1(x_1) + H_2(x_2) - E) \Big\vert _{H_1(x_1)=0}\] \[= \ln \int dx_2 \delta (H_2(x_2)-E) -H_1(x_1) \frac {\partial}{\partial E} \ln \int dx_2 \delta (H_2(x_2)-E)\] where, in the last line, the differentiation with respect to \(H_1\) is replaced by differentiation with respect to \(E\) (the delta function depends only on the combination \(H_1(x_1) - E\)). Note that \[ \ln \int dx_2 \delta (H_2(x_2)-E) = \frac {S_2 (E)}{k}\] \[ \frac {\partial}{\partial E} \ln \int dx_2 \delta (H_2(x_2)-E) = \frac {\partial}{\partial E} \frac {S_2(E)}{k} = \frac {1}{kT}\] where \(T\) is the common temperature of the two systems.
Using these two facts, we obtain \[\ln f (x_1) \approx \frac {S_2 (E)}{k} - \frac {H_1 (x_1)}{kT} \] \[f (x_1) \approx e^{\frac {S_2(E)}{k}}e^{\frac {-H_1(x_1)}{kT}} \] Thus, the distribution function of the canonical ensemble is \[f(x) \propto e^{\frac {-H(x)}{kT}} \] The normalization of the distribution function is the integral: \[\int dxe^{\frac {-H(x)}{kT}} \equiv Q(N,V,T)\] Adding the ad hoc quantum corrections to the classical result gives \[ Q(N,V,T) = \frac {1}{N!h^{3N}} \int dx e^{-\beta H(x)}\] The Helmholtz free energy is then defined as \[ A (N, V, T ) = - \frac {1}{\beta} \ln Q (N, V, T ) \] To see that this must be the definition of \(A (N, V, T ) \), recall the definition of \(A\): \[ A = E - TS = \langle H (x) \rangle - TS \] But we saw that \[ S = - \left ( \frac {\partial A}{\partial T } \right ) _{N,V} \] Substituting this in, and noting that \[\frac {\partial A}{\partial T} = \frac {\partial A}{\partial \beta} \frac {\partial \beta }{\partial T} = - \frac {1}{kT^2} \frac {\partial A}{\partial \beta} \] it follows that \[ A = \langle H(x) \rangle - \beta \frac {\partial A}{\partial \beta}\] This is a simple differential equation that can be solved for \(A\). We will show that the solution is \[ A = - \frac {1}{\beta} \ln Q (\beta)\] Note that \[ \beta \frac {\partial A}{\partial \beta} = \frac {1}{\beta} \ln Q (\beta) - \frac {1}{Q} \frac {\partial Q}{\partial \beta} = \langle H(x)\rangle - A\] Substituting in gives, therefore, \[ A = \langle H(x)\rangle - \left( \langle H(x)\rangle - A \right) = A\] so this form of \(A\) satisfies the differential equation. Other thermodynamic quantities follow: Average energy: \[E = \langle H(x)\rangle = \frac {1}{Q} C_N \int dx H(x) e^{-\beta H(x)} = - \frac {\partial}{\partial \beta} \ln Q(N,V,T)\] Pressure: \[ P = -\left ( \frac {\partial A}{\partial V} \right )_{N,T} = kT \left ( \frac {\partial \ln Q (N,V,T)}{\partial V} \right )_{N,T}\] Entropy: \[ S = - \frac {\partial A}{\partial T} = - \frac {\partial A}{\partial \beta} \frac {\partial \beta}{\partial T} = \frac {1}{kT^2} \frac {\partial A}{\partial \beta} = k \beta^2 \frac {\partial}{\partial \beta} \left( -\frac {1}{\beta} \ln Q(N,V,T)\right ) = -k \beta \frac {\partial \ln Q}{\partial \beta} + k \ln Q = k \beta E + k \ln Q = k \ln Q + \frac {E}{T}\]
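These relations can be sanity-checked on a solvable example. The following is my own numerical sketch using the classical one-dimensional harmonic oscillator, with the unit choices \(k = \hbar\omega = 1\) (the test values are mine, not from the notes):

```python
import math

# Classical 1D harmonic oscillator: Q = 1/(beta*hbar*omega), so with
# hbar*omega = 1 we have ln Q = -ln(beta).
def lnQ(beta):
    return -math.log(beta)

beta = 0.7
h = 1e-6
E = -(lnQ(beta + h) - lnQ(beta - h)) / (2 * h)  # E = -d(ln Q)/d(beta)
A = -lnQ(beta) / beta                           # A = -(1/beta) ln Q
S = lnQ(beta) + beta * E                        # S = k ln Q + E/T, k = 1
T = 1.0 / beta

assert abs(E - 1.0 / beta) < 1e-5               # equipartition: E = kT
assert abs(A - (E - T * S)) < 1e-5              # A = E - T S holds
print(A, E, S)
```

Both checks pass at machine-difference precision, consistent with the derivation above.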
I am reading this paper entitled "Quantum algorithm for linear systems of equations" and am trying to understand a portion of the algorithm described on page 2 and in more detail in the appendix starting at the bottom of page 10 (section 3. Phase Estimation calculations). Suppose we have a Hermitian matrix $A$ of dimension $n$ and a vector $b$ of size $n$, and denote by $|u_j \rangle$ the eigenvectors of $A$, which are also eigenvectors of $e^{iAt}$, and by $\lambda_j$ the corresponding eigenvalues. Ultimately this algorithm aims to find $x$ such that $Ax = b$. Assuming we have access to the state $\left|\Psi_{0} \right\rangle = \sqrt{\frac {2} {T} } \sum_{\tau = 0} ^{T - 1} \sin \left(\frac {\pi(\tau + 1/2)} {T}\right) |\tau \rangle$, the algorithm instructs us to apply $\sum_{\tau = 0} ^{T - 1} |\tau \rangle \langle \tau |\otimes e^{iA \tau t_0/T}$ to $|\Psi_{0} \rangle |b \rangle$, the former being referred to as a conditional Hamiltonian evolution. The register $C$ initially contains $|\Psi_0 \rangle $, while the register $I$ contains $|b \rangle$. The authors then write "assume the target state is $|u_j \rangle$, so this becomes simply the conditional phase $|\Psi_{\lambda_j t_0} \rangle = \sqrt{\frac {2} {T} } \sum_{\tau = 0} ^{T - 1} e^{\frac {i \lambda_j \tau t_{0}} {T}} \sin \left(\frac {\pi(\tau + 1/2)} {T}\right) |\tau \rangle |u_j\rangle$", where now we have a superposition of $|\Psi_0 \rangle$ and the result of applying $e^{iA\tau t_{0}/T}$ to the eigenvector $|u_j \rangle$. My questions are: Are we not applying the conditional Hamiltonian evolution to $|\Psi_0 \rangle |b \rangle$? I know that $b$ can be decomposed mathematically as $b= c_1u_1 + \cdots + c_nu_n$ since these eigenvectors form an orthonormal basis. Why only consider the effect on $|u_j \rangle$? What is going on with the $\sum_{\tau = 0} ^{T - 1} |\tau \rangle \langle \tau|$ portion of the sum in the conditional Hamiltonian evolution? In this answer here, it is described "[...] a control part.
It means that the operation will be controlled by the state of the first quantum register (the register C as the exponent tells us)". I understand the "gist" of this statement but any reference to understand it more fully would be much appreciated. Edit: Is $\sum_{\tau = 0} ^{T - 1} |\tau \rangle \langle \tau|$ not simply the identity matrix of dimension $T$? In that case, the result after tensoring is a block matrix with $e^{iA\tau t_{0}/T}$ on the main diagonal, and $0_n$ elsewhere. Do I have this right? After this step, a Fourier transform is applied to the register $C$, but at this point what is contained in register $C$? The state $|\Psi_{\lambda_j t_0} \rangle $ seems to describe the superposition of registers $C,I$. I would be happy to edit/add any information that can help decipher this. Thanks
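Not an answer, but to make the block-diagonal picture in the edit above concrete, here is a toy numerical sketch of mine (the values of \(T\), \(t_0\), and the diagonal Hermitian \(A\) are arbitrary):

```python
import numpy as np

Tn, t0 = 4, 1.3                        # T control states, arbitrary t0
lam = np.array([1.0, 2.0])             # eigenvalues of a 2x2 diagonal A

def U_tau(tau):                        # e^{i A tau t0 / T} for diagonal A
    return np.diag(np.exp(1j * lam * tau * t0 / Tn))

# sum_tau |tau><tau| (x) e^{iA tau t0/T}: block diagonal, one block
# per control value tau (NOT of the form I (x) U, since tau varies).
U = np.zeros((2 * Tn, 2 * Tn), dtype=complex)
for tau in range(Tn):
    proj = np.zeros((Tn, Tn))
    proj[tau, tau] = 1.0
    U += np.kron(proj, U_tau(tau))

# Acting on |tau>|u_j> just multiplies by the phase e^{i lam_j tau t0/T}:
j, tau = 1, 3
state = np.kron(np.eye(Tn)[tau], np.eye(2)[j])
out = U @ state
phase = np.exp(1j * lam[j] * tau * t0 / Tn)
assert np.allclose(out, phase * state)
print("each |tau>|u_j> picks up its own phase")
```
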
Exploring Vecten's Wreaths with Linear Algebra Introduction What follows is Long Huynh Huu's development of a result by Hirotaka Ebisui and Thanos Kalogerakis, reported elsewhere earlier. The problem deals with an extension of Vecten's construction of squares on the sides of a triangle and then adding another layer of squares and so on. Hirotaka Ebisui has shown that, if the first layer of squares starts at a right triangle, then a strange identity pops up: Hirotaka Ebisui has named his result the Pythagorean Fivefold Theorem, of which I was kindly informed by Thanos Kalogerakis who observed in private correspondence that the next layer of squares appears to obey the regular Pythagorean relation. Long Huynh Huu took it from there. RE: A Discovery of Hirotaka Ebisui And Thanos Kalogerakis Long Huynh Huu 12 November 2017 Findings It suffices to know the progression of polygons which are the convex hull of every second layer. In the following image you see them in green: Every square shares a side with one polygon. The initial triangle does not need to be a right triangle. In fact we will see that $\textbf{any}$ linear relation between $A,B,C$ will hold for the subsequent triples of squares: $\begin{align} &\lambda_1 A + \lambda_2 B + \lambda_3 C = 0 \\\Rightarrow& \lambda_1 A'' + \lambda_2 B'' + \lambda_3 C'' = 0 \\\Rightarrow& \lambda_1 A'''' + \lambda_2 B'''' + \lambda_3 C'''' = 0 \\\Rightarrow& ... \end{align}$ and similarly $\begin{align} &\mu_1 A' + \mu_2 B' + \mu_3 C' = 0 \\\Rightarrow& \mu_1 A''' + \mu_2 B''' + \mu_3 C''' = 0 \\\Rightarrow& ... \end{align}$ The sizes of the squares grow according to a recursion formula. Proof So let's dive in. The green polygons have six sides which I represent as vectors/complex numbers: $a,b,c,x,y,z \in \mathbb{C}$. 
For a given green polygon, its $\textbf{successor}$ $(a',b',c',x',y',z')$ is given by the following construction: Without too much headache, we can compute this successor in two steps (just as we constructed the squares in two steps): $\textbf{L}_1:$ The intermediate (blue) polygon in above figure has sides $(x,y,z)$ and $(a',b',c')$: $\begin{align} a' =& iy + a - iz \\ b' =& iz + b - ix \\ c' =& ix + c - iy \\ \end{align}$ Therefore $\begin{pmatrix} a'\\b'\\c'\\x\\y\\z \end{pmatrix} = \underset{L_1}{ \underbrace{ \left[ \begin{array}{cccccc} 1 & 0 & 0 & 0 & i & -i \\ 0 & 1 & 0 & i & -i & 0 \\ 0 & 0 & 1 & -i & 0 & i \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{array} \right]}} \begin{pmatrix} a\\b\\c\\x\\y\\z \end{pmatrix}$ $\textbf{L}_2:$ We get from the intermediate polygon to the outer polygon via $\begin{align} x' =& ib' + x -ic' \\ y' =& ic' + y -ia' \\ z' =& ia' + z -ib' \\ \end{align}$ Therefore $ \begin{pmatrix} a'\\b'\\c'\\x'\\y'\\z' \end{pmatrix} = \underset{L_2}{ \underbrace{ \left[ \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & -i & i & 1 & 0 & 0 \\ -i & i & 0 & 0 & 1 & 0 \\ i & 0 & -i & 0 & 0 & 1 \\ \end{array} \right]}} \begin{pmatrix} a'\\b'\\c'\\x\\y\\z \end{pmatrix}$ Hence to get the outer polygon, we need to compose $L_2$ and $L_1$ and apply this to $(a,b,c,x,y,z)^t$. 
Before we continue let's define some matrices which will make the following computations easy: $\begin{align} I :=& \begin{pmatrix} 1&0&0\\0&1&0\\0&0&1 \end{pmatrix} \\ E :=& \begin{pmatrix} 1&1&1\\1&1&1\\1&1&1 \end{pmatrix} \\ T :=& \begin{pmatrix} 0&i&-i\\i&-i&0\\-i&0&i \end{pmatrix} \\ -T^2 =& 3I-E \end{align}$ Now we define our $\textbf{successor matrix}:$ $ S := L_2L_1 = \begin{pmatrix} I&0\\-T&I \end{pmatrix} \begin{pmatrix} I&T\\0&I \end{pmatrix} = \begin{pmatrix} I&T\\-T&4I-E \end{pmatrix}$ As we start with a $\textbf{triangle},$ the first polygon has $a_0 = b_0 = c_0 = 0 \text{ and } x_0+y_0+z_0 = 0.$ The $n^{th}$ outer polygon is $S^n(0,0,0,x_0,y_0,z_0)^t =: (a_n,b_n,c_n,x_n,y_n,z_n)^t$. One thing you can observe when computing examples is that $w^n := (x_n,y_n,z_n)^t$ and $w^0$ are collinear, and so are $v^n := (a_n,b_n,c_n)^t$ and $v^1.$ Claim There exist sequences $\{\lambda_n\}, \{\mu_n\}$ such that $\begin{pmatrix} v^n\\w^n \end{pmatrix} = \begin{pmatrix} \lambda_n v^1 \\ \mu_n w^0 \end{pmatrix}$ for $n \geq 0$. The sequences are $\textbf{independent of the initial triangle}$ $(x_0,y_0,z_0)$ and are given by $ \begin{pmatrix} \lambda_0 \\ \mu_0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad \begin{pmatrix} \lambda_{n+1} \\ \mu_{n+1} \end{pmatrix} = \begin{pmatrix} \lambda_n + \mu_n \\ 3\lambda_n + 4\mu_n \end{pmatrix}$ Proof Clearly $\begin{pmatrix} \lambda_0 v^1 \\ \mu_0 w^0 \end{pmatrix} = \begin{pmatrix} v^0 \\ w^0 \end{pmatrix}$ Note that $Ew^0 = 0$, as the components of $w^0$ add up to zero, and that $v^1 = Tw^0$. The following induction step then verifies the recursion formula: $\begin{align} S\begin{pmatrix}\lambda_n v^1\\\mu_n w^0 \end{pmatrix} &= \begin{pmatrix} \lambda_n v^1 + \mu_n T w^0 \\ -\lambda_n Tv^1 + 4\mu_n w^0 - \mu_n E w^0 \end{pmatrix}\\ &= \begin{pmatrix} \lambda_n v^1 + \mu_n v^1 \\ -\lambda_n T^2w^0 + 4\mu_n w^0 \end{pmatrix}\\ &= \begin{pmatrix} (\lambda_n + \mu_n) v^1 \\ (3\lambda_n + 4\mu_n) w^0 \end{pmatrix}.
\end{align}$ This proves my second finding. As a corollary, we can conclude the third one too. For example, $|w^n_1|^2 + |w^n_2|^2 - |w^n_3|^2 = \mu_n^2 (|x_0|^2+|y_0|^2-|z_0|^2),$ so that $|x_0|^2+|y_0|^2 - |z_0|^2 = 0 \Rightarrow |w^n_1|^2 + |w^n_2|^2 - |w^n_3|^2 = 0$ and $ |v^n_1|^2 + |v^n_2|^2 - 5|v^n_3|^2 = \lambda_n^2 (|a_1|^2+|b_1|^2-5|c_1|^2),$ so that $|a_1|^2+|b_1|^2 - 5|c_1|^2 = 0 \Rightarrow |v^n_1|^2 + |v^n_2|^2 - 5|v^n_3|^2=0.$
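The matrix identities and the recursion are easy to verify numerically; the following sketch is my own (the 3-4-5 starting triangle is an arbitrary choice of right triangle):

```python
import numpy as np

I3 = np.eye(3)
E = np.ones((3, 3))
T = 1j * np.array([[0, 1, -1], [1, -1, 0], [-1, 0, 1]])
assert np.allclose(-T @ T, 3 * I3 - E)        # -T^2 = 3I - E

S = np.block([[I3, T], [-T, 4 * I3 - E]])     # successor matrix

x0, y0 = 3.0, 4.0j                            # legs of a 3-4-5 triangle
w0 = np.array([x0, y0, -(x0 + y0)])           # the sides sum to zero
state = np.concatenate([np.zeros(3, dtype=complex), w0])

lam, mu = 0, 1
for n in range(5):
    state = S @ state
    lam, mu = lam + mu, 3 * lam + 4 * mu
vn, wn = state[:3], state[3:]

assert np.allclose(wn, mu * w0)               # w^n collinear with w^0
assert np.allclose(vn, lam * (T @ w0))        # v^n collinear with v^1 = T w^0
# The Pythagorean relation |x|^2 + |y|^2 - |z|^2 = 0 propagates:
assert abs(abs(wn[0])**2 + abs(wn[1])**2 - abs(wn[2])**2) < 1e-6
print(lam, mu)
```
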
I thought I was done writing about this topic, but it just keeps coming back. The internet just cannot seem to leave this sort of problem alone: "oomfies solve this" (pic.twitter.com/0RO5zTJjKk — em ♥︎ (@pjmdolI), July 28, 2019). I don't know what it is about expressions of the form \(a\div b(c+d)\) that fascinates us as a species, but fascinate it does. I've written about this before (as well as why "PEMDAS" is terrible), but the more I've thought about it, the more sympathy I've found with those in the minority of the debate, and as a result my position has evolved somewhat. So I'm going to go out on a limb, and claim that the answer should be \(1\). Before you walk away shaking your head and saying "he's lost it, he doesn't know what he's talking about", let me assure you that I'm obviously not denying the left-to-right convention for how to do explicit multiplication and division. Nobody's arguing that.* Rather, there's something much more subtle going on here. What we may be seeing here is evidence of a mathematical "language shift". It's easy to forget that mathematics did not always look as it does today, but has arrived at its current form through very human processes of invention and revision. There's an excellent page by Jeff Miller that catalogues the earliest recorded uses of symbols like the operations and the equals sign -- symbols that seem timeless, symbols we take for granted every day. People also often don't realize that this process of invention and revision still happens to this day. The modern notation for the floor function is a great example that was only developed within the last century. Even today on the internet, you occasionally see discussions in which people debate how mathematical notation can be improved. (I'm still holding out hope that my alternative notation for logarithms will one day catch on.) Of particular note is the evolution of grouping symbols.
We usually think only of parentheses (as well as their variations like square brackets and curly braces) as denoting grouping, but an even earlier symbol used to group expressions was the vinculum -- a horizontal bar found over or under an expression. Consider the following expression: \[3-(1+2)\] If we wrote the same expression with a vinculum, it would look like this: \[3-\overline{1+2}\] Vincula can even be stacked: \[13-\overline{\overline{1+2}\cdot 3}=4\] This may seem like a quaint way of grouping, but it does in fact survive in our notation for fractions and radicals! You can even see both uses in the quadratic formula: \[x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}\] Getting back to the original problem, what I think we're seeing is evidence that concatenation -- placing symbols next to each other with no sort of explicit symbol -- has become another way to represent grouping. "But wait", you might say, "concatenation is used to represent multiplication, not grouping!" That's certainly true in many cases, for example in how we write polynomials. However, there are a few places in mathematics that provide evidence that there's more to it than that. First of all, as a beautifully-written Twitter thread by EnchantressOfNumbers (@EoN_tweets) points out, we use concatenation to show a special importance of grouping when we write out certain trigonometric expressions without putting their arguments in parentheses. Consider the following identity: \[\sin 4u=2\sin 2u\cos 2u\] When we write such an equation, we're saying that not only do \(4u\) and \(2u\) represent multiplications, but that this grouping is so tight that they constitute the entire arguments of the sine and cosine functions. In fact, the space between \(\sin 2x\) and \(\cos 2x\) can also be seen as a somewhat looser form of concatenation. Then again, so does the space between \(\sin\) and \(x\), which represents a different thing -- the connection of a function to its argument.
Perhaps this is why the popular (and amazing) online graphing calculator Desmos is only so permissive when it comes to parsing concatenation. An even more curious case is mixed numbers. When writing mixed numbers, concatenation actually stands for addition, not multiplication. \[3\tfrac{1}{2}=3+\tfrac{1}{2}\] In fact, concatenation actually makes addition come before multiplication when we multiply mixed numbers! \[3\tfrac{1}{2}\cdot 5\tfrac{5}{6}=(3+\tfrac{1}{2})\cdot(5+\tfrac{5}{6})=20\tfrac{5}{12}\] Now, you may feel that this example shows how mixed numbers are an inelegance in mathematical notation (and I would agree with you). Even so, I argue that this is evidence that we fundamentally view concatenation as a way to represent grouping. It just so happens that, since multiplication takes precedence over addition anyway in the absence of other grouping symbols, we use concatenation when we write it. This all stems from a sort of "laziness" in how we write things -- laying out precedence rules allows us to avoid writing parentheses, and once we've established those precedence rules, we don't even need to write out the multiplication at all. So how does the internet's favorite math problem fit into all this? The most striking feature of the expression \(8\div 2(2+2)\) is that it's written all in one line. Mathematical typesetting is difficult. LaTeX is powerful, but has a steep learning curve, though various other editors have made it a bit easier, such as Microsoft Word's Equation Editor (which has much improved since I first used it!). Calculators have also recognized this difficulty, which is why TI calculators now have MathPrint templates (though their entry is quite clunky compared to Desmos's "as-you-type" formatting via MathQuill). Even so, all of these input methods exist in very specific applications. What about when you're writing an email? Or sending a text? Or a Facebook message?
(If you're wondering "who the heck writes about math in a Facebook message", the answer at least includes "students who are trying to study for a test".) The evolution of these sorts of media has led to the importance of one-line representations of mathematics with easily-accessible symbols. When you don't have the ability (or the time) to neatly typeset a fraction, you're going to find a way to use the tools you've got. And that's even more important as we realize that everybody can (and should!) engage with mathematics, not just mathematicians or educators. So that might explain why a physics student might type "hbar = h / 2pi", and others would know that this clearly means \(\hbar=\dfrac{h}{2\pi}\) rather than \(\hbar=\dfrac{h}{2}\pi\). Remember, mathematics is not about just answer-getting. It's about communication of those ideas. And when the medium of communication limits how those ideas can be represented, the method of communication often changes to accommodate it. What the infamous problem points out is that while almost nobody has laid out any explicit rules for how to deal with concatenation, we seem to have developed some implicit ones, which we use without thinking about them. We just never had to deal with them until recently, as more "everyday" people communicate mathematics on more "everyday" media. Perhaps it's time that we address this convention explicitly and admit that concatenation really has become a way to represent grouping, just like parentheses or the vinculum. This is akin to taking a more descriptivist, rather than prescriptivist, approach to language: all we would be doing is recognizing that this is already how we do things everywhere else. Of course, this would throw a wrench in PEMDAS, but that just means we'd need to actually talk about the mathematics behind it rather than memorizing a silly mnemonic.
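For concreteness, the two competing readings of the infamous expression differ only in where the implicit parentheses go. A quick sketch in Python, using nothing beyond plain arithmetic:

```python
# The two readings of 8 ÷ 2(2+2), made explicit with parentheses.

# If concatenation is ordinary multiplication (strict left-to-right
# PEMDAS), the expression parses as (8 / 2) * (2 + 2):
loose = (8 / 2) * (2 + 2)

# If concatenation binds tighter than division (concatenation as
# grouping), it parses as 8 / (2 * (2 + 2)):
tight = 8 / (2 * (2 + 2))

print(loose)  # 16.0
print(tight)  # 1.0
```

Both results are "correct" relative to their convention, which is exactly the point: the disagreement is about parsing, not arithmetic.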
After all, as inane as these internet math problems can be, they've shown that (whether they admit it or not) people really do want to get to the bottom of mathematics, to truly understand it. I'd say that's a good thing. * If your argument for why the answer is \(16\) starts with "Well, \(2(2+2)\) means \(2\cdot(2+2)\), so...", then you have missed the point entirely.
In many cases, the symmetry of a molecule provides a great deal of information about its quantum states, even without a detailed solution of the Schrödinger equation. A geometrical transformation which turns a molecule into an indistinguishable copy of itself is called a symmetry operation. A symmetry operation can consist of a rotation about an axis, a reflection in a plane, an inversion through a point, or some combination of these. The Ammonia Molecule We shall introduce the concepts of symmetry and group theory by considering a concrete example -- the ammonia molecule NH\(_3\). In any symmetry operation on NH\(_3\), the nitrogen atom remains fixed but the hydrogen atoms can be permuted in 3! = 6 different ways. The axis of the molecule is called a \(C_3\) axis, since the molecule can be rotated about it into 3 equivalent orientations, \(120^\circ\) apart. More generally, a \(C_n\) axis has n equivalent orientations, separated by \(2\pi/n\) radians. The axis of highest symmetry in a molecule is called the principal axis. Three mirror planes, designated \(\sigma_1,\sigma_2,\sigma_3\), run through the principal axis in ammonia. These are designated as \(\sigma_v\) or vertical planes of symmetry. Ammonia belongs to the symmetry group designated \(C_{3v}\), characterized by a three-fold axis with three vertical planes of symmetry. Let us designate the orientation of the three hydrogen atoms in Figure \(\PageIndex{1}\) as {1, 2, 3}, reading in clockwise order from the bottom.
Figure \(\PageIndex{1}\): Two views of the ammonia molecule.
A counterclockwise rotation by \(120^\circ\), designated by the operator \(C_3\), produces the orientation {2, 3, 1}. A second counterclockwise rotation, designated \(C_3^2\), produces {3, 1, 2}. Note that two successive counterclockwise rotations by \(120^\circ\) is equivalent to one clockwise rotation by \(120^\circ\), so the last operation could also be designated \(C_3^{-1}\).
The three reflection operations \(\sigma_1,\sigma_2,\sigma_3\), applied to the original configuration {1, 2, 3}, produce {1, 3, 2}, {3, 2, 1} and {2, 1, 3}, respectively. Finally, we must include the identity operation, designated E, which leaves an orientation unchanged. The effects of the six possible operations of the symmetry group \(C_{3v}\) can be summarized as follows: \[E\{1,2,3\}=\{1,2,3\} \qquad C_3\{1,2,3\}=\{2,3,1\}\] \[C_3^2\{1,2,3\}=\{3,1,2\} \qquad \sigma_1\{1,2,3\}=\{1,3,2\}\] \[\sigma_2\{1,2,3\}=\{3,2,1\} \qquad \sigma_3\{1,2,3\}=\{2,1,3\}\] We have thus accounted for all 6 possible permutations of the three hydrogen atoms. The successive application of two symmetry operations is equivalent to some single symmetry operation. For example, applying \(C_3\), then \(\sigma_1\), to our starting orientation, we have \[\sigma_1 C_3\{1,2,3\}=\sigma_1\{2,3,1\}=\{2,1,3\}\] But this is equivalent to the single operation \(\sigma_3\). This can be represented as an algebraic relation among symmetry operators \[\sigma_1 C_3=\sigma_3\] Note that successive operations are applied in the order right to left when represented algebraically. For the same two operations in reversed order, we find \[C_3 \sigma_1 \{1,2,3\} = C_3 \{1,3,2\} = \{3,2,1\} = \sigma_2 \{1,2,3\}\] Thus symmetry operations do not, in general, commute \[A B \not\equiv B A \label{1}\] although they may commute, for example, \(C_3\) and \(C_3^2\). The algebra of the group \(C_{3v}\) can be summarized by the following multiplication table.
\[\begin{matrix} & 2^{nd} & E & C_3 & C_3^2 & \sigma_1 &\sigma_2 &\sigma_3 \\ 1^{st} & & & & & & & \\ E & &E &C_3 &C_3^2 &\sigma_1 &\sigma_2 &\sigma_3 \\ C_3& &C_3 &C_3^2 &E &\sigma_3 &\sigma_1 &\sigma_2 \\ C_3^2& & C_3^2 &E &C_3 &\sigma_2 &\sigma_3 &\sigma_1 \\ \sigma_1& &\sigma_1 &\sigma_2 &\sigma_3 &E &C_3 &C_3^2 \\ \sigma_2 & &\sigma_2 &\sigma_3 &\sigma_1 &C_3^2 &E &C_3 \\ \sigma_3 & &\sigma_3 &\sigma_1 &\sigma_2 &C_3 &C_3^2 &E \end{matrix}\] (Here the row label is the operation applied first and the column label the operation applied second; with this convention the entry in row \(C_3\), column \(\sigma_1\), is \(\sigma_1 C_3=\sigma_3\), consistent with the calculation above.) Notice that each operation occurs once and only once in each row and each column. Group Theory In mathematics, a group is defined as a set of h elements \(\mathcal{G} \equiv \{G_1,G_2...G_h\}\) together with a rule for combination of elements, which we usually refer to as a product. The elements must fulfill the following four conditions. (1) The product of any two elements of the group is another element of the group. That is, \(G_iG_j=G_k\) with \(G_k\in\mathcal{G}\). (2) Group multiplication obeys an associative law, \(G_i(G_jG_k)=(G_iG_j)G_k\equiv G_iG_jG_k\). (3) There exists an identity element E such that \(EG_i=G_iE=G_i\) for all i. (4) Every element \(G_i\) has a unique inverse \(G_i^{-1}\), such that \(G_iG_i^{-1}=G_i^{-1}G_i=E\) with \(G_i^{-1}\in\mathcal{G}\). The number of elements h is called the order of the group. Thus \(C_{3v}\) is a group of order 6. A set of quantities which obeys the group multiplication table is called a representation of the group. Because of the possible noncommutativity of group elements [cf. Eq (1)], simple numbers are not always adequate to represent groups; we must often use matrices. The group \(C_{3v}\) has three irreducible representations, or IR's, which cannot be broken down into simpler representations. A trivial, but nonetheless important, representation of any group is the totally symmetric representation, in which each group element is represented by 1. The multiplication table then simply reiterates that \(1\times 1=1\).
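The multiplication table can be checked mechanically by treating each operation as a permutation of the orientation {1, 2, 3}. A sketch in Python, using the text's right-to-left convention for products (the dictionary keys are ad hoc labels, not standard notation):

```python
# The six C3v operations as permutations of the hydrogen orientation,
# matching the actions listed in the text (e.g. C3{1,2,3} = {2,3,1}).
ops = {
    "E":    lambda t: (t[0], t[1], t[2]),
    "C3":   lambda t: (t[1], t[2], t[0]),   # C3{1,2,3} = {2,3,1}
    "C3^2": lambda t: (t[2], t[0], t[1]),
    "s1":   lambda t: (t[0], t[2], t[1]),   # sigma_1{1,2,3} = {1,3,2}
    "s2":   lambda t: (t[2], t[1], t[0]),
    "s3":   lambda t: (t[1], t[0], t[2]),
}

start = (1, 2, 3)

def compose(a, b):
    """Product a*b, applied right to left: first b, then a."""
    result = ops[a](ops[b](start))
    # identify the single operation giving the same result
    return next(name for name, f in ops.items() if f(start) == result)

print(compose("s1", "C3"))   # s3  (sigma_1 C_3 = sigma_3)
print(compose("C3", "s1"))   # s2  (C_3 sigma_1 = sigma_2: noncommutative)

# closure: every product is again one of the six operations
assert all(compose(a, b) in ops for a in ops for b in ops)
```

Running all 36 products reproduces the multiplication table entry by entry.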
For \(C_{3v}\) this is called the \(A_1\) representation: \[A_1: E=1,C_3=1,C_3^2=1,\sigma_1=1,\sigma_2=1,\sigma_3=1 \label{2}\] A slightly less trivial representation is \(A_2\): \[A_2: E=1,C_3=1,C_3^2=1,\sigma_1=-1,\sigma_2=-1,\sigma_3=-1 \label{3}\] Much more exciting is the E representation, which requires \(2\times 2\) matrices: \[ E= \begin{pmatrix} 1 &0 \\0 &1 \end{pmatrix} \qquad C_3=\begin{pmatrix} -1/2 &-\sqrt{3}/2 \\ \sqrt{3}/2 &-1/2 \end{pmatrix} \\ C_3^2=\begin{pmatrix} -1/2 &\sqrt{3}/2 \\ -\sqrt{3}/2 &-1/2 \end{pmatrix} \qquad \sigma_1=\begin{pmatrix} -1 &0 \\0 &1 \end{pmatrix} \\ \sigma_2=\begin{pmatrix} 1/2 &-\sqrt{3}/2 \\ -\sqrt{3}/2 &-1/2 \end{pmatrix} \qquad \sigma_3=\begin{pmatrix} 1/2 &\sqrt{3}/2 \\ \sqrt{3}/2 &-1/2 \end{pmatrix} \label{4}\] The operations \(C_3\) and \(C_3^2\) are said to belong to the same class since they perform the same geometric function, but for different orientations in space. Analogously, \(\sigma_1, \sigma_2\) and \(\sigma_3\) are obviously in the same class. E is in a class by itself. The class structure of the group is designated by \(\{E,2C_3,3\sigma_v\}\). We state without proof that the number of irreducible representations of a group is equal to the number of classes. Another important theorem states that the sum of the squares of the dimensionalities of the irreducible representations of a group adds up to the order of the group. Thus, for \(C_{3v}\), we find \(1^2+1^2+2^2=6\). The trace or character of a matrix is defined as the sum of the elements along the main diagonal: \[\chi(M)\equiv\sum_kM_{kk} \label{5}\] For many purposes, it suffices to know just the characters of a matrix representation of a group, rather than the complete matrices.
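The \(2\times 2\) matrices of the E representation in Eq (4) can likewise be checked against the multiplication table; a small sketch (numpy assumed available) verifying \(\sigma_1 C_3=\sigma_3\), \(C_3\sigma_1=\sigma_2\), and the traces that serve as characters:

```python
import numpy as np

# The E representation matrices of C3v from Eq (4).
c, s = -0.5, np.sqrt(3) / 2
E    = np.eye(2)
C3   = np.array([[c, -s], [s, c]])          # rotation by 120 degrees
C3_2 = np.array([[c, s], [-s, c]])          # rotation by 240 degrees
sig1 = np.array([[-1.0, 0.0], [0.0, 1.0]])  # reflections
sig2 = np.array([[0.5, -s], [-s, -0.5]])
sig3 = np.array([[0.5, s], [s, -0.5]])

# matrix products reproduce the group multiplication table
assert np.allclose(sig1 @ C3, sig3)         # sigma_1 C_3 = sigma_3
assert np.allclose(C3 @ sig1, sig2)         # C_3 sigma_1 = sigma_2
assert np.allclose(C3 @ C3, C3_2)           # C_3 C_3 = C_3^2

# characters: the trace of one representative per class
print(np.trace(E), np.trace(C3), np.trace(sig1))  # 2.0 -1.0 0.0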
For example, the characters for the E representation of \(C_{3v}\) in Eq (4) are given by \[\chi(E)=2,\quad \chi(C_3)=-1, \quad \chi(C_3^2)=-1, \\ \chi(\sigma_1)=0, \quad \chi(\sigma_2)=0, \quad \chi(\sigma_3)=0 \label{6}\] It is true in general that the characters for all operations in the same class are equal. Thus Eq (6) can be abbreviated to \[\chi(E)=2,\quad \chi(C_3)=-1, \quad \chi(\sigma_v)=0 \label{7}\] For one-dimensional representations, such as \(A_1\) and \(A_2\), the characters are equal to the matrices themselves, so Equations \(\ref{2}\) and \(\ref{3}\) can be read as a table of characters. The essential information about a symmetry group is summarized in its character table. We display here the character table for \(C_{3v}\) \[\begin{matrix} C_{3v} &E &2C_3 &3\sigma_v & & \\\hline A_1 &1 &1 &1 &z &z^2,x^2+y^2 \\A_2 &1 &1 &-1 & & \\E &2 &-1 &0 &(x,y) &(xy,x^2-y^2),(xz,yz) \end{matrix}\] The last two columns show how the cartesian coordinates x, y, z and their products transform under the operations of the group. Group Theory and Quantum Mechanics When a molecule has the symmetry of a group \(\mathcal{G}\), this means that each member of the group commutes with the molecular Hamiltonian \[[\hat G_i,\hat H]=0 \quad i=1...h \label{8}\] where we now explicitly designate the group elements \(G_i\) as operators on wavefunctions. As was shown in Chap. 4, commuting operators can have simultaneous eigenfunctions. A representation of the group of dimension d means that there must exist a set of d degenerate eigenfunctions of \(\hat H\) that transform among themselves in accord with the corresponding matrix representation. 
For example, if the eigenvalue \(E_n\) is d-fold degenerate, the commutation conditions (Equation \(\ref{8}\)) imply that, for \(i=1...h\), \[\hat G_i \hat H \psi_{nk} = \hat H \hat G_i \psi_{nk}=E_n \hat G_i \psi_{nk} \; \text{for} \;k=1...d \label{9}\] Thus each \(\hat G_i \psi_{nk}\) is also an eigenfunction of \(\hat H\) with the same eigenvalue \(E_n\), and must therefore be represented as a linear combination of the eigenfunctions \(\psi_{nk}\). More precisely, the eigenfunctions transform among themselves according to \[\hat G_i \psi_{nk}=\sum_{m=1}^d D(G_i)_{km}\psi_{nm} \label{10}\] where \(D(G_i)_{km}\) means the \(\{k,m\}\) element of the matrix representing the operator \(\hat G_i\). The character of the identity operation E immediately shows the degeneracy of the eigenvalues of that symmetry. The \(C_{3v}\) character table reveals that NH\(_3\), and other molecules of the same symmetry, can have only nondegenerate and two-fold degenerate energy levels. The following notation for symmetry species was introduced by Mulliken: One-dimensional representations are designated either A or B. Those symmetric wrt rotation by \(2\pi/n\) about the \(C_n\) principal axis are labeled A, while those antisymmetric are labeled B. Two-dimensional representations are designated E; 3-, 4- and 5-dimensional representations are designated T, F and G, respectively. These latter cases occur only in groups of high symmetry: cubic, octahedral and icosahedral. In groups with a center of inversion, the subscripts g and u indicate even and odd parity, respectively. Subscripts 1 and 2 indicate symmetry and antisymmetry, respectively, wrt a \(C_2\) axis perpendicular to \(C_n\), or to a \(\sigma_v\) plane. Primes and double primes indicate symmetry and antisymmetry to a \(\sigma_h\) plane. For individual orbitals, the lower case analogs of the symmetry designations are used. For example, MO's in ammonia are classified \(a_1,a_2\) or e.
For ammonia and other \(C_{3v}\) molecules, there exist three species of eigenfunctions. Those belonging to the classification \(A_1\) are transformed into themselves by all symmetry operations of the group. The 1s, 2s and \(2p_z\) AO's on nitrogen are in this category. The z-axis is taken as the 3-fold axis. There are no low-lying orbitals belonging to \(A_2\). The nitrogen \(2p_x\) and \(2p_y\) AO's form a two-dimensional representation of the group \(C_{3v}\). That is to say, any of the six operations of the group transforms either one of these AO's into a linear combination of the two, with coefficients given by the matrices (4). The three hydrogen 1s orbitals transform like a \(3\times 3\) representation of the group. If we represent the hydrogens by a column vector {H1,H2,H3}, then the six group operations generate the following algebra \[\begin{matrix} E=\begin{pmatrix} 1 &0 &0 \\0 &1 &0 \\0 &0 &1 \end{pmatrix} & C_3=\begin{pmatrix} 0 &1 &0 \\0 &0 &1 \\1 &0 &0 \end{pmatrix} \\ C_3^2=\begin{pmatrix} 0 &0 &1 \\1 &0 &0 \\0 &1 &0 \end{pmatrix} & \sigma_1=\begin{pmatrix} 1 &0 &0 \\0 &0 &1 \\0 &1 &0 \end{pmatrix} \\ \sigma_2=\begin{pmatrix} 0&0 &1 \\0 &1 &0 \\1 &0 &0 \end{pmatrix} & \sigma_3=\begin{pmatrix} 0&1 &0 \\1 &0 &0 \\0 &0 &1 \end{pmatrix} \end{matrix} \label{11}\] Let us denote this representation by \(\Gamma\). It can be shown that \(\Gamma\) is a reducible representation, meaning that by some unitary transformation the \(3 \times 3\) matrices can be factorized into block-diagonal form with \(2 \times 2\) plus \(1 \times 1\) submatrices. The reducibility of \(\Gamma\) can be deduced from the character table. The characters of the matrices (Equation \(\ref{11}\)) are \[\Gamma: \qquad \chi(E)=3, \quad \chi(C_3)=0, \quad \chi(\sigma_v)=1 \label{12}\] The character of each of these permutation operations is equal to the number of H atoms left untouched: 3 for the identity, 1 for a reflection and 0 for a rotation.
The characters of \(\Gamma\) are seen to equal the sum of the characters of \(A_1\) plus E. This reducibility relation is expressed by writing \[\Gamma=A_1\oplus E \label{13}\] The three H atom 1s functions can be combined into LCAO functions which transform according to the IR's of the group. Clearly the sum \[\psi=\psi_{1s}(1)+\psi_{1s}(2)+\psi_{1s}(3) \label{14}\] transforms like \(A_1\). The two remaining linear combinations, which transform like E, must be orthogonal to (Equation \(\ref{14}\)) and to one another. One possible choice is \[\psi'=\psi_{1s}(2)-\psi_{1s}(3), \quad \psi''=2\psi_{1s}(1)-\psi_{1s}(2)-\psi_{1s}(3) \label{15}\] Now, Equation \(\ref{14}\) can be combined with the N 1s, 2s and \(2p_z\) to form MO's of \(A_1\) symmetry, while Equation \(\ref{15}\) can be combined with the N \(2p_x\) and \(2p_y\) to form MO's of E symmetry. Note that no hybridization of AO's is predetermined; it emerges automatically in the results of the computation.
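The reduction \(\Gamma=A_1\oplus E\) can also be obtained numerically from the standard character orthogonality formula \(n_i=\frac{1}{h}\sum_{\text{classes}} N\,\chi_\Gamma\,\chi_i\) (quoted here without proof), using only the character table; a minimal sketch in Python:

```python
from fractions import Fraction

# Decompose Gamma with character orthogonality:
# n_i = (1/h) * sum over classes of (class size) * chi_Gamma * chi_i.
h = 6
sizes     = [1, 2, 3]        # class sizes for {E, 2C3, 3sigma_v}
chi_Gamma = [3, 0, 1]        # characters of the 3x3 representation
chi = {
    "A1": [1, 1, 1],
    "A2": [1, 1, -1],
    "E":  [2, -1, 0],
}

mult = {
    name: Fraction(sum(n * g * c for n, g, c in zip(sizes, chi_Gamma, cs)), h)
    for name, cs in chi.items()
}
print(mult)  # A1 and E each appear once; A2 does not appear
```

The multiplicities come out as 1, 0, 1 for \(A_1\), \(A_2\), E, confirming Eq (13).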
I am trying to prove the following Lemma, which seems intuitive, but I still have doubts: Lemma Given a Brownian motion $\{W_t,\mathcal F_t:0\le t \le1\}$, two bounded processes, $\mu$ and $\sigma$, with $\sigma$ continuous and $\sigma_0\neq 0$, such that the integral $$ X_t=\int_0^t \sigma_s\, dW_s + \int_0^t \mu_s\, ds$$ exists, then $$\lim_{t\rightarrow 0}P\left(X_t >0\right)=\frac 1 2.$$ Proof Attempt (wrong) Define $g_n := \sqrt{n} X_{1/n}$ and clearly it holds $P(X_{1/n}>0)=P(g_n>0)$, for all $n\in\mathbb N$. By the time change formula for stochastic integrals (Karatzas & Shreve, Th. 3.4.8) and regular integration by substitution we have: $$ g_n = \int_0^1 \sigma_{s/n}dB_s + \frac 1 {\sqrt{n}}\int_0^1 \mu_{s/n}ds \quad\text{ a.s. }$$ where $B_s:=\sqrt{n} W_{s/n}$ is a Brownian motion. Furthermore, by bounded convergence, we have $g_n\stackrel{L^2}\longrightarrow \sigma_0 B_1$: (as noted by @sinusx, this does not work as $B_1$ depends on $n$) $$ \mathbb E\left(g_n-\sigma_0 B_1\right)^2\le\mathbb E\left(\int_0^1 (\sigma_{s/n}-\sigma_0)^2ds+\frac 1 {n}\;\left(\int_0^1\mu_{s/n}ds\right)^2\right)\rightarrow 0$$ It remains to show that $P(g_n>0)=\mathbb E(1_{g_n>0})$ converges to $\mathbb E(1_{\sigma_0 B_1>0})$ $=$ $P(\sigma_0 B_1>0)=\frac 1 2$. By bounded convergence, it suffices to show almost sure convergence for $1_{g_n>0}$: $L^2$ convergence implies almost sure convergence for $g_n$. As the indicator function is continuous everywhere except at $0$, we conclude with $P(\lim_{n\rightarrow \infty}g_n=0)=P(B_1=0)=0$. (BTW, this is the main difference between $g_n$ and $X_{1/n}$: $P(X_{1/\infty}=0)=1\neq P(\sigma_0 B_1=0)=0$.) Questions Is the Lemma true? Is the proof correct? Is there an easier proof or does it follow from some other result? Can you point me to similar results? (Meta: Is question suitable for MO?) Edit: Simpler proof, suggested in @Sinusx's answer Define $f_t := \sigma_0\frac{W_t}{\sqrt{t}}$ and $g_t := \frac{X_t}{\sqrt{t}}$.
It clearly holds $$ g_t=f_t + \frac{1}{\sqrt{t}}\int_0^t (\sigma_s - \sigma_0)\, dW_s+ \frac{1}{\sqrt{t}} \int_0^t \mu_s\, ds, $$ implying $g_t - f_t\longrightarrow 0$ in $L^2$ by bounded convergence $$ \mathbb E(g_t-f_t)^2 \le\mathbb E\left(\int_0^1 (\sigma_{st}-\sigma_0)^2ds+t\;(\sup_{s\le t}\mu_s)^2\right)\rightarrow 0, \text{ as } t\rightarrow 0 \tag{1}$$ and thus convergence in probability. Together with $f_t\sim \mathcal N(0,\sigma_0^2)$ (by the scaling property of the Brownian motion) we can apply this condition for convergence in distribution to show $g_t\stackrel{\text{(d)}}\longrightarrow \mathcal N(0,\sigma_0^2)$ as $t\rightarrow 0$. The lemma now follows from $P(X_t>0)=P(g_t>0)\rightarrow \frac{1}{2}$. PS: However, I think (1) still needs a dominating function for the $\sigma$ part.
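As a numerical sanity check of the Lemma (not a proof), one can simulate $X_t$ by an Euler-Maruyama scheme for a small $t$, with illustrative bounded coefficients such as $\sigma_s = 1 + \tfrac12\sin s$ (so $\sigma_0 = 1 \neq 0$) and $\mu_s \equiv 1$; the estimated probability should then be close to $\tfrac12$:

```python
import numpy as np

# Monte Carlo estimate of P(X_t > 0) for small t, with the
# illustrative choices sigma_s = 1 + 0.5*sin(s), mu_s = 1.
rng = np.random.default_rng(0)
t, n_steps, n_paths = 1e-4, 50, 200_000
dt = t / n_steps
s_grid = np.arange(n_steps) * dt
sigma = 1.0 + 0.5 * np.sin(s_grid)   # sigma evaluated at left endpoints
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X_t = dW @ sigma + 1.0 * t           # stochastic integral + drift integral

p = np.mean(X_t > 0)
print(p)  # close to 0.5 for small t
```

The drift shifts the probability only by roughly $\mu_0\sqrt{t}/\sigma_0$ (here about $0.004$), consistent with the claimed limit of $\tfrac12$ as $t\rightarrow 0$.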
When a manufacturer lists a chemical as ACS Reagent Grade, they must demonstrate that it conforms to specifications set by the American Chemical Society (ACS). For example, the ACS specifications for NaBr require that the concentration of iron be ≤5 ppm. To verify that a production lot meets this standard, the manufacturer collects and analyzes several samples, reporting the average result on the product's label (Figure 7.1).
Figure 7.1 Certificate of analysis for a production lot of NaBr. The result for iron meets the ACS specifications, but the result for potassium does not.
If the individual samples do not accurately represent the population from which they are drawn—what we call the target population—then even a careful analysis must yield an inaccurate result. Extrapolating this result from a sample to its target population introduces a determinate sampling error. To minimize this determinate sampling error, we must collect the right sample. Even if we collect the right sample, indeterminate sampling errors may limit the usefulness of our analysis. Equation 7.1 shows that a confidence interval about the mean, \(\bar X\), is proportional to the standard deviation, s, of the analysis \[\mu=\bar X\pm \dfrac{ts}{\sqrt n}\tag{7.1}\] where n is the number of samples and t is a statistical factor that accounts for the probability that the confidence interval contains the true value, μ. Note Each step of an analysis contributes random error that affects the overall standard deviation. For convenience, let's divide an analysis into two steps—collecting the samples and analyzing the samples—each characterized by a standard deviation.
Using a propagation of uncertainty, the relationship between the overall variance, \(s^2\), and the variances due to sampling, \(s_\textrm{samp}^2\), and the analytical method, \(s_\textrm{meth}^2\), is \[s^2=s_\textrm{samp}^2+s_\textrm{meth}^2\tag{7.2}\] Note Although equation 7.1 is written in terms of a standard deviation, s, a propagation of uncertainty is written in terms of variances, \(s^2\). In this section, and those that follow, we will use both standard deviations and variances to discuss sampling uncertainty. Equation 7.2 shows that the overall variance for an analysis may be limited by either the analytical method or the collecting of samples. Unfortunately, analysts often try to minimize the overall variance by improving only the method's precision. This is a futile effort, however, if the standard deviation for sampling is more than three times greater than that for the method.¹ Figure 7.2 shows how the ratio \(s_\textrm{samp}/s_\textrm{meth}\) affects the method's contribution to the overall variance. As shown by the dashed line, if the sample's standard deviation is 3× the method's standard deviation, then indeterminate method errors explain only 10% of the overall variance. If indeterminate sampling errors are significant, decreasing \(s_\textrm{meth}\) provides only a nominal change in the overall precision.
Figure 7.2 The blue curve shows the method's contribution to the overall variance, \(s^2\), as a function of the relative magnitude of the standard deviation in sampling, \(s_\textrm{samp}\), and the method's standard deviation, \(s_\textrm{meth}\). The dashed red line shows that the method accounts for only 10% of the overall variance when \(s_\textrm{samp} = 3 \times s_\textrm{meth}\).
Understanding the relative importance of potential sources of indeterminate error is important when considering how to improve the overall precision of the analysis. Example 7.1 A quantitative analysis gives a mean concentration of 12.6 ppm for an analyte. The method's standard deviation is 1.1 ppm and the standard deviation for sampling is 2.1 ppm.
(a) What is the overall variance for the analysis? (b) By how much does the overall variance change if we improve \(s_\textrm{meth}\) by 10% to 0.99 ppm? (c) By how much does the overall variance change if we improve \(s_\textrm{samp}\) by 10% to 1.89 ppm? Solution (a) The overall variance is \[s^2=s_\textrm{samp}^2+s_\textrm{meth}^2=\mathrm{(2.1\;ppm)^2+(1.1\;ppm)^2=5.6\;ppm^2}\] (b) Improving the method's standard deviation changes the overall variance to \[s^2=\mathrm{(2.1\;ppm)^2+(0.99\;ppm)^2=5.4\;ppm^2}\] Improving the method's standard deviation by 10% improves the overall variance by approximately 4%. (c) Changing the standard deviation for sampling \[s^2=\mathrm{(1.89\;ppm)^2+(1.1\;ppm)^2=4.8\;ppm^2}\] improves the overall variance by almost 15%. As expected, because \(s_\textrm{samp}\) is larger than \(s_\textrm{meth}\), we obtain a bigger improvement in the overall variance when we focus our attention on sampling problems. Practice Exercise 7.1 Suppose you wish to reduce the overall variance in Example 7.1 to 5.0 ppm\(^2\). If you focus on the method, by what percentage do you need to reduce \(s_\textrm{meth}\)? If you focus on the sampling, by what percentage do you need to reduce \(s_\textrm{samp}\)? Click here to review your answer to this exercise. To determine which step has the greatest effect on the overall variance, we need to measure both \(s_\textrm{samp}\) and \(s_\textrm{meth}\). The analysis of replicate samples provides an estimate of the overall variance. To determine the method's variance we analyze samples under conditions where we may assume that the sampling variance is negligible. The sampling variance is determined by difference. Note There are several ways to minimize the standard deviation for sampling. Here are two examples. One approach is to use a standard reference material (SRM) that has been carefully prepared to minimize indeterminate sampling errors. When the sample is homogeneous—as is the case, for example, with aqueous samples—a useful approach is to conduct replicate analyses on a single sample.
Example 7.2 The following data were collected as part of a study to determine the effect of sampling variance on the analysis of drug-animal feed formulations.² % Drug (w/w) % Drug (w/w) 0.0114 0.0099 0.0105 0.0105 0.0109 0.0107 0.0102 0.0106 0.0087 0.0103 0.0103 0.0104 0.0100 0.0095 0.0098 0.0101 0.0101 0.0103 0.0105 0.0095 0.0097 The data on the left were obtained under conditions where both \(s_\textrm{samp}\) and \(s_\textrm{meth}\) contribute to the overall variance. The data on the right were obtained under conditions where \(s_\textrm{samp}\) is known to be insignificant. Determine the overall variance, and the standard deviations due to sampling and the analytical method. To which factor—sampling or the method—should you turn your attention if you want to improve the precision of the analysis? Solution Using the data on the left, the overall variance, \(s^2\), is \(4.71\times10^{-7}\). (See Chapter 4 for a review of how to calculate the variance.) To find the method's contribution to the overall variance, \(s_\textrm{meth}^2\), we use the data on the right, obtaining a value of \(7.00\times10^{-8}\). The variance due to sampling, \(s_\textrm{samp}^2\), is \[s_\textrm{samp}^2=s^2-s_\textrm{meth}^2=4.71\times10^{-7}-7.00\times10^{-8}=4.01\times10^{-7}\] Converting variances to standard deviations gives \(s_\textrm{samp}\) as \(6.33\times10^{-4}\) and \(s_\textrm{meth}\) as \(2.65\times10^{-4}\). Because \(s_\textrm{samp}\) is more than twice as large as \(s_\textrm{meth}\), improving the precision of the sampling process has the greatest impact on the overall precision. Practice Exercise 7.2 A polymer's density provides a measure of its crystallinity. The standard deviation for the determination of density using a single sample of a polymer is \(1.96\times10^{-3}\;\mathrm{g/cm^3}\). The standard deviation when using different samples of the polymer is \(3.65\times10^{-2}\;\mathrm{g/cm^3}\). Determine the standard deviations due to sampling and the analytical method. Click here to review your answer to this exercise.
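The arithmetic in Examples 7.1 and 7.2 is easy to reproduce; a short Python sketch (the helper name is illustrative, not from the text):

```python
import math

# Equation 7.2: the overall variance is the sum of the sampling and
# method variances, so either component can be found by difference.
def overall_variance(s_samp, s_meth):
    return s_samp**2 + s_meth**2

# Example 7.1
print(round(overall_variance(2.1, 1.1), 1))   # 5.6 (ppm^2)
print(round(overall_variance(2.1, 0.99), 1))  # 5.4 (ppm^2)

# Example 7.2: sampling variance by difference
s2, s2_meth = 4.71e-7, 7.00e-8
s2_samp = s2 - s2_meth                        # ≈ 4.01e-7
s_samp = math.sqrt(s2_samp)                   # ≈ 6.33e-4
s_meth = math.sqrt(s2_meth)                   # ≈ 2.65e-4
print(s_samp > 2 * s_meth)                    # sampling dominates
```

Since the sampling standard deviation comes out more than twice the method's, the conclusion of Example 7.2 follows directly.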
In part V we saw how a statement Alice would like to prove to Bob can be converted into an equivalent form in the “language of polynomials” called a Quadratic Arithmetic Program (QAP). In this part, we show how Alice can send a very short proof to Bob showing she has a satisfying assignment to a QAP. We will use the Pinocchio Protocol of Parno, Howell, Gentry and Raykova. But first let us recall the definition of a QAP we gave last time: A Quadratic Arithmetic Program :math:`Q` of degree :math:`d` and size :math:`m` consists of polynomials :math:`L_1,\ldots,L_m`, :math:`R_1,\ldots,R_m`, :math:`O_1,\ldots,O_m` and a target polynomial :math:`T` of degree :math:`d`. An assignment :math:`(c_1,\ldots,c_m)` satisfies :math:`Q` if, defining :math:`L:=\sum_{i=1}^m c_i\cdot L_i, R:=\sum_{i=1}^m c_i\cdot R_i, O:=\sum_{i=1}^m c_i\cdot O_i` and :math:`P:=L\cdot R -O`, we have that :math:`T` divides :math:`P`. As we saw in Part V, Alice will typically want to prove she has a satisfying assignment possessing some additional constraints, e.g. :math:`c_m=7`; but we ignore this here for simplicity, and show how to just prove knowledge of some satisfying assignment. If Alice has a satisfying assignment it means that, defining :math:`L,R,O,P` as above, there exists a polynomial :math:`H` such that :math:`P=H\cdot T`. In particular, for any :math:`s\in\mathbb{F}_p` we have :math:`P(s)=H(s)\cdot T(s)`. Suppose now that Alice doesn’t have a satisfying assignment, but she still constructs :math:`L,R,O,P` as above from some unsatisfying assignment :math:`(c_1,\ldots,c_m)`. Then we are guaranteed that :math:`T` does not divide :math:`P`. This means that for any polynomial :math:`H` of degree at most :math:`d-2`, :math:`P` and :math:`H\cdot T` will be different polynomials. Note that :math:`P` here is of degree at most :math:`2(d-1)`, :math:`L,R,O` here are of degree at most :math:`d-1` and :math:`H` here is of degree at most :math:`d-2`.
Now we can use the famous Schwartz-Zippel Lemma that tells us that two different polynomials of degree at most :math:`2d` can agree on at most :math:`2d` points :math:`s\in\mathbb{F}_p`. Thus, if :math:`p` is much larger than :math:`2d` the probability that :math:`P(s)=H(s)\cdot T(s)` for a randomly chosen :math:`s\in\mathbb{F}_p` is very small. This suggests the following protocol sketch to test whether Alice has a satisfying assignment.

1. Alice chooses polynomials :math:`L,R,O,H` of degree at most :math:`d`.
2. Bob chooses a random point :math:`s\in\mathbb{F}_p`, and computes :math:`E(T(s))`.
3. Alice sends Bob the hidings of all these polynomials evaluated at :math:`s`, i.e. :math:`E(L(s)),E(R(s)),E(O(s)),E(H(s))`.
4. Bob checks if the desired equation holds at :math:`s`. That is, he checks whether :math:`E(L(s)\cdot R(s)-O(s))=E(T(s)\cdot H(s))`.

Again, the point is that if Alice does not have a satisfying assignment, she will end up using polynomials where the equation does not hold identically, and thus does not hold for most choices of :math:`s`. Therefore, Bob will reject with high probability over his choice of :math:`s` in such a case. The question is whether we have the tools to implement this sketch. The most crucial point is that Alice must choose the polynomials she will use, without knowing :math:`s`. But this is exactly the problem we solved in the verifiable blind evaluation protocol, that was developed in Parts II-IV. Given that we have that, there are four main points that need to be addressed to turn this sketch into a zk-SNARK. We deal with two of them here, and the other two in the next part.
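The heart of the check, evaluating the polynomial identity at a single random point, can be illustrated with ordinary, unhidden evaluation over :math:`\mathbb{F}_p` (toy parameters, and the hiding :math:`E` is omitted entirely):

```python
import random

# Toy illustration over F_p: if P = H*T the check passes at every s;
# if T does not divide P, it fails at almost every random s.
p = 2**31 - 1            # a prime (far too small for real use)

def ev(coeffs, s):
    """Evaluate a polynomial (ascending coefficient list) at s, mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * s + c) % p
    return acc

T = [-1, 0, 1]           # T(X) = X^2 - 1
H = [3, 2]               # H(X) = 2X + 3
# P_good = H*T = (2X+3)(X^2-1) = 2X^3 + 3X^2 - 2X - 3
P_good = [-3, -2, 3, 2]
P_bad  = [-3, -1, 3, 2]  # differs by X, so not divisible by T

s = random.randrange(p)
print(ev(P_good, s) == (ev(H, s) * ev(T, s)) % p)  # True
print(ev(P_bad, s) == (ev(H, s) * ev(T, s)) % p)   # almost surely False
```

Here `P_bad - H*T = X`, which vanishes only at `s = 0`, so the bad prover is caught with probability :math:`1 - 1/p`, in line with Schwartz-Zippel.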
Making sure Alice chooses her polynomials according to an assignment Here is an important point: If Alice doesn’t have a satisfying assignment, it doesn’t mean she can’t find any polynomials :math:`L,R,O,H` of degree at most :math:`d` with :math:`L\cdot R-O=T\cdot H`, it just means she can’t find such polynomials where :math:`L,R` and :math:`O` were “produced from an assignment”; namely, that :math:`L:=\sum_{i=1}^m c_i\cdot L_i, R:=\sum_{i=1}^m c_i\cdot R_i, O:=\sum_{i=1}^m c_i\cdot O_i` for the same :math:`(c_1,\ldots,c_m)`. The protocol of Part IV just guarantees she is using some polynomials :math:`L,R,O` of the right degree, but not that they were produced from an assignment. This is a point where the formal proof gets a little subtle; here we sketch the solution imprecisely. Let’s combine the polynomials :math:`L,R,O` into one polynomial :math:`F` as follows: :math:`F=L+X^{d+1}\cdot R+X^{2(d+1)}\cdot O` The point of multiplying :math:`R` by :math:`X^{d+1}` and :math:`O` by :math:`X^{2(d+1)}` is that the coefficients of :math:`L,R,O` “do not mix” in :math:`F`: The coefficients of :math:`1,X,\ldots,X^d` in :math:`F` are precisely the coefficients of :math:`L`, the next :math:`d+1` coefficients of :math:`X^{d+1},\ldots,X^{2d+1}` are precisely the coefficients of :math:`R`, and the last :math:`d+1` coefficients are those of :math:`O`. Let’s combine the polynomials in the QAP definition in a similar way, defining for each :math:`i\in \{1,\ldots,m\}` a polynomial :math:`F_i` whose first :math:`d+1` coefficients are the coefficients of :math:`L_i`, followed by the coefficients of :math:`R_i` and then :math:`O_i`. That is, for each :math:`i\in \{1,\ldots,m\}` we define the polynomial :math:`F_i=L_i+X^{d+1}\cdot R_i+X^{2(d+1)}\cdot O_i` Note that when we sum two of the :math:`F_i`’s the :math:`L_i`, :math:`R_i`, and :math:`O_i` “sum separately”. For example, :math:`F_1+F_2 = (L_1+L_2)+X^{d+1}\cdot (R_1+R_2)+X^{2(d+1)}\cdot(O_1+O_2)`.
More generally, suppose that we had :math:`F=\sum_{i=1}^mc_i\cdot F_i` for some :math:`(c_1,\ldots,c_m)`. Then we’ll also have :math:`L=\sum_{i=1}^m c_i\cdot L_i, R=\sum_{i=1}^m c_i\cdot R_i, O=\sum_{i=1}^m c_i\cdot O_i` for the same coefficients :math:`(c_1,\ldots,c_m)`. In other words, if :math:`F` is a linear combination of the :math:`F_i`’s it means that :math:`L,R,O` were indeed produced from an assignment. Therefore, Bob will ask Alice to prove to him that :math:`F` is a linear combination of the :math:`F_i`’s. This is done in a similar way to the protocol for verifiable evaluation: Bob chooses a random :math:`\beta\in\mathbb{F}^*_p`, and sends to Alice the hidings :math:`E(\beta\cdot F_1(s)),\ldots,E(\beta\cdot F_m(s))`. He then asks Alice to send him the element :math:`E(\beta\cdot F(s))`. If she succeeds, an extended version of the Knowledge of Coefficient Assumption implies she knows how to write :math:`F` as a linear combination of the :math:`F_i`’s. Adding the zero-knowledge part – concealing the assignment In a zk-SNARK Alice wants to conceal all information about her assignment. However the hidings :math:`E(L(s)),E(R(s)),E(O(s)),E(H(s))` do provide some information about the assignment. For example, given some other satisfying assignment :math:`(c'_1,\ldots,c'_m)` Bob could compute the corresponding :math:`L',R',O',H'` and hidings :math:`E(L'(s)),E(R'(s)),E(O'(s)),E(H'(s))`. If these come out different from Alice’s hidings, he could deduce that :math:`(c'_1,\ldots,c'_m)` is not Alice’s assignment. To avoid such information leakage about her assignment, Alice will conceal her assignment by adding a “random :math:`T`-shift” to each polynomial. That is, she chooses random :math:`\delta_1,\delta_2,\delta_3\in\mathbb{F}^*_p`, and defines :math:`L_z:=L+\delta_1\cdot T,R_z:=R+\delta_2\cdot T,O_z:=O+\delta_3\cdot T`. Assume :math:`L,R,O` were produced from a satisfying assignment; hence, :math:`L\cdot R-O = T\cdot H` for some polynomial :math:`H`.
As we’ve just added a multiple of :math:`T` everywhere, :math:`T` also divides :math:`L_z\cdot R_z-O_z`. Let’s do the calculation to see this: :math:`L_z\cdot R_z-O_z = (L+\delta_1\cdot T)(R+\delta_2\cdot T) - O-\delta_3\cdot T` :math:`= (L\cdot R-O) + L\cdot \delta_2\cdot T + \delta_1\cdot T\cdot R + \delta_1\delta_2\cdot T^2 - \delta_3\cdot T` :math:`=T\cdot (H+L\cdot \delta_2 + \delta_1\cdot R + \delta_1 \delta_2\cdot T - \delta_3)` Thus, defining :math:`H_z=H+L\cdot\delta_2 + \delta_1\cdot R + \delta_1\delta_2\cdot T-\delta_3`, we have that :math:`L_z\cdot R_z-O_z=T\cdot H_z`. Therefore, if Alice uses the polynomials :math:`L_z,R_z,O_z,H_z` instead of :math:`L,R,O,H`, Bob will always accept. On the other hand, these polynomials evaluated at :math:`s\in\mathbb{F}_p` with :math:`T(s)\neq 0` (which is all but :math:`d` :math:`s`’s), reveal no information about the assignment. For example, as :math:`T(s)` is non-zero and :math:`\delta_1` is random, :math:`\delta_1\cdot T(s)` is a random value, and therefore :math:`L_z(s)=L(s)+\delta_1\cdot T(s)` reveals no information about :math:`L(s)` as it is masked by this random value. What’s left for next time? We presented a sketch of the Pinocchio Protocol in which Alice can convince Bob she possesses a satisfying assignment for a QAP, without revealing information about that assignment. There are two main issues that still need to be resolved in order to obtain a zk-SNARK: In the sketch, Bob needs an HH that “supports multiplication”. For example, he needs to compute :math:`E(H(s)\cdot T(s))` from :math:`E(H(s))` and :math:`E(T(s))`. However, we have not seen so far an example of an HH that enables this. We have only seen an HH that supports addition and linear combinations. Throughout this series, we have discussed interactive protocols between Alice and Bob.
Our final goal, though, is to enable Alice to send single-message non-interactive proofs that are publicly verifiable, meaning that anybody seeing this single message proof will be convinced of its validity, not just Bob (who had prior communication with Alice). Both these issues can be resolved by the use of pairings of elliptic curves, which we will discuss in the next and final part.
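The random-:math:`T`-shift calculation a few paragraphs up (:math:`L_z\cdot R_z-O_z=T\cdot H_z`) can be sanity-checked numerically. Below is a toy Python sketch over a small prime field; the prime, the sample polynomials, and all names are our own illustrative choices, not protocol parameters:

```python
import random

# Toy check of the identity L_z*R_z - O_z = T*H_z over a small prime field.
q = 10007  # an arbitrary small prime, for illustration only

def padd(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return [(x + y) % q for x, y in zip(a, b)]

def pscale(a, k):
    return [(k % q) * x % q for x in a]

def psub(a, b):
    return padd(a, pscale(b, -1))

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % q
    return out

def trim(a):
    while a and a[-1] == 0:
        a = a[:-1]
    return a

# Choose L, R, T, H and define O so that L*R - O = T*H holds by construction.
L, R, T, H = [1, 2, 3], [4, 5], [7, 0, 1], [2, 3]
O = psub(pmul(L, R), pmul(T, H))

d1, d2, d3 = (random.randrange(1, q) for _ in range(3))
Lz = padd(L, pscale(T, d1))
Rz = padd(R, pscale(T, d2))
Oz = padd(O, pscale(T, d3))
# H_z = H + d2*L + d1*R + d1*d2*T - d3, matching the calculation in the text
Hz = psub(padd(padd(H, pscale(L, d2)), padd(pscale(R, d1), pscale(T, d1 * d2))), [d3])

assert trim(psub(pmul(Lz, Rz), Oz)) == trim(pmul(T, Hz))
```

The assertion passes for any random choice of the three deltas, mirroring why Bob always accepts the shifted polynomials.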
Jogesh C Pati Articles written in Pramana – Journal of Physics Volume 60 Issue 2 February 2003 pp 291-336 It is noted that a set of facts points to the relevance in four dimensions of conventional supersymmetric unification based minimally on a string-unified G(224) = SU(2)_L × SU(2)_R × SU(4)^c or SO(10) symmetry. Among these facts are (v) the intricate pattern of the masses and mixings of the fermions, including the smallness of V_cb and the largeness of Θ^osc_{ν_μν_τ} (suggested by SuperK), and (vi) the need for B−L as a generator to implement baryogenesis (via leptogenesis). A concrete proposal is presented within a predictive framework yielding an expected proton lifetime of less than about 10^34 years, with ν̄K^+ being the dominant decay mode, and quite possibly μ^+K^0 and e^+π^0 being prominent. This in turn strongly suggests that an improvement in the current sensitivity by a factor of five to ten ought to reveal proton decay. For comparison, some alternatives to the conventional approach to unification pursued here are mentioned at the end. Volume 62 Issue 2 February 2004 pp 513-522 Evidence in favor of supersymmetric grand unification including that based on the observed family multiplet-structure, gauge coupling unification, neutrino oscillations, baryogenesis, and certain intriguing features of quark-lepton masses and mixings is noted. It is argued that attempts to understand (a) the tiny neutrino masses (especially Δm²(ν_2 − ν_3)), (b) the baryon asymmetry of the Universe (which seems to need leptogenesis), and (c) the observed features of fermion masses such as the ratio m_b/m_τ, the smallness of V_cb and the maximality of $$\Theta _{\nu _\mu \nu _\tau }^{OSC} $$ seem to select out the route to higher unification based on an effective string-unified G(224) = SU(2)_L × SU(2)_R × SU(4)^c or SO(10) symmetry, with an expected proton lifetime of less than about 10^34 years. This in turn strongly suggests that an improvement in the current sensitivity by a factor of five to ten (compared to SuperK) ought to reveal proton decay. Implications of this prediction for the next-generation nucleon decay and neutrino-detector are noted.
1. Introduction Percentage is applied in several chapters in math (including Profit & Loss, Interest & Growth, Ratio & Proportion). Quite a few Data Interpretation questions include percentage in some form. % is the symbol used to denote percentage. For instance, $40$ percent is written as 40%. The value of this symbol $\% = \dfrac{1}{100}$. Note that $100\% = 100 \times \dfrac{1}{100} = 1$ Percentage is useful in comparing values, which can be observed in the following example. Example 1 Ram scored 33, 39 and 54 marks in Maths, Science and English tests. If the total marks for these tests were 75, 91 and 120 respectively, in which subject did he score the lowest percentage? Solution Percentage of marks in Maths $= \dfrac{33}{75} \times 100\% = 44\%$ Science $= \dfrac{39}{91} \times 100\% = 42.86\%$ English $= \dfrac{54}{120} \times 100\% = 45\%$ Answer: Science 2. Conversions 2.1 Conversion of Decimal to Percent To convert a decimal or fraction, we simply multiply it by 100 and append the '%' symbol. Example 2 Express 0.0023, 0.23 and 230 as a percentage. Solution $0.0023 = 0.0023 \times 1 = 0.0023 \times 100\% = 0.23\%$ $0.23 = 0.23 \times 100\% = 23\%$ $230 = 230 \times 100\% = 23000\%$ In short, the number written with the % symbol is 100 times the number when written without %. 2.2 Conversion of Fraction to Percent In the case of a fraction, the result can be written as an improper fraction, a mixed fraction, or an approximate decimal. Example 3 Express $\dfrac{2}{11}$ as a percentage. Solution $\dfrac{2}{11} = \dfrac{2}{11} \times 100\%$ $= \dfrac{200}{11} \%$ (Improper Fraction) $= 18\frac{2}{11} \%$ (Mixed Fraction) $\approx 18.18\%$ (Approximate Decimal) 2.3 Conversion of Ratio to Percent Ratio is a relation between two quantities. In these questions, we are typically asked to find the percentage of an item in the total. Example 4 The ratio of copper to iron in an alloy is 2 : 3. What percent of the alloy is copper?
Solution Copper content in the alloy is $2$ parts for every $5$ parts. Copper $\%$ in the alloy $= \dfrac{2}{5} \times 100\% = 40\%$ Now, it is correct to state that $40\%$ of the alloy is copper. Answer: 40% 2.4 Reconversion from Percent Conversion from percent is easy. Simply replace % with $\dfrac{1}{100}$. Example 5 Express 45% as a decimal. Solution $45\% = 45 \times \dfrac{1}{100} = \dfrac{9}{20} = 0.45$ Answer: 0.45
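The conversions above are mechanical enough to script. Here is a small Python sketch (the function names are our own) that treats % as multiplication by $\dfrac{1}{100}$ and redoes Example 1 and Example 5:

```python
from fractions import Fraction

# '%' stands for 1/100, so converting a number to percent multiplies by 100
def to_percent(x):
    return Fraction(x) * 100          # the value "in percent"

def from_percent(p):
    return Fraction(p) / 100          # replace % with 1/100

# Example 1: which subject has the lowest percentage?
marks = {"Maths": (33, 75), "Science": (39, 91), "English": (54, 120)}
pct = {s: to_percent(Fraction(m, t)) for s, (m, t) in marks.items()}
lowest = min(pct, key=pct.get)        # Science, at 300/7 % (about 42.86%)

# Example 5: 45% as a decimal
assert from_percent(45) == Fraction(9, 20)   # 0.45
```

Using `Fraction` keeps results exact, so improper fractions like $\dfrac{200}{11}\%$ survive without rounding.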
Revision as of 08:03, 24 February 2006 The Cool Solutions Wiki is a Cool Solutions Best Practices Compendium written collaboratively by its readers. You're welcome to add to articles, or start new ones. You are even free to copy, change or distribute articles. This model is called a wiki. A Wiki or wiki (pronounced "wicky", "weekee" or "veekee") is a website (or other hypertext document collection) that allows a user to add content, as on an Internet forum, but also allows that content to be edited by anybody. We will be creating OPEN CALL article topics in the Cool Wiki and instead of emailing the responses in, you can just click Edit, and add your ideas to the article itself. Occasionally we will harvest a particularly excellent or useful version of an article and run it as a regular article in Cool Solutions. This is called "freezing" an article, and it keeps that version stable by halting the change process on it. The wiki version of the article, however, can continue to be edited and modified, and will be part of that ongoing cycle of freezing content in the Cool Solutions Vault. We will also be bringing our most popular articles from the Cool Solutions Vault into the wiki so they can be updated, expanded, and modified by the community. This is called "thawing" an article.
Contents 1 How Do I Edit a Page? 2 Change Control 3 Minor edits 4 Summary field for edits 5 The wiki markup 6 New section 7 Links, URLs 8 Images, Media 9 Tables 10 Character formatting 11 Variables 12 Page protection 13 Editing using an external editor How Do I Edit a Page? Editing a Cool Wiki page is very easy. Simply click on the "Edit" tab at the top of a Wiki page. This will bring you to a page with a text box containing the editable text of that page. If you want to experiment, go try it on this sample page. When you've finished, press "Show preview" to see how your changes will look. If you're happy with what you see, then press "Save" and your changes will be immediately applied to the article. You can also click on the "Discussion" tab to see the corresponding "Talk" page, which contains comments about the page from other Cool Wiki users. Click on the "+" tab (or "Edit this page") to add a comment. Please do not vandalize the other articles in the Cool Wiki; it's a waste of time for everyone, and generally considered to be in the worst of taste. Change Control Wikis generally follow a philosophy of making it easy to fix mistakes, rather than making it hard to make them. Thus, while wikis are very open, they also provide various means to verify the validity of recent additions to the body of pages. The most prominent one on almost every wiki is the so-called Recent Changes page, which displays a list of either a specific number of recent edits or a list of all edits that have been made within a given timeframe. Some wikis allow the list to be filtered so that minor edits - or edits that have been made by automatic importing scripts ("bots") - can be excluded. From the change log, two other functions are accessible in most wikis: the revision history, which shows previous versions of the page, and the diff feature, which can highlight the changes between two revisions.
The revision history allows an editor to open and save a previous version of the page and thereby restore the original content. The diff feature can be used to decide whether this is necessary or not. A regular user of the wiki can view the diff of a change listed on the "Recent changes" page and, if it is an unacceptable edit, consult the history to restore a previous revision. Minor edits When editing a page, a logged-in user has the option of flagging the edit as a "minor edit". When to use this is somewhat a matter of personal preference. The rule of thumb is that an edit consisting of spelling corrections, formatting changes, and minor rearranging of text should be flagged as a "minor edit". A major edit is basically anything that makes the entry worth looking at again for somebody who watches the article rather closely; any "real" change counts, even if it is only a single word. This feature is important, because users can choose to hide minor edits in their view of the Recent Changes page, to keep the volume of edits down to a manageable level. The reason for not allowing a user who is not logged in to mark an edit as minor is that vandalism could then be marked as a minor edit, in which case it would stay unnoticed longer. This limitation is another reason to log in. Summary field for edits When you edit a page in the wiki, you should take a minute with each revision to provide a short summary in the Summary field about the changes you made. This significantly helps someone else to get a quick overview of the changes made to a page. It is very "un-user-friendly" to see a long list of versions without any idea what was changed in each revision unless you use the "diff" function. The wiki markup In the left column of the table below, you can see what effects are possible. In the right column, you can see how those effects were achieved. In other words, to make text look like it looks in the left column, type it in the format you see in the right column.
You may want to keep this page open in a separate browser window for reference. If you want to try out things without danger of doing any harm, you can do so in the Sandbox. Sections, paragraphs, lists and lines What it looks like What you type Start your sections with header lines: New section Subsection Sub-subsection == New section == or <h3> New section </h3> === Subsection === or <h4> Subsection </h4> ==== Sub-subsection ==== or <h5> Sub-subsection </h5> A single newline has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the diff function. But an empty line starts a new paragraph. A single newline has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the ''diff'' function. But an empty line starts a new paragraph. You can break lines without starting a new paragraph. You can break lines<br> without starting a new paragraph. * Lists are easy to do: ** start every line with a star ** more stars means deeper levels # Numbered lists are also good ## very organized ## easy to follow * You can even do mixed lists *# and nest them *#* like this ; Definition list : definition (1) : definition (2) : definition (3) A space before the colon improves parsing. A manual newline starts a new paragraph. : A colon indents a line or paragraph. A manual newline starts a new paragraph. IF a line starts with a space THEN it will be formatted exactly as typed; in a fixed-width font; lines won't wrap; ENDIF this is useful for: * pasting preformatted text; * algorithm descriptions; * program source code * ascii art; WARNING If you make it wide, you force the whole page to be wide and hence less readable. Never start ordinary lines with spaces.
IF a line starts with a space THEN it will be formatted exactly as typed; in a fixed-width font; lines won't wrap; ENDIF this is useful for: * pasting preformatted text; * algorithm descriptions; * program source code * ascii art; <center>Centered text.</center> A horizontal dividing line: above and below. Useful for separating threads on Talk pages. A horizontal dividing line: above ---- and below. Links, URLs What it looks like What you type To reference other WIKI pages, enclose the page name with double square brackets. For example Novell Industry Standards and Initiatives. To reference other WIKI pages, enclose the page name with double square brackets. For example [[Novell Industry Standards and Initiatives]]. Trivia: Link to a section on a page, e.g. Novell Industry Standards and Initiatives#Calendar (links to non-existent sections aren't really broken, they are treated as links to the page, i.e. to the top) [[Novell Industry Standards and Initiatives#Calendar]]. To have a different name displayed: To have a different name displayed: [[CoP - Open Source |Open Source Community of Practice]]. Endings are blended into the link: testing, genes Endings are blended into the link: [[test]]ing, [[gene]]s When adding a comment to a Talk page, you should sign it. You can do this by adding three tildes for your user name: or four for user name plus date/time: When adding a comment to a Talk page, you should sign it. You can do this by adding three tildes for your user name: : ~~~ or four for user name plus date/time: : ~~~~ The weather in London is a page that doesn't exist yet. Before creating a new page, check the naming conventions page for your project so that everyone correctly links to it. [[The weather in London]] is a page that doesn't exist yet. Redirect one article title to another by putting text like this (#Redirect) in its first line.
#REDIRECT [[United States]] For a special way to link to the page on the same subject in another language or on another wiki, see MediaWiki User's Guide: Interwiki linking. [[MediaWiki User's Guide: Interwiki linking]] External Links: Novell External Links: [http://www.novell.com Novell] Or just give the URL: http://www.novell.com. If a URL contains a special character, it should be converted; for example, ^ has to be written %5E (its code can be looked up in an ASCII table). Or just give the URL: http://www.novell.com. Images, Media What it looks like What you type A picture: The Novell Icon A picture: [[Image:Nov_red.gif]] or, with alternate text: [[Image:Nov_red.gif|The Novell Icon]] Web browsers render alternate text when not displaying an image -- for example, when the image isn't loaded, or in a text-only browser, or when spoken aloud. Clicking on an uploaded image displays a description page, which you can also link directly to: Image:Nov_red.gif [[:Image:Nov_red.gif]] To include links to non-image uploads such as sounds, or to images shown as links instead of drawn on the page, use a "media" link. [[media:Penguin_vs_butterfly_real.ram| Penguin Vs. Butterfly Video]] [[media:Identity_and_Provisioning_Applications_and_Management_Roadmap_rev.ppt | Identity and Provisioning Power Point]] [[media:Securing_Netware_Standard_FINAL.doc| Securing NetWare Word Document]] [[media:Pki_directory_pp.pdf | PKI PDF]] ISBN 0123456789X "What links here" and "Related changes" can be linked as: [[Special:Whatlinkshere/How to edit a page]] and [[Special:Recentchangeslinked/How to edit a page]] Tables There are two syntaxes available for table creation. Wiki has its own markup. For an example, look at the calendar on the Novell Industry Standards and Initiatives page. Some of the usual html elements are also available. To see how tables are created using html elements, look at the source for this page.
Character formatting What it looks like What you type ''Emphasize'', '''strongly''', '''''very strongly'''''. You can also write <i>italic</i> and <b>bold</b> if the desired effect is a specific font style rather than emphasis, as in mathematical formulas: :<b>F</b> = <i>m</i><b>a</b> A typewriter font for technical terms. A typewriter font for <tt>technical terms</tt>. You can use small text for captions. You can use <small>small text</small> for captions. You can strike out deleted material and underline new material. You can <strike>strike out deleted material</strike> and <u>underline new material</u>. Accented and special characters: À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü ß à á â ã ä å æ ç è é ê ë ì í î ï ñ ò ó ô œ õ ö ø ù ú û ü ÿ ¿ ¡ « » § ¶ † ‡ • £ ¤ ™ © ® ¢ € ¥ Subscript: x<sub>2</sub> Superscript: x<sup>2</sup> or x² ε<sub>0</sub> = 8.85 × 10<sup>−12</sup> C² / J m. 1 [[hectare]] = [[1 E4 m²]] Greek characters: α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ σ ς τ υ φ χ ψ ω Γ Δ Θ Λ Ξ Π Σ Φ Ψ Ω Math characters: ∫ ∑ ∏ √ − ± ∞ ≈ ∝ ≡ ≠ ≤ ≥ → × · ÷ ∂ ′ ″ ∇ ‰ ° ∴ ℵ ø ∈ ∉ ∩ ∪ ⊂ ⊃ ⊆ ⊇ ¬ ∧ ∨ ∃ ∀ ⇒ ⇔ → ↔ <i>x</i><sup>2</sup> ≥ 0 is true. <math>\sum_{n=0}^\infty \frac{x^n}{n!}</math> <nowiki>Link → (<i>to</i>) the [[FAQ]]</nowiki> <!-- comment here --> Variables Code Effect {{CURRENTMONTH}} 09 {{CURRENTMONTHNAME}} September {{CURRENTMONTHNAMEGEN}} September {{CURRENTDAY}} 23 {{CURRENTDAYNAME}} Monday {{CURRENTYEAR}} 2019 {{CURRENTTIME}} 09:21 {{NUMBEROFARTICLES}} 625 __NOTOC__ No Table of Contents __NOEDITSECTION__ Page edits only NUMBEROFARTICLES: the number of pages in the main namespace which contain a link and are not a redirect (i.e., articles, stubs containing a link, and disambiguation pages).
Page protection Editing using an external editor A web browser's text area editing capabilities can be quite constricting when making frequent or larger edits. Some browsers can be configured to allow text area editing using an external editor like vi, emacs, kwrite or wordpad.exe. Mozilla Firefox The ViewSourceWith Firefox Extension allows the use of an external editor.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only) Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon, but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$, right? For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b, c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$. Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$.
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. Now consider the case $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write G using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish among 2 cases. First, suppose that the sylow $q$-subgroup of $G/F$ acts non trivially on the sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straight-line homotopy, with the straight lines being hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get in some good masters program in math.
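The greedy word-problem procedure for a Dehn presentation described earlier in this discussion is short enough to write out. Here is a minimal Python sketch; as a stand-in rewriting system it uses free reduction in the free group on {a, b} with capitals as inverses (a toy choice of ours, not an actual Dehn presentation of a hyperbolic group):

```python
# Greedy rewriting: repeatedly find some u_i as a subword and replace it by
# the strictly shorter v_i; since each step shortens the word, this halts.
# The word represents the identity iff we reach the empty word.
def dehn_reduce(w, rules):
    changed = True
    while changed:
        changed = False
        for u, v in rules.items():
            i = w.find(u)
            if i != -1:
                w = w[:i] + v + w[i + len(u):]
                changed = True
                break
    return w

# Toy rule set: free reduction in the free group on {a, b}
rules = {"aA": "", "Aa": "", "bB": "", "Bb": ""}
assert dehn_reduce("abBA", rules) == ""    # a b b^-1 a^-1 is trivial
assert dehn_reduce("ab", rules) == "ab"    # ab is not
```

Termination is guaranteed because every rule replaces a subword by a strictly shorter one, which is exactly the $|u_i|>|v_i|$ condition in the definition.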
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. == Branches of algebraic graph theory == === Using linear algebra === The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP''(x) + (x+1)P'''(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
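The kernel computation just discussed is easy to double-check numerically. A small Python sketch (our own, with polynomials as coefficient lists, lowest degree first) applies $F(P)=xP''+(x+1)P'''$ to cubics:

```python
# Polynomials in R_3[x] as coefficient lists [a0, a1, a2, a3], lowest first.
def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def F(p):
    p2 = deriv(deriv(p))      # P''
    p3 = deriv(p2)            # P'''
    out = [0, 0, 0, 0]
    for i, c in enumerate(p2):    # x * P'' shifts coefficients up by one
        out[i + 1] += c
    for i, c in enumerate(p3):    # (x+1) * P''' = x*P''' + P'''
        out[i + 1] += c
        out[i] += c
    return out

# F vanishes exactly on polynomials of degree <= 1:
assert F([5, -2, 0, 0]) == [0, 0, 0, 0]     # P = 5 - 2x is in the kernel
assert F([0, 0, 1, 0]) == [0, 2, 0, 0]      # P = x^2 maps to 2x, not zero
```

Applying `F` to the basis $\{1, x, x^2, x^3\}$ shows the first two map to zero and the last two do not, confirming $\ker(F)=\{ax+b\}$.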
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right? For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present? Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. 
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write G using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish among 2 cases. First, suppose that the sylow $q$-subgroup of $G/F$ acts non trivially on the sylow $p$-subgroup of $F$. Then $q|(p-1) and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$. 
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math. 
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. == Branches of algebraic graph theory == === Using linear algebra === The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is not a polynomial unless $C = 0$, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
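For what it's worth, the kernel computation above can be double-checked symbolically. A quick sketch with sympy (assumed available):

```python
import sympy as sp

# Apply F(P) = x*P'' + (x+1)*P''' to a general element of R_3[x].
x, a, b, c, d = sp.symbols('x a b c d')
P = a*x**3 + b*x**2 + c*x + d
FP = sp.expand(x * sp.diff(P, x, 2) + (x + 1) * sp.diff(P, x, 3))
# FP = 6*a*x**2 + (6*a + 2*b)*x + 6*a: identically zero iff a = b = 0,
# so the kernel is exactly the polynomials c*x + d of degree at most 1.
```

The image polynomial vanishes identically exactly when $a = b = 0$, confirming $\ker(F) = \{cx + d\}$.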
Difference between revisions of "Robot Dynamics and Control" (→Chapter Summary) Revision as of 02:29, 25 July 2009 Prev: Manipulator Kinematics Chapter 4 - Robot Dynamics and Control Next: Multifingered Hand Kinematics This chapter presents an introduction to the dynamics and control of robot manipulators. We derive the equations of motion for a general open-chain manipulator and, using the structure present in the dynamics, construct control laws for asymptotic tracking of a desired trajectory. In deriving the dynamics, we will make explicit use of twists for representing the kinematics of the manipulator and explore the role that the kinematics play in the equations of motion. We assume some familiarity with dynamics and control of physical systems. Chapter Summary The following are the key concepts covered in this chapter: The equations of motion for a mechanical system with Lagrangian <amsmath>L = T(q,\dot q) - V(q)</amsmath> satisfy Lagrange's equations: <amsmath> \frac{d}{dt} \frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = \Upsilon_i,</amsmath> where <amsmath>q \in {\mathbb R}^n</amsmath> is a set of generalized coordinates for the system and <amsmath>\Upsilon \in {\mathbb R}^n</amsmath> represents the vector of generalized external forces. 
The equations of motion for a rigid body with configuration <amsmath>g(t) \in \mbox{\it SE}(3)</amsmath> are given by the Newton-Euler equations: <amsmath> \begin{bmatrix} mI & 0 \\ 0 & \cal I \end{bmatrix} \begin{bmatrix} \dot v^b \\ \dot \omega^b \end{bmatrix} + \begin{bmatrix} \omega^b \times m v^b \\ \omega^b \times {\cal I} \omega^b \end{bmatrix} = F^b,</amsmath> where <amsmath>m</amsmath> is the mass of the body, <amsmath>\cal I</amsmath> is the inertia tensor, and <amsmath>V^b = (v^b, \omega^b)</amsmath> and <amsmath>F^b</amsmath> represent the instantaneous body velocity and applied body wrench. The equations of motion for an open-chain robot manipulator can be written as <amsmath> M(\theta) \ddot\theta + C(\theta, \dot\theta) \dot\theta + N(\theta, \dot\theta) = \tau</amsmath> where <amsmath>\theta \in {\mathbb R}^n</amsmath> is the set of joint variables for the robot and <amsmath>\tau \in {\mathbb R}^n</amsmath> is the set of actuator forces applied at the joints. The dynamics of a robot manipulator satisfy the following properties: <amsmath>M(\theta)</amsmath> is symmetric and positive definite. <amsmath>\dot M - 2C \in {\mathbb R}^{n \times n}</amsmath> is a skew-symmetric matrix. An equilibrium point <amsmath>x^*</amsmath> for the system <amsmath>\dot x = f(x,t)</amsmath> is locally asymptotically stable if all solutions which start near <amsmath>x^*</amsmath> approach <amsmath>x^*</amsmath> as <amsmath>t \to \infty</amsmath>. Stability can be checked using the direct method of Lyapunov, by finding a locally positive definite function <amsmath>V(x,t) \geq 0</amsmath> such that <amsmath>-\dot V(x,t)</amsmath> is a locally positive definite function along trajectories of the system. In situations in which <amsmath>-\dot V</amsmath> is only positive semi-definite, LaSalle's invariance principle can be used to check asymptotic stability. Alternatively, the indirect method of Lyapunov can be employed by examining the linearization of the system, if it exists. 
Global exponential stability of the linearization implies local exponential stability of the full nonlinear system. Using the form and structure of the robot dynamics, several control laws can be shown to track arbitrary trajectories. Two of the most common are the computed torque control law, <amsmath> \tau = M(\theta) (\ddot \theta_d + K_v \dot e + K_p e) + C(\theta,\dot \theta) \dot \theta + N(\theta,\dot \theta),</amsmath> and an augmented PD control law, <amsmath> \tau = M(\theta) \ddot \theta_d + C(\theta, \dot \theta) \dot \theta_d + N(\theta, \dot \theta) + K_v \dot e + K_p e.</amsmath> Both of these controllers result in exponential trajectory tracking of a given joint space trajectory. Workspace versions of these control laws can also be derived, allowing end-effector trajectories to be tracked without solving the inverse kinematics problem. Stability of these controllers can be verified using Lyapunov stability.
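As a minimal illustration of the computed torque law (my own sketch, not part of the chapter), consider a hypothetical one-link arm with inertia <amsmath>M = m l^2</amsmath> and gravity term <amsmath>N = m g l \sin\theta</amsmath>; the gains and desired trajectory are made up for the example:

```python
import numpy as np

m, l, g = 1.0, 1.0, 9.81      # hypothetical single-link parameters
Kp, Kv = 100.0, 20.0          # PD gains on the error dynamics

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    # tau = M(q)*(qdd_d + Kv*ed + Kp*e) + N(q), with M = m*l^2, N = m*g*l*sin(q)
    e, ed = q_des - q, qd_des - qd
    return m * l**2 * (qdd_des + Kv * ed + Kp * e) + m * g * l * np.sin(q)

# Track q_des(t) = sin(t) from a wrong initial condition (Euler integration).
dt, q, qd = 1e-3, 0.5, 0.0
for k in range(int(5.0 / dt)):
    t = k * dt
    tau = computed_torque(q, qd, np.sin(t), np.cos(t), -np.sin(t))
    qdd = (tau - m * g * l * np.sin(q)) / (m * l**2)   # plant dynamics
    q, qd = q + dt * qd, qd + dt * qdd
# With an exact model the error obeys e'' + Kv*e' + Kp*e = 0 and decays to zero.
```

Because the model used by the controller matches the plant exactly, the tracking error follows the linear error dynamics and dies out, which is the exponential tracking property stated above.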
I would like to do the integral $$I=\int_0^{2\pi}d\phi\,\frac{\ln(e^{i\phi}+e^{-i\phi}-\frac{5}{2})}{e^{i\phi}+e^{-i\phi}-\frac{5}{2}}.$$ Numerically, we readily find that it has a specific finite value: fun = -(5/2) + E^(-I \[Phi]) + E^(I \[Phi]); NIntegrate[Log[fun]/fun, {\[Phi], 0, 2 \[Pi]}] -0.493368 - 13.1595 I Now, if we want to consider the integral analytically, we could substitute for instance $$e^{i\phi}=z~~~,~~~d\phi=\frac{-i}{z}dz$$ which leads to $$I=-i\oint_{|z|=1}\frac{\ln\left[\frac{1}{z}(z - \frac{1}{2}) (z - 2)\right]}{(z - \frac{1}{2}) (z - 2)}\,dz$$ This looks like there is a pole at $z=1/2$ within the unit circle. So I tried to get the residue: Residue[-(( I Log[((-2 + z) (-(1/2) + z))/z])/((-2 + z) (-(1/2) + z))), {z, 1/2}] which just gave back the input. Also, the 1/z term inside the logarithm seems to blow up inside the unit circle as well. This integral is confusing and does not seem to be accessible via straightforward analytical methods. Is there a way to evaluate it exactly using Mathematica?
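As a cross-check of the quoted NIntegrate value, here is a numerical sketch in Python rather than Mathematica (scipy assumed available). With the principal branch of the logarithm, the always-negative argument $2\cos\phi - 5/2$ contributes a constant $i\pi$, so the imaginary part is $\pi \int_0^{2\pi} \frac{d\phi}{2\cos\phi - 5/2} = -\frac{4\pi^2}{3} \approx -13.1595$, matching the reported value:

```python
import numpy as np
from scipy.integrate import quad

def integrand(phi):
    c = 2.0 * np.cos(phi) - 2.5      # e^{i phi} + e^{-i phi} - 5/2, always < 0
    return np.log(c + 0j) / c        # principal branch: log|c| + i*pi

re, _ = quad(lambda p: integrand(p).real, 0, 2 * np.pi)
im, _ = quad(lambda p: integrand(p).imag, 0, 2 * np.pi)
# re, im should agree with the NIntegrate result -0.493368 - 13.1595 I
```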
Part 3 in the quest for the hydrogen molecule Last time we talked about how the position and velocity of a particle are not independent properties. In fact, they are totally dependent! Unfortunately this is sometimes explained by saying that they cannot be independently ‘measured’ or that there is ‘uncertainty’. That is absolute rubbish! Quantum mechanics is the most exact and predictive theory there is. However, one cannot extract information from reality that does not exist in that reality. We have to let go of our classical tendencies, venture into the quantum world, and become enlightened. There are different ways we can postulate this, and they are all unfortunately a little magical. Usually textbooks will talk about ‘conjugate variables’ or the ‘uncertainty principle’, but I would like to try a different approach. It is equally unprovable, but I think it is a rather practical approach, and it will be fun to try to deduce the entire theory of quantum mechanics from it. However, we will also have to start doing mathematics now, so let’s get to work! Let us first focus on superpositions again. Take the case of a particle with a position in one dimension, and take a superposition like this: $$| \psi \rangle = 0.2 | x=1 \rangle + 0.9| x=2 \rangle.$$ We have said before that a state is the full description of the situation and it can be a superposition of pure states. Then any state describing the position property of a particle is a superposition of the pure states of that particle being at specific locations \(x\): $$| \psi \rangle = 0.2 | x=1 \rangle + 0.9| x=2 \rangle+0.5| x=3 \rangle+0.4| x=4 \rangle +\ldots.$$ That looks a little awkward. And we also have to take into account that \(x\) is not discrete but a continuum. So we would also have to include states like \(| x=0.5 \rangle\) and \(| x=0.999 \rangle\). I suppose it is time to do some real mathematics. 
It makes more sense to replace the sum by an integral: $$| \psi \rangle = \int_{-\infty}^{+\infty} \psi(x) |x \rangle dx,$$ where \(\psi(x)\) is just a function of \(x\). Note also that \(| x=x \rangle \) was abbreviated to \(| x \rangle \), since it is already clear that \(x\) stands for position. We can also ask the reverse question: how much of \(| \psi \rangle \) is contributed by the \(| x=1 \rangle \) state and how much by \(| x=2 \rangle\)? The answers are 0.2 and 0.9. It works much better with a formula, and we write the question “how much overlap with \(| x=1 \rangle\)?” as “\(\langle x=1|\)“, so that: $$\langle x=1| \psi \rangle = 0.2,$$ $$\langle x=2| \psi \rangle = 0.9.$$ It is very important to remember that expressions of the form \(\langle a| b \rangle\) are just a number. In fact, we could even decompose our original superposition as: $$| \psi \rangle = \langle x=1| \psi \rangle | x=1 \rangle + \langle x=2| \psi \rangle| x=2\rangle.$$ Doing the same with our continuous superposition: $$| \psi \rangle = \int_{-\infty}^{+\infty} \psi(x) |x \rangle dx = \int_{-\infty}^{+\infty} \langle x| \psi \rangle |x \rangle dx,$$ and we can also conclude that, by definition, \( \psi(x)= \langle x| \psi \rangle\). We are now ready to reveal exactly how the velocity and the position of a particle can be combined in a single property. Firstly, it is more natural to not talk about the velocity \(v\) of the particle, but about its momentum \(p\). They are very similar, since they are related by the simple relation $$p=mv,$$ where \(m\) is the mass of the particle. Now suppose we have a particle that has the property that it has momentum \(p\). We will write the state of the particle as \(|p\rangle\). As I told you, the position and the momentum are totally dependent properties, so we cannot add any further position information to it. This is just all there is! What we can do, is ask how much the state \(|p\rangle\) has in common with state \(|x\rangle\). 
Now this is the moment when we will postulate something without deriving it, namely that the overlap is: $$\langle x|p \rangle = e^{i x p / \hbar}.$$ So far, there really are many aspects of quantum mechanics that I have skipped or talked very little about, but I wanted to focus on the fundamentals. It is surprising, also to me, how much we can actually already do with the extremely limited set of concepts we have developed so far. This must mean we are close to the truth! Next time, we will derive a basic model of the hydrogen atom already!
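As a small numerical aside (my own addition, not part of the post), the postulated overlap is a pure phase, which is exactly the statement that a momentum state contains no position information:

```python
import numpy as np

# Check the postulate <x|p> = e^{i x p / hbar} on a grid: it is a pure phase,
# so |<x|p>|^2 = 1 at every x. A momentum state is spread uniformly over all
# positions, which is the "total dependence" of the two properties.
hbar = 1.0                               # natural units for the illustration
x = np.linspace(-5.0, 5.0, 101)
p = 2.0                                  # an arbitrary momentum value
overlap = np.exp(1j * x * p / hbar)      # <x|p> evaluated on the grid
prob = np.abs(overlap) ** 2              # uniform: every position equally likely
```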
In physics, a wave vector (also spelled wavevector) is a vector which helps describe a wave. Like any vector, it has a magnitude and direction, both of which are important: Its magnitude is either the wavenumber or angular wavenumber of the wave (inversely proportional to the wavelength), and its direction is ordinarily the direction of wave propagation (but not always, see below). In the context of special relativity the wave vector can also be defined as a four-vector. Definitions Wavelength of a sine wave, λ, can be measured between any two consecutive points with the same phase, such as between adjacent crests, or troughs, or adjacent zero crossings with the same direction of transit, as shown. Unfortunately, there are two common definitions of wave vector, which differ by a factor of 2π in their magnitudes. One definition is preferred in physics and related fields, while the other definition is preferred in crystallography and related fields. [1] For this article, they will be called the "physics definition" and the "crystallography definition", respectively. Physics definition A perfect one-dimensional traveling wave follows the equation: \psi(x,t) = A \cos (k x - \omega t+\varphi) where: x is position, t is time, \psi (a function of x and t) is the disturbance describing the wave (for example, for an ocean wave, \psi would be the excess height of the water, or for a sound wave, \psi would be the excess air pressure). 
A is the amplitude of the wave (the peak magnitude of the oscillation), \varphi is a "phase offset" describing how two waves can be out of sync with each other, \omega is the temporal angular frequency of the wave, describing how many oscillations it completes per unit of time, and related to the period T by the equation \omega=2\pi/T, k is the spatial angular frequency (wavenumber) of the wave, describing how many oscillations it completes per unit of space, and related to the wavelength by the equation k=2\pi/\lambda. This wave travels in the +x direction with speed (more specifically, phase velocity) \omega/k. Crystallography definition In crystallography, the same waves are described using slightly different equations. [2] In one and three dimensions respectively: \psi(x,t) = A \cos (2 \pi (k x - \nu t)+\varphi) \psi \left({\mathbf r}, t \right) = A \cos \left(2\pi({\mathbf k} \cdot {\mathbf r} - \nu t) + \varphi \right) The differences are: The frequency \nu instead of angular frequency \omega is used. They are related by 2\pi \nu=\omega. This substitution is not important for this article, but reflects common practice in crystallography. The wavenumber k and wave vector k are defined in a different way. Here, k=|{\mathbf k}| = 1/\lambda, while in the physics definition above, k=|{\mathbf k}| = 2\pi/\lambda. The direction of k is discussed below. Direction of the wave vector The direction in which the wave vector points must be distinguished from the "direction of wave propagation". The "direction of wave propagation" is the direction of a wave's energy flow, and the direction that a small wave packet will move, i.e. the direction of the group velocity. For light waves, this is also the direction of the Poynting vector. On the other hand, the wave vector points in the direction of phase velocity. In other words, the wave vector points in the normal direction to the surfaces of constant phase, also called wave fronts. 
In a lossless isotropic medium such as air, any gas, any liquid, or some solids (such as glass), the direction of the wavevector is exactly the same as the direction of wave propagation. If the medium is lossy, the wave vector in general points in directions other than that of wave propagation. The condition for wave vector to point in the same direction in which the wave propagates is that the wave has to be homogeneous, which isn't necessarily satisfied when the medium is lossy. In a homogeneous wave, the surfaces of constant phase are also surfaces of constant amplitude. In case of inhomogeneous waves, these two species of surfaces differ in orientation. Wave vector is always perpendicular to surfaces of constant phase. For example, when a wave travels through an anisotropic medium, such as light waves through an asymmetric crystal or sound waves through a sedimentary rock, the wave vector may not point exactly in the direction of wave propagation. [3] [4] In solid-state physics In solid-state physics, the "wavevector" (also called k-vector) of an electron or hole in a crystal is the wavevector of its quantum-mechanical wavefunction. These electron waves are not ordinary sinusoidal waves, but they do have a kind of envelope function which is sinusoidal, and the wavevector is defined via that envelope wave, usually using the "physics definition". See Bloch wave for further details. [5] In special relativity A moving wave surface in special relativity may be regarded as a hypersurface (a 3D subspace) in spacetime, formed by all the events passed by the wave surface. A wavetrain (denoted by some variable X) can be regarded as a one-parameter family of such hypersurfaces in spacetime. This variable X is a scalar function of position in spacetime. The derivative of this scalar is a vector that characterizes the wave, the 4-WaveVector. 
[6] The 4-WaveVector is a wave 4-vector that is defined as: K^\mu = \left(\frac{\omega}{c}, \vec{k} \right) = \left(\frac{\omega}{c}, \frac{\omega}{v_p}\hat{n} \right)= \left(\frac{2 \pi}{cT}, \frac{2 \pi \hat{n}}{\lambda} \right) \, where the temporal component is the angular frequency \omega divided by c, and the spatial component is the wavenumber vector \vec{k}. Alternately, the wavenumber k can be written as the angular frequency \omega divided by the phase velocity v_p, or in terms of the inverse period 1/T and inverse wavelength 1/\lambda. Written out explicitly, the contravariant and covariant forms are: K^\mu = \left(\frac{\omega}{c}, k_x, k_y, k_z \right)\, K_\mu = \left(\frac{\omega}{c}, -k_x, -k_y, -k_z \right) \, In general, the Lorentz scalar magnitude of the wave 4-vector is: K^\mu K_\mu = \left(\frac{\omega}{c}\right)^2 - k_x^2 - k_y^2 - k_z^2 \ = \left(\frac{\omega_o}{c}\right)^2 = \left(\frac{m_o c}{\hbar}\right)^2 The 4-WaveVector is null for massless (photonic) particles, where the rest mass m_o = 0. An example of a null 4-WaveVector would be a beam of coherent, monochromatic light, which has phase velocity v_p = c: K^\mu = \left(\frac{\omega}{c}, \vec{k} \right) = \left(\frac{\omega}{c}, \frac{\omega}{c}\hat{n} \right) = \frac{\omega}{c}\left(1, \hat{n} \right) \, {for light-like/null} which would have the following relation between the frequency and the magnitude of the spatial part of the 4-WaveVector: K^\mu K_\mu = \left(\frac{\omega}{c}\right)^2 - k_x^2 - k_y^2 - k_z^2 \ = 0 {for light-like/null} The 4-WaveVector is related to the 4-Momentum as follows: P^\mu = \left(\frac{E}{c}, \vec{p}\right) = \hbar K^\mu = \hbar\left(\frac{\omega}{c}, \vec{k}\right) The 4-WaveVector is related to the 4-Frequency as follows: K^\mu = \left(\frac{\omega}{c}, \vec{k} \right) = \left(\frac{2 \pi}{c}\right)N^\mu = \left(\frac{2 \pi}{c}\right)(\nu,c\vec{n}) The 4-WaveVector is related to the 4-Velocity as follows: K^\mu = \left(\frac{\omega}{c}, \vec{k} \right) = 
\left(\frac{\omega_o}{c^2}\right)U^\mu = \left(\frac{\omega_o}{c^2}\right) \gamma (c,\vec{u}) Lorentz transformation Taking the Lorentz transformation of the 4-WaveVector is one way to derive the relativistic Doppler effect. The Lorentz matrix is defined as \Lambda = \begin{pmatrix} \gamma&-\beta \gamma&0&0 \\ -\beta \gamma&\gamma&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{pmatrix} In the situation where light is being emitted by a fast moving source and one would like to know the frequency of light detected in an earth (lab) frame, we would apply the Lorentz transformation as follows. Note that the source is in frame S_s and the earth is in the observing frame, S_obs. Applying the Lorentz transformation to the wave vector k^{\mu}_s = \Lambda^\mu_\nu k^\nu_{\mathrm{obs}} \, and choosing just to look at the \mu = 0 component results in k^{0}_s = \Lambda^0_0 k^0_{\mathrm{obs}} + \Lambda^0_1 k^1_{\mathrm{obs}} + \Lambda^0_2 k^2_{\mathrm{obs}} + \Lambda^0_3 k^3_{\mathrm{obs}} \, \frac{\omega_s}{c} \, = \gamma \frac{\omega_{\mathrm{obs}}}{c} - \beta \gamma k^1_{\mathrm{obs}} \, \quad = \gamma \frac{\omega_{\mathrm{obs}}}{c} - \beta \gamma \frac{\omega_{\mathrm{obs}}}{c} \cos \theta. \, where \cos \theta \, is the direction cosine of k^1 with respect to k^0, i.e. k^1 = k^0 \cos \theta. 
So \frac{\omega_{\mathrm{obs}}}{\omega_s} = \frac{1}{\gamma (1 - \beta \cos \theta)} \, Source moving away (redshift) As an example, to apply this to a situation where the source is moving directly away from the observer (\theta=\pi), this becomes: \frac{\omega_{\mathrm{obs}}}{\omega_s} = \frac{1}{\gamma (1 + \beta)} = \frac{\sqrt{1-\beta^2}}{1+\beta} = \frac{\sqrt{(1+\beta)(1-\beta)}}{1+\beta} = \frac{\sqrt{1-\beta}}{\sqrt{1+\beta}} \, Source moving towards (blueshift) To apply this to a situation where the source is moving straight towards the observer (\theta=0), this becomes: \frac{\omega_{\mathrm{obs}}}{\omega_s} = \frac{1}{\gamma (1 - \beta)} = \frac{\sqrt{1-\beta^2}}{1-\beta} = \frac{\sqrt{(1+\beta)(1-\beta)}}{1-\beta} = \frac{\sqrt{1+\beta}}{\sqrt{1-\beta}} \, Source moving tangentially (transverse Doppler effect) To apply this to a situation where the source is moving transversely with respect to the observer (\theta=\pi/2), this becomes: \frac{\omega_{\mathrm{obs}}}{\omega_s} = \frac{1}{\gamma (1 - 0)} = \frac{1}{\gamma} \, See also References ^ Physics definition example:Harris, Benenson, Stöcker (2002). Handbook of Physics. p. 288. ^ Vaĭnshteĭn, Boris Konstantinovich (1994). Modern Crystallography. p. 259. ^ Fowles, Grant (1968). Introduction to modern optics. Holt, Rinehart, and Winston. p. 177. ^ "This effect has been explained by Musgrave (1959) who has shown that the energy of an elastic wave in an anisotropic medium will not, in general, travel along the same path as the normal to the plane wavefront...", Sound waves in solids by Pollard, 1977. link ^ Donald H. Menzel (1960). "§10.5 Bloch waves". Fundamental Formulas of Physics, Volume 2 (Reprint of Prentice-Hall 1955 2nd ed.). Courier-Dover. p. 624. ^ Wolfgang Rindler (1991). "§24 Wave motion". Introduction to Special Relativity (2nd ed.). Oxford Science Publications. p. 60-65. Further reading Brau, Charles A. (2004). Modern Problems in Classical Electrodynamics. Oxford University Press. 
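Returning to the Lorentz-transformation section above, the three special cases all follow from the single formula $\omega_{\mathrm{obs}}/\omega_s = 1/(\gamma(1-\beta\cos\theta))$, and can be sanity-checked numerically against their closed forms; a short sketch (the value of β is arbitrary):

```python
import math

def doppler_ratio(beta, theta):
    # omega_obs / omega_s for a source moving with speed beta*c,
    # where theta is the angle used in the derivation above
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

beta = 0.5
away = doppler_ratio(beta, math.pi)            # redshift: sqrt((1-b)/(1+b))
towards = doppler_ratio(beta, 0.0)             # blueshift: sqrt((1+b)/(1-b))
transverse = doppler_ratio(beta, math.pi / 2)  # transverse Doppler: 1/gamma
```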
Show that for each prime number $p$ different from $2$ and $5$, there exists some number $1111\cdots 1111$, all made of ones, that is a multiple of $p$. For instance, $3$ divides $111$ and $7$ divides $111111$. Consider the number $\dfrac{1}{p}$. Since $p$ is not $2$ or $5$, its decimal representation is a purely periodic decimal, which can be converted back to a fraction whose denominator is a number all made of $9$s: $$\dfrac{1}{7}=0.142857142857142857\cdots=0.\overline{142857}=\dfrac{142857}{999999}$$ $$\dfrac{1}{p}=0.\overline{period}=\dfrac{period}{999\cdots 999}$$ So $period\cdot p=999\cdots 999$ and $p\mid 999\cdots 999$. If $p\neq 3$, it follows that $p\mid 111\cdots 111$, since $999\cdots 999 = 9\cdot 111\cdots 111$ and $p\nmid 9$, as we wanted to prove; for $p=3$, just recall that $3\mid 111$. A deeper proof is available with the theory of congruences and Fermat's Little Theorem.
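The existence argument can also be run constructively: appending a digit 1 to a repunit corresponds to $r \mapsto 10r + 1 \pmod p$, and iterating until $r = 0$ finds the shortest repunit multiple. A sketch:

```python
def repunit_length(p):
    """Smallest n such that the repunit 11...1 (n ones) is divisible by p.
    Terminates for any p coprime to 10, in particular any prime p != 2, 5."""
    r, n = 1 % p, 1
    while r != 0:
        r = (10 * r + 1) % p   # append another '1' digit, reduced mod p
        n += 1
    return n
```

For example, `repunit_length(3)` gives 3 (since $3 \mid 111$) and `repunit_length(7)` gives 6 (since $7 \mid 111111$), matching the examples above.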
The typical characterization of points constructible by compass and straightedge is the following: Let $S\subseteq\mathbb{C}$ with $0,1\in S$, $K_0 = \mathbb{Q}(S\cup \bar{S})$ and $a\in\mathbb{C}$. Then $a$ is constructible from $S$ by compass and straightedge if and only if there is a tower of quadratic field extensions $K_0 \subseteq \ldots \subseteq K_n$ such that $a\in K_n$. For constructible $a$ it follows that $a$ is algebraic over $K_0$ and $[K_0(a) : K_0]$ is a power of two. However, it is known that this is not sufficient for $a$ to be constructible. Now I wonder if the constructibility of $a$ is equivalent to the following sharper criterion: $a$ is algebraic over $K_0$ and the degree of the normal hull of $K_0(a)$ over $K_0$ is a power of two. The direction "$\Leftarrow$" is true, I think. If $N$ is the normal hull of $K_0(a)$, then $K_0\subseteq N$ is a finite Galois extension, and thus the order of $G = \operatorname{Gal}(K_0 \subseteq N)$ is a power of two. As a $2$-group, it contains a chain of subgroups $\{\operatorname{id}\} = U_n < \ldots < U_0 = G$, each of index $2$ in the next. The respective fixed fields give the needed tower of quadratic field extensions. But I wasn't able to prove "$\Rightarrow$", nor did I find a counterexample.
Could you please help me to solve this? http://i.imgur.com/o5CLFI7.jpg Could you please help me to solve these inequations..? I have many more and won't be able to solve them all :eek::eek:.. I need help. If there are specific questions you have regarding specific problems we can possibly help. We don't do your homework for you. In all cases you can do the algebra to reduce it to a simple statement of inequality. For example the first one $\begin{align*} &\dfrac{a+2}{4} \leq \dfrac{a-1}{3}; \\ \\ &3(a+2) \leq 4(a-1); \\ \\ &3a+6 \leq 4a - 4; \\ \\ &10 \leq a \end{align*}$ None of the rest of them are any more complicated. You can always deal with an inequation by solving the corresponding equation. Example. $\text {In what interval or intervals is } \dfrac{4x - 3}{3} < \dfrac{2x + 4}{4}?$ Start by solving $\dfrac{4x - 3}{3} = \dfrac{2x + 4}{4} \implies 16x - 12 = 6x + 12 \implies$ $10x = 24 \implies x = 2.4.$ Is 2.4 in the interval you want? No, because we are looking for less than, not less than or equal. If we were looking for less than or equal, 2.4 would be in an interval. You have now divided the number line into two parts, x < 2.4 and x > 2.4. Pick a number in each part and test. Well, 2 is less than 2.4. $\dfrac{4 * 2 - 3}{3} = \dfrac{5}{3} = \dfrac{20}{12}.$ $\dfrac{2 * 2 + 4}{4} = \dfrac{8}{4} = \dfrac{24}{12}.$ $\dfrac{20}{12} < \dfrac{24}{12}.$ So numbers less than 2.4 are in the interval you want. And 3 is greater than 2.4. $\dfrac{4 * 3 - 3}{3} = \dfrac{9}{3} = \dfrac{36}{12}.$ $\dfrac{2 * 3 + 4}{4} = \dfrac{10}{4} = \dfrac{30}{12}.$ $\dfrac{36}{12} > \dfrac{30}{12}.$ So numbers greater than 2.4 are NOT in the interval you want. Thus the answer is $(-\ \infty,\ 2.4).$ CAUTION We had one point of equality in this problem so we divided the number line into 2 parts, and we had to test each part. But if we had two points of equality, we would divide the number line into three parts and would need to test each part. 
If we have n points of equality, we divide the number line into n + 1 parts and must test a number in each part. EDIT: Alternatively, you can try to solve a linear inequality directly. In my example, $\dfrac{4x - 3}{3} < \dfrac{2x + 4}{4} \implies \dfrac{12(4x - 3)}{3} < \dfrac{12(2x + 4)}{4} \implies$ $16x - 12 < 6x + 12 \implies 10x < 24 \implies x < 2.4.$ That is quicker, but it gets tricky for more complex inequalities. I ordinarily would never comment on someone else's method but ... really? $\dfrac{4x-3}{3} < \dfrac{2x+4}{4}$ $4(4x-3) < 3(2x+4)$ $16x-12 < 6x + 12$ $10x < 24$ $x < \dfrac{24}{10} = \dfrac{12}{5}$ sometimes it pays to just keep it simple. Thanks that doesn't look that hard. Maybe you can explain how to solve notable products like this. This is the last one, I swear... :cool:http://i.imgur.com/UL46iTT.jpg There's nothing to solve here. These are just expressions that I guess they want you to multiply out. You can do this; you're just lazy. Get to it. Romsek will probably say I am being too complex. You can solve these particular problems directly and easily using basic algebra and the laws of exponents, but you can also use a method that is particularly helpful for more complex cases. Let's take the last problem as an example. Direct method. $\left ( \dfrac{x^2}{2} - \dfrac{x}{3} \right ) \left ( \dfrac{x^2}{2} + \dfrac{x}{3} \right ) = $ $ \left ( \dfrac{3x^2}{3 * 2} - \dfrac{2x}{2 * 3} \right ) \left ( \dfrac{3x^2}{3 * 2} + \dfrac{2x}{2 * 3} \right ) = $ $\dfrac{3x^2 - 2x}{6} * \dfrac{3x^2 + 2x}{6} =$ $\dfrac{9x^4 + 6x^3 - 6x^3 - 4x^2}{36} = \dfrac{9x^4 - 4x^2}{36} = \dfrac{9x^4}{36} - \dfrac{4x^2}{36} = \dfrac{x^4}{4} - \dfrac{x^2}{9}.$ Basic algebra. Or you can do this, formally or in your head. 
$u = \dfrac{x^2}{2} \text { and } v = \dfrac{x}{3}.$ $\therefore \left ( \dfrac{x^2}{2} - \dfrac{x}{3} \right ) \left ( \dfrac{x^2}{2} + \dfrac{x}{3} \right ) =$ $( u - v)(u + v) = u^2 - v^2 =$ $\left ( \dfrac{x^2}{2} \right )^2 - \left ( \dfrac{x}{3} \right )^2 = \dfrac{x^4}{4} - \dfrac{x^2}{9}.$ There are sometimes many correct ways to attack a problem. Quote: (a+b)(a-b)... It seems like you're the only one here so I'll assume that you are some sort of moderator too.. Please feel free to close this useless account. Bye..
The Isomorphism Conjecture of Berman and Hartmanis states that all $NP$-complete sets are polynomial-time isomorphic to each other. This means that $NP$-complete problems are efficiently reducible to each other via polynomial-time computable and invertible bijections. The conjecture implies $P\neq NP$. The isomorphism conjecture implies an exponential lower bound on the density of $NP$-complete sets, since the Satisfiability problem is dense. I am wondering if it also implies an exponential lower bound on the density of witnesses for $NP$-complete sets. Does the isomorphism conjecture imply exponential lower bounds on witness density? Does it imply that $NP$-complete problems cannot be in $FewP$? The best result I am aware of is the following: If $P=UP$ and $NP=EXP$ then the isomorphism conjecture holds. The density $D$ of a set $S$ refers to the number of strings of length less than $n$ in the language. A set $S$ is exponentially dense if its density is $D=\Omega(2^{n^\epsilon})$ for some $\epsilon \gt 0$ and for infinitely many $n$, and sparse if $D = O(\mathrm{poly}(n))$.
Definition:Real Interval/Notation Contents Definition An arbitrary interval is frequently denoted $\mathbb I$, although some sources use just $I$. Others use $\mathbf I$. \(\displaystyle \openint a b\) \(:=\) \(\displaystyle \set {x \in \R: a < x < b}\) Open Real Interval \(\displaystyle \hointr a b\) \(:=\) \(\displaystyle \set {x \in \R: a \le x < b}\) Half-Open (to the right) Real Interval \(\displaystyle \hointl a b\) \(:=\) \(\displaystyle \set {x \in \R: a < x \le b}\) Half-Open (to the left) Real Interval \(\displaystyle \closedint a b\) \(:=\) \(\displaystyle \set {x \in \R: a \le x \le b}\) Closed Real Interval The term Wirth interval notation has consequently been coined by $\mathsf{Pr} \infty \mathsf{fWiki}$. Some authors (sensibly, perhaps) prefer not to use the $\infty$ symbol and instead use $\to$ and $\gets$ for $+\infty$ and $-\infty$ respectively. In Wirth interval notation, such intervals are written as follows: \(\displaystyle \hointr a \to\) \(:=\) \(\displaystyle \set {x \in \R: a \le x}\) \(\displaystyle \hointl \gets a\) \(:=\) \(\displaystyle \set {x \in \R: x \le a}\) \(\displaystyle \openint a \to\) \(:=\) \(\displaystyle \set {x \in \R: a < x}\) \(\displaystyle \openint \gets a\) \(:=\) \(\displaystyle \set {x \in \R: x < a}\) \(\displaystyle \openint \gets \to\) \(:=\) \(\displaystyle \set {x \in \R} = \R\) These are the notations usually seen for real intervals: \(\displaystyle \left ({a, b}\right)\) \(:=\) \(\displaystyle \set {x \in \R: a < x < b}\) Open real interval \(\displaystyle \left [{a, b}\right)\) \(:=\) \(\displaystyle \set {x \in \R: a \le x < b}\) Half-open real interval \(\displaystyle \left ({a, b}\right]\) \(:=\) \(\displaystyle \set {x \in \R: a < x \le b}\) Half-open real interval \(\displaystyle \left [{a, b}\right]\) \(:=\) \(\displaystyle \set {x \in \R: a \le x \le b}\) Closed real interval but they can be confused with other usages for this notation.
In particular, there exists the danger of taking $\paren {a, b}$ to mean an ordered pair. In order to avoid the ambiguity problem arising from the conventional notation for intervals where an open real interval can be confused with an ordered pair, some authors use the reverse-bracket notation for open and half-open intervals: \(\displaystyle \left ] {\, a, b} \right [\) \(:=\) \(\displaystyle \set {x \in \R: a < x < b}\) Open real interval \(\displaystyle \left [ {a, b} \right [\) \(:=\) \(\displaystyle \set {x \in \R: a \le x < b}\) Half-open on the right \(\displaystyle \left ] {\, a, b} \right ]\) \(:=\) \(\displaystyle \set {x \in \R: a < x \le b}\) Half-open on the left These are often considered to be both ugly and confusing, and hence are limited in popularity.
A simple way to understand an ionization constant is to think of it in a clear-cut way: To what degree will a substance produce ions in water? In other words, to what extent will ions be formed? Introduction Water has a very low concentration of detectable ions. Water undergoes self-ionization, where two water molecules interact to form a hydronium ion and a hydroxide ion. \[H_2O + H_2O \rightleftharpoons H_3O^+ + OH^- \label{1}\] Even though water does not form many ions, their existence in pure water is evident from electrical conductivity measurements. Water undergoes ionization because the strongly electronegative oxygen atom pulls the bonding electrons away from a hydrogen atom, allowing the proton to dissociate. \[\ce{H-O-H} \rightleftharpoons H^+ + OH^- \label{2}\] Two ions are formed: hydrogen ions \(H^+\) hydroxyl (hydroxide) ions \(OH^-\) The hydrogen ions then react with water to form hydronium ions: \[H^+ + H_2O \rightarrow H_3O^+ \label{3}\] Typically, the dissociated proton bonds to another water molecule, so the overall result is one hydronium ion and one hydroxide ion. \[H_2O + H_2O \rightarrow H_3O^+ + OH^- \label{4}\] In 1 L of pure water at 25 degrees Celsius, there are \(10^{-7}\) moles of hydronium ions and \(10^{-7}\) moles of hydroxide ions at equilibrium. Let's come back to the question of the degree to which a substance will form ions in water. For water, we have learned that under the conditions given above it ionizes until \(10^{-7}\) mol/L of each ion is present. Since this is an equilibrium, an equilibrium constant can be written. \[H_2O \rightleftharpoons H^+ + OH^- \label{5}\] \[K_{eq}= \dfrac{[H^+] [OH^-]}{[H_2O]} \label{6}\] This equilibrium constant is commonly referred to as \(K_w\). What about acids and bases? Now that we have an idea of what an ionization constant is, let's take a look at how acids and bases fit into this picture. Strong acids and bases are those that completely dissociate into ions once placed in solution.
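As a numerical aside (my own illustration): taking \(K_w = [H_3O^+][OH^-] = 1.0 \times 10^{-14}\) at 25 °C and assuming \([H_3O^+] = [OH^-]\) in pure water recovers the \(10^{-7}\) mol/L figure quoted above. A sketch in Python:

```python
import math

kw = 1.0e-14              # ion-product constant of water at 25 degrees Celsius
h = math.sqrt(kw)         # [H3O+] = [OH-] in pure water, in mol/L
assert abs(h - 1.0e-7) < 1e-12

ph = -math.log10(h)       # pH of pure water
assert abs(ph - 7.0) < 1e-9
```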
For example: \[KOH \xrightarrow{\;H_2O\;} K^+ + OH^-\] So, a 2 M solution of KOH would have a 2 M concentration of \(OH^-\) ions (and also a 2 M concentration of \(K^+\)). Weak acids and bases, however, do not behave the same way. Their amounts cannot be calculated as easily, because the ions do not fully dissociate in solution. Weak acids have a higher pH than strong acids of the same concentration, because weak acids do not release all of their hydrogens. The acid dissociation constant tells us the extent to which an acid dissociates: \[HCN_{(aq)} + H_2O \rightleftharpoons H_3O^+ + CN^- \label{8}\] \[K_a= \dfrac{[H_3O^+] [CN^-]}{[HCN]} \label{9}\] This equation is used fairly often when looking at equilibrium reactions. At equilibrium, the rates of the forward and backward reactions are the same, but the concentrations of the species generally differ. Since concentration is what tells us how much of a substance has dissociated, we can form a constant from the ratio of concentrations. K is found by first finding the molarity of each substance. Then, just as shown in the equation, we divide the products by the reactants, excluding solids and pure liquids. Also, when there is more than one product or reactant, their concentrations must be multiplied together. Even though you will not see a multiplication sign, if two species appear together, remember to multiply their concentrations. If there is a coefficient in front of a species, its concentration must be raised to that power in the calculations. A weaker acid tends to have a smaller \(K_a\) because the concentration on the bottom of the expression is larger: there is more of the acid, and less of the ions. You can think of \(K_a\) as a way of relating concentrations in order to carry out other calculations, typically the pH of a substance. The pH tells you how basic or acidic something is, and as we have learned, that depends on how many ions are dissociated. Example \(\PageIndex{1}\): Calculate Ionization Constant Calculate the ionization constant of a weak acid.
Solve for \(K_a\) given 0.8 M of hydrogen cyanide and 0.0039 M for hydronium and cyanide ions: \[HCN_{(aq)} + H_2O \rightleftharpoons H_3O^+ + CN^-\] Solution [HCN] is 0.8 M, \([H_3O^+]\) is 0.0039 M, and \([CN^-]\) is 0.0039 M. We can assume here that \([H_3O^+] = [CN^-]\). The equilibrium constant would be \[K_a= \dfrac{0.0039^2}{0.8} = 1.9 \times 10^{-5}\] We can use a similar method to find the \(K_b\) constant for weak bases. Again, an ionization constant is a measure of the extent to which a base will dissociate, and \(K_b\) relates these molar quantities. A smaller \(K_b\) corresponds to a weaker base, and a larger one to a stronger base. Some common weak bases: \(NH_3\), \(NH_4OH\), \(HS^-\), \(C_5H_5N\) \[C_5H_5N_{(aq)} + H_2O_{(l)} \rightleftharpoons C_5H_5NH^+_{(aq)} + OH^-_{(aq)} \label{10}\] We would find \(K_b\) the same way we did \(K_a\). However, most problems are not as simple and obvious. Let's do an example that is a little more challenging to help you understand this better. Example \(\PageIndex{2}\): Calculate Concentration Calculate the concentration of \([OH^-]\) in a 3 M pyridine solution with \(K_b = 1.5 \times 10^{-9}\). Solution Since we already have our equation, let's write the expression for the constant as discussed earlier (concentration of products divided by concentration of reactants): \[K_b= \dfrac{[OH^-] [C_5H_5NH^+]}{[C_5H_5N]} = 1.5 \times 10^{-9}\] Because pyridine is a weak base, we can assume that not much of it will dissociate, so let the equilibrium concentration of pyridine be \(3 - X\), where X is the change in molarity due to dissociation. Since the two product concentrations are unknown and are produced in a 1:1 ratio, we can call each of them X. Thus: \[\dfrac{X^2}{3-X} = 1.5 \times 10^{-9}\] To keep the calculation simple, we can approximate \(3 - X\) by 3 because the base is so weak. Now our equation becomes: \[\dfrac{X^2}{3} = 1.5 \times 10^{-9}\] Solving for X gives \(6.7 \times 10^{-5}\; M\). This approximation was justified because X turned out to be small: it must be less than 5% of the initial concentration for the estimate to be valid, a condition easily satisfied here.
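Both worked examples can be reproduced in a few lines. The sketch below (my own check, not part of the original page) also verifies the 5% rule that justifies replacing \(3 - X\) by 3:

```python
import math

# Example 1: Ka of HCN from the equilibrium concentrations
hcn, ion = 0.8, 0.0039        # [HCN] and [H3O+] = [CN-], in mol/L
ka = ion**2 / hcn
assert abs(ka - 1.9e-5) < 1e-7    # Ka is approximately 1.9e-5

# Example 2: [OH-] in 3 M pyridine with Kb = 1.5e-9, assuming X << 3
kb, c = 1.5e-9, 3.0
x = math.sqrt(kb * c)             # from X^2 / 3 = Kb
assert abs(x - 6.7e-5) < 1e-6     # X is approximately 6.7e-5 M
assert x / c < 0.05               # the 5% rule holds, so the approximation was valid
```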
Summary We have learned that an ionization constant quantifies the degree to which a substance will ionize in a solution (typically water). \(K_a\), \(K_b\), and \(K_w\) are the constants for acids, bases, and water, respectively, and are related by \[K_a \times K_b = K_w\] These constants are also used to find concentrations as well as pH. Contributors Tatiana Litvin (UCD)
From the last lecture, we saw that Liouville's equation could be cast in the form \[ \frac {\partial f}{\partial t} + \nabla _x \cdot \left( \dot {x} f \right) = 0 \] The Liouville equation is the foundation on which statistical mechanics rests. It will now be cast in a form that will be suggestive of a more general structure that has a definite quantum analog (to be revisited when we treat the quantum Liouville equation). Define an operator \[ iL = \dot {x} \cdot \nabla _x \] known as the Liouville operator ( \(i = \sqrt {-1} \) - the i is there as a matter of convention and has the effect of making \(L\) a Hermitian operator). Then Liouville's equation can be written \[ \frac {\partial f}{\partial t} + iLf = 0 \] The Liouville operator can also be expressed as \[ iL = \sum _{i=1}^N \left [ \frac {\partial H}{\partial p_i} \cdot \frac {\partial}{\partial r_i} - \frac {\partial H}{\partial r_i} \cdot \frac {\partial}{\partial p_i} \right ] \equiv \left \{ \cdots , H \right \} \] where \(\{ A, B \} \) is known as the Poisson bracket of \(A(x) \) and \(B (x)\): \[ \left \{ A, B \right \} = \sum _{i=1}^N \left [ \frac {\partial A}{\partial r_i} \cdot \frac {\partial B}{\partial p_i} - \frac {\partial A}{\partial p_i} \cdot \frac {\partial B}{\partial r_i} \right ] \] Thus, the Liouville equation can be written as \[ \frac {\partial f}{\partial t} + \left \{ f, H \right \} = 0 \] The Liouville equation is a partial differential equation for the phase space probability distribution function. As such, it specifies a general class of functions \(f (x, t)\) that satisfy it. Obtaining a specific solution requires more input information, such as an initial condition on f, a boundary condition on f, and other control variables that characterize the ensemble.
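To make the bracket concrete, here is a small numerical sketch (plain Python; the one-dimensional harmonic-oscillator Hamiltonian is my own choice of example, not from the lecture). It checks Hamilton's equations \(\{q, H\} = \partial H/\partial p\) and \(\{p, H\} = -\partial H/\partial q\), and the trivial conservation statement \(\{H, H\} = 0\), using the sign convention defined above:

```python
def poisson(A, B, q, p, h=1e-5):
    """Numerical Poisson bracket {A, B} for one degree of freedom,
    {A, B} = dA/dq * dB/dp - dA/dp * dB/dq, via central differences."""
    dAq = (A(q + h, p) - A(q - h, p)) / (2 * h)
    dAp = (A(q, p + h) - A(q, p - h)) / (2 * h)
    dBq = (B(q + h, p) - B(q - h, p)) / (2 * h)
    dBp = (B(q, p + h) - B(q, p - h)) / (2 * h)
    return dAq * dBp - dAp * dBq

m, w = 1.0, 2.0
H = lambda q, p: p**2 / (2 * m) + 0.5 * m * w**2 * q**2   # harmonic oscillator

q0, p0 = 0.7, -0.3
assert abs(poisson(lambda q, p: q, H, q0, p0) - p0 / m) < 1e-6       # {q, H} = p/m
assert abs(poisson(lambda q, p: p, H, q0, p0) + m * w**2 * q0) < 1e-6  # {p, H} = -m w^2 q
assert abs(poisson(H, H, q0, p0)) < 1e-6                              # {H, H} = 0
```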
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector. The dimension of the domain is not defined by the dimension of the range. Example A graph of the vector-valued function r( t) = <2 cos t, 4 sin t, t> indicating a range of solutions and the vector when evaluated near t = 19.5 A common example of a vector-valued function is one that depends on a single real number parameter t, often representing time, producing a vector v( t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, this specific type of vector-valued function is given by expressions such as \mathbf{r}(t)=f(t)\mathbf{i}+g(t)\mathbf{j} or \mathbf{r}(t)=f(t)\mathbf{i}+g(t)\mathbf{j}+h(t)\mathbf{k} where f( t), g( t) and h( t) are the coordinate functions of the parameter t. The vector r( t) has its tail at the origin and its head at the coordinates evaluated by the function. The vector shown in the graph to the right is the evaluation of the function near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The spiral is the path traced by the tip of the vector as t increases from zero through 8π.
Vector functions can also be referred to in a different notation: \mathbf{r}(t)=\langle f(t), g(t)\rangle or \mathbf{r}(t)=\langle f(t), g(t), h(t)\rangle Properties The domain of a vector-valued function is the intersection of the domains of the functions f, g, and h. Derivative of a three-dimensional vector function Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if \mathbf{r}(t) = f(t)\mathbf{i} + g(t)\mathbf{j} + h(t)\mathbf{k} is a vector-valued function, then \frac{d\mathbf{r}(t)}{dt} = f'(t)\mathbf{i} + g'(t)\mathbf{j} + h'(t)\mathbf{k}. The vector derivative admits the following physical interpretation: if r( t) represents the position of a particle, then the derivative is the velocity of the particle \mathbf{v}(t) = \frac{d\mathbf{r}(t)}{dt} . Likewise, the derivative of the velocity is the acceleration \frac{d\bold{v}(t)}{d t}=\bold{a}(t). Partial derivative The partial derivative of a vector function a with respect to a scalar variable q is defined as [1] \frac{\partial\mathbf{a}}{\partial q} = \sum_{i=1}^{n}\frac{\partial a_i}{\partial q}\mathbf{e}_i where a_i is the ith scalar component of a in the direction of e_i; it is also called the direction cosine of a and e_i, or their dot product. The vectors e_1, e_2, e_3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken. Ordinary derivative If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t, [1] \frac{d\mathbf{a}}{dt} = \sum_{i=1}^{3}\frac{da_i}{dt}\mathbf{e}_i.
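Componentwise differentiation is easy to check numerically. The sketch below (my own illustration) compares a central-difference derivative of the spiral \(\mathbf r(t) = \langle 2\cos t, 4\sin t, t\rangle\) from the example against the analytic result \(\langle -2\sin t, 4\cos t, 1\rangle\):

```python
import math

def r(t):
    # the spiral from the example graph
    return (2 * math.cos(t), 4 * math.sin(t), t)

def dr(t, h=1e-6):
    # componentwise central-difference derivative of r
    return tuple((a - b) / (2 * h) for a, b in zip(r(t + h), r(t - h)))

t = 1.3
exact = (-2 * math.sin(t), 4 * math.cos(t), 1.0)  # differentiate each component
assert all(abs(n - e) < 1e-5 for n, e in zip(dr(t), exact))
```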
Total derivative If the vector a is a function of a number n of scalar variables q_r (r = 1, ..., n), and each q_r is in turn a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as [1] \frac{d\mathbf a}{dt} = \sum_{r=1}^{n}\frac{\partial \mathbf a}{\partial q_r} \frac{dq_r}{dt} + \frac{\partial \mathbf a}{\partial t}. Some authors prefer to use capital D to indicate the total derivative operator, as in D/ Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables q_r. Reference frames Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship. Derivative of a vector function with nonfixed bases The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e_1, e_2, e_3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e_1, e_2, e_3 each have a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant.
In such a case, where the basis vectors e_1, e_2, e_3 are fixed in reference frame E but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is [1] \frac{{}^\mathrm{N}d\mathbf{a}}{dt} = \sum_{i=1}^{3}\frac{da_i}{dt}\mathbf{e}_i + \sum_{i=1}^{3}a_i\frac{{}^\mathrm{N}d\mathbf{e}_i}{dt} where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e_1, e_2, e_3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself. [1] Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is [1] \frac{{}^\mathrm Nd\mathbf a}{dt} = \frac{{}^\mathrm Ed\mathbf a }{dt} + {}^\mathrm N \mathbf \omega^\mathrm E \times \mathbf a where {}^\mathrm N \mathbf \omega^\mathrm E is the angular velocity of the reference frame E relative to the reference frame N. One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity {}^\mathrm N \mathbf v^\mathrm R in inertial reference frame N of a rocket R located at position \mathbf r^\mathrm R can be found using the formula \frac{{}^\mathrm Nd}{dt}(\mathbf r^\mathrm R) = \frac{{}^\mathrm Ed}{dt}(\mathbf r^\mathrm R) + {}^\mathrm N \mathbf \omega^\mathrm E \times \mathbf r^\mathrm R. where {}^\mathrm N \mathbf \omega^\mathrm E is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, {}^\mathrm N \mathbf v^\mathrm R and {}^\mathrm E \mathbf v^\mathrm R are the derivatives of \mathbf r^\mathrm R in reference frames N and E, respectively.
By substitution, {}^\mathrm N \mathbf v^\mathrm R = {}^\mathrm E \mathbf v^\mathrm R + {}^\mathrm N \mathbf \omega^\mathrm E \times \mathbf r^\mathrm R where E v R is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth. Derivative and vector multiplication The derivative of the products of vector functions behaves similarly to the derivative of the products of scalar functions. [2] Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q, [1] \frac{\partial}{\partial q}(p\mathbf a) = \frac{\partial p}{\partial q}\mathbf a + p\frac{\partial \mathbf a}{\partial q}. In the case of dot multiplication, for two vectors a and b that are both functions of q, [1] \frac{\partial}{\partial q}(\mathbf a \cdot \mathbf b) = \frac{\partial \mathbf a }{\partial q} \cdot \mathbf b + \mathbf a \cdot \frac{\partial \mathbf b}{\partial q}. Similarly, the derivative of the cross product of two vector functions is [1] \frac{\partial}{\partial q}(\mathbf a \times \mathbf b) = \frac{\partial \mathbf a }{\partial q} \times \mathbf b + \mathbf a \times \frac{\partial \mathbf b}{\partial q}. Derivative of an n-dimensional vector function A function f of a real number t with values in the space R^n can be written as f(t)=(f_1(t),f_2(t),\ldots,f_n(t)). Its derivative equals f'(t)=(f_1'(t),f_2'(t),\ldots,f_n'(t)). If f is a function of several variables, say of t\in R^m, then the partial derivatives of the components of f form a n\times m matrix called the Jacobian matrix of f. Infinite-dimensional vector functions If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function. 
Functions with values in a Hilbert space If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case: f'(t)=\lim_{h\rightarrow0}\frac{f(t+h)-f(t)}{h}. Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t\in R^n or even t\in Y, where Y is an infinite-dimensional vector space). N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if f=(f_1,f_2,f_3,\ldots) (i.e., f=f_1 e_1+f_2 e_2+f_3 e_3+\cdots, where e_1,e_2,e_3,\ldots is an orthonormal basis of the space X), and f'(t) exists, then f'(t)=(f_1'(t),f_2'(t),f_3'(t),\ldots). However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space. Other infinite-dimensional vector spaces Most of the above holds for other topological vector spaces X as well. However, not as many classical results hold in the Banach space setting; e.g., an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach-space settings there are no orthonormal bases. See also Notes ^ a b c d e f g h i Kane & Levinson 1996, pp. 29–37 ^ In fact, these relations are derived applying the product rule componentwise. References Kane, Thomas R.; Levinson, David A. (1996), "1–9 Differentiation of Vector Functions", Dynamics Online, Sunnyvale, California: OnLine Dynamics, Inc., pp. 29–37 External links Vector-valued functions and their properties (from Lake Tahoe Community College) Weisstein, Eric W., "Vector Function", MathWorld.
Everything2 article 3 Dimensional vector-valued functions (from East Tennessee State University) "Position Vector Valued Functions" Khan Academy module
I'm looking at Example VII.3.3.3 (p.193, 2nd ed.) of Silverman's The Arithmetic of Elliptic Curves. We have the elliptic curve $E:y^2=x^3+x$, with discriminant $\Delta=-64$, so there is good reduction for all primes $p\geq 3$. It is noted that $(0,0)$ is a point of order two in $E(\mathbb{Q})$, and that $$\tilde{E}(\mathbb{F}_3)=\{\mathcal{O},(0,0),(2,1),(2,2)\}\cong\mathbb{Z}/4\mathbb{Z}$$ $$\tilde{E}(\mathbb{F}_5)=\{\mathcal{O},(0,0),(2,0),(3,0)\}\cong(\mathbb{Z}/2\mathbb{Z})^2$$Then it is said that Since $E(\mathbb{Q})_{\text{tors}}$ injects into both of these groups, we see that $(0,0)$ is the only nonzero torsion point in $E(\mathbb{Q})$. Now, my understanding of Proposition VII.3.1b (p.192, 2nd ed.) is that for any discretely valued local field $K$ with residue field $k$, the reduction map from $E(K)[m]$ to $\tilde{E}(k)$ is injective for all $\gcd(m,\text{char}(k))=1$, where $E(K)[m]$ denotes the $m$-torsion subgroup of $E(K)$. So, we are looking at the compositions$$E(\mathbb{Q})[m]\hookrightarrow E(\mathbb{Q}_3)[m]\hookrightarrow \tilde{E}(\mathbb{F}_3)\quad\text{ for all }3\nmid m$$$$E(\mathbb{Q})[n]\hookrightarrow E(\mathbb{Q}_5)[n]\hookrightarrow \tilde{E}(\mathbb{F}_5)\quad\text{ for all }5\nmid n$$It seems the best we can say (I think) is that the $3$-torsion-free part of $E(\mathbb{Q})_{\text{tors}}$ injects into $\tilde{E}(\mathbb{F}_3)$, and the $5$-torsion-free part of $E(\mathbb{Q})_{\text{tors}}$ injects into $\tilde{E}(\mathbb{F}_5)$. This still clearly implies that $E(\mathbb{Q})_{\text{tors}}$ must be of order 2 in this particular case, because $\tilde{E}(\mathbb{F}_5)$ is $3$-torsion-free and $\tilde{E}(\mathbb{F}_3)$ is $5$-torsion-free; but it seems to me that the reasoning in the block-quoted statement (at least without further explanation) is technically wrong. 
If $p$ is a prime of good reduction for $E$, then is it true that all of $E(\mathbb{Q})_{\text{tors}}$ injects into $\tilde{E}(\mathbb{F}_p)$, or just that $E(\mathbb{Q})_{\text{tors}}[m]$ injects into $\tilde{E}(\mathbb{F}_p)$ for any $m$ relatively prime to $p$ (and hence the $p$-torsion-free part of $E(\mathbb{Q})_{\text{tors}}$ injects into $\tilde{E}(\mathbb{F}_p)$)?
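As a sanity check on the numbers quoted from Silverman: the two reductions are small enough to enumerate by brute force. A quick sketch in Python (naive point count of \(y^2 = x^3 + x\) over \(\mathbb{F}_3\) and \(\mathbb{F}_5\); my own check, not part of the question):

```python
def points(p):
    """All points of y^2 = x^3 + x over F_p, with None for the point at infinity."""
    pts = [None]
    for x in range(p):
        for y in range(p):
            if (y * y - (x**3 + x)) % p == 0:
                pts.append((x, y))
    return pts

assert len(points(3)) == 4   # |E~(F_3)| = 4
assert len(points(5)) == 4   # |E~(F_5)| = 4

# 2-torsion: affine points with y = 0, together with O. Over F_5 all three
# roots of x^3 + x exist (x = 0, 2, 3), so E~(F_5) is (Z/2)^2; over F_3 only
# x = 0 works, so E~(F_3) must be cyclic of order 4.
assert sum(1 for P in points(5) if P and P[1] == 0) == 3
assert sum(1 for P in points(3) if P and P[1] == 0) == 1
```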
Definition:Ultrafilter on Set Contents Definition Let $S$ be a set. Let $\mathcal F \subseteq \powerset S$ be a filter on $S$. Then $\mathcal F$ is an ultrafilter (on $S$) if and only if: $\mathcal F$ is a maximal filter on $S$ with respect to set inclusion or equivalently, if and only if: whenever $\mathcal G$ is a filter on $S$ and $\mathcal F \subseteq \mathcal G$ holds, then $\mathcal F = \mathcal G$. Let $S$ be a set. Let $\mathcal F \subseteq \mathcal P \left({S}\right)$ be a filter on $S$. Then $\mathcal F$ is an ultrafilter (on $S$) if and only if: for every $A \subseteq S$ and $B \subseteq S$ such that $A \cap B = \varnothing$ and $A \cup B \in \mathcal F$, either $A \in \mathcal F$ or $B \in \mathcal F$. Let $S$ be a set. Let $\mathcal F \subseteq \mathcal P \left({S}\right)$ be a filter on $S$. Then $\mathcal F$ is an ultrafilter (on $S$) if and only if: for every $A \subseteq S$, either $A \in \mathcal F$ or $\complement_S \left({A}\right) \in \mathcal F$ where $\complement_S \left({A}\right)$ is the relative complement of $A$ in $S$, that is, $S \setminus A$. Let $S$ be a non-empty set. Then $\mathcal F$ is an ultrafilter on $S$ if and only if both of the following hold: $\mathcal F$ has the finite intersection property For all $U \subseteq S$, either $U \in \mathcal F$ or $U^c \in \mathcal F$ where $U^c$ is the complement of $U$ in $S$.
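On a small finite set these characterizations can be verified exhaustively. The sketch below (my own illustration, not from the page) enumerates all proper filters on a 3-element set and checks that the maximality definition and the complement definition pick out exactly the same families, namely the three principal ultrafilters:

```python
from itertools import combinations

S = frozenset(range(3))
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def is_filter(F):
    # proper filter: nonempty, excludes the empty set,
    # closed upward and under finite intersection
    if not F or frozenset() in F:
        return False
    for A in F:
        for B in subsets:
            if A <= B and B not in F:
                return False
        for B in F:
            if A & B not in F:
                return False
    return True

all_families = (set(c) for r in range(1, len(subsets) + 1)
                for c in combinations(subsets, r))
filters = [frozenset(F) for F in all_families if is_filter(F)]

def maximal(F):
    # Definition: no filter on S strictly finer than F
    return not any(F < G for G in filters)

def complements(F):
    # Definition: for every A, either A or its complement is in F
    return all(A in F or (S - A) in F for A in subsets)

assert all(maximal(F) == complements(F) for F in filters)
# on a finite set every ultrafilter is principal: one per point of S
assert sum(1 for F in filters if maximal(F)) == len(S)
```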
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right? For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present? Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. 
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write G using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish between two cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math. 
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$ which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
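The kernel claim in the chat above is easy to confirm by letting \(F\) act directly on coefficient vectors of \(\mathbb{R}_3[x]\). A sketch in Python (the coefficient convention is my own):

```python
def F(coeffs):
    """F(P) = x*P'' + (x+1)*P''' for P = c0 + c1 x + c2 x^2 + c3 x^3,
    returned as the coefficient list [d0, d1, d2, d3]."""
    c0, c1, c2, c3 = coeffs
    # P''  = 2*c2 + 6*c3*x   and   P''' = 6*c3, so
    # x*P'' = 2*c2*x + 6*c3*x^2   and   (x+1)*P''' = 6*c3*x + 6*c3
    return [6 * c3, 2 * c2 + 6 * c3, 6 * c3, 0]

assert F([5, -7, 0, 0]) == [0, 0, 0, 0]   # ax + b lies in the kernel
assert F([0, 0, 1, 0]) != [0, 0, 0, 0]    # x^2 does not
assert F([0, 0, 0, 1]) != [0, 0, 0, 0]    # x^3 does not
```

Reading the output coefficients off, \(F(P) = 0\) forces \(c_3 = 0\) (constant term) and then \(c_2 = 0\) (linear term), which is exactly the degree argument suggested at the end of the chat.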
The universal-NOT gate in quantum computing is an operation which maps every point on the Bloch sphere to its antipodal point (see Buzek et al., Phys. Rev. A 60, R2626–R2629). In general, a single-qubit quantum state $|\phi\rangle = \alpha |0\rangle + \beta | 1 \rangle$ will be mapped to $\beta^* |0\rangle - \alpha^*| 1 \rangle$. This operation is not unitary (in fact it is anti-unitary) and so is not something that can be implemented deterministically on a quantum computer. Optimal approximations to such gates drew quite a lot of interest about 10 years ago (see for example this Nature paper which presents an experimental realization of an optimal approximation). What has been puzzling me, and what I cannot find in any of the introductions to these papers, is why one would ever want such a gate. Is it actually useful for anything? Moreover, why would one want an approximation, when there are other representations of $SU(2)$ for which there is a unitary operator which anti-commutes with all of the generators? This question may seem vague, but I believe it has a concrete answer. There presumably is one or more strong reasons why we might want such an operator, and I am simply not seeing them (or finding them). If anyone could enlighten me, it would be much appreciated. This post has been migrated from (A51.SE)
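As a sanity check on the antipodal claim, a small numpy sketch: the map $\alpha|0\rangle+\beta|1\rangle \mapsto \beta^*|0\rangle-\alpha^*|1\rangle$ sends the Bloch vector $(r_x,r_y,r_z)$ to $(-r_x,-r_y,-r_z)$ for a random state (the helper names below are mine, not from the paper).

```python
import numpy as np

def bloch(psi):
    """Bloch vector (r_x, r_y, r_z) of a normalized qubit state (a, b)."""
    a, b = psi
    return np.array([2*(np.conj(a)*b).real,
                     2*(np.conj(a)*b).imag,
                     abs(a)**2 - abs(b)**2])

def universal_not(psi):
    """The (anti-unitary) map  a|0> + b|1>  ->  b*|0> - a*|1>."""
    a, b = psi
    return np.array([np.conj(b), -np.conj(a)])

# Random normalized qubit state
rng = np.random.default_rng(0)
v = rng.normal(size=4)
psi = v[:2] + 1j*v[2:]
psi /= np.linalg.norm(psi)

# The image state sits at the antipodal point of the Bloch sphere
assert np.allclose(bloch(universal_not(psi)), -bloch(psi))
```

The check passes for every state, which is exactly why no single unitary can do it: a unitary conjugation acts on Bloch vectors as a rotation, and a rotation of the sphere cannot equal the inversion $r \mapsto -r$.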
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Conway’s orbifold notation gives a uniform notation for all discrete groups of isometries of the sphere, the Euclidean plane, as well as the hyperbolic plane. This includes the groups of symmetries of Escher’s Circle Limit drawings. Here’s Circle Limit III And ‘Angels and Devils’ aka Circle Limit IV: If one crawls along a mirror of this pattern until one hits another mirror and then turns right along this mirror and continues like this, one obtains a quadrilateral path whose four corners each have angle $\frac{\pi}{3}$, and whose center seems to be a $4$-fold gyration point. So, it appears to have symmetry $4 \ast 3$. (image credit: MathCryst) However, looking more closely, every fourth figure (either devil or angel) is facing away rather than towards us, so there’s no gyration point, and the group drops to $\ast 3333$. Harold S. M. Coxeter met Escher in Amsterdam at the ICM 1954. The interaction between the two led to Escher’s construction of the Circle Limits; see How did Escher do it? Here’s an old lecture by Coxeter on the symmetry of the Circle Limits:
When considering a bilattice you need to distinguish two types of sites.

       A       B               A       B
-------o-------o---------------o-------o----
       |<--a-->|<------b------>|

For instance you can denote the two kinds of sites with the letters $A$ and $B$ as shown above. You now have two different creation and destruction operators: $c_{iA}^{\dagger}$ and $c_{iA}$ in order to create or annihilate a particle on $A$ sites, and $c_{iB}^{\dagger}$ and $c_{iB}$ for $B$ sites. You have to be careful with the indices $i$. With a simple lattice the sites had indices like this:

       A       A       A       A       A
-------o-------o-------o-------o-------o
      i-1      i      i+1     i+2     i+3
       |<--a-->|<--a-->|<--a-->|

But now the indices are different, since there are two kinds of sites:

       A       B               A       B
-------o-------o---------------o-------o----
       i       i              i+1     i+1
       |<--a-->|<------b------>|

All this changes the expressions of the Fourier transforms and of the Hamiltonian:
$$c_{kA}=\frac{1}{\sqrt{V/2}}\sum_{i}{c_{iA} e^{i k r_{i}}} \quad \text{where} \quad r_i=i(a+b)$$
$$c_{kB}=\frac{1}{\sqrt{V/2}}\sum_{i}{c_{iB} e^{i k r_{i}}} \quad \text{where} \quad r_i=i(a+b)+a$$
EDIT: The volume of the system has to be divided by two in the Fourier transforms since there are now two sites in each primitive cell (there was only one before). END EDIT
The Hamiltonian now takes the form:
$$H=-\sum_{i}{t_s (c^{\dagger}_{iA} c_{iB} + c^{\dagger}_{iB} c_{iA})+t_l (c^{\dagger}_{iB} c_{(i+1)A} + c^{\dagger}_{(i+1)A} c_{iB})}$$
where I have considered a one-dimensional problem. Since the distance between sites is not always the same, you might want to consider two different hopping parameters: $t_s$ for short jumps and $t_l$ for long ones. EDIT: I had forgotten terms in the Hamiltonian; nearest-neighbour hopping terms must be present in both directions, $(A,i)\rightarrow (B,i)$ and $(B,i) \rightarrow(A,i)$. The long jumps are $(A,i+1)\rightarrow (B,i)$ and $(B,i)\rightarrow (A,i+1)$.
END EDIT If you want to obtain a diagonal form for your Hamiltonian, you can try to find a $2\times2$ matrix $M$ such that:$$H=-\sum_{k}{(c^{\dagger}_{kA} c^{\dagger}_{kB})M\binom{c_{kA}}{c_{kB}}}$$$M$ will contain hopping parameters $t_s$ and $t_l$, once you have diagonalized $M$ your problem is solved.This post imported from StackExchange Physics at 2014-05-04 11:30 (UCT), posted by SE-user ChocoPouce
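As a numerical illustration of that last step, here is a sketch that builds and diagonalizes one possible $M(k)$ for this two-site chain. The off-diagonal element $t_s + t_l e^{ik(a+b)}$ follows from the Fourier transforms above, up to phase conventions that differ between references; the numerical values of $t_s$, $t_l$, and the cell size $d = a+b$ are arbitrary choices of mine.

```python
import numpy as np

t_s, t_l = 1.0, 0.5   # short- and long-jump hoppings (arbitrary values)
d = 1.0               # size of the primitive cell, a + b

def bands(k):
    """Eigenvalues of the 2x2 Bloch matrix M(k) for one value of k."""
    h = t_s + t_l*np.exp(1j*k*d)
    M = -np.array([[0, h], [np.conj(h), 0]])   # Hermitian 2x2 matrix
    return np.linalg.eigvalsh(M)               # ascending order

# Two bands E(k) = -/+ |t_s + t_l e^{ikd}| across the Brillouin zone
ks = np.linspace(-np.pi/d, np.pi/d, 201)
E = np.array([bands(k) for k in ks])
```

The dispersion is $E_\pm(k) = \pm|t_s + t_l e^{ikd}|$, with a gap of $2|t_s - t_l|$ at the zone boundary $k = \pi/d$ whenever $t_s \neq t_l$ (this is essentially the SSH model).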
The problem is: $\sum_{n=1}^{\infty} \frac{1}{n(n+3)}$ The first thing I did was use the divergence test, which didn't help since the result of the limit was 0. If I multiply it through, the result is $\sum_{n=1}^{\infty} \frac{1}{n^2+3n}$ I'm wondering if I can consider this as a p-series and simply use the largest power. In this case the power would be 2, which would mean it converges. If this is the correct way to go about this, how do I find what it converges to?
This is a bit of fun geometry that doesn’t have much to do with what’s going on in class, but does reflect on mathematical thinking. An article promoting the use of technology in the classroom began: “Draw a perfect circle. Now bisect that with a 45-degree angle, the perfect slice of geometric pizza. Now, using your drawing, find the area of the rest of the circle. ” What they meant was that the 45-degree angle divides the circle (or, more properly speaking, the region contained by the circle*). “Bisect” has a rigorous definition in mathematics: It means to divide an object into two objects of the same exact size. So if we bisect a circular region with anything, then the area of the two pieces will have half the area of the circular region, by definition. Rather than accepting this and moving on, though, I asked myself: Can you bisect a circular region with a 45-degree angle? I shifted this to: Given a circle and an angle whose vertex is on or in the circle, what is the smallest angle that will cut out half of its region? My conjecture is that the shaded region, where angles BEA and CEA are congruent, represents the largest portion of the circular region that can be covered by angle BEC. Let \(\alpha = m\angle BEC\). So what is the area of the shaded region? First, we have sector BAC. The area of a sector is \(\beta r^2/2\), where \(\beta\) is the measure of the central angle (in radians). Since it’s a unit circle, \(r^2 = 1\). Since angle BEC is the inscribed angle that corresponds to the central angle BAC, it has half the measure, and the area of the sector is \(\beta/2 = \alpha\). Since AE bisects angle BEC, the two triangles are congruent. Since AC, AE, and AB are all radii, the two triangles are isosceles. We can determine the area of triangle BAE by first dropping a line perpendicular to BE. This divides the triangle into two congruent right triangles, AFE and AFB. Call \(\gamma = m\angle AEF\), so \(\gamma = \alpha/2\). 
To find the area of \(\Delta AFE\), we need a height and a width. Since AE is a radius, the height AF is \(\sin\gamma\) and the width FE is \(\cos\gamma\). This gives an area of \(\sin\gamma\cos\gamma/2\). I’ll write about the double- and half-angle trigonometric formulas in a separate post, but one of these is: \[\sin 2\theta = 2\sin\theta\cos\theta\] Applying this gives us an area of triangle AEF of \((\sin 2\gamma)/4 = (\sin\alpha)/4\). Since there are four such congruent triangles, the total area of the two larger shaded triangles is \(\sin\alpha\), and the area of the entire shaded region is \(\sin\alpha + \alpha\). If the shaded region is half the area bounded by the circle, and since the area bounded by a unit circle is \(\pi\), \(\sin\alpha + \alpha = \pi/2\). I’m not sure how or if that can be solved analytically, but we can use a calculator to graph \(\sin\alpha + \alpha - \pi/2\) and find its solution. The solution function of the calculator gives a zero at 0.8317112. Throughout this post, I have worked in radians. For the Algebra 2 students, we haven’t gotten there yet. Radians represent a different way to measure angles, but it’s a straightforward conversion: \(2\pi\) radians \(= 360^o\), so \(1\) radian \(= 180^o/\pi\). So we have \(0.8317112 \times 180^o/\pi \approx 47.65^o\). So: An inscribed angle slightly larger than 47.65 degrees bisected by a diameter will create a rounded wedge that bounds half as many points as a circle. Going back to the original inspiration for this item, this means that the 45 degree angle with a vertex that is inside a circle cannot bisect the circle’s region. I started with a conjecture. This conjecture can be proven, but the formal proof requires more trigonometry, so I’ll leave it for the reader, or for another time.
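The equation \(\sin\alpha + \alpha = \pi/2\) indeed has no closed-form solution, but since \(f(\alpha)=\sin\alpha+\alpha-\pi/2\) is increasing on \([0,\pi/2]\) with a sign change, plain bisection reproduces the calculator's zero:

```python
import math

def f(alpha):
    return math.sin(alpha) + alpha - math.pi/2

# f(0) < 0 and f(pi/2) = 1 > 0, and f is strictly increasing,
# so bisection converges to the unique root in [0, pi/2].
lo, hi = 0.0, math.pi/2
for _ in range(60):
    mid = (lo + hi)/2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

alpha = (lo + hi)/2
# alpha is about 0.83171 rad, i.e. about 47.65 degrees
```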
* Since this article is about rigorous definitions, I’m using “circle” to refer to the set of points equidistant from a point (i.e., \((x-h)^2 + (y-k)^2 = r^2\)) and “circular region” to refer to all points satisfying \((x-h)^2 + (y-k)^2 \le r^2\). In high school, we often conflate these two with the word “circle”.
Covariance Covariance measures the extent to which two variables, say x and y, move together. A positive covariance means that the variables move in tandem and a negative value indicates that the variables have an inverse relationship. While covariance can indicate the direction of the relationship, the correlation coefficient is a better measure of its strength. Covariance is an important input in estimation of diversification benefits and portfolio optimization, calculation of the beta coefficient, etc. In manual calculation of covariance, the following steps are involved: Calculate the mean of each variable, i.e. µ_x and µ_y. Find the deviation of each value of x and y from its respective mean, i.e. (x_i − µ_x) and (y_i − µ_y). Multiply each deviation of x by the corresponding deviation of y, i.e. (x_i − µ_x) × (y_i − µ_y). Sum up all the products of deviations. Divide by the total number of observations N. Formula The following equation describes the relationship: $$ \text{Cov}(\text{x},\text{y})=\sum _ {\text{i}=1}^{\text{N}}\frac{(\text{x} _ \text{i}\ -\ \mu _ \text{x})(\text{y} _ \text{i}\ -\ \mu _ \text{y})}{\text{N}} $$ Covariance can also be calculated using the Excel COVAR, COVARIANCE.P and COVARIANCE.S functions. If we know the correlation coefficient, we can work out covariance indirectly as follows: $$ \text{Cov}(\text{x},\text{y})=\rho\times\sigma _ \text{x}\times\sigma _ \text{y} $$ Where ρ is the correlation coefficient, σ_x is the standard deviation of x and σ_y is the standard deviation of y. Example Let’s calculate covariance using the same data set as in correlation coefficient.
The data set below shows monthly closing prices of SPDR Oil & Gas Exploration and Production ETF (designated as y) and Brent Crude (designated as x):

Date       x       y
1/1/2014   109.95  65.75
2/1/2014   108.16  69.69
3/1/2014   108.98  71.83
4/1/2014   105.7   77.61
5/1/2014   108.63  77.04
6/1/2014   109.21  82.28
7/1/2014   110.84  75.29
8/1/2014   103.45  79.05
9/1/2014   101.12  68.83
10/1/2014  94.57   60.87
11/1/2014  84.17   51.08
12/1/2014  70.87   47.86
1/1/2015   55.27   46.18
2/1/2015   47.52   50.81
3/1/2015   61.89   51.66
4/1/2015   55.73   55.09
5/1/2015   64.13   49.53
6/1/2015   64.88   46.66
7/1/2015   62.01   38.35
8/1/2015   52.21   36.00
9/1/2015   49.56   32.84
10/1/2015  47.69   36.61
11/1/2015  49.56   37.13
12/1/2015  44.44   30.22
1/1/2016   37.28   28.49
2/1/2016   34.24   24.60
3/1/2016   36.81   30.35
4/1/2016   38.67   35.74
5/1/2016   48.13   35.52
6/1/2016   49.72   34.81
7/1/2016   50.35   34.25
8/1/2016   42.14   36.79
9/1/2016   45.45   38.46
10/1/2016  49.06   35.35
11/1/2016  48.14   41.93
12/1/2016  53.94   43.18
1/1/2017   56.82   40.08
2/1/2017   56.8    37.86
3/1/2017   56.36   37.44
4/1/2017   52.83   34.95
5/1/2017   51.52   32.57
6/1/2017   50.63   31.92
7/1/2017   47.92   32.52
8/1/2017   51.78   30.16
9/1/2017   52.75   34.09
10/1/2017  57.54   34.28
11/1/2017  60.49   35.72
12/1/2017  63.73   37.18

$$ \text{Mean of x} = \mu _ \text{x} = \text{63.83} $$ $$ \text{Mean of y} = \mu _ \text{y} = \text{45.34} $$ $$ \text{Sum of products of deviations} = \sum(\text{x} _ \text{i}\ -\ \mu _ \text{x})(\text{y} _ \text{i}\ -\ \mu _ \text{y}) = \text{16,467} $$ $$ \text{Cov}(\text{x},\text{y})=\frac{\text{16,467}}{\text{48}}=\text{343.06} $$ Covariance of SPDR XOP ETF with Brent Crude is positive, which indicates that they move together. However, we can’t say conclusively how strong the relationship is, because the covariance value depends on the units used. The correlation coefficient is a better measure, which works out to 0.93 for the given data set.
We can arrive at the covariance value if we have the correlation coefficient and the individual standard deviations of x and y, which are 23.63 and 15.87 respectively. $$ \text{Cov}(\text{x},\text{y})=\text{0.93}\times\text{23.63}\times\text{15.87}\approx\text{349} $$ The small difference from the 343.06 computed above comes from rounding in the correlation coefficient and the standard deviations. by Obaidullah Jan, ACA, CFA and last modified on
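The five manual steps above translate directly into code. A minimal sketch of the population covariance (divide by N, as in the formula above), demonstrated on a tiny made-up data set rather than the full price table:

```python
def covariance(xs, ys):
    """Population covariance: mean of the products of deviations."""
    n = len(xs)
    mx = sum(xs)/n                       # step 1: means
    my = sum(ys)/n
    # steps 2-5: deviations, products, sum, divide by N
    return sum((x - mx)*(y - my) for x, y in zip(xs, ys))/n

# Tiny check: y = 2x moves perfectly in tandem with x.
# Deviation products: (-1)(-2) + (0)(0) + (1)(2) = 4; divide by N = 3.
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]
cov = covariance(x, y)    # 4/3, positive as expected
```

Reversing y gives the same magnitude with a negative sign, matching the "inverse relationship" case described at the top.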
Construct a sequence of continuous functions $f_n:[0,1] \to \mathbb{R}$ such that $\displaystyle \lim _{n \to \infty} f_n(x)=0$ for every $x$, and yet $\displaystyle \lim_{n \to \infty} \int _0 ^ 1 f_n(x) dx = +\infty$ $$f_n(x) = \begin{cases} 6n^3x(1-nx), & 0\le x\le \frac 1n \\ 0, & \frac 1n<x\le 1 \end{cases}$$ If you dislike definitions by cases, you could rewrite this as $$f_n(x)=\max\left(6n^3x(1-nx),0\right)$$ Then for any given $x$, $f_n(x)=0$ if $n$ is large enough, but $\int_0^1 f_n(x)\, dx = n$ for all $n$. Here are $y=f_1(x)$ through $y=f_{6}(x)$. (Ignore the values outside $x\in[0,1]$.) Note that the maximum value of $f_n(x)$ is $\frac 32n^2$ and occurs at $x=\frac 1{2n}$. What about $f_n (x) = n^3 x \exp(-nx)$? $f_n(0) = 0 \to 0$ and for $x >0$ we have $\exp(-nx) \to 0$ much faster than $n^3 \to +\infty$, so $f_n(x) \to 0 \, \forall x$. But by substituting $y=nx$: $$ \int_0^1 n^3 x \exp(-nx)\, dx = n \int_0^n y \exp(-y) \, dy \to +\infty $$ You can take the piecewise linear function $f_n(x), n\ge 2$ such that $f_n(0)=0$, $f_n(\frac{1}{n}) = n^2$, $f_n\left(\frac{2}{n}\right)=0$ and $f_n(1)=0$. Then you have $\forall x \in [0,1], \lim\limits_{n \to \infty} f_n(x)=0$ and $ \lim\limits_{n \to \infty} \int\limits_{0}^1 f_n(x) dx = \lim\limits_{n \to \infty} n = +\infty$. To see how this sequence of functions behaves, you can take a look at this animation : https://www.desmos.com/calculator/l0ap9b58qr . this might be the right answer: $$ f_n(x)=-1/(nx^3). $$ (Note, though, that this one is not even defined at $x=0$, so it is not a valid example.) $f_n(x)= n^3x^n(1-x)$ will do the job.
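The claims about the first example can be verified symbolically: the bump lives on $[0, 1/n]$ and vanishes elsewhere, so its integral over $[0,1]$ is the integral of $6n^3x(1-nx)$ over the bump alone.

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

f = 6*n**3*x*(1 - n*x)   # the bump on [0, 1/n]; f_n is zero elsewhere

# The integral over the bump is exactly n, for every n
I = sp.integrate(f, (x, 0, sp.Rational(1, 1)/n))
assert sp.simplify(I - n) == 0

# Peak height 3n^2/2, attained at x = 1/(2n)
peak = f.subs(x, 1/(2*n))
assert sp.simplify(peak - sp.Rational(3, 2)*n**2) == 0
```

Meanwhile, for any fixed $x > 0$ the bump slides past it once $n > 1/x$, so $f_n(x) = 0$ eventually: pointwise convergence to $0$ with integrals $n \to +\infty$, exactly as stated.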
Event detail Probabilistic Operator Algebra Seminar: An Elementary Approach to Free Gibbs States with Convex Potentials Seminar | January 28 | 2-4 p.m. | 736 Evans Hall David Andrew Jekel, UCLA We present an alternative approach to the theory of free Gibbs states with convex potentials. Instead of solving SDE's, we combine PDE techniques with a notion of asymptotic approximability by trace polynomials for a sequence of functions on $M_N(\mathbb C)_{sa}^m$ to prove the following. Suppose $\mu _N$ is a probability measure on $M_N(\mathbb C)_{sa}^m$ given by uniformly convex and semi-concave potentials $V_N$, and suppose that the sequence $DV_N$ is asymptotically approximable by trace polynomials. Then the moments of $\mu _N$ converge to a noncommutative law λ. Moreover, the free entropies $\chi (\lambda )$, $\underline {\chi }(\lambda )$, and $\chi ^*(\lambda )$ agree and equal the limit of the normalized classical entropies of $\mu _N$. We also sketch further applications to conditional expectations, relative entropy, and free transport for these free Gibbs states.
The Fibonacci sequence is defined to be $u_1=1$, $u_2=1$, and $u_n=u_{n-1}+u_{n-2}$ for $n\ge 3$. Note that $u_2=1$ is a definition, and we might just as well have set $u_2=\pi$ or any other number. Since $u_2$ shares no relation to $u_1$ (without considering any other $u_k$), we can't use induction to go from the case $n=1$ to $n=2$. If we attempted it anyway, it would go something like this: Base case $n=1$: (proved in the question statement) Inductive step $(k)\rightarrow (k+1)$: Assume $u_k=\frac{\alpha^k-\beta^k}{\sqrt{5}}$. Then we have the formula $u_{k+1}=u_k+u_{k-1}$. How can we apply this? We only have information about $u_k$, but we also need information about $u_{k-1}$ to get information about $u_{k+1}$. Okay, so weak induction doesn't quite do it for us. Let's be lax and allow ourselves at this point to switch to strong induction so we can also assume that $u_{k-1}=\frac{\alpha^{k-1}-\beta^{k-1}}{\sqrt{5}}$. Brilliant! Now we can just do some plugging in and fiddling and get $$u_{k+1}=u_k+u_{k-1}=\frac{\alpha^{k}-\beta^{k}}{\sqrt{5}}+\frac{\alpha^{k-1}-\beta^{k-1}}{\sqrt{5}}=\frac{\alpha^{k+1}-\beta^{k+1}}{\sqrt{5}},$$ where the second equality comes from the formulas $\beta^2=\beta+1$ and $\alpha^2=\alpha+1$. This gives us exactly what we wanted, right? Unfortunately, the answer is no. There is a huge gaping flaw with this proof, and that is that it doesn't work for $n=2$, so it doesn't work for any $n\ge2$ either. Here's the issue: When we did our inductive step, we used the recurrence formula $u_{k+1}=u_k+u_{k-1}$, but this formula $\textit{isn't true}$ for $k+1=2$. In this case we have $u_2=u_1+u_0$, but we haven't defined $u_0$! In our world, $u_0$ $\textit{doesn't even exist}$. Since the formula $u_{n}=u_{n-1}+u_{n-2}$ is only valid for $n\geq 3$, we must prove the $n=2$ case separately as part of our base cases, and once we have done that, the above proof will be correct.
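Numerically, the closed form (Binet's formula) and the recurrence do agree once both base cases hold; a quick check, where floating point is exact enough for small $n$:

```python
import math

def fib(n):
    """u_1 = u_2 = 1, u_n = u_{n-1} + u_{n-2} for n >= 3."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

alpha = (1 + math.sqrt(5))/2   # the two roots of t^2 = t + 1
beta = (1 - math.sqrt(5))/2

# Binet's formula matches the recurrence for every n checked
for n in range(1, 30):
    binet = (alpha**n - beta**n)/math.sqrt(5)
    assert round(binet) == fib(n)
```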
I recently read the article Nonce-Based Symmetric Encryption by Rogaway, where he presents two different notions of indistinguishability, which he calls ind$ and ind, respectively. Here are the definitions of these two notions: First, let $A^g$ be an algorithm with access to an oracle $g$, and let $\Pi = (\mathcal{E},\mathcal{D})$ be an encryption scheme with key space $\mathrm{Key}$. Ind\$ is then defined as follows: $$ \mathbf{Adv}_{\Pi}^{\mathrm{ind\$}} = \mathrm{Pr}[K \xleftarrow{\$} \mathrm{Key} : A ^{\mathcal{E}_K(\cdot)} \Rightarrow 1] - \mathrm{Pr}[A ^{\$(\cdot)} \Rightarrow 1],$$ where $\$(\cdot)$ is a random oracle, returning random bits equal to the block size of $\mathcal{E}$. In other words: ind\$ asks an adversary to distinguish between messages encrypted by the real encryption scheme and random bits. Ind is defined as: $$ \mathbf{Adv}_{\Pi}^{\mathrm{ind}} = \mathrm{Pr}[K \xleftarrow{\$} \mathrm{Key} : A ^{\mathcal{E}_K(\cdot)} \Rightarrow 1] - \mathrm{Pr}[K \xleftarrow{\$} \mathrm{Key} : A ^{\mathcal{E}_K(0^{|\cdot|})} \Rightarrow 1].$$ That is, ind asks an adversary, when querying the input message $M$ to the oracles, to distinguish between the real encryption of $M$ and the encryption of $0^{|M|}$. He then claims that "It is easy to verify that the ind$-notion of security implies the ind-notion, and by a tight reduction". My question: Intuitively, the implication seems easy enough: If $\Pi$ is ind\$-secure, then the encryption of $0^{|M|}$ will be indistinguishable from random, so we just get the "ind\$-game". However, how would you go about showing the tightness of the reduction? Usually I'm used to doing this by a reduction like so: assume you have an ind-adversary that breaks the ind-security; how can you turn this into an (effective) adversary against the ind\$-security of $\Pi$? But I don't really see how an adversary against ind can be turned into an (effective) adversary against ind\$.
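For what it is worth, the standard way to see this is not a single black-box reduction but a triangle-inequality (hybrid) argument; the sketch below is my reconstruction under the usual conventions, not a quotation from the paper. Take $B_1 = A$ and let $B_2$ be the adversary that runs $A$ but replaces every query $M$ by $0^{|M|}$ (legitimate because the $\$$-oracle's replies depend only on query lengths, so the middle game is identical for both interpretations):

```latex
\begin{aligned}
\mathbf{Adv}_{\Pi}^{\mathrm{ind}}(A)
 &= \Pr[A^{\mathcal{E}_K(\cdot)} \Rightarrow 1]
    - \Pr[A^{\mathcal{E}_K(0^{|\cdot|})} \Rightarrow 1] \\
 &= \bigl(\Pr[A^{\mathcal{E}_K(\cdot)} \Rightarrow 1]
    - \Pr[A^{\$(\cdot)} \Rightarrow 1]\bigr)
  + \bigl(\Pr[A^{\$(\cdot)} \Rightarrow 1]
    - \Pr[A^{\mathcal{E}_K(0^{|\cdot|})} \Rightarrow 1]\bigr) \\
 &\le \mathbf{Adv}_{\Pi}^{\mathrm{ind\$}}(B_1)
    + \mathbf{Adv}_{\Pi}^{\mathrm{ind\$}}(B_2)
 \;\le\; 2\,\mathbf{Adv}_{\Pi}^{\mathrm{ind\$}}(B),
\end{aligned}
```

where $B$ is whichever of $B_1, B_2$ has the larger advantage. Both run in essentially the same time and query budget as $A$, so the reduction is tight up to the factor $2$: any ind-adversary yields an ind\$-adversary with at least half its advantage.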
The risk of collision is only theoretical; it will not happen in practice. Time spent worrying about such a risk of collision is time wasted. Consider that even if you have $2^{90}$ 1MB blocks (that's a billion of billions of billions of blocks -- stored on 1TB hard disks, the disks would make a pile as large as the USA and several kilometers high), risks of ...

how can for example SHA-256 be unique if there is only a limited number of them?! Where your issue occurs is that they're not unique. It's just very improbable that they'll reoccur. Unique in this context is not a mathematical definition, it's a humanist one. In terms of human numbers, $ 2^{256} $ = ...

This answer is based on the work by AleksanderRas, although my conclusion is different. First, to lay out a definition, a hash is a function that takes an arbitrary length input to a fixed length output. For example, MD5 takes any input and produces a 128 bit output. A cryptographic hash is a hash function which has certain additional security properties. ...

The functions considered are binary functions of 3 bits to 1 bit (extended to bit vectors, that is bitwise functions). There are $2^{(2^3)}=256$ such functions. All the functions considered are balanced; that is, there is an equal number of input combinations for which the function outputs 0 and for which the function outputs 1. That reduces the number of ...

A new result shows how to generate single block MD5 collisions, including an example collision:
Message 1
Message 2
> md5sum message1.bin message2.bin
> 008ee33a9d58b51cfeb425b0959121c9 message1.bin
> 008ee33a9d58b51cfeb425b0959121c9 message2.bin
There is an earlier example of a single block collision, but no technique for generating it was ...

I know that MD5 should not be used for password hashing, and that it also should not be used for integrity checking of documents.
There are way too many sources citing MD5 preimaging attacks and MD5's low computation time.

There is no published preimage attack on MD5 that is cheaper than a generic attack on any 128-bit hash function. But you shouldn't rely ...

MD5 was intended to be a cryptographic hash function, and one of the useful properties for such a function is its collision-resistance. Ideally, it should take work comparable to around $2^{64}$ tries (as the output size is $128$ bits, i.e. there are $2^{128}$ different possible values) to find a collision (two different inputs hashing to the same output). (...

Just to show you how easy it is today to create collisions on MD5: one could create collisions using Marc Stevens' HashClash on AWS at an estimated cost of around $0.65 per collision. These 2 images have the same md5 hash: 253dd04e87492e4fc3471de5e776bc3d. If you want to test it yourself and the images below do not give you the MD5 hash ...

That depends on what you want to use the hash function for. For signing documents, sha2 (e.g. sha512) is considered secure. For storing passwords, you should use one of the algorithms dedicated for this purpose: e.g. bcrypt, sha512crypt or scrypt. In order to slow down an attacker, these algorithms apply the hash functions many times with an input that ...

Yes, there are currently no known attacks on HMAC-MD5. In particular, after the first collision attacks on MD5, Mihir Bellare (one of the inventors of HMAC) came up with a new security proof for HMAC that doesn't require collision resistance: "Abstract: HMAC was proved by Bellare, Canetti and Krawczyk (1996) to be a PRF assuming that (1) the underlying ...

You are right, hashes won't all be unique, as you have already shown. The important part is practical collisions - how many SHA-512 hashes can the whole earth generate in its lifetime?
Definitely much less than $2^{512}$; it's even less than $2^{128}$. Let's guess unrealistically high and say we generate these $2^{128}$ hashes from perfectly random input, ...

If you follow the reference for the alleged preimage attack on MD5, you will see that although the time cost is $2^{123.4}$ steps, the memory cost is $2^{45} \times 11$ words of memory, which has a far higher area*time cost than a smart attacker would use; a smart attacker would fit 32 CPU cores or MD5 circuits in parallel into much less die area and get an ...

MD5 and SHA-1 have a lot in common; SHA-1 was clearly inspired on either MD5 or MD4, or both (SHA-1 is a patched version of SHA-0, which was published in 1993, while MD5 was described as a RFC in 1992). The main structural differences are the following: SHA-1 has a larger state: 160 bits vs 128 bits. SHA-1 has more rounds: 80 vs 64. SHA-1 rounds have an ...

The answer is 1 bit (Hamming-distance = 1) for any cryptographic hash algorithm. There are definitely collisions, since the digest of the MD5 algorithm is always 128 bits long but there are more than $2^{128}$ possible inputs. We can explain this due to the Pigeonhole principle. Mathematical explanation: Let's say we take an input message of 3 bits: There are ...

Two different strings in hex format:
4dc968ff0ee35c209572d4777b721587d36fa7b21bdc56b74a3dc0783e7b9518afbfa200a8284bf36e8e4b55b35f427593d849676da0d1555d8360fb5f07fea2
4dc968ff0ee35c209572d4777b721587d36fa7b21bdc56b74a3dc0783e7b9518afbfa202a8284bf36e8e4b55b35f427593d849676da0d1d55d8360fb5f07fea2
both have MD5 hash: 008ee33a9d58b51cfeb425b0959121c9
Example: ...

There is an important misconception on your part: in general cryptographic hashes such as MD5, SHA-1 or SHA-512 should not be used to directly hash a password. A password hash or PBKDF should be used. Examples are PBKDF2, bcrypt, scrypt and Argon2. These functions also take a salt and work factor to provide additional protection. There are a few problems ...
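The collision pair quoted above can be verified directly with the standard library (this is the well-known published single-block MD5 collision; the two 64-byte messages differ in only a few bits):

```python
import hashlib

# The two colliding single-block (64-byte) messages quoted above
m1 = bytes.fromhex(
    "4dc968ff0ee35c209572d4777b721587"
    "d36fa7b21bdc56b74a3dc0783e7b9518"
    "afbfa200a8284bf36e8e4b55b35f4275"
    "93d849676da0d1555d8360fb5f07fea2")
m2 = bytes.fromhex(
    "4dc968ff0ee35c209572d4777b721587"
    "d36fa7b21bdc56b74a3dc0783e7b9518"
    "afbfa202a8284bf36e8e4b55b35f4275"
    "93d849676da0d1d55d8360fb5f07fea2")

# Different inputs, identical MD5 digests
assert m1 != m2
assert hashlib.md5(m1).hexdigest() == hashlib.md5(m2).hexdigest()
```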
Do hashing algorithms have an upper bound on the input space? They can, but they don't have to, and it depends on their specification. All Merkle-Damgård based hash functions do have an upper limit, because appending the message length simplifies the security proof and the backdoor-resistance of the function, and they usually use a fixed-length encoding of ...

MD5 is ok here, as the usual cryptographic attacks do not apply in this scenario. The probability of an accidental MD5 collision is much less than the usual probability for a soft error. For details read more. MD5 is currently considered too weak to work as a cryptographic hash. However, for all traditional (i.e. non-cryptographic) hash uses MD5 is often perfectly fine. ...

Here are some other interesting examples. One of them is two downloadable executables that have the same MD5 hash, but are actually different, and produce different (safe) results when run! So much for using MD5 hashes to ensure download file integrity :-(
http://www.mscs.dal.ca/~selinger/md5collision/

To answer your question, we must first state that for an integer $x$, we define MD5($x$) to be the MD5 hash of the encoding of $x$ as a sequence of bits. Indeed, MD5 expects a sequence of bits as input, not an integer. We should choose a conventional encoding; I select big-endian. Thus, integer $44$ encodes as a sequence of 6 bits: 101100. One may note that ...

Right now, the best published attack against MD5's preimage resistance (first preimage, actually, but it applies to second preimage resistance as well) finds preimages at $2^{123.4}$ average cost, which is slightly better than the generic attack (average cost of $2^{128}$), but still way beyond the technologically feasible. The attack rebuilds the ...
There are two answers to this: one practical, and one theoretical.
First, the practical one: MD5 is a broken hash function, we know of collisions for it, and a quick web search turned up a collision with a hamming distance of 6.
Second, the theoretical one: Most cryptographic hash functions are designed to be a reasonable approximation of a random ...

Among the options for a replacement of MD5 as a hash function: If at all possible, you should increase the width of the hash for strong collision resistance, and use an at-least-256-bit member of the SHA-2, or perhaps the new SHA-3 family. The collision resistance of any 128-bit hash can be broken by educated brute force and about $2^{65}$ hashes (which is ...

The algorithm (now reasonably clear) is reminiscent of a block cipher in CFB mode, with $random$ as the IV (which can be public), $secret$ as the key, and MD5 used as keystream generator instead of the block cipher. Decryption works as in CFB:
\begin{align*}M_1 &= C_1 \oplus \operatorname{MD5}( secret\mathbin\|random )\\M_n &= C_n \oplus \...

The echo command appends a new line at the end, by default. The -n option omits this character. Compare these two executions:
> echo -n "test123" | md5sum
cc03e747a6afbbcbf8be7668acfebee5
> echo "test123" | md5sum
4a251a2ef9bbf4ccc35f97aba2c9cbda
So the difference between the hash values is simply caused by the new line character.

There is no timing attack possible on MD5 as practically implemented on most platforms. That's because MD5 uses only 32-bit addition, 32-bit bitwise boolean operators, and constant rotations/shifts, which exhibit no data-dependent timing for any reasonable implementation, even written without consideration for resistance to timing attacks. There is however ...
Surprisingly enough, it would appear that generating a simultaneous collision wouldn't be that much more expensive than generating a single collision for SHA-1. The basic idea is to form a $2^{64}$-wide multicollision on SHA-1; that is, $2^{64}$ distinct messages that all SHA-1 hash to the same value. We can do this by using Joux's idea of finding ...

The risk of collision is only theoretical; it will not happen in practice. Except in one particular instance. The description given implies that this system is going to be some form of de-duplicating filesystem or backup system. For most users, the collision risk is tiny. But, for one particular class of users, there is a much larger risk. Those users ...

MD5 - Can I use MD5 as a two-way function? If I can break the data in 64-bit portions, will I be able to recover the original message without a pre-calculated lookup table? MD5 is a hash function, not a cipher. Differently stated: you will not be able to encrypt or decrypt anything by simply using a hash function. You could compare MD5 hashes with each ...
User:Dfeuer/Open Set may not be Open Ball
{{WIP|not there yet}}
== Theorem ==
== Proof ==
Let $x, y, z \in A$ be $3$ distinct points in $M$ such that:
:$d \left({x, y}\right) \le d \left({y, z}\right)$
:$d \left({x, z}\right) \le d \left({y, z}\right)$
Let $r = \dfrac {\min \left\{{d \left({x, y}\right), d \left({x, z}\right)}\right\}} 4$.
Let $U = B_r(y) \cup B_r(z)$.
Then because the union of open sets is open, $U$ is open.
Suppose for the sake of contradiction that $U$ is an open ball.
Then there must be a point $w$ and a positive real number $q$ such that $U = B_q(w)$.
Since $w \in U$, we must have $w \in B_r(y)$ or $w \in B_r(z)$.
Suppose without loss of generality that $w \in B_r(y)$.
By the triangle inequality:
:$d(x,w) \le d(x,y) + d(y,w) < d(x,y) + r$
We also have $d(y,z) \le d(y,w) + d(w,z) \le r + d(w,z) < r + q$.
Thus $d(w,y) < q$ and $d(w,z) < q$.
By the triangle inequality:
:$d(y,z) < 2q$
Then by the triangle inequality:
:$d(x,w) \le d(x,y) + d(y,w) < d(x,y) + r$
By the definition of $r$:
:$d(x,w) \le d(x,y) + d(y,w) < d(x,y) (1+1/4)$
Because $r < d \left({x,y}\right) \le d \left({x,z}\right)$:
:$x \notin B_r(y)$
:$x \notin B_r(z)$
Thus $x \notin U$.
We will show that $x \in U$, a contradiction:
:$d(x,w) \le d(x,y) + d(y,w) < d(x,y) + q$
$\blacksquare$
[[Category:Proven Results]]
== Sources ==
* 1975: [[Mathematician:W.A. Sutherland|W.A. Sutherland]]: [[Book:W.A. Sutherland/Introduction to Metric and Topological Spaces|''Introduction to Metric and Topological Spaces'']] ... [[Half-Open Real Interval is not Open Set|(previous)]] ... [[Empty Set is Open in Metric Space|(next)]]: $2.3$: Open sets in metric spaces
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would not have inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle 12 secs ago @egreg you are credited in the file, so you inherit the blame :-) @UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe; it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary. The package isn't clear what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know or not agree with a "-1" policy. I don't think it was ever really that regularly enforced, just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies: when you're new and your question gets downvoted too much, this might cause the wrong impression.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for it, and I think that there should be a bit of reasoning with the opinion-based answers, some objective arguments for why this is good. See for example the other Emacs answer (still subjectively a bad answer); that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes, but many people just join for a while and come from other sites where downvoting is more common, so I think it is impossible to expect there is no multiple downvoting; the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let k be a circle with centre M and radius r". @AlanMunn definitions.net/definition/describe gives a Webster's definition: to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle. If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com, create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is quite possibly still harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences in word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin are kind of a hobby of mine. Needless to say, more than once the contemporary meanings didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx:

\documentclass{book}
\usepackage{showidx}
\usepackage{imakeidx}
\makeindex
\begin{document}
Test\index{xxxx}
\printindex
\end{document}

generates the error:

! Undefined control sequence.
<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{...

@EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date, not the date it was last run through tex. and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached. Do any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in the INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself reads the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year, \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben" (English "describe") comes from the mathematical technical language of the 16th century, that is, from Middle High German, and essentially means "construct". And that derives from the original meaning of describe: "to make a curved movement". This language is used in the literary style of the 19th and 20th centuries and in the GDR. You can have that in English too: scribe (verb): to score a line on something with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which subtends the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with base $|M_b B|$ and angle $\alpha$ at the point $P$. The altitude to the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the apex. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the vertex. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described about $S$ of radius…
Computing Variational Derivatives

Allow me to start with a definition. Given a function $u = u(x)$; a function $L$ of $x$, $u$, and all derivatives of $u$; and $I = \int L(x, u, u_x, u_{xx}, \ldots)\,dx$, the variational derivative of $I$ is defined as $\frac{\delta I}{\delta u} := \frac{\partial L}{\partial u} - \frac{d}{dx} \frac{\partial L}{\partial u_x} + \frac{d^2}{dx^2} \frac{\partial L}{\partial u_{xx}} - \cdots$ For example, with $L=u^3 + u_x^2/2$ we have $\frac{\delta I}{\delta u} = 3u^2 - u_{xx}$.

From my poking around I suspect that Sage doesn't have the option of computing the variational derivative of an integral operator. However, I'd like to write some code that does this. Does anyone have any suggestions on how to go about doing this?

One issue is that if you define a function

sage: u = function('u', x)

you can take derivatives of u with respect to x, of course, but not with respect to u.

sage: u.derivative(u)  # this should be equal to 1
Traceback (click to the left of this block for traceback)
...
TypeError: argument symb must be a symbol
sage: u_x = u.derivative(x)
sage: L = u_x^2/2
sage: L.derivative(u_x)  # this should be equal to u_x
Traceback (click to the left of this block for traceback)
...
TypeError: argument symb must be a symbol

My thoughts include doing some string parsing so I can take the necessary derivatives with respect to $u, u_x, u_{xx}, \ldots$ but that's starting to sound a bit messy. Any suggestions on how to cleanly go about doing this in Sage would be greatly appreciated!
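In the meantime, one language-agnostic sanity check for any symbolic implementation is to differentiate a discretized version of $I$ numerically and compare against the known result $\frac{\delta I}{\delta u} = 3u^2 - u_{xx}$ for $L = u^3 + u_x^2/2$. The sketch below is plain Python rather than Sage, and all function names are my own:

```python
import math

def discrete_I(u, h):
    # I[u] ~ sum_i (u_i^3 + 0.5*ux_i^2) * h on a periodic grid,
    # with ux_i the central difference (u_{i+1} - u_{i-1}) / (2h)
    n = len(u)
    total = 0.0
    for i in range(n):
        ux = (u[(i + 1) % n] - u[(i - 1) % n]) / (2 * h)
        total += (u[i] ** 3 + 0.5 * ux ** 2) * h
    return total

def variational_derivative(u, h, eps=1e-6):
    # (dI/du_i) / h approximates (delta I / delta u)(x_i) as the grid refines
    grads = []
    for i in range(len(u)):
        up = list(u); up[i] += eps
        dn = list(u); dn[i] -= eps
        grads.append((discrete_I(up, h) - discrete_I(dn, h)) / (2 * eps * h))
    return grads

n = 64
h = 2 * math.pi / n
u = [math.sin(i * h) for i in range(n)]          # u(x) = sin(x), periodic
num = variational_derivative(u, h)
# analytic target: 3u^2 - u_xx = 3 sin^2(x) + sin(x)
exact = [3 * math.sin(i * h) ** 2 + math.sin(i * h) for i in range(n)]
err = max(abs(a - b) for a, b in zip(num, exact))
print(err)  # small (O(h^2)): discrete gradient approximates 3u^2 - u_xx
```

A symbolic routine (however implemented) can be validated against this discrete gradient on a few test profiles before trusting it.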
Analytic results for the linear stability of the equilibrium point in Robe's restricted elliptic three-body problem

1. Department of Mathematics, Zhejiang University, Hangzhou 310027, Zhejiang, China
2. School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, Hebei, China

We study Robe's restricted three-body problem. Such a motion was first studied by A. G. Robe in [

Keywords: Restricted three-body problem, equilibrium point, linear stability, Maslov-type $\omega$-index.

Mathematics Subject Classification: Primary: 70F07, 70H14; Secondary: 34C25.

Citation: Qinglong Zhou, Yongchao Zhang. Analytic results for the linear stability of the equilibrium point in Robe's restricted elliptic three-body problem. Discrete & Continuous Dynamical Systems - A, 2017, 37 (3): 1763-1787. doi: 10.3934/dcds.2017074

References:
[3] P. P. Hallen and D. N. Rana, The existence and stability of equilibrium points in the Robe's restricted three-body problem,
[4] X. Hu, Y. Long and S. Sun, Linear stability of elliptic Lagrangian solutions of the classical planar three-body problem via index theory,
[6] X. Hu and S. Sun, Index and stability of symmetric periodic orbits in Hamiltonian systems with its application to figure-eight orbit,
[14] A. K. Shrivastava and D. Garain, Effect of perturbation on the location of libration point in the Robe restricted problem of three bodies,
[15] K. T. Singh, B. S. Kushvah and B. Ishwar, Stability of triangular equilibrium points in Robe's generalised restricted three body problem,
[16] Q. Zhou and Y. Long, Equivalence of linear stabilities of elliptic triangle solutions of the planar charged and classical three-body problems,
[18] Q. Zhou and Y. Long, The reduction on the linear stability of elliptic Euler-Moulton solutions of the $n$-body problem to those of $3$-body problems,
Markdown example

Welcome to Markdown!

Hey! I’m your first Markdown document in StackEdit 1. Don’t delete me, I’m very helpful! I can be recovered anyway in the Utils tab of the Settings dialog.

Documents

StackEdit stores your documents in your browser, which means all your documents are automatically saved locally and are accessible offline!

Note: StackEdit is accessible offline after the application has been loaded for the first time. Your local documents are not shared between different browsers or computers. Clearing your browser’s data may delete all your local documents! Make sure your documents are synchronized with Google Drive or Dropbox (check out the Synchronization section).

Create a document

The document panel is accessible using the button in the navigation bar. You can create a new document by clicking New document in the document panel.

Switch to another document

All your local documents are listed in the document panel. You can switch from one to another by clicking a document in the list or you can toggle documents using Ctrl+[ and Ctrl+].

Rename a document

You can rename the current document by clicking the document title in the navigation bar.

Delete a document

You can delete the current document by clicking Delete document in the document panel.

Export a document

You can save the current document to a file by clicking Export to disk from the menu panel.

Tip: Check out the Publish a document section for a description of the different output formats.

Synchronization

StackEdit can be combined with Google Drive and Dropbox to have your documents saved in the Cloud. The synchronization mechanism takes care of uploading your modifications or downloading the latest version of your documents.

Note: Full access to Google Drive or Dropbox is required to be able to import any document in StackEdit. Permission restrictions can be configured in the settings. Imported documents are downloaded in your browser and are not transmitted to a server.
If you experience problems saving your documents on Google Drive, check and optionally disable browser extensions, such as Disconnect.

Open a document

You can open a document from Google Drive or Dropbox by opening the Synchronize sub-menu and by clicking Open from…. Once opened, any modification in your document will be automatically synchronized with the file in your Google Drive / Dropbox account.

Save a document

You can save any document by opening the Synchronize sub-menu and by clicking Save on…. Even if your document is already synchronized with Google Drive or Dropbox, you can export it to another location. StackEdit can synchronize one document with multiple locations and accounts.

Synchronize a document

Once your document is linked to a Google Drive or a Dropbox file, StackEdit will periodically (every 3 minutes) synchronize it by downloading/uploading any modification. A merge will be performed if necessary and conflicts will be detected. If you have just modified your document and you want to force the synchronization, click the button in the navigation bar.

Note: The button is disabled when you have no document to synchronize.

Manage document synchronization

Since one document can be synchronized with multiple locations, you can list and manage synchronized locations by clicking Manage synchronization in the Synchronize sub-menu. This will let you remove synchronization locations that are associated to your document.

Note: If you delete the file from Google Drive or from Dropbox, the document will no longer be synchronized with that location.

Publication

Once you are happy with your document, you can publish it on different websites directly from StackEdit. As for now, StackEdit can publish on Blogger, Dropbox, Gist, GitHub, Google Drive, Tumblr, WordPress and on any SSH server.

Publish a document

You can publish your document by opening the Publish sub-menu and by choosing a website.
In the dialog box, you can choose the publication format:

Markdown, to publish the Markdown text on a website that can interpret it (GitHub, for instance),
HTML, to publish the document converted into HTML (on a blog for example),
Template, to have full control of the output.

Note: The default template is a simple webpage wrapping your document in HTML format. You can customize it in the Advanced tab of the Settings dialog.

Update a publication

After publishing, StackEdit will keep your document linked to that publication which makes it easy for you to update it. Once you have modified your document and you want to update your publication, click on the button in the navigation bar.

Note: The button is disabled when your document has not been published yet.

Manage document publication

Since one document can be published in multiple locations, you can list and manage publish locations by clicking Manage publication in the menu panel. This will let you remove publication locations that are associated to your document.

Note: If the file has been removed from the website or the blog, the document will no longer be published on that location.

Markdown Extra

StackEdit supports Markdown Extra, which extends Markdown syntax with some nice features.

Tip: You can disable any Markdown Extra feature in the Extensions tab of the Settings dialog.

Note: You can find more information about Markdown syntax here and the Markdown Extra extension here.
Tables

Markdown Extra has a special syntax for tables:

| Item     | Value |
| -------- | ----- |
| Computer | $1600 |
| Phone    | $12   |
| Pipe     | $1    |

You can specify column alignment with one or two colons:

| Item     | Value | Qty |
| :------- | ----: | :-: |
| Computer | $1600 |  5  |
| Phone    | $12   | 12  |
| Pipe     | $1    | 234 |

Definition Lists

Markdown Extra has a special syntax for definition lists too:

Term 1
Term 2
:   Definition A
:   Definition B

Term 3
:   Definition C
:   Definition D

	> part of definition D

Fenced code blocks

GitHub’s fenced code blocks are also supported with Highlight.js syntax highlighting:

```
// Foo
var bar = 0;
```

Tip: To use Prettify instead of Highlight.js, just configure the Markdown Extra extension in the Settings dialog.

Note: You can find more information:

Footnotes

You can create footnotes like this 2.

SmartyPants

SmartyPants converts ASCII punctuation characters into “smart” typographic punctuation HTML entities. For example:

| ASCII | HTML |
| ----- | ---- |
| Single backticks: 'Isn't this fun?' | ‘Isn’t this fun?’ |
| Quotes: "Isn't this fun?" | “Isn’t this fun?” |
| Dashes: -- is en-dash, --- is em-dash | – is en-dash, — is em-dash |

Table of contents

You can insert a table of contents using the marker [TOC]:

[TOC]

MathJax

You can render LaTeX mathematical expressions using MathJax, as on math.stackexchange.com:

The Gamma function satisfying $\Gamma(n) = (n-1)!\quad\forall n\in\mathbb N$ is via the Euler integral

$$\Gamma(z) = \int_0^\infty t^{z-1}e^{-t}\,dt\,.$$

Tip: To make sure mathematical expressions are rendered properly on your website, include MathJax into your template:

<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML"></script>

Note: You can find more information about LaTeX mathematical expressions here.

UML diagrams

You can also render sequence diagrams like this:

Alice->Bob: Hello Bob, how are you?
Note right of Bob: Bob thinks
Bob-->Alice: I am good thanks!

And flow charts like this:

st=>start: Start
e=>end
op=>operation: My Operation
cond=>condition: Yes or No?
st->op->cond
cond(yes)->e
cond(no)->op

Note: You can find more information:

Support

StackEdit
The bonding of alkali cations to water, to EDTA, and to crown ethers is strictly electrostatic in all three cases, at least based upon QTAIM analysis$^{[1]}$ of representative systems. For this analysis, I ran gas-phase quantum chemical optimizations on $[\ce{Li}(\text{12-crown-4})]^+$, $[\ce{Na}(\text{EDTA})]^{3-}$, and $[\ce{Na}(\ce{H2O})_6]^+$ using ORCA v3.0.3. (I can post the computation inputs/outputs if anyone is interested.) I also optimized $\ce{NaCl}$ as an additional system with uncontroversial ionic bonding. While the gas-phase results will poorly represent a variety of aspects of the dissolved species, I think they should capture well enough the characteristics of the wavefunction necessary to study the types of bonding involved. Other gas-phase calculations I've run on, e.g., $[\ce{Cu}(\ce{H2O})_6]^{2+}$ have optimized smoothly to the expected octahedral ligand arrangement, lending credence to this assumption. Some notes on the optimizations: Geometry optimization of $[\ce{Li}(\text{12-crown-4})]^+$ converged readily. For $[\ce{Na}(\text{EDTA})]^{3-}$, despite starting in a prototypical octahedral chelation geometry, the EDTA molecule immediately began to 'loosen its grip' on the $\ce{Na+}$, clearly heading toward a less compact configuration. In fact, one of the oxygen atoms actually discarded its bonding interaction with the $\ce{Na+}$ in favor of an intramolecular hydrogen bond. I performed the QTAIM analysis on the last geometry obtained before ORCA reached its iteration limit in the geometry optimization procedure. Similarly, for $[\ce{Na}(\ce{H2O})_6]^+$, despite starting the optimization in a prototypical octahedral coordination geometry, the water ligands 'wandered' throughout the optimization process, in a fashion loosely reminiscent of a molecular dynamics run. As with the EDTA complex, I just chose the final geometry after the optimizer halted as the one on which I ran the QTAIM analysis.
For illustration, I generated GIF animations of the $[\ce{Na}(\text{EDTA})]^{3-}$ and $[\ce{Na}(\ce{H2O})_6]^+$ optimizations. The individual frames were produced with the 'Movie Maker' module of VMD 1.9.1 and the final GIFs were generated using EZgif.com. Several metrics generated by QTAIM analysis argue for strictly ionic bonding of the alkali metal atoms in these systems: Low electron density $(\rho \substack{<\\\sim}0.1)$ and positive density Laplacian $(\nabla^2\rho>0)$ at the relevant line critical points between the alkali cation and the coordinating oxygen (or nitrogen) atoms High localization index $(\mathrm{LI}\substack{>\\\sim}0.9)$ in the alkali atom basin Low delocalization index $(\mathrm{DI}\substack{<\\\sim}0.75)$ between the alkali atom basin and those of the coordinating atoms Strictly speaking, in addition to $\#1$ above, the terms of the Laplacian perpendicular to the bond path must also be small in magnitude in order to diagnose ionic bond character$^{[2]}$, but I have omitted those values here. The table below presents these metrics for the above representative chemical systems, as well as some reference molecules possessing uncontroversial bonding types. Due to the fairly regular geometry of $[\ce{Li}(\text{12-crown-4})]^+$, the metrics didn't vary much throughout the system and I averaged them for each bond and atom type. Despite the lack of a deep energy well, the metrics were also similar for the six oxygens in $[\ce{Na}(\ce{H2O})_6]^+$, so I averaged the values here as well. For the irregular geometry of $[\ce{Na}(\text{EDTA})]^{3-}$, the values differed enough that I chose to report them individually; the numbering of the coordinating atoms is arbitrary. All non-literature data were generated by MultiWFN v3.3.7 on Windows 7 using the default settings and the "Medium" grid for generation of the atomic basins for the $\mathrm{DI}$ and $\mathrm{LI}$ calculations.
All values are in atomic units: $\rho\equiv{e^-\over\mathrm{Bohr}^3}$ and $\nabla^2\rho\equiv{e^-\over\mathrm{Bohr}^5}$ ($\mathrm{DI}$ and $\mathrm{LI}$ are dimensionless). $$~ \\\textbf{QTAIM Results} \\\begin{array}{ccccccccc}\hline\mathbf{Species} & ~ & \mathbf{Bond} & \rho_\mathbf{LCP} & \mathbf{\left(\nabla^2\rho\right)_\mathbf{LCP}} & \mathbf{DI} & ~ & \mathbf{Atom} & \mathbf{LI} \\\hline & & \ce{Li-O} & 0.036 & +0.26 & 0.077 & & \mathrm{Li} & 0.922 \\[\ce{Li}(\text{12-crown-4})]^+ & & \ce{O-C} & 0.25 & -0.50 & 0.856 & & \ce O & 0.870 \\ & & \ce{C-C} & 0.26 & -0.70 & 0.949 & & \ce C & 0.656 \\ & & \ce{C-H} & 0.28 & -1.01 & 0.901 & & \ce H & 0.420\\\hline& & \ce{Na-O}1 & 0.015 & +0.086 & 0.064 & & \ce{Na} & 0.974 \\& & \ce{Na-O}2 & 0.021 & +0.131 & 0.096 & & \ce{O}1 & 0.899 \\[\ce{Na}(\text{EDTA})]^{3-} & & \ce{Na-O}3 & 0.020 & +0.121 & 0.089 & & \ce{O}2 & 0.900 \\& & \ce{Na-N}1 & 0.014 & +0.074 & 0.056 & & \ce{O}3 & 0.900 \\& & \ce{Na-N}2 & 0.018 & +0.094 & 0.065 & & \ce{N}1 & 0.766 \\& & & & & & & \ce{N}2 & 0.769 \\\hline[\ce{Na}(\ce{H2O})_6]^+ & & \ce{Na-O} & 0.014 & +0.086 & 0.057 & & \ce{Na} & 0.975 \\& & & & & & & \ce{O} & 0.916 \\\hline[\ce{NaCl}]^0 & & \ce{Na-Cl} & 0.035 & +0.197 & 0.321 & & \ce{Na} & 0.978 \\& & & & & & & \ce{Cl} & 0.975 \\\hline[\ce{LiF}]^{0\,\ddagger} & & \ce{Li-F} & 0.079 & +0.749 & 0.179 & & \ce{Li} & 0.957 \\& & & & & & & \ce{F} & 0.991 \\\hline[\ce{CH4}]^{0\,\ddagger} & & \ce{C-H} & 0.291 & -1.168 & 0.980 & & \ce{C} & 0.661 \\& & & & & & & \ce{H} & 0.469 \\\hline^\ddagger~_{\text{Values from Ref. [2]}}\end{array}$$ As can clearly be seen, the interactions between the alkali metals and the ligating atoms are uniformly of ionic character: low electron density and positive Laplacian at the LCP; low $\mathrm{DI}$ between the paired basins, and quite high $\mathrm{LI}$ for the alkali metal basins. 
This is consistent with the ionic species $\ce{LiF}$ and $\ce{NaCl}$ in the table, and is in marked contrast to the covalent systems included. $^{[1]}$ More information can be found on Wikipedia and at RFW Bader's page at McMaster University. A more readable (but unfortunately paywalled) description is provided in Ref. [2]. $^{[2]}$Bader & Matta, Found Chem 15: 253 (2013). doi:10.1007/s10698-012-9153-1.
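As an aside (not part of the original answer), the diagnostic thresholds quoted above can be restated compactly in code; the function name is mine, and the sample values are taken directly from the table in the answer:

```python
# Encode the QTAIM closed-shell (ionic) diagnostics used above:
# low rho and positive Laplacian at the line critical point,
# plus a low delocalization index between the paired basins.
def looks_ionic(rho_lcp, lap_lcp, di):
    return rho_lcp < 0.1 and lap_lcp > 0 and di < 0.75

# (rho_LCP, Laplacian_LCP, DI) values from the table in the text
bonds = {
    "Li-O (12-crown-4)": (0.036, +0.26, 0.077),
    "Na-O2 (EDTA)":      (0.021, +0.131, 0.096),
    "Na-Cl":             (0.035, +0.197, 0.321),
    "C-H (crown CH2)":   (0.28, -1.01, 0.901),
}
for name, (rho, lap, di) in bonds.items():
    print(name, "ionic" if looks_ionic(rho, lap, di) else "shared/covalent")
```

Applied to the tabulated values, the alkali-ligand contacts and $\ce{NaCl}$ classify as ionic while the $\ce{C-H}$ bond does not, matching the discussion above.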
Determinant with Rows Transposed Theorem Proof Let $\mathbf A = \sqbrk a_n$ be a square matrix of order $n$. Let $\map \det {\mathbf A}$ be the determinant of $\mathbf A$. Let $1 \le r < s \le n$. Let $\rho$ be the transposition of $r$ and $s$: the permutation which exchanges $r$ and $s$ and fixes all other elements. From Parity of K-Cycle, $\map \sgn \rho = -1$. Let $\mathbf A' = \sqbrk {a'}_n$ be $\mathbf A$ with rows $r$ and $s$ transposed, so that $a'_{k j} = a_{\map \rho k \, j}$ for all $k$ and $j$. By the definition of a determinant: $\displaystyle \map \det {\mathbf A'} = \sum_\lambda \paren {\map \sgn \lambda \prod_{k \mathop = 1}^n a'_{k \map \lambda k} } = \sum_\lambda \paren {\map \sgn \lambda \prod_{k \mathop = 1}^n a_{\map \rho k \map \lambda k} }$ Reindex each product by $j = \map \rho k$ (recall $\rho$ is self-inverse) and substitute $\mu = \lambda \circ \rho$, for which $\map \sgn \lambda = \map \sgn \rho \map \sgn \mu$. As $\lambda$ runs over all permutations, so does $\mu$, and so: $\displaystyle \map \det {\mathbf A'} = \sum_\mu \paren {\map \sgn \rho \map \sgn \mu \prod_{j \mathop = 1}^n a_{j \map \mu j} }$ We can take $\map \sgn \rho = -1$ outside the summation because it is constant, and so we get: $\displaystyle \map \det {\mathbf A'} = \map \sgn \rho \sum_\mu \paren {\map \sgn \mu \prod_{j \mathop = 1}^n a_{j \map \mu j} } = -\map \det {\mathbf A}$ Hence the result. From Determinant of Transpose, it follows that the same applies to columns as well. $\blacksquare$
First note that your formula can be interpreted as the $n$th term of a sequence: $$\{n(2n+1)(2n-1)/3\}=1,10,35,84,165,...$$ You claim that your $n$th term formula gives the summation (more properly, the partial sums) of the squares of the first $n$ odd numbers. This implies that term $n=3$, for example, is the sum of the first three odd squares: $$35=1+9+25=1^2+3^2+5^2$$ This appears to be correct, but I have not tested beyond the first four terms. I would be interested to see a proof of the claim that this sequence gives the partial sums of the squares of odd numbers. This is the $n$th partial sum of the squares of odd numbers: $$S_n=\sum_{i=1}^{n}(2i-1)^2=1+9+25+...+(2n-1)^2$$ Note that odd numbers are produced by the expression $\{2i+1\}$ or $\{2i-1\}$ and even numbers by $\{2i\}$; this may help you construct sequences and series in the future. Let's examine some partial sums of this series: $$S_1=\sum_{i=1}^{1}(2i-1)^2=1$$$$S_2=\sum_{i=1}^{2}(2i-1)^2=1+9=10$$$$S_3=\sum_{i=1}^{3}(2i-1)^2=1+9+25=35$$$$S_4=\sum_{i=1}^{4}(2i-1)^2=1+9+25+49=84$$ Notice that the partial sums are indeed following your sequence (for the first four terms at least): $$1,10,35,84,...=\{n(2n+1)(2n-1)/3\}$$ So, we may suspect an equation between the partial sums of the squares of odd numbers and the sequence of the partial sums: $$\sum_{i=1}^{n}(2i-1)^2=\frac{n(2n+1)(2n-1)}{3}\equiv S_n$$ where "$\equiv$" means "is defined as", so your function $S_n$ can be calculated by either of the above expressions. UPDATE Given an arbitrary integer $N$, you seem to be seeking the sum of the squares of odd numbers up to but not exceeding $N$. That is, you desire the partial sum $S_n$ where none of the $n$ terms (all squares of odd numbers) exceeds $N$. (Correct me if I'm wrong).
I propose the following algorithm as a solution to your problem: For an arbitrary input integer $N$, define the integer index $n$: $$n\equiv \Big\lfloor\frac{\sqrt N+1}{2}\Big\rfloor$$ where $\lfloor x\rfloor$ is the floor function, rounding the input $x$ down to the nearest integer. I suppose I would show the derivation of the above expression if you request it, but I've already spent more time than I should have on this little puzzle. When the appropriate parameter value $n$ is obtained from the arbitrary input $N$, the value you seek is the $n$th partial sum of the squares of odd numbers: $$S_n\equiv\sum_{i=1}^{n}(2i-1)^2\\=\frac{n(2n+1)(2n-1)}{3}\\=\frac{1}{3}\Big(4n^3-n\Big)$$ where whichever form seems most convenient may be used for calculation. TL;DR I think the function you desire is $$f(N)\equiv\sum_{i=1}^{\lfloor(\sqrt N+1)/2\rfloor}(2i-1)^2\\=\frac{1}{3}\Big(4\Big\lfloor\frac{\sqrt N+1}{2}\Big\rfloor^3-\Big\lfloor\frac{\sqrt N+1}{2}\Big\rfloor\Big)$$ I have listed two equivalent forms for convenience. This function is intended to return the sum of the squares of odd numbers, truncated so that no term (odd number square) is larger than the input $N$. I invite you to tabulate values of $f(N)$ for integer inputs $N$ and let me know if this function fulfills your needs. If I may ask, what motivates your interest in such a function?
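If it helps with the tabulation, here is a quick brute-force check (in Python; the function names mirror the notation above) of both the closed form for $S_n$ and the proposed $f(N)$:

```python
import math

def S(n):
    # closed form n(2n+1)(2n-1)/3; exact in integer arithmetic,
    # since 4n^3 - n is always divisible by 3
    return n * (2 * n + 1) * (2 * n - 1) // 3

def f(N):
    # index n = floor((sqrt(N)+1)/2) is the largest n with (2n-1)^2 <= N
    n = (math.isqrt(N) + 1) // 2
    return S(n)

# closed form vs direct summation of the first n odd squares
for n in range(1, 100):
    assert S(n) == sum((2 * i - 1) ** 2 for i in range(1, n + 1))

# f(N) vs brute force: include each odd k only while k^2 <= N
for N in range(1, 2000):
    assert f(N) == sum(k * k for k in range(1, N + 1, 2) if k * k <= N)

print(S(4), f(50))  # both 84 = 1 + 9 + 25 + 49
```

The two checks pass for all tested inputs, supporting both the partial-sum identity and the floor-based index.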
I need some sharp, advanced advice on the following issue ...

Model and its assumptions

I'm working on the methodology of a two-way error component model. Here is the model: $y_{jis} = x_{jis} \beta + \upsilon_{jis}$. $j$ refers to school, $i$ refers to individual and $s$ to the tested area/topic (mathematics, English, ...). $J$ is the number of schools and $S$ is the number of tested fields. $N$ is the number of students among all schools. $\upsilon_{jis}$ is the error term and can be decomposed as follows: $\upsilon_{jis} = \theta_{j} + \phi_{js} + \epsilon_{jis}$ $\theta_{j}$ is the random effect for students attending school $j$, $\phi_{js}$ is the random effect for students attending school $j$ for topic $s$, and $\epsilon_{jis}$ is the traditional idiosyncratic error. In multivariate form it gives: $$Y = X \beta + \upsilon$$ where $\upsilon = R \theta + F \phi + \epsilon$. $R$ and $F$ are matrices that correctly distribute their respective random effects; $R$ has size $NS \times J$ and $F$ has size $NS \times JS$. $\theta$ follows a multivariate normal with mean $0$ and variance-covariance matrix $\tau I_{J}$, where $\tau$ is a scalar and $I_{J}$ is the identity matrix (size: $J \times J$). $\phi$ follows a multivariate normal with mean $0$ and variance-covariance matrix $\gamma I_{JS}$, where $\gamma$ is a scalar and $I_{JS}$ is the identity matrix (size: $JS \times JS$). $\epsilon$ follows a multivariate normal with mean $0$ and variance-covariance matrix $\sigma I_{NS}$, where $\sigma$ is a scalar and $I_{NS}$ is the identity matrix (size: $NS \times NS$).

Estimation of unknown parameters

I have to estimate the different variances $\tau$, $\gamma$ and $\sigma$ before running the FGLS method. I work step by step to remove the different random effects. First I run a within regression to get rid of the $\phi$ effect. Since $\phi$ is an effect for a given school and topic, that within operator ($W$) automatically makes the $\theta$ effect vanish as well.
$\epsilon$ is the only survivor, and I can use the methodology in Hayashi to calculate the expectation $\operatorname{E}\left[(We)'We \mid X\right] = \dots$ in order to correct with the right degrees of freedom. It is easy: since the variance of that element is diagonal, it leads to calculating a trace. Now, I only want to make the $\theta$ term disappear with another within operator ($W_{2}$). By construction it deletes $\theta$, so $\phi$ and $\epsilon$ are left. Once again I try to compute the expectation $\operatorname{E}\left[(W_{2}e)'W_{2}e \mid X\right]$, where $e$ is now composed of $\phi$ and $\epsilon$. The variance of that vector is not diagonal, so we don't get a beautiful formula for the estimated variance of $\phi$ ... I carefully follow the methodology but I don't obtain a good-looking formula. Is it compulsory to transform the model so that the variance-covariance matrix of the errors has only diagonal elements? I can also provide more details about the development in Hayashi that I reuse ... Thanks in advance.
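To make the covariance structure concrete, here is a small NumPy sketch (my own construction, with illustrative dimensions, assuming a balanced design with $m$ students per school and rows ordered by school, then student, then topic) of the design matrices $R$, $F$ and the implied error covariance $\Omega = \tau RR' + \gamma FF' + \sigma I_{NS}$:

```python
import numpy as np

# Toy dimensions (illustrative only): J schools, m students per school, S topics.
J, m, S = 3, 4, 2
N = J * m                                   # total number of students

# R maps the J school effects theta_j onto the N*S stacked rows:
# each school's block of m*S rows loads on that school's effect.
R = np.kron(np.eye(J), np.ones((m * S, 1)))                   # (N*S) x J

# F maps the J*S school-by-topic effects phi_js onto the rows:
# within a school, each student's S rows pick out the S topic effects.
F = np.kron(np.eye(J), np.kron(np.ones((m, 1)), np.eye(S)))   # (N*S) x (J*S)

tau, gamma, sigma = 0.5, 0.3, 1.0           # illustrative variance components

# Covariance of v = R theta + F phi + eps:
Omega = tau * R @ R.T + gamma * F @ F.T + sigma * np.eye(N * S)
```

Two observations of the same student in different topics share only $\tau$; two students of the same school in the same topic share $\tau + \gamma$; different schools are independent. Writing $\Omega$ out this way may help you see which block-diagonalizing transformation makes the variance of the residual vector diagonal.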
This answer describes a realistic problem where a natural consistent estimator is dominated (outperformed for all possible parameter values for all sample sizes) by an inconsistent estimator. It is motivated by the idea that consistency is best suited for quadratic losses, so using a loss departing strongly from that (such as an asymmetric loss) should render consistency almost useless in evaluating the performance of estimators. Suppose your client wishes to estimate the mean of a variable (assumed to have a symmetric distribution) from an iid sample $(x_1, \ldots, x_n)$, but they are averse to either (a) underestimating it or (b) grossly overestimating it. To see how this might work out, let us adopt a simple loss function, understanding that in practice the loss might differ from this one quantitatively (but not qualitatively). Choose units of measurement so that $1$ is the largest tolerable overestimate and set the loss of an estimate $t$ when the true mean is $\mu$ to equal $0$ whenever $\mu \le t\le \mu+1$ and equal to $1$ otherwise. The calculations are particularly simple for a Normal family of distributions with mean $\mu$ and variance $\sigma^2 \gt 0$, for then the sample mean $\bar{x}=\frac{1}{n}\sum_i x_i$ has a Normal$(\mu, \sigma^2/n)$ distribution. The sample mean is a consistent estimator of $\mu$, as is well known (and obvious). Writing $\Phi$ for the standard normal CDF, the expected loss of the sample mean equals $1/2 + \Phi(-\sqrt{n}/\sigma)$: $1/2$ comes from the 50% chance that the sample mean will underestimate the true mean and $\Phi(-\sqrt{n}/\sigma)$ comes from the chance of overestimating the true mean by more than $1$. The expected loss of $\bar{x}$ equals the blue area under this standard normal PDF. The red area gives the expected loss of the alternative estimator, below. They differ by replacing the solid blue area between $-\sqrt{n}/(2\sigma)$ and $0$ by the smaller solid red area between $\sqrt{n}/(2\sigma)$ and $\sqrt{n}/\sigma$. 
That difference grows as $n$ increases. An alternative estimator given by $\bar{x}+1/2$ has an expected loss of $2\Phi(-\sqrt{n}/(2\sigma))$. The symmetry and unimodality of normal distributions imply its expected loss is always smaller than that of the sample mean. (This makes the sample mean inadmissible for this loss.) Indeed, the expected loss of the sample mean has a lower limit of $1/2$, whereas that of the alternative converges to $0$ as $n$ grows. However, the alternative clearly is inconsistent: as $n$ grows, it converges in probability to $\mu+1/2 \ne \mu$. Blue dots show loss for $\bar{x}$ and red dots show loss for $\bar{x}+1/2$ as a function of sample size $n$.
Bifurcation and multiplicity results for a class of $n\times n$ $p$-Laplacian system

1. Department of Mathematics, Indian Institute of Technology Madras, Chennai-600036, India
2. Department of Mathematics and Statistics, University of North Carolina at Greensboro, Greensboro, NC 27412, USA
3. Department of Mathematics, Wayne State University, Detroit, MI 48202, USA

We study the $n\times n$ $p$-Laplacian system $\begin{equation*}\begin{cases}-\left(\varphi_{p_1}(u_1')\right)' = \lambda h_1(t) \left(u_1^{p_1-1-\alpha_1}+f_1(u_2)\right),\quad t\in (0,1),\\-\left(\varphi_{p_2}(u_2')\right)' = \lambda h_2(t) \left(u_2^{p_2-1-\alpha_2}+f_2(u_3)\right),\quad t\in (0,1),\\\quad\quad\quad\vdots\\-\left(\varphi_{p_n}(u_n')\right)' = \lambda h_n(t) \left(u_n^{p_n-1-\alpha_n}+f_n(u_1)\right),~~\, t\in (0,1),\\\quad\,\,\,\, u_j(0)=0=u_j(1); ~~ j=1,2,\dots,n, \\ \end{cases}\end{equation*}$ where $\lambda>0$ is a parameter, $p_j>1$, $\alpha_j\in(0,p_j-1)$, $\varphi_{p_j}(w)=|w|^{p_j-2}w$, $h_j \in C((0,1),(0, \infty))\cap L^1((0,1),(0,\infty))$ for $j=1,2,\dots,n$, and $f_j:[0,\infty)\rightarrow[0,\infty)$, $j=1,2,\dots,n$, satisfy $f_j(0)=0$.

Mathematics Subject Classification: Primary: 34B16, 34B18; Secondary: 35J57.

Citation: Mohan Mallick, R. Shivaji, Byungjae Son, S. Sundar. Bifurcation and multiplicity results for a class of $n\times n$ $p$-Laplacian system. Communications on Pure & Applied Analysis, 2018, 17 (3) : 1295-1304. doi: 10.3934/cpaa.2018062
Why Yitang Zhang's proof is probably far less fundamental than the claim Yitang Zhang worked at Subway before he landed a mathematics job. And once he did, he published almost nothing for years before offering, a few weeks ago, a proof of something rather important. That turned the name of the popular math instructor in New Hampshire into one of the best-known names among number theorists in the world. Some increasingly popular links are: Bounded gaps between primes (Zhang's technical paper), Philosophy behind the proof (Math Overflow), First proof that... (Nature), Prime number breakthrough by unknown professor (Telegraph). If \(p_1,p_2,p_3,\dots =2,3,5,\dots\) denotes the sequence of primes, the statement proven by Zhang may be phrased in a very simple way:\[ \liminf_{n\to\infty} (p_{n+1}-p_n) \lt 70,000,000. \] The operator above is called the limit inferior, which is just\[ \liminf_{n\to\infty}x_n := \lim_{n\to\infty}\Big(\inf_{m\geq n}x_m\Big). \] If you think about this limit of the infimum for a while, you will understand that the limit inferior in the claim proved by Zhang is just the smallest gap between adjacent primes that is realized infinitely many times (for infinitely many pairs). In other words, there exists at least one number below 70 million – a potential gap between adjacent primes – that is realized infinitely many times. Because of some technical properties that probably depended on many personal choices that Zhang made while attacking the problem, the upper bound in the inequality turns out to be a high number, namely 70 million. It is such a high number that for all practical purposes, the proposition proven by Zhang is de facto equivalent to\[ \liminf_{n\to\infty} (p_{n+1}-p_n) \lt \infty, \] i.e. to the claim that there exists a finite number that is realized as the gap between adjacent primes in infinitely many pairs.
On the other hand, as I will argue, the actually correct (but rigorously unproven) claim stronger than Zhang's theorem says\[ \liminf_{n\to\infty} (p_{n+1}-p_n) = 2, \] which means that even twin primes – pairs of primes that differ by two – are realized infinitely many times: there are infinitely many pairs of twin primes. This claim is the famous twin prime conjecture. In some sense, the assertion proven by Zhang is 35 million times weaker than the twin prime conjecture. Note that the first twin primes are\[ (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), \\ (71, 73), (101, 103), (107, 109), (137, 139), \dots \] and there doesn't seem to be the tiniest reason to think that the list should terminate at some point. The largest currently known twin prime pair is \(2,003,663,613\cdot 2^{195,000}\pm 1\), two similar numbers that have 58,711 digits (each). I don't plan to study the proof in detail because it looks very complicated and "non-unique" to me. The proven statement is slightly interesting but the proof is probably less interesting – there's just a small chance that I am wrong – and there's less "profound message" to learn from it. It's like if you are interested in the Moon and someone asks you to study O-rings in the Apollo spacecraft. Moreover, I am not too interested in the claim that has been proved. But there is one more key reason: I feel certain that the proposition is true. The reason behind this certainty is the validity of a much stronger claim – a not quite rigorously defined one – that implies the twin prime conjecture, Zhang's theorem, and many other much weaker corollaries. The claim is that, except for patterns that may be easily proved, the prime integers are distributed randomly and independently, with \(1/ \ln n\) being the probability that a random number close to \(n\) is a prime. This general – somewhat vague but still very important – claim has many consequences, including the Riemann Hypothesis.
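The first few twin prime pairs listed above are easy to verify with a few lines of Python (an illustrative sieve, not part of the original post):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Strike out all multiples of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(200)
prime_set = set(primes)
twin_pairs = [(p, p + 2) for p in primes if p + 2 in prime_set]
```

Running this reproduces the list (3, 5), (5, 7), (11, 13), ... shown above.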
In fact, the character of the proposition above is more or less a special case of Gell-Mann's totalitarian principle in physics: everything that isn't forbidden is mandatory. By this quote, which generalizes the experience of the people suffering under totalitarian regimes such as communism and Nazism (there's no freedom: they tell you what to do and what not to do) to all of physics, Gell-Mann meant that the coefficient of every interaction in a Lagrangian or the probability of any resulting complicated process is nonzero unless one may use a symmetry or another rock-solid principle to prove that the coefficient is zero (because it violates the symmetry or another sacred principle). In this analogy, the "patterns that are easy to prove" are analogous to the "symmetries or other principles forbidding certain things". May I explain what I mean in the case of primes? A pattern that is easy to prove is, for example, that if \(n\) is a prime, then \(n+1\) and \(n-1\) are not primes, assuming that \(n\gt 3\). It's because except for \(n=2\), only odd numbers may be prime. Similarly, among six consecutive integers greater than \(12\), just to be sure, at most two numbers may be primes. It's because only three numbers among the six are odd, and one of them is a multiple of three. One could continue with many examples of this kind. Similarly, using the Gell-Mann totalitarian principle, one may demonstrate that the twin prime conjecture and its generalizations hold. There doesn't seem to be any reason why the difference between primes shouldn't be equal to two (or some other allowed even number) – there are many examples in which it is two, in fact – so the probability that \(n\) and \(n+2\) are both primes must be nonzero, and there must exist infinitely many examples. Of course, it's hard to prove that "there is no reason why twin primes should stop at some point", either, but at least, one may prove that there exist no "reasons of the well-known types".
A TRF-based heuristic proof of the prime number theorem Now, the density of primes around \(n\) asymptotically goes like \(1/ \ln n\). This is the right estimate for \(n\to\infty\), including the right numerical prefactor (the relative error goes to zero in the limit). This statement is known as the prime number theorem and it is a severely weakened sibling of the Riemann Hypothesis; the theorem may be equivalently stated as the assertion that the roots of \(\zeta(s)\) only exist for \(s\in\mathbb{R}\) (the trivial zeros) or in the open critical strip \(0\lt {\rm Re}(s)\lt 1\). I can offer you a supersimple, Lumoesque argument why the density of primes goes like \(1/\ln(n)\). Call the functional dependence of the density \(\rho(n)\); it's really the probability that a number around \(n\) is prime. A number \(n\) is prime if it is not divisible by any prime smaller than or equal to \(\sqrt{n}\). Heuristically, these are statistically independent conditions. So\[ \rho(n) = P_{n\in{\rm primes}} = \prod_{p\leq \sqrt{n}}^{p\in{\rm primes}} (1-1/p)=\dots \] because the probability that a random large \(n\) isn't a multiple of \(p\) equals \(1-1/p\). But the product may be written as the exponential of the sum of logarithms\[ \dots = \exp\sum_{p\leq \sqrt{n}}^{p\in{\rm primes}} \ln (1-1/p) = \dots \] and the sum over primes \(p\) may be approximated by the sum over all integers \(i\) weighted by the probability \(\rho(i)\) that \(i\) is prime:\[ \rho(n) = \dots = \exp \sum_{i\leq\sqrt{n}} \rho(i) \ln (1-1/i). \] Now, the sum over \(i\) may be approximated by an integral when \(\rho(i)\) is smoothened.
Take the logarithm of the identity above (with the sum replaced by the integral)\[ \ln\rho(n) = \dots = \int_1^{\sqrt{n}} \rho(i) \ln (1-1/i)\,\mathrm{d}i \] and differentiate it with respect to \(n\) to get\[ \frac{\rho'(n)}{\rho(n)} = \frac{1}{2\sqrt{n}}\, \rho(\sqrt{n})\, \ln(1-1/ \sqrt{n})\sim -\frac{\rho(\sqrt{n})}{2n}, \] where \(1/(2\sqrt{n})\) came from \(\mathrm{d}(\sqrt{n})/\mathrm{d}n\) and where \(\ln(1-x)\sim -x\) as \(x\to 0^+\). One may easily verify that \(\rho(n)\sim 1 / \ln(n)\) satisfies the identity above; both sides are equal to \(-1/(n\ln n)\) in that case. Among uniformly, nicely decreasing functions \(\rho(n)\), this solution may be seen to be unique. Even the coefficient in front of the logarithm or, equivalently, the base of the logarithm (\(e\)) may be seen to be determined by the (nonlinear) condition above. You may check how Terence Tao imagines a heuristic proof of the prime number theorem. I leave it to you to decide who among the two of us is the cumbersome overworked craftsman and who is the seer. ;-) At any rate, the heuristic proofs above aren't rigorous, but one may rigorously prove the prime number theorem. One may also prove other things. As you can see by comparing various proofs sketched by various people – or the same people at various moments – there are many strategies that may be used to attack similar problems. When we're rigorously proving something like that in mathematics, we often work with lots of inequalities – not only the final one that e.g. Zhang has proved, but also many inequalities in the intermediate steps. And the inequalities are usually ad hoc. We want to find an object that is "good enough to achieve a certain next step" but how good this good-enough object has to be isn't quite determined. What the next step has to be isn't quite determined, either. There's simply a lot of freedom when one designs a proof.
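Incidentally, the \(1/\ln n\) heuristic is easy to test numerically. The following sketch (mine, not part of the original post) compares the empirical density of primes in a window around \(n\) with the prediction:

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

def empirical_density(n, width):
    """Fraction of the integers in [n, n + width) that are prime."""
    ps = primes_up_to(n + width)
    return sum(1 for p in ps if n <= p < n + width) / width

n, width = 1_000_000, 50_000
rho_empirical = empirical_density(n, width)
rho_predicted = 1 / log(n)
```

Around \(n = 10^6\) the two densities agree to within a few percent, as the heuristic predicts.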
It's very likely that some other mathematicians will improve Zhang's proof so that they reduce the constant 70 million to something smaller. Such proofs may perhaps be obtained as "modest mutations" of Zhang's machinery. However, it's unlikely that someone will reduce the constant 70 million to a constant smaller than 6 while keeping the bulk of Zhang's proof intact, because certain tools become inapplicable for such small gaps (see the Math Overflow summary of the proof). The proof of the actual twin prime conjecture will probably have to be completely different from Zhang's proof. It's nice that he has achieved a rigorous proof of a theorem that is a weaker version of the twin prime conjecture, but I doubt that one can learn a lot by studying the details of his proof. There had to be so much freedom when he designed it. So it's like a NASA rocket engineer's decision to study every detail of a Soyuz spacecraft. I don't think that this is the most important activity needed to conquer outer space. Much like the Soyuz spaceships, Zhang's proof probably has many idiosyncrasies reflecting the Russians' and the Chinese-American man's suboptimal approaches to problems. In mathematics and theoretical physics, when something is just being proved, we often encounter two different situations: in one subclass, the methods needed to prove something give us such new insights that these insights – methods, auxiliary structures that were used to complete the proof, and so on – are actually more valuable than the statement that has been proven. But I tend to think that Zhang's proof belongs to the opposite class of situations – in which the proof is less important than the assertion because it's composed of many idiosyncratic steps and tricks that are probably inapplicable elsewhere and that may be replaced by completely different "building blocks" to prove even the desired proposition.
Of course, I can't be quite sure about this pessimistic appraisal of the proof's methodology, given that I haven't actually mastered the proof. But for general reasons and from experience, I believe it's the case anyway. Moreover, I tend to believe that the theorem proved by Zhang – and even the twin prime conjecture that may be proved in the future – is extremely weak relative to some rigorous formulations of Gell-Mann's totalitarian principle applied here, which says something like "the distribution of primes is random except for [simple divisibility-based] patterns that may be easily demonstrated". I tend to believe that such a principle will ultimately be formulated in a rigorous way and proved by a rather simple yet ingenious method, too. You should understand that if I believe that this elegant goal is a legitimate, finite, \({\mathcal O}(1)\) task for some future mathematicians, it's also reasonable for me to believe that the assertion by Zhang and its seemingly cumbersome proof represent a nearly infinitesimal fraction of what mathematicians will achieve sometime in the future. Zhang's proof represents the kind of cutting edge that mathematicians are able to reach for similar propositions today. But do I really care about this cutting edge? This cutting edge, much like most cutting edges in mathematics, is made terribly modest by the mathematicians' uncompromising insistence on complete rigor. If one is actually interested in the truth and is satisfied with arguments suggesting that something is true at the 5-sigma or 10-sigma confidence level, in some counting, the cutting edge is elsewhere – it's much further. So of course, the hunt for strictly rigorous proofs that has defined mathematics after its divorce from physics is a legitimate goal – a constraint worshiped by a large group of professionals, the mathematicians in the modern sense.
However, the strict rules of this hunt inevitably imply that in many cases, these professionals place themselves miles beneath the actual cutting edge of knowledge as I understand it. And that's the memo.
Hint $\ $ By $ $ Vieta, $\,\ x^2 -\frac{10}3 x - 67\, =\, (x-a)(x-b)\iff \ \color{#0a0}{a+b} = 10/3,\ \color{#c00}{ab} = -67$ $(a-b)^2$ is symmetric in $\,a,b\,$ so by FTSP it can be written as a polynomial in $\,\color{#0a0}{a+b},\ \color{#c00}{ab}$ Indeed, applying Gauss's Algorithm we find that $\, (a-b)^2 = (\color{#0a0}{a+b})^2 -4\color{#c00}{ab}\, =\, \dfrac{16\cdot 157}9$ Remark $\ $ The same algorithm works for polynomials in any number of variables. It reduces problems like this to rote mechanical computation, i.e. no guesswork is required to solve such problems, only simple polynomial arithmetic. The algorithm yields a constructive interpretation of the FTSP = Fundamental Theorem of Symmetric Polynomials, which states that every symmetric polynomial has a unique representation as a polynomial in the elementary symmetric polynomials. Gauss's algorithm may be viewed as a special case of Gröbner basis methods (which may be viewed both as a multivariate generalization of the (Euclidean) polynomial division algorithm, and as a nonlinear generalization of Gaussian elimination for linear systems of equations). Gauss's algorithm is the earliest known use of such a lexicographic order for term-rewriting (now mechanized by the Gröbner basis algorithm and related methods).
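A two-line computational check of the algebra above (my own illustration, using exact rational arithmetic):

```python
from fractions import Fraction

# Elementary symmetric polynomials read off x^2 - (10/3)x - 67 via Vieta:
e1 = Fraction(10, 3)    # a + b
e2 = Fraction(-67)      # ab

# (a - b)^2 expressed in the elementary symmetric polynomials:
discriminant = e1**2 - 4 * e2    # (a+b)^2 - 4ab
```

This confirms $(a-b)^2 = 2512/9 = 16\cdot 157/9$.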
I am faced with the following problem: Given $C = \{B,C,D,F\}$ and $V = \{A, E, I, O, U\}$, find the number of 9-letter words with elements from $C$ and $V$ such that no two vowels (elements of $V$) are adjacent. Following this answer about a very similar question, I get that I should express the problem as a double recurrence: $a_{n+1} = 5b_n$, $a_0 = 1$; $b_{n+1} = 4(a_n + b_n)$, $b_0 = 1$, where $a_n$ is the number of strings starting with a vowel and $b_n$ is the number of strings starting with a consonant. Expressing it as a matrix I get $\begin{pmatrix} a_{n+1} \\ b_{n+1} \end{pmatrix} = \begin{pmatrix} 0 & 5 \\ 4 & 4 \end{pmatrix} \begin{pmatrix} a_n \\ b_n \end{pmatrix}$ And I get that the eigenvalues of this matrix are $\{2+2\sqrt{6}, 2-2\sqrt{6}\}$. Is this the correct approach? I don't really know where to go from here or how to solve the recurrence for any given $n$. Thank you in advance
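For a fixed length like 9, one does not actually need the eigenvalues: iterating the recurrence directly gives the count. A short sketch (mine, not from the question; note that seeding at length 1 with $a_1 = 5$, $b_1 = 4$ is the cleanest base case):

```python
def no_adjacent_vowels(n, vowels=5, consonants=4):
    """Count n-letter words over the alphabet with no two adjacent vowels.

    v = number of valid strings of the current length starting with a vowel,
    c = number starting with a consonant; extend by prepending one letter.
    """
    v, c = vowels, consonants          # length-1 base case
    for _ in range(n - 1):
        v, c = vowels * c, consonants * (v + c)
    return v + c
```

This gives 43,000,064 for $n = 9$, which you can cross-check against the closed form $\sum_k \binom{10-k}{k}\, 5^k\, 4^{9-k}$ (place $k$ non-adjacent vowels among 9 positions).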
A $\sigma$-algebra is an algebraic structure (specifically, a Boolean algebra with an additional countably infinitary operation of countable supremum, satisfying the axiom that it really is the supremum of its arguments), together with a representation of its elements as sets, which makes the abstract operations the real set operations. When forming the quotient, the 'algebra layer' (the operations and the axioms they satisfy) is kept, but in general we lose the representation. However, in some specific cases we can easily determine the new representation, for example if $I=\{B\in\mathcal F:B\subseteq A\}$ for some $A\in\mathcal F$. For the general case, we need to generalize the construction used for plain Boolean algebras, and for that we need $\sigma$-ultrafilters, i.e. ultrafilters that are closed under countable intersections. Let $Y:=\{U\ \,\sigma$-ultrafilter on $X: U\cap I=\emptyset\}$, and represent $[A]\in\mathcal F/I$ as $\{U\in Y: A\in U\}$. Check that this is a well-defined injective $\sigma$-algebra morphism $\mathcal F/I\to\mathcal P(Y)$.
Huge cardinal Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is a $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$, as far as the set theory world is concerned, still requires an almost huge cardinal. [1] Contents 1 Definitions 2 Consistency strength and size 3 Relative consistency results 4 References Definitions Their formulation is similar to that of superstrong cardinals. More precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. As far as modern set-theoretic concerns go, this phenomenon is seen only around the n-fold variants. [2] Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability. Elementary embedding definitions In the following, $j:V\to M$ denotes a nontrivial elementary embedding of the universe $V$ into a transitive class $M$ with critical point $\kappa$, and $j^n$ denotes its $n$-fold iterate. $\kappa$ is almost n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$). $\kappa$ is n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).
$\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$. $\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$. $\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large). $\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$. $\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc. respectively. Ultrahuge cardinals A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants. Ultrafilter definition The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2<\dots<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n\,(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U),$$ where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e.
$\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness), then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$; that is, it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are. As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The reason why this would be so surprising is that the collection of all sets $x\subseteq\lambda$ of order-type exactly $\kappa$ would be in the ultrafilter; that is, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set." Coherent sequence characterization of almost hugeness Consistency strength and size Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when, for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, here is where these lie (top being weakest): measurable = 0-superstrong = 0-huge; n-superstrong; n-fold supercompact; (n+1)-fold strong, n-fold extendible; (n+1)-fold Woodin, n-fold Vopěnka; (n+1)-fold Shelah; almost n-huge; super almost n-huge; n-huge; super n-huge; ultra n-huge; (n+1)-superstrong. All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of n-huge cardinals, for all n.
[1] Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1], in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge". Every n-huge cardinal is m-huge for every m<n. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost-huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals. In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge. 
While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$, then $\kappa$ is $\lambda_n$-strong, as witnessed by the embedding $j:V\prec M$ itself. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$, so $V_{\lambda_n}=H_{\lambda_n}$; and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$, and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$. Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$ is also $\lambda_n$-supercompact.

The $\omega$-huge cardinals

A cardinal $\kappa$ is almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$, where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to satisfy $M^\lambda\subset M$. Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Nowadays, the term $\omega$-huge is used to describe critical points of I1 embeddings.

Relative consistency results

Hugeness of $\omega_1$

In [2] it is shown that if $\text{ZFC +}$ "there is a huge cardinal" is consistent then so is $\text{ZF +}$ "$\omega_1$ is a huge cardinal" (with the ultrafilter characterization of hugeness).

Generalizations of Chang's conjecture

Cardinal arithmetic in $\text{ZF}$

If there is an almost huge cardinal then there is a model of $\text{ZF+}\neg\text{AC}$ in which every successor cardinal is Ramsey.
It follows that (1) for all inner models $W$ of $\text{ZFC}$ and every singular cardinal $\kappa$, one has $\kappa^{+W} < \kappa^+$, and that (2) for all ordinals $\alpha$ there is no injection $\aleph_{\alpha+1}\to 2^{\aleph_\alpha}$. This in turn implies the failure of the square principle at every infinite cardinal (and consequently $\text{AD}^{L(\mathbb{R})}$, see determinacy). [3]

References

Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)

Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
SONIC FORMS

The brand new Dynamic Synthesis album is here: sonic environments for public and private spaces. Roland Kuit - KYMA

Dynamic synthesis is used to create information streams: Lissajous orbits on a van der Pol oscillator, given by a system of parametric equations:

$$x=A\sin(at+\delta),\quad y=B\sin(bt)$$

Design principles: balance, harmony, gradation, repetition, contrast, dominance. Design elements: line, shape, size, direction, texture, hue/tone.

This concept deals with how much information is needed in nano timbre/time to get meaning. The information comes from so-called transients. The duration varies from a small part of a period, about 1 µs, to 3 or 4 milliseconds. The meaning of this 'articulation' is identity. This onset, as an impulse, makes it possible to distinguish the identities of these sonic forms and transform them into autonomous audio sculptures.
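As a sketch of how such a Lissajous stream can be generated numerically (the parameter names A, B, a, b, delta follow the equations above; the sampling scheme and function name are my own assumptions, not part of the album's actual Kyma patch):

```python
import math

def lissajous(A, B, a, b, delta, n_points, t_max):
    """Sample the curve x = A*sin(a*t + delta), y = B*sin(b*t)."""
    points = []
    for k in range(n_points):
        t = t_max * k / (n_points - 1)
        points.append((A * math.sin(a * t + delta),
                       B * math.sin(b * t)))
    return points

# With a = b and delta = pi/2 the orbit degenerates to a circle,
# since then x = cos(t) and y = sin(t).
circle = lissajous(1.0, 1.0, 1.0, 1.0, math.pi / 2, 200, 2 * math.pi)
```

Changing the frequency ratio a : b and the phase delta yields the familiar family of Lissajous figures.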
Browse Dissertations and Theses - Mathematics by Title

Now showing items 469-488 of 1147

(2001) The approach to the ideal membership problem for Z[X] followed here is based on some properties (such as Weierstrass Division) of the ring Z_p⟨X⟩ of restricted power series with coefficients in the ring Z ...

(2004) In Chapter 5, among other things, we give a new simple proof that ideals generated by c-sequences are of linear type (by adapting a theorem of K. Raghavan about d-sequences), prove that initial subsequences of c-sequences ...

(2015-01-21) This thesis addresses two closely related problems about ideals of powers of linear forms. In the first chapter, we analyze a problem from spline theory, namely to compute the dimension of the vector space of tri-variate ...

(2013-08-22) My dissertation is mainly about various identities involving theta functions and analogues of theta functions. In Chapter 1, we give a completely elementary proof of Ramanujan's circular summation formula of theta functions ...

(1984) This paper investigates the following questions. If a compact manifold, M, with positive sectional curvature is isometrically immersed in some ambient space, N, what is the radius of the smallest ball in which its image ...

(2010-08-20) The main goal of this work is to improve algebraic geometric/number theoretic constructions of error-correcting codes and secret sharing schemes. For both objects we define parameters that indicate their effectiveness in ...

(1980) In the notation of H. Halberstam and H.-E. Richert, the small sieve estimate of Selberg is ...

(1981) This thesis is divided into two parts. The first part studies the control of the maximal function of N-dimensional Brownian motion, B(t), by the maximal function of partially observed Brownian motion. Let R denote a fixed ...

Inequalities for the differential subordinates of martingales, harmonic functions and Ito processes (1995) In Chapter 1 we sharpen Burkholder's inequality $\mu(\vert v\vert\geq1)\leq2\Vert u\Vert\sb1$ for two harmonic functions u and v by adjoining an extra assumption. That is, we prove the weak-type inequality ...

(2003) We prove several infinite series identities. In Chapter 2, we extend C. L. Siegel's method of proving the Dedekind-eta function transformation by integrating some selected functions over a positively oriented polygon, ...

(2017-07-14) This thesis focuses on the effect of takeover announcements in financial markets. We want to use a math model to analyze the inside traders' behavior when there is a potential takeover in the market. The thesis starts ...
How do you find the square root of an irrational number? Irrational numbers arise in many circumstances in mathematics. Examples include the following: the hypotenuse of a right triangle with base sides of length 1 has length \( \sqrt{2}\), which is irrational.

The difference between rational and irrational numbers can be drawn clearly on the following grounds. A rational number is defined as a number which can be written as a ratio of two integers; an irrational number is a number which cannot be expressed as a ratio of two integers. In a rational number, both numerator and denominator are integers, and the denominator is not equal to zero. For example, you can write the rational number 2.11 as 211/100, but you cannot turn the irrational number \( \sqrt{2}\) into an exact fraction of any kind. Equivalently, the decimal expansion of a rational number terminates or eventually repeats, while the irrational numbers are the non-terminating, non-repeating decimals: pi is 3.14159..., and it just keeps going on and on forever without repeating.

Rational numbers and irrational numbers are mutually exclusive: they have no numbers in common. Furthermore, they span the entire set of real numbers; that is, if you add the set of rational numbers to the set of irrational numbers, you get the entire set of real numbers. Each of these sets has an infinite number of members, but the set of irrational real numbers is uncountably infinite; in layman's terms, there are infinitely more irrational numbers than there are integers.

If you take an irrational number and multiply it or divide it by a nonzero rational number, you still get an irrational number. So the square root of 8 is irrational, and if you divide it by 2, it is still irrational. Similarly, the sum or difference of a rational number and an irrational number is irrational: if it were rational, subtracting off the rational part would make the original irrational number rational, a contradiction. A common exam question asks: is the square root of 75 a rational or irrational number? Since 75 is not a perfect square, \( \sqrt{75}=5\sqrt{3}\) is irrational; the square root of an integer is rational only when the integer is a perfect square.

To find three rational and three irrational numbers between two given numbers (say between 6.5 and 7.8, between 2.3 and 3.2, or between 1/7 and 2/7), pick terminating decimals for the rational ones, and scale a known irrational number into the interval for the irrational ones. For instance, since \( \pi \) is between 3 and 4, dividing it by a large rational number produces a tiny irrational number, which can then be added to a rational number inside the interval. Finally, you can only approximate the square root of a number that is not a perfect square, for example with a calculator or an iterative method; even the calculator gives an approximate root.
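The approximation idea can be made concrete with Newton's method, the same iteration many calculators use internally (a minimal sketch; the function name and tolerance are my own choices):

```python
def newton_sqrt(x, tol=1e-12):
    """Approximate sqrt(x) for x > 0 by Newton's iteration g -> (g + x/g) / 2."""
    g = x if x >= 1 else 1.0  # any positive starting guess converges
    while abs(g * g - x) > tol:
        g = (g + x / g) / 2.0
    return g

root2 = newton_sqrt(2)    # approximates the irrational sqrt(2) = 1.41421356...
root75 = newton_sqrt(75)  # approximates the irrational 5*sqrt(3)
```

Each iteration roughly doubles the number of correct digits, but the result is always a rational approximation — the exact square root of a non-square integer stays irrational.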
Monopoly Power

Monopoly power (also called market power) refers to a firm’s ability to charge a price higher than its marginal cost. Monopoly power typically exists where there is low elasticity of demand and significant barriers to entry. Why is it that a firm in perfect competition is a price-taker while a monopoly can set any price it deems fit? The answer lies in the nature of the demand curve facing each firm. In perfect competition, no firm has any market power because each faces a horizontal demand curve. They must supply at the prevailing market price or sell nothing. A monopoly, on the other hand, need not worry about any competition. Since a monopolist is the only firm in the market, it faces the entire market demand curve, and if the elasticity of demand for its product is low, it effectively determines the market price. In other words, a monopolist has substantial monopoly power. The most popular measure of monopoly power is the Lerner index, which measures the difference between the price a firm charges and its marginal cost as a proportion of the price.

Sources of Monopoly Power

Important determinants/sources of monopoly power include the elasticity of demand for the product, existence of economies of scale, control of a key resource, existence of legal barriers, etc.

Elasticity of Demand

If the elasticity of demand is low, a firm is in a better position to charge a price higher than its marginal cost. If close substitutes exist and hence the elasticity of demand is high, even a single firm can’t increase price beyond some reasonable range. For example, if people could switch to other word processors easily, elasticity of demand for Microsoft Word would be high and Microsoft wouldn’t enjoy a near-monopoly in the market. Since a monopolist faces a downward-sloping demand curve, its marginal revenue is given by the following equation: $$ \text{MR}=\text{P}+\text{Q}\times\frac{\Delta \text{P}}{\Delta \text{Q}} $$ It shows that in order to sell one additional unit, a monopolist must reduce its market price.
The first term on the right-hand side of the equation, i.e. P, represents the revenue from the additional unit sold, and the second term (Q × ∆P/∆Q) represents the loss in revenue from the reduction in market price. Multiplying and dividing the second term by P, we get a relationship between a firm’s elasticity of demand and its marginal revenue. $$ \text{MR}=\text{P}+\text{Q}\times\frac{\text{P}}{\text{P}}\times\frac{\Delta \text{P}}{\Delta \text{Q}}=\text{P}+\text{P}\times \frac{\text{Q}}{\text{P}}\times \frac{\Delta \text{P}}{\Delta \text{Q}} $$ But Q/P multiplied by ∆P/∆Q is the reciprocal of the elasticity of demand $\text{E} _ \text{d}=\frac{\Delta \text{Q}}{\Delta \text{P}}\times\frac{\text{P}}{\text{Q}}$, so we get the following equation: $$ \text{MR}=\text{P}+\frac{\text{P}}{\text{E} _ \text{d}}=\text{P}\times\left(1+\frac{1}{\text{E} _ \text{d}}\right) $$

Economies of Scale

A firm’s production function and cost structure are also important determinants of whether positive economic profit is possible in the long run. If the cost structure of an industry is such that economies of scale matter a lot, a single large firm might be able to produce at a significantly lower cost than other small and medium-sized firms. This enables the largest player to price other firms out of the market. Many utility companies are able to monopolize a market owing to economies of scale. Such a monopoly is called a natural monopoly. Similarly, the existence of increasing returns to scale means that as a firm gets larger, its productivity (i.e. output per unit of input) increases and it can supply the product at increasingly lower prices. The existence of economies of scale and increasing returns to scale means that the industry’s minimum efficient scale is high, and this restricts entry by new firms because they must start big to stand a chance, and not many firms may have the capital needed to start at such a scale.

Legal Barriers

A third source of monopoly power is the existence of legal entry barriers, including patents, copyrights, licenses, etc.
In many instances, a monopoly is created and enforced by a government through its intellectual property laws. For example, Microsoft has a monopoly in Windows-based operating systems because no one else can copy and sell Windows. Similarly, many pharmaceutical companies have monopolies on specific drugs due to the existence of patents.

Access to a Critical Natural Resource

Monopolies also arise when one firm has control over an important physical or natural resource. The firm controlling the resource can restrict the supply of the resource to other firms, thereby controlling the ultimate market price. For example, De Beers has long held a monopoly in diamonds because it owns or controls most major diamond mines.

by Obaidullah Jan, ACA, CFA
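The elasticity relation and the Lerner index tie together numerically: at the profit-maximizing output MR = MC, so the Lerner index (P − MC)/P equals −1/E_d. A minimal sketch with hypothetical numbers (function names are mine):

```python
def marginal_revenue(price, elasticity):
    """MR = P * (1 + 1/E_d), where E_d < 0 for a downward-sloping demand curve."""
    return price * (1 + 1 / elasticity)

def lerner_index(price, marginal_cost):
    """Monopoly power measured as the markup of price over marginal cost."""
    return (price - marginal_cost) / price

# Hypothetical example: P = 10, E_d = -2.
mr = marginal_revenue(10.0, -2.0)  # 5.0
# At the profit-maximizing output MC = MR, so the Lerner index is (P - MC)/P.
power = lerner_index(10.0, mr)     # 0.5, which equals -1/E_d = -1/(-2)
```

The less elastic the demand (E_d closer to 0 from below), the larger the markup a monopolist can sustain.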
I never know what is meant by Chevalley's theorem; everyone has their own version. The version I know is EGA IV 1, théorème 1.8.4 (which I will call the main theorem): if the morphism of schemes $f : X \to Y$ is quasi-compact, quasi-separated and locally of finite presentation, then for every locally constructible subset $Z$ of $X$, the subset $f(Z)$ is locally constructible in $Y$. The version that you provide is EGA IV 1, corollaire 1.8.5 (which I will call the reduced theorem). The strategy to prove the main theorem is quite instructive, so let me recall it by quoting EGA (as it is a delight to read): take $y\in Y$ and $V$ an open affine neighborhood of $y$. As the morphism $f$ is quasi-compact and quasi-separated, so is its "restriction" $f^{-1} (V) \to V$, which implies that $f^{-1}(V)$ is a quasi-compact and quasi-separated scheme. Through the instructive EGA IV 1, 1.8.1, the part $Z \cap f^{-1} (V)$ is constructible, which shows that it suffices to prove the main theorem with $Y$ affine and $Z$ constructible. The scheme $X$ itself is then quasi-compact and quasi-separated, so that you can find a morphism of finite presentation $g : X' \to X$ such that $g(X') = Z$. Then, as $f \circ g$ is of finite presentation as well, one sees that one can suppose $Z = X$. That is, one has to show: if $Y$ is an affine scheme and $f : X \to Y$ is a quasi-compact morphism that is locally of finite presentation, then $f(X)$ is a constructible subset of $Y$. (This is actually EGA IV 1, lemme 1.8.4.1.) In this case, as $X$ is quasi-compact, it is a finite union of open affines, so that we can suppose $Y = \textrm{Spec}(A)$, $X = \textrm{Spec}(B)$ and that $B$ is an $A$-algebra of finite presentation. Now $A$ is the inductive limit of its finite type $\mathbf{Z}$-sub-algebras. Then by the technical EGA IV 1, lemme 1.8.4.2, there is such a finite type $\mathbf{Z}$-sub-algebra $A_0$ and an $A_0$-algebra $B_0$ of finite type such that $B$ is isomorphic to $B_0 \otimes_{A_0} A$.
Now if $Y_0 := \textrm{Spec}(A_0)$ and $X_0 = \textrm{Spec}(B_0)$, then $X = X_0 \times_{Y_0} Y$, with the projection $X \to Y$ being equal to $f$. If $f_0 : X_0 \to Y_0$ and $g_0 : Y \to Y_0$ are the structural morphisms, one sees (thanks to EGA I, corollaire 3.4.8) that $f(X) = g_0^{-1} \left( f_0 \left( X_0 \right)\right)$: it thus suffices to show that $f_0 (X_0)$ is constructible, i.e. to show the reduced theorem.
Now showing items 1-9 of 9

Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity $|y|<0.5$ in ...

Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...

Pion, Kaon, and Proton Production in Central Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and $\bar{\rm p}$ production at mid-rapidity ($|y|<0.5$) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...

Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\rm int}$ = 5.6 nb$^{-1}$. The fraction ...

Suppression of high transverse momentum D mesons in central Pb-Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...

J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/$\psi$ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/$\psi$ yield in Pb-Pb is observed with ...

Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The $p_T$-differential inclusive ...

Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-$p_T$ trigger particles (8 < $p_T$ < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...

Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of $^{208}$Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Yesterday, I came across an interesting post by Allison at Infinigons. She reminds us that we must remain fearless in the face of difficult problems, and attempt to do something, rather than give problems a cursory once-over, then run to Google. It is worth reading her post for the details. The example she gives is a fun problem, which I would be happy to discuss with any of my three or four readers, should they wish to comment below. The answer that I got was \(\frac{4\sqrt{2}}{3}-\frac{5}{3}\approx 0.219\), if you want to check your (or my) work (I’m pretty confident of that solution). That being said, I think that the emphasis of the linked post should really be placed upon the willingness to think about and work with hard problems, in mathematics and beyond, without instantly looking for an authority to answer the problem. I have noticed (though I don’t have any statistics to back this up) that my students are often unwilling to spend time working through problems. All of their homework is done online, as per the department’s wishes, and the website offers many resources that make it easier for students to get through the problems without struggling. Then they come to class and do poorly on quizzes, and often turn in problems that have not even been attempted! They will literally sit through a 15-minute quiz with three problems, and write nothing more than three frowny-faces and an apology for not knowing how to complete the problems. I find this behaviour to be very frustrating, and I think it reflects an unwillingness to face hard problems head on. Of course, I am guilty of the same crime myself. I have begun my thesis research, and have been working through a book on dimension theory (hence the last major post I wrote). At the end of chapter two, there was a particularly difficult problem that I was working on. I struggled with it for a couple of days, but felt like I wasn’t making much progress. So I spent two hours trying to hunt down a solution on Google.
Fortunately for me, there was no solution, and after a couple of meetings with my advisor (who also struggled with the problem), I think that I finally managed to come up with a reasonably solid proof. The feeling of accomplishment is profound, and I am now quite happy that I could not find a proof online. In what may be a truly sad example of the same behaviour, I picked up a copy of a game called Shadow of the Colossus a few years ago (if you play video games, and have a PS2, this is a must-play game). I struggled mightily with the final boss, and after two or three attempts, gave in and looked over one of the walkthroughs on GameFAQs. It turns out that the solution was mind-bogglingly simple, and I felt kind of dumb for not figuring it out myself. Moreover, defeating that final boss never gave me the sense of accomplishment that it might have otherwise. I still feel a tinge of shame whenever I play that game (which I do quite a lot, because it is a very good game). I wish that I had struggled with it for more time, and figured it out on my own. I think that there is value in working through difficult problems without appealing to authority. Ultimately, we expect our students to be the leaders and decision makers (even if only by voting for elected officials and ballot measures) of our society, and making the right decisions requires that they are capable of thinking through hard problems without giving up and blindly following pundits and demagogues. We all need to be able to think and decide for ourselves. So let it be resolved: my students need to work on hard problems, and I need to be a better role model. For the remainder of my tenure as a graduate teaching assistant, I will do everything in my power to provide my students with problems that require them to think and struggle, and I will provide only as much feedback as is required to get them to think through the problems on their own.
Moreover, in my own research I will cease to use the internet, the library, and the back of the book as sources of answers. When the problems get difficult, I will buckle down and work them out on my own. I will do everything that I require of my students.
Can somebody explain in a simple way why, talking about representations, $$3\otimes3\otimes3=1\oplus8\oplus8\oplus10~?$$ Here $3$ and $\bar{3}$ are the fundamental and anti-fundamental of $SU(3)$, in this case.

\begin{equation} \boldsymbol{3}\boldsymbol{\otimes}\boldsymbol{3}\boldsymbol{\otimes}\boldsymbol{3}= \boldsymbol{1}\boldsymbol{\oplus}\boldsymbol{10}\boldsymbol{\oplus} \boldsymbol{8}^{\boldsymbol{\prime}}\boldsymbol{\oplus}\boldsymbol{8} \end{equation} We talk about this because it explains the structure of a number of baryons in Particle Physics made from three quarks: 1 singlet, 1 decuplet, and 2 octets, that is, 27 states in total. I refer to my answer in the following link for more details:
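For a quick cross-check of the dimension count without drawing Young tableaux by hand, one can use the hook content formula together with Schur–Weyl duality: each partition of 3 labels an $SU(3)$ irrep whose multiplicity in $3^{\otimes 3}$ is the dimension of the matching $S_3$ irrep. A small sketch (the helper names are mine):

```python
from math import factorial

def cells(p):
    """Cells (row, col) of the Young diagram of partition p."""
    return [(i, j) for i, row_len in enumerate(p) for j in range(row_len)]

def hook(p, i, j):
    """Hook length of cell (i, j): arm + leg + 1."""
    arm = p[i] - j - 1
    col_len = sum(1 for row_len in p if row_len > j)
    leg = col_len - i - 1
    return arm + leg + 1

def su_n_dim(p, n):
    """SU(n) irrep dimension via the hook content formula."""
    num = den = 1
    for (i, j) in cells(p):
        num *= n + j - i
        den *= hook(p, i, j)
    return num // den

def sym_dim(p):
    """S_k irrep dimension: k! divided by the product of hook lengths."""
    den = 1
    for (i, j) in cells(p):
        den *= hook(p, i, j)
    return factorial(sum(p)) // den

partitions3 = [(3,), (2, 1), (1, 1, 1)]
dims = {p: su_n_dim(p, 3) for p in partitions3}   # 10, 8, 1
mults = {p: sym_dim(p) for p in partitions3}      # 1, 2, 1
total = sum(dims[p] * mults[p] for p in partitions3)  # 27 = 3*3*3
```

The symmetric partition (3,) gives the decuplet, the mixed-symmetry partition (2,1) gives the two octets, and the antisymmetric column (1,1,1) gives the singlet.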
If $\omega=\frac{\sqrt{2}}{2}+i \frac{\sqrt{2}}{2}$, then $\omega$ is an 8th root of unity. And I know $\omega,\omega^3,\omega^5$,and $\omega^7$ are furthermore primitive 8th roots of unity in $\mathbb{C}$. But what happens when we change $\mathbb{C}$ to $\mathbb{Z}/17\mathbb{Z}$ (integers mod 17)? How can I find the primitive 8th roots of unity in $\mathbb{Z}/17\mathbb{Z}$? Note that $2^4\equiv -1$ (mod 17), hence $2^8\equiv 1$ (mod 17). So $2$ is a primitive 8th root of unity. The other primitive 8th roots of unity mod 17 are $2^3=8$, $2^5\equiv 15$, and $2^7\equiv 9$. In general, $\mathbb{Z}/p\mathbb{Z}$ for a prime $p$ contains all the $n$th roots of unity precisely when $n$ divides $p-1$. The multiplicative group of $\mathbb{Z}/17\mathbb{Z}$ is cyclic of order $16$. Therefore, there are $\phi(8)=4$ elements of order $8$. (Just like in $\mathbb C^\times !$) In the context of groups, a primitive 8th root of unity is the same as an element of order $8$.
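For a brute-force sanity check of the answer above (a throwaway sketch; the function name is mine):

```python
def elements_of_order(n, p):
    """All a in (Z/pZ)* whose multiplicative order is exactly n (p prime)."""
    result = []
    for a in range(1, p):
        x, order = a % p, 1
        while x != 1:       # powers of a eventually return to 1 mod p
            x = x * a % p
            order += 1
        if order == n:
            result.append(a)
    return result

roots8 = elements_of_order(8, 17)  # the primitive 8th roots of unity mod 17
```

This recovers $\{2, 8, 9, 15\}$, i.e. $2, 2^3, 2^5, 2^7 \pmod{17}$, matching the answer; and since the multiplicative group has order 16, there are $\phi(8)=4$ of them.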
A trick that is standard in my little world is this: the matrix$$M = \left(\begin{array}{rr}-2 & -4 \\4 & 6\end{array}\right)$$has trace $4$ and determinant $4.$ The characteristic roots satisfy $\lambda^2 - 4 \lambda + 4 = 0.$ The Cayley-Hamilton Theorem (if this is not familiar, see the ADDENDUM) says that$$ a_{n+2} = 4 a_{n+1} - 4 a_n, $$ $$ b_{n+2} = 4 b_{n+1} - 4 b_n. $$It is easy enough to confirm these with direct calculations. Because of the repeated characteristic value $2,$ we get $a_n = A 2^n + B n 2^n,$ with $b_n = C 2^n + D n 2^n.$ Calculating the first few of each to solve for the coefficients, we get$$ a_n = 2^n - 2n 2^n, \; \; \; \; b_n = 2n 2^n. $$ ADDENDUM: Not everyone has seen Cayley-Hamilton. I did say it could be confirmed by straightforward calculation: Suppose we have the system$$ \color{blue}{ a_{n+1} = \alpha a_n + \beta b_n,}$$$$ \color{blue}{ b_{n+1} = \gamma a_n + \delta b_n.} $$ We will find $a_{n+2}$ in two slightly different ways. $$ a_{n+2} = \alpha a_{n +1} + \beta b_{n +1} = \alpha(\alpha a_n + \beta b_n) + \beta ( \gamma a_n + \delta b_n) = (\alpha^2 + \beta \gamma) a_n +(\alpha \beta + \beta \delta) b_n $$ Let me go straight to this, define$$ \Psi = (\alpha + \delta) a_{n+1} - (\alpha \delta - \beta \gamma) a_n, $$$$ \Psi = (\alpha + \delta)( \alpha a_n + \beta b_n) - (\alpha \delta - \beta \gamma) a_n, $$$$ \Psi = (\alpha^2 + \alpha \delta) a_n + (\alpha \beta + \beta \delta)b_n - (\alpha \delta - \beta \gamma) a_n, $$$$ \Psi = (\alpha^2 + \beta \gamma) a_n + (\alpha \beta + \beta \delta)b_n. $$From$$ a_{n+2} = (\alpha^2 + \beta \gamma) a_n +(\alpha \beta + \beta \delta) b_n $$we find$$ a_{n+2} = \Psi, $$or$$ \color{blue}{ a_{n+2} = (\alpha + \delta) a_{n+1} - (\alpha \delta - \beta \gamma) a_n.} $$An analogous calculation works for $b_{n+2}= (\alpha + \delta) b_{n+1} - (\alpha \delta - \beta \gamma) b_n .$
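In case anyone wants to check the claimed closed forms numerically, here is a quick sketch; the starting vector $(a_0, b_0) = (1, 0)$ is my assumption, chosen to be consistent with the stated formulas $a_n = 2^n - 2n2^n$ and $b_n = 2n2^n$:

```python
def step(a, b):
    """Apply M = [[-2, -4], [4, 6]] to the column vector (a, b)."""
    return -2 * a - 4 * b, 4 * a + 6 * b

def closed_form(n):
    """The claimed solutions a_n = 2^n - 2n*2^n, b_n = 2n*2^n."""
    return 2**n - 2 * n * 2**n, 2 * n * 2**n

a, b = closed_form(0)  # (1, 0)
history = [(a, b)]
for _ in range(10):
    a, b = step(a, b)
    history.append((a, b))

# Cayley-Hamilton: both sequences satisfy x_{n+2} = 4 x_{n+1} - 4 x_n.
recurrence_ok = all(
    history[n + 2][i] == 4 * history[n + 1][i] - 4 * history[n][i]
    for n in range(9) for i in (0, 1)
)
```

Matrix iteration and the closed forms agree term by term, and both satisfy the recurrence coming from $\lambda^2 - 4\lambda + 4 = 0$.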
This is not homework. Problem 3-38 reads: Let $A_{n}$ be a closed set contained in $(n,n+1)$. Suppose that $f:\mathbb{R}\rightarrow \mathbb{R}$ satisfies $\int_{A_{n}}f=(-1)^{n}/n$ and $f(x)=0$ for $x\notin$ any $A_{n}$. Find two partitions of unity $\Phi$ and $\Psi$ such that $\sum_{\phi\in\Phi}\int_{\mathbb{R}}\phi\cdot f$ and $\sum_{\psi\in\Psi}\int_{\mathbb{R}}\psi\cdot f$ converge absolutely to different values. A few observations: First, $n\ge 1$. Second, Spivak uses what he calls an extended integral, whose definition and relations with the usual integral can be found on p. 65. It may be helpful to have an example of such a function in mind. Let $A_{n}$ be the closed interval of length $1/(2n)$ centered at the point $(2n+1)/2$. Clearly $A_{n}\subset(n,n+1)$. Then define $$f(x)=\begin{cases} \hphantom{-}2& \text{if $x\in A_{n}$ for $n$ even}\\ -2& \text{if $x\in A_{n}$ for $n$ odd}\\ \hphantom{-}0& \text{otherwise}. \end{cases}$$ A possible approach: Let $a_{n}=(-1)^{n}/n$. Since $\sum_{n}a_{n}=\alpha\in\mathbb{R}$ but the convergence is conditional, for any $\beta\not=\alpha$ there is a rearrangement $\{b_{n}\}$ of the sequence $\{a_{n}\}$ such that $\sum_{n}b_{n}=\beta$. Now, we form a family of open sets $\{U_{n}\}$, where $U_{n}$ is the union of $n$ intervals $(k,k+1)$, each corresponding to a term of the $n$-th partial sum of $\sum_{n}a_{n}$. We form a similar family $\{V_{n}\}$ by looking at the partial sums of $\sum_{n}b_{n}$. E.g., if we let $\{b_{n}\}=\{-1,1/2,1/4,-1/3,1/6,1/8,-1/5,\ldots\}$ we have $V_{3}=(1,2)\cup(2,3)\cup(4,5)$, while since $\{a_{n}\}=\{-1,1/2,-1/3,1/4,\ldots\}$ we have $U_{3}=(1,2)\cup(2,3)\cup(3,4)$. If we slightly fatten up the $U_{n}$ (resp. the $V_{n}$), we form open covers $\mathcal{U}$ (resp. $\mathcal{V}$) of all the reals greater than or equal to 1 without adding points where $f$ is non-zero.
My heart tells me that partitions of unity $\Phi$ and $\Psi$ subordinate to $\mathcal{U}$ and $\mathcal{V}$, respectively, will be the desired ones. But alas, I am lost! Does anyone know how to show that the aforementioned partitions of unity are the desired ones? Other possible approaches to the solution are also welcome.
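The rearrangement mechanism driving the whole construction can at least be seen numerically: greedily drawing positive terms $1/2k$ while below a target and negative terms $-1/(2k-1)$ while above it steers the partial sums of $\sum_{n\ge 1}(-1)^n/n$ toward any prescribed value (a sketch; the step count and tolerances are arbitrary choices of mine):

```python
import math

def rearranged_partial_sum(target, n_steps):
    """Greedy rearrangement of sum_{n>=1} (-1)^n / n steered toward `target`."""
    total = 0.0
    next_even, next_odd = 2, 1  # denominators of the unused +1/even, -1/odd terms
    for _ in range(n_steps):
        if total < target:
            total += 1.0 / next_even  # add the next positive term
            next_even += 2
        else:
            total -= 1.0 / next_odd   # add the next negative term
            next_odd += 2
    return total

# In the original order the series converges to -log 2.
standard = sum((-1) ** n / n for n in range(1, 20001))
standard_error = abs(standard + math.log(2))
# Rearranged, the same terms can be made to approach 0.5 instead.
steered = rearranged_partial_sum(0.5, 20000)
```

This is exactly Riemann's rearrangement argument in miniature, which is what the families $\{U_n\}$ and $\{V_n\}$ above are encoding geometrically.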
I am reading the following paper: https://arxiv.org/pdf/math/0211450.pdf (p. 10). My question concerns the part marked with the red line. Why is $n_i$ squared? I read the following article: Inequivalent representations of a finite group, which says $$\deg \chi=\sum_{i=1}^k n_i\deg\chi_i$$ where $\chi_i$ is the character of the representation $\rho_i$. The above equality says the degree of the representation $\rho$ is the sum of the multiplicity of each $\rho_i$ times the degree of $\rho_i$, where $\rho = n_1\rho_1\oplus \cdots\oplus n_k\rho_k$. But I still have no idea why the group order, which is the number of elements in $G$, is equal to that summation.
Search Now showing items 1-10 of 26 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... 
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP) [1]. Such an exotic state of strongly interacting ...

K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...

Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...

Charged-particle multiplicities in proton-proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...

Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Composition series

A composition series (composition sequence) is a finite subset $\{a_0,\ldots,a_n\}$ of a partially ordered set with least element $0$ and greatest element $1$ such that\[0 = a_0 < a_1 < \cdots < a_n = 1\]and all the intervals $[a_i,a_{i+1}]$ are simple (elementary) (cf. Elementary interval). One can also speak of a composition series of an arbitrary interval $[a,b]$ of a partially ordered set. Composition series certainly do not always exist. A composition series of a universal algebra is defined in terms of congruences. Since congruences in groups are defined by normal subgroups, a composition series of a group can be defined as a normal series of it (see Subgroup series) having no proper refinements (without repetition). A series \[ E = G_0 \subset \cdots \subset G_{k-1} \subset G_k = G \] is a composition series for the group $G$ if and only if every $G_{i-1}$ is a maximal normal subgroup of $G_i$. All the factors $G_i/G_{i-1}$ of a composition series are simple groups. Every normal series isomorphic to a composition series is a composition series itself. The Jordan–Hölder theorem holds for composition series of groups. Composition series of rings, and more generally of $\Omega$-groups, are defined in a similar way and have similar properties (see [Ku]). References [Co] P.M. Cohn, "Universal algebra", Reidel (1981) [Ku] A.G. Kurosh, "Lectures on general algebra", Chelsea (1963) (Translated from Russian) Comments For a universal algebra the notion of a composition series is more precisely defined as follows [Co]. Let $A$ be an $\Omega$-algebra and $E$ a subalgebra. A normal chain from $E$ to $A$ is then a finite chain of subalgebras of $A$, \[ E = A_0 \subset A_1 \subset \cdots \subset A_m = A \] together with a congruence $\mathfrak{A}_i$ on $A_i$ for $i=1,\ldots,m$ such that $A_{i-1}$ is precisely an $\mathfrak{A}_i$-class.
There is a natural notion of refinement and isomorphism of normal chains: normal chains from $E$ to $A$ are isomorphic if and only if they are equally long and there is a permutation $\sigma$ of $1,\ldots,m$ such that \[ A_i/\mathfrak{A}_i \simeq A'_{\sigma(i)}/\mathfrak{A}'_{\sigma(i)}. \] Then one has the Schreier refinement theorem, to the effect that if $A$ is an $\Omega$-algebra with subalgebra $E$ such that on any subalgebra of $A$ all congruences commute, then any two normal chains from $E$ to $A$ have isomorphic refinements, and the Jordan–Hölder theorem, that any two composition series from $E$ to $A$ on such an algebra are isomorphic. A subgroup $H$ of a group $G$ is called subnormal if there is a chain of subgroups \[ H = H_0 \subset H_1 \subset \cdots \subset H_m = G \] such that $H_i$ is normal in $H_{i+1}$, $i=0,\ldots,m-1$. Consider the lattice $L$ of subnormal subgroups of $G$. Then a composition series for the partially ordered set $L$ in fact defines a composition series for $G$, and vice versa. Something analogous can be formulated for universal algebras. (These statements of course do not hold for, respectively, the lattice of normal subgroups and the lattice of congruences.) References [Hu] B. Huppert, "Endliche Gruppen", 1, Springer (1967)
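As a concrete illustration (a standard textbook example, not drawn from the references above), consider the chain $\{e\} \subset C_2 \subset V_4 \subset A_4 \subset S_4$ of subgroup orders $1, 2, 4, 12, 24$. The sketch below checks that each factor $G_i/G_{i-1}$ has prime order; a group of prime order is simple, so every factor is simple and the chain is indeed a composition series for $S_4$:

```python
# Subgroup orders along the chain {e} < C_2 < V_4 < A_4 < S_4.
orders = [1, 2, 4, 12, 24]

def is_prime(n):
    """Trial-division primality test, sufficient for these small orders."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Each factor group G_i / G_{i-1} has order |G_i| / |G_{i-1}|.
factor_orders = [b // a for a, b in zip(orders, orders[1:])]

assert all(b % a == 0 for a, b in zip(orders, orders[1:]))  # Lagrange
assert all(is_prime(q) for q in factor_orders)              # simple factors
print(factor_orders)  # [2, 2, 3, 2]
```

The factor list $[2, 2, 3, 2]$ also illustrates the Jordan–Hölder theorem: any other composition series of $S_4$ yields the same multiset of simple factors.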
Talk:Absolute continuity (From Encyclopedia of Mathematics)

I moved some portions of the old article in [[Signed measure]]. I have not had the time to add all Mathscinet and Zentralblatt references. -- Camillo 22:54, 29 July 2012 (CEST)

Could I suggest using $\lambda$ rather than $\mathcal L$ for Lebesgue measure? Since it is very commonly used, almost standard, it would be consistent with the notation for a general measure, $\mu$; calligraphic is already being used for $\sigma$-algebras. -- Jjg 12:57, 30 July 2012 (CEST)
Equivalence of Well-Ordering Principle and Induction/Proof/WOP implies PFI

Theorem

The Well-Ordering Principle (WOP) implies the Principle of Finite Induction (PFI):

Given a subset $S \subseteq \N$ of the natural numbers which has these properties:
$0 \in S$
$n \in S \implies n + 1 \in S$
then $S = \N$.

Proof

We assume the truth of the WOP. Let $S \subseteq \N$ satisfy:
$(D): \quad 0 \in S$
$(E): \quad n \in S \implies n+1 \in S$.

We want to show that $S = \N$, that is, that the PFI holds.

Aiming for a contradiction, suppose that $S \ne \N$.

Consider $S' = \N \setminus S$, where $\setminus$ denotes set difference. From Set Difference is Subset, $S' \subseteq \N$. By supposition, $S' \ne \O$, so by the WOP $S'$ has a minimal element; call it $m$.

By hypothesis $(D)$, $0 \in S$, so from the definition of set difference, $0 \notin S'$. Thus $m \ne 0$, and so $m$ has the form $k + 1$ where $k \in \N$; here we consider the Natural Numbers as Elements of Minimal Infinite Successor Set. Since $k < k + 1 = m$ and $m$ is minimal in $S'$, we have $k \notin S'$, that is, $k \in S$. But $k + 1 = m \notin S$, which contradicts $(E)$.

Thus $S' = \O$, and so $S = \N$.
$\blacksquare$

Sources
1982: P.M. Cohn: Algebra Volume 1 (2nd ed.) ... (previous) ... (next): $\S 2.1$: Chapter $2$: Integers and natural numbers: The integers
This question already has an answer here: Real Analysis: Prove that $\frac2\pi \le \frac{\sin x}{x} \le 1$ for all $|x|\le \frac\pi2$. I just need the $\frac2\pi$ part.

Hint: On the interval $\bigl[0,\frac\pi2\bigr]$, the sine function is concave. As a consequence, the slopes of the chords joining a point of the curve to the origin are decreasing, and $\frac2\pi$ is the slope of the chord joining the local maximum $\bigl(\frac\pi2,1\bigr)$ to the origin.

We can assume that $$0<x\le \frac{\pi}{2}$$ since $$\frac{\sin(x)}{x}$$ is even. We multiply by $x>0$, and we have to prove that $$\frac{2}{\pi}x\le\sin(x)\le x.$$ Defining $$f(x)=x-\sin(x)$$ we get $$f'(x)=1-\cos(x)>0,$$ and letting $$g(x)=\sin(x)-\frac{2}{\pi}x$$ we get $$g'(x)=\cos(x)-\frac{2}{\pi}$$ and $$g''(x)=-\sin(x)<0.$$ Can you finish?
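To complement the two analytic arguments, the inequality can be spot-checked numerically (a sanity check only, not a proof; the sample count and tolerance are arbitrary choices of mine):

```python
import math

# Sample sin(x)/x on (0, pi/2]; by evenness this covers |x| <= pi/2.
# At x = pi/2 the ratio equals 2/pi exactly, so a tiny floating-point
# tolerance is allowed on both bounds.
for k in range(1, 1001):
    x = k * (math.pi / 2) / 1000
    r = math.sin(x) / x
    assert 2 / math.pi - 1e-12 <= r <= 1 + 1e-12, (x, r)

print("bounds hold at all sampled points")
```

The check also makes the hint visible: the sampled ratio decreases monotonically from values near $1$ (small $x$) down to $2/\pi$ at $x = \pi/2$, exactly the chord-slope behaviour of a concave function.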
Year of publication: 1998; Document Type: Article (21 items)

We prove that there exists a positive \(\alpha\) such that for any integer \(\mbox{$d\ge 3$}\) and any topological types \(\mbox{$S_1,\dots,S_n$}\) of plane curve singularities, satisfying \(\mbox{$\mu(S_1)+\dots+\mu(S_n)\le\alpha d^2$}\), there exists a reduced irreducible plane curve of degree \(d\) with exactly \(n\) singular points of types \(\mbox{$S_1,\dots,S_n$}\), respectively. This estimate is optimal with respect to the exponent of \(d\). In particular, we prove that for any topological type \(S\) there exists an irreducible polynomial of degree \(\mbox{$d\le 14\sqrt{\mu(S)}$}\) having a singular point of type \(S\).

Applications of efficient methods in automation - Universität Karlsruhe at the SPS97 in Nürnberg - (1998)

We present a parallel path planning method that is able to automatically handle multiple goal configurations as input. There are two basic approaches, goal switching and bi-directional search, which are combined in the end. Goal switching dynamically selects a favourite goal depending on some distance function. The bi-directional search supports the backward search direction from the goal to the start configuration, which is probably faster. The multi-directional search with goal switching combines the advantages of goal switching and bi-directional search. Altogether, the planning system is enabled to select one of the preferable goal configurations by itself. All concepts are experimentally validated for a set of benchmark problems consisting of an industrial robot arm with six degrees of freedom in a 3D environment.

We consider the task of complete spatial coverage of regions by mobile robots. The regions may lie in completely known, partially known, or unknown environments. The solution is based on a method from computer graphics for filling image regions.
The method has a local view and thus permits the use of sensor data and the handling of unforeseen obstacles. The regions can be specified off-line by maps or built up on-line from sensor data. Nevertheless, complete coverage, with each part of the area processed exactly once, is guaranteed. This is validated with examples in a graphical visualization of the real-time control of the robot.

We present a parallel control architecture for industrial robot cells. It is based on closed functional components arranged in a flat communication hierarchy. The components may be executed by different processing elements, and each component itself may run on multiple processing elements. The system is driven by the instructions of a central cell control component. We set up necessary requirements for industrial robot cells and possible parallelization levels. These are met by the suggested robot control architecture. As an example we present a robot work cell and a component for motion planning, which fits well in this concept.

When gripping deformable or fragile workpieces, the gripping speed and the gripping force are of particular importance. In this paper, a universal controller for pneumatic grippers is described which allows these quantities to be set easily via two voltage-controlled proportional valves. This arrangement is used for an analysis of the influence of gripping force and gripping speed when gripping cables and cable harnesses, which has proven to be robust and unproblematic.

Enhancing the quality of surgical interventions is one of the main goals of surgical robotics. Thus we have devised a surgical robotic system for maxillofacial surgery which can be used as an intelligent intraoperative surgical tool. Up to now a surgeon preoperatively plans an intervention by studying two-dimensional X-rays, thus neglecting the third dimension.
In the course of the special research programme "Computer and Sensor Aided Surgery", a planning system has been developed at our institute which allows the surgeon to plan an operation on a three-dimensional computer model of the patient. Transposing the preoperatively planned bone cuts, bore holes, cavities, and milled surfaces during surgery still proves to be a problem, as no adequate means are at hand: the actual performance of the surgical intervention and the surgical outcome solely depend on the experience and the skill of the operating surgeon. In this paper we present our approach of a surgical robotic system to be used in maxillofacial surgery. Special stress is laid upon the modelling of the environment in the operating theatre and the motion planning of our surgical robot. This paper is based on a path planning approach we reported earlier for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. It has on-line capabilities by searching in an implicit and discrete configuration space and detecting collisions in the Cartesian workspace by distance computation based on the given CAD model. Here, we present different methods for specifying the C-space discretization. Besides the usual uniform and heuristic discretization, we investigate two versions of an optimal discretization for a user-predefined Cartesian resolution. The different methods are experimentally evaluated. Additionally, we provide a set of 3-dimensional benchmark problems for a fair comparison of path planners. For each benchmark, the run-times of our planner are between only 3 and 100 seconds on a Pentium PC with 133 MHz. In this paper, the problem of path planning for robot manipulators with six degrees of freedom in an on-line provided three-dimensional environment is investigated. As a basic approach, the best-first algorithm is used to search in the implicit discrete configuration space.
Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. The basic approach is extended by three simple mechanisms and results in a heuristic hierarchical search. This is done by adjusting the step size of the search to the distance between the robot and the obstacles. As a first step, we show encouraging experimental results with two degrees of freedom for five typical benchmark problems. This paper presents a new approach to parallel path planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on a best-first search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on polyhedral models of the robot and the obstacles. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel path planner on a workstation cluster with 9 PCs and tested the planner for several benchmark environments. With optimal discretisation, the new approach usually shows very good speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
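The core search loop shared by these abstracts can be illustrated with a minimal sketch (my own toy version, not the authors' code: the real planners search a 6-D configuration space and detect collisions by hierarchical distance computation on CAD models, whereas here a 2-D grid and a boolean obstacle predicate stand in for both):

```python
import heapq

def best_first(start, goal, blocked, size=10):
    """Greedy best-first search on a size x size grid of configurations."""
    def h(c):  # heuristic: Manhattan distance to the goal
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    frontier = [(h(start), start)]
    parents = {start: None}          # also serves as the visited set
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:             # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in parents and not blocked(nxt)):
                parents[nxt] = cell  # the "collision check" happens here
                heapq.heappush(frontier, (h(nxt), nxt))
    return None                      # goal unreachable

def wall(c):
    """Obstacle: a vertical wall at x == 5 with a single gap at y == 7."""
    return c[0] == 5 and c[1] != 7

path = best_first((0, 0), (9, 0), wall)
print(path[0], path[-1])  # (0, 0) (9, 0)
```

Because the expansion order is driven purely by the heuristic, the search is fast but the returned path is not guaranteed shortest, which mirrors the trade-off accepted by the on-line planners described above.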