Cardinal characteristics of the continuum

The subject known as cardinal characteristics of the continuum explores the rich territory---sometimes hidden from view, depending on the set-theoretic background---between the countably infinite cardinal $\aleph_0$ and the uncountable cardinality of the continuum. The subject begins with Cantor's theorem laying out the basic dichotomy that the continuum $\frak{c}=2^{\aleph_0}$ is strictly larger than $\aleph_0$, and goes on to explore the various ways that properties of $\aleph_0$ might extend to uncountable cardinals. For example, the union of countably many measure zero subsets of $\mathbb{R}$ has measure $0$; the union of countably many meager sets is meager; every countable family of functions $f:\omega\to\omega$ is bounded by a single function under eventual domination; every countable set of reals has measure $0$. To what extent can we hope to extend such properties to uncountable collections? The various cardinal characteristics of the continuum, many of which are described below, are defined exactly to be the cardinalities where these and other similar properties first begin to fail for uncountable collections. Each cardinal characteristic measures the extent to which a particular mathematical phenomenon extends from the countable to the uncountable, and the lesson of the subject is that there is an enormous diversity of such characteristics, exhibiting diverse combinations in various models of set theory. When the continuum is small, the characteristics are pressed together---under the continuum hypothesis, for example, they are all equal to the continuum---but in other models, the different characteristics are teased apart and seen to express fundamentally different, inequivalent properties.
The subject breaks into two major components: first, proving the positive relations among the characteristics, such as $\omega_1\leq\frak{b}\leq\frak{d}\leq\frak{c}$; and second, constructing models of set theory, generally by forcing, which reveal the range of possibility, such as a model in which $\omega_1\lt\frak{b}\lt\frak{d}\lt\frak{c}$. Thus, the philosophy of the subject naturally exhibits an unusual degree of contingency for set-theoretic truth: we understand the cardinal characteristics more deeply because we know the range of possibility for their relations to each other. An excellent general resource on the subject is [1].

The bounding number

The bounding number $\frak{b}$ is the size of the smallest family of functions $f:\omega\to\omega$ that is not bounded with respect to eventual domination.

The dominating number

The dominating number $\frak{d}$ is the size of the smallest family of functions $f:\omega\to\omega$ such that every function is eventually dominated by a function in the family.

The covering numbers

The additivity numbers

Cichoń's diagram

Here ${\mathcal K}$ denotes the ideal of meager sets and ${\mathcal L}$ the ideal of Lebesgue measure zero sets; an arrow indicates a provable inequality $\leq$.

           ${\rm cov}({\mathcal L})$ → ${\rm non}({\mathcal K})$ → ${\rm cof}({\mathcal K})$ → ${\rm cof}({\mathcal L})$ → $2^{\aleph_0}$
                ↑              ↑               ↑              ↑
                |       ${\mathfrak b}$  →  ${\mathfrak d}$   |
                |              ↑               ↑              |
$\aleph_1$ → ${\rm add}({\mathcal L})$ → ${\rm add}({\mathcal K})$ → ${\rm cov}({\mathcal K})$ → ${\rm non}({\mathcal L})$

References

1. Blass, Andreas. Chapter 6: Cardinal characteristics of the continuum. In Handbook of Set Theory, 2010.
The package CircuiTikz provides a set of macros for naturally typesetting electrical and electronic networks. This article explains basic usage of the package. CircuiTikz includes several nodes that can be used with standard TikZ syntax.

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{circuitikz}
\begin{document}
\begin{center}
\begin{circuitikz}
\draw (0,0) to[variable cute inductor] (2,0);
\end{circuitikz}
\end{center}
\end{document}

To use the package, it must be imported with \usepackage{circuitikz} in the preamble. The circuitikz environment is then used to typeset the diagram with TikZ syntax. In the example, a node called variable cute inductor is used. As mentioned before, you should use TikZ syntax to draw electrical network diagrams; the examples even work if the tikzpicture environment is used instead of circuitikz. Below, a more complex example is presented.

\begin{center}
\begin{circuitikz}[american voltages]
\draw (0,0) to [short, *-] (6,0)
  to [V, l_=$\mathrm{j}{\omega}_m \underline{\psi}^s_R$] (6,2)
  to [R, l_=$R_R$] (6,4)
  to [short, i_=$\underline{i}^s_R$] (5,4)
  (0,0) to [open, v^>=$\underline{u}^s_s$] (0,4)
  to [short, *- ,i=$\underline{i}^s_s$] (1,4)
  to [R, l=$R_s$] (3,4)
  to [L, l=$L_{\sigma}$] (5,4)
  to [short, i_=$\underline{i}^s_M$] (5,3)
  to [L, l_=$L_M$] (5,0);
\end{circuitikz}
\end{center}

The nodes short, V, R and L are presented here, but there are a lot more; some of them are presented in the next section. Most of the element classes provided by CircuiTikz are listed below:

Monopoles
Bipoles
Diodes
Dynamical bipoles

For more information see:
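As a further illustration of the to[...] bipole syntax, here is a minimal sketch of a simple series loop. It uses only standard CircuiTikz bipole names (battery1, R, C, short); the labels $V_s$, $R_1$, $C_1$ are placeholders chosen for this example.

```latex
\documentclass{article}
\usepackage{circuitikz}
\begin{document}
\begin{circuitikz}
  % a simple series loop: battery, resistor, capacitor
  \draw (0,0) to[battery1, l=$V_s$] (0,3)
        to[R, l=$R_1$] (3,3)
        to[C, l=$C_1$] (3,0)
        to[short] (0,0);
\end{circuitikz}
\end{document}
```

Each to[...] segment places one bipole between the two coordinates, and the l= key attaches a label beside it.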
When I studied QM, I worked only with time-independent Hamiltonians. In this case the unitary evolution operator has the form $$\hat{U}=e^{-\frac{i}{\hbar}Ht},$$ which follows from the equation $$ i\hbar\frac{d}{dt}\hat{U}=H\hat{U}. $$ And in this case, the Hamiltonian in the Heisenberg picture ($H_{H}$) is just the same as the Hamiltonian in the Schrödinger picture ($H_{S}$), i.e. it commutes with $\hat{U}$. Now I have a Hamiltonian that depends explicitly on time. Specifically, $$H_{S}=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^2 \hat{q}^2-F_0 \sin(\omega_0t)\hat{q},$$ and in my problem I need to calculate $H_H$ (the Hamiltonian in the Heisenberg picture). I've found that the differential equation for $\hat{U}$ mentioned above generally has a solution of the form (with $U(0)=1$ and $\xi=-\frac{i}{\hbar}$) $$U(t)=1+\xi\int\limits_0^t H(t')\,dt'+ \xi^2\int\limits_0^t dt'\,H(t')\int\limits_0^{t'} dt''\,H(t'')+\xi^3\int\limits_0^t dt'\,H(t')\int\limits_0^{t'} dt''\,H(t'')\int\limits_0^{t''} dt'''\,H(t''')+\cdots$$ So my questions are: Are there other ways to calculate $\hat{U}$? Could you give a link or tell me about them? If you know the form of the solution for my case, please tell me. If you know any articles or papers on this topic, please link them too.
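One practical route, independent of the series expansion, is to compute $\hat U(t)$ numerically as a time-ordered product of short-time propagators, $U(t)\approx\prod_k e^{-iH(t_k)\Delta t}$; in the $\Delta t\to 0$ limit this reproduces the Dyson series above. The sketch below sets $\hbar=1$ and uses a hypothetical driven two-level Hamiltonian as a stand-in (a finite-dimensional analogue, not the oscillator itself):

```python
import numpy as np

def evolution_operator(H, t, steps=2000):
    """Time-ordered product U(t) ~ prod_k exp(-i H(t_k) dt),
    a numerical stand-in for the Dyson series (hbar = 1)."""
    dt = t / steps
    U = np.eye(2, dtype=complex)
    for k in range(steps):
        Hk = H((k + 0.5) * dt)              # midpoint of each slice
        # exp(-i Hk dt) via eigendecomposition (Hk is Hermitian)
        w, V = np.linalg.eigh(Hk)
        U = (V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T) @ U
    return U

# hypothetical driven two-level Hamiltonian, for illustration only
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = lambda t: sz + np.sin(2 * t) * sx

U = evolution_operator(H, 1.0)
print(np.allclose(U @ U.conj().T, np.eye(2), atol=1e-8))  # True: U is unitary
```

Each factor is exactly unitary, so the product stays unitary regardless of the step size; accuracy of the ordering error improves as `steps` grows.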
This question is giving me some issue because I know sin and cos are the same at $45$ degrees, but there is a $2$ on the right side where cosine is. So how would I get $\theta$?

Square both sides to get ${\sin ^2}\theta = 4{\cos ^2}\theta \Rightarrow 1 - {\cos ^2}\theta = 4{\cos ^2}\theta \Rightarrow {\cos ^2}\theta = \frac{1}{5}$.

Let $\theta \in \mathbb{R}$; note that $\cos^{2}\theta = 1 - \sin^{2}\theta$. Then by assumption $\cos^{2}\theta = 1 - 4\cos^{2}\theta$, so $\cos^{2}\theta = 1/5$. Equivalently, since $\cos\theta \neq 0$, dividing both sides of the original equation by $\cos\theta$ gives $\tan\theta = 2$, so $\theta = \arctan 2 + k\pi$.
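As a quick numerical sanity check of the principal solution $\theta=\arctan 2$:

```python
import math

# sin(t) = 2 cos(t)  =>  tan(t) = 2 (cos t != 0), principal solution t = atan(2)
t = math.atan(2)
print(math.isclose(math.sin(t), 2 * math.cos(t)))   # True
print(math.isclose(math.cos(t) ** 2, 1 / 5))        # True
```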
I am having trouble understanding the spectral characterization of Reed–Solomon codes. My script states the following: An evaluation code is defined as $$C = \{(c_0, \ldots, c_{n-1}) : c_l = a(\beta_l) \text{ for some } a(x) \in F[x] \text{ with } \deg a(x) < k\},$$ where $F$ is some field and $(\beta_0, \ldots, \beta_{n-1}) \in F^n$ is fixed. It is easy to see that this set constitutes a linear code, which can thus be attributed a minimum Hamming distance (= minimum Hamming weight) as follows: if $\beta_i \neq \beta_j$ for $i \neq j$ (i.e. all betas are different), the minimum Hamming weight may be determined by noting that a nonzero polynomial of degree at most $k-1$ has at most $k-1$ zeros. Thus, the Hamming weight of $(a(\beta_0), \ldots, a(\beta_{n-1}))$ is at least $n-(k-1)$. By the Singleton bound, however, the minimum Hamming weight is upper-bounded by $n-(k-1)$, thus: $d_{min} = n - k + 1$. Reed–Solomon codes correspond to the above definition with $\beta_l = \alpha^l$, where $\alpha$ is a primitive $n$-th root of unity in $F$. One can thus state that we are dealing with a DFT (ignoring scale factors) and write: $$C = \{(b_0, \ldots, b_{n-1}) \in F^n : B_l = 0 \text{ for } k \leq l < n\}$$ ($B$ is the DFT of $b$). My question concerns the following part: It is subsequently stated that one may choose to write $$C = \{(b_0, \ldots, b_{n-1}) \in F^n : B_l = 0 \text{ for } l_0 \leq l < l_0 + n - k\} \text{ with } 0 \leq l_0 \leq k.$$ Supposedly (by the script) it is obvious that this definition doesn't change the minimum Hamming distance (i.e. still $d_{min} = n - k + 1$), but I don't see why this follows, for the argument stated in the initial part is no longer applicable. Any help on clarifying why the minimum Hamming weight is still given by $d_{min} = n - k + 1$ is greatly appreciated!
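The first claim ($d_{min}=n-k+1$ for distinct evaluation points) is easy to confirm by brute force on a toy instance. The sketch below is only an illustration: it picks the small field $F = GF(7)$ with $n = 6$, $k = 2$, and $\alpha = 3$ (a primitive element of order $6$ mod $7$), and enumerates all codewords.

```python
import itertools

p, n, k = 7, 6, 2        # toy Reed-Solomon parameters over GF(7)
alpha = 3                # 3 has multiplicative order 6 mod 7
betas = [pow(alpha, l, p) for l in range(n)]   # evaluation points alpha^l

def codeword(coeffs):
    """Evaluate a(x) = sum_i coeffs[i] x^i (deg a < k) at every beta."""
    return tuple(sum(c * pow(b, i, p) for i, c in enumerate(coeffs)) % p
                 for b in betas)

code = {codeword(a) for a in itertools.product(range(p), repeat=k)}
weights = [sum(1 for c in w if c != 0) for w in code if any(w)]
print(len(code), min(weights))   # 49 codewords, minimum weight n - k + 1 = 5
```

Every nonzero polynomial of degree $< 2$ has at most one root among the six distinct betas, so every nonzero codeword has weight at least $5$, and the bound is attained.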
Noninertial effects on a scalar field in a spacetime with a magnetic screw dislocation Abstract We investigate rotating effects on a charged scalar field immersed in spacetime with a magnetic screw dislocation. In addition to the hard-wall potential, which we impose to satisfy a boundary condition arising from the rotating effect, we insert a Coulomb-type potential and the Klein–Gordon oscillator into this system, where, analytically, we obtain bound-state solutions which are influenced not only by the spacetime topology, but also by the rotating effects, as a Sagnac-type effect modified by the presence of the magnetic screw dislocation. 1 Introduction In the context of condensed matter physics, Katanaev and Volovich [1] formulated a description of defects in a three-dimensional continuous elastic solid medium, where such defects may be associated with curvature or torsion of the continuous medium. Then, Puntigam and Soleng [2] went further with this formulation through the generalization of the Volterra distortions, considering the temporal coordinate, that is, \((3+1)\) dimensions, in order to adapt these defects to an Einstein–Cartan gravity and introduce the concept of distorted spacetimes. According to this formulation, disclinations are associated with curvature, while dislocations are associated with torsion. In particular, the dislocations can be typified as spiral and screw [3]. Effects associated with the topology of a medium with dislocations have been investigated in crystal structures through differential geometry [4]. In an analysis of the effects of rotation in the Minkowski spacetime with cylindrical symmetry, Landau and Lifshitz showed that the radial coordinate becomes restricted to an interval, where this restriction is an effect directly related to the uniform rotation [21].
This restriction arising from the effects of uniform rotation has been widely used in studies of relativistic quantum mechanical systems, for example, for a Dirac particle [22], in a relativistic Landau–He–McKellar–Wilkens quantization [23], for the Dirac oscillator [24], for a scalar field in the spacetime with a space-like dislocation and in the spacetime with a spiral dislocation [6], in the quantum dynamics of scalar bosons [25], in the relativistic quantum motion of spin-0 particles in the cosmic string spacetime [26], and in the Duffin–Kemmer–Petiau equation with a magnetic cosmic string background [27]. In the nonrelativistic case, this restriction has been studied for a Dirac particle in the spacetime with a screw dislocation [16] and in nonrelativistic topological quantum scattering [28]. However, a point that has not been analyzed in the literature is the rotating effect on the scalar field by considering the spacetime with a magnetic screw dislocation as background, that is, a screw dislocation whose core carries a magnetic field with magnetic quantum flux \(\Phi _B\), with the magnetic field vanishing outside the topological defect [7, 11]. Throughout, \(w\) is the constant angular frequency of the rotating frame, which gives us the metric of the rotating background. In this paper, we investigate the relativistic Aharonov–Bohm effect for bound states [29, 30] on a scalar field in a spacetime with a screw dislocation, where this field is subject to confinement potentials in a uniformly rotating frame. We begin our analysis with the hard-wall potential. Next, we consider a scalar field with position-dependent mass interacting with a Coulomb-type central potential. Finally, we insert the Klein–Gordon oscillator [31] and investigate the harmonic effects coming from this model of relativistic oscillator. In all these cases, we obtain analytical solutions, which are influenced not only by the topology of the spacetime but also by the effects of rotation.
\(m\) is the rest mass of the scalar field and \(A_{\mu }=(0,0,A_{\varphi },0)\) is the electromagnetic 4-vector potential, where \(A_{\varphi }\) is given by [18, 33, 34]. The structure of this paper is as follows: in Sect. 2, for a particular case, we investigate the effects of the spacetime topology and of rotation on an electrically charged scalar field subject to the hard-wall potential, where it is possible to obtain the energy levels of this system; in Sect. 3, we insert a Coulomb-type potential into the Klein–Gordon equation via the mass term and, for a particular case, extract the energy profile of this system; in Sect. 4, through a non-minimal coupling in the Klein–Gordon equation, we insert a relativistic oscillator model and analyze the harmonic effects on the scalar field in a uniformly rotating frame in the spacetime with a magnetic screw dislocation, where we determine two energy profiles for the system; in Sect. 5, we present our conclusions. 2 Hard-wall confining potential The relativistic energy levels depend on the angular frequency \(w\), giving us, then, a Sagnac-type effect [22, 49, 50]. For \(\Phi _{B}=0\) and \(\chi \ne 0\) we recover the result obtained in Ref. [6]; for \(\Phi _{B}\ne 0\) and \(\chi =0\) we obtain the relativistic energy levels of a charged scalar field subject to the Aharonov–Bohm effect in the Minkowski spacetime in a uniformly rotating frame. 3 Coulomb-type potential The standard procedure for inserting the Coulomb potential into relativistic wave equations is through the minimal coupling \({\hat{p}}_{\mu }\rightarrow {\hat{p}}_{\mu }-qA_{\mu }\) via the temporal component \(A_{0}\) [51]. Another procedure for inserting central potentials is to modify the mass term of the relativistic wave equations via the transformation \(m\rightarrow m+V(r)\), where \(V(r)\) is a scalar potential. The latter procedure yields what is known in the literature as a position-dependent mass system.
This type of system has been studied in atomic physics [52], in the rotating cosmic string spacetime [53, 54], for a two-dimensional Klein–Gordon particle [55], for the quark–antiquark interaction [56] and for a scalar particle in a Gödel-type spacetime [57]. \(a\) is a constant that characterizes the Coulomb-type potential. The Coulomb-type potential has been studied in the propagation of gravitational waves [58], for a magnetic quadrupole moment [59], for a neutral particle with a permanent magnetic dipole moment [60] and in Lorentz symmetry violation scenarios [61, 62]. Then, by substituting Eq. (15) into Eq. (6), we obtain from the solution (7) the radial differential equation, which involves \(w\) and the effective angular momentum quantum number \(l_{\text {eff}}=l-\frac{q\Phi _B}{2\pi }\). By setting \(w=0\) we recover the result obtained in Ref. [19]. For \(\Phi _{B}=0\) and \(\chi \ne 0\), Eq. (23) represents the relativistic energy levels of a scalar field of position-dependent mass under the effects of a Coulomb-type potential in a uniformly rotating frame in the spacetime with a screw dislocation; for \(\Phi _{B}\ne 0\) and \(\chi =0\), Eq. (23) represents the relativistic energy levels of a charged scalar field of position-dependent mass under the effects of a Coulomb-type potential in the Minkowski spacetime. 4 Klein–Gordon oscillator It is possible to discuss two energy profiles for this system: one for any value of the angular frequency of rotation, which induces a hard-wall potential (general case), and another for very small values of the angular frequency of rotation (particular case). 4.1 General case An arbitrary value of the angular frequency of rotation implies a case similar to that seen in Sect. 2; that is, the wave function must vanish at \(\varrho _0=\frac{m\omega (1-\chi ^2 w^2)}{w^2}\), a restriction imposed by the rotation.
This means that the charged scalar field is restricted to a region, where this restriction is characterized by the presence of a hard-wall potential induced by the effects of rotation in a spacetime with a magnetic screw dislocation. This kind of confinement is described by the boundary condition given in Eq. (12). 4.2 Particular case Bound states follow by imposing that \({\bar{b}}=-n\), with \(n=0,1,2,\ldots \). Then, by following the discussion leading from Eq. (22) to Eq. (23), we obtain the corresponding expression for the energy levels. 5 Conclusion We have investigated the effects of a uniformly rotating frame on a charged scalar field in the spacetime with a magnetic screw dislocation. Due to the rotating effects in this background, we note that the radial coordinate is restricted, and this restriction is determined by the spacetime topology. Through this restriction on the radial coordinate, we determine bound-state solutions and hence extract the energy profiles of the systems analyzed. Our investigation starts with the hard-wall potential, which, from the mathematical point of view, is a Dirichlet boundary condition imposed by the rotating effects, and with that we determine the relativistic energy profile of this system. Through the definition of a position-dependent mass system, we insert a Coulomb-type potential into the Klein–Gordon equation by modifying the mass term. Analytically, for very small values of the frequency of rotation, we determine bound-state solutions, and we note that the presence of the Coulomb-type potential modifies the energy spectrum of the system. We also investigate the Klein–Gordon oscillator, where, for well-defined rotating frequency scales, it is possible to determine two energy profiles for this system. First, we determine the energy profile of the Klein–Gordon oscillator in a uniformly rotating frame for arbitrary values of the frequency of rotation, which induces a hard-wall potential; then, for very small values of the frequency of rotation.
We can see that the two energy profiles are totally different. It is worth mentioning that in all cases we can note the influence of the spacetime topology through the redefinition of the angular momentum eigenvalue, which is described in terms of the parameter associated with the screw dislocation and the parameter associated with the internal quantum flux of the defect. Consequently, the Sagnac-type effect, which arises at all energy levels due to rotation, is also influenced by the internal quantum flux of the topological defect.

Acknowledgements The author would like to thank CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil). Ricardo L. L. Vitória was supported by the CNPq project No. 150538/2018-9.

References

13. K. Bakke, L.R. Ribeiro, C. Furtado, Cent. Eur. J. Phys. 8(6), 893 (2010)
21. L.D. Landau, E.M. Lifshitz, The Classical Theory of Fields, Course of Theoretical Physics, vol. 2, 4th edn. (Elsevier, Oxford, 1980)
47. M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions (Dover Publications, New York, 1965)
49. M.G. Sagnac, C. R. Acad. Sci. (Paris) 157, 708 (1913)
50. M.G. Sagnac, C. R. Acad. Sci. (Paris) 157, 1410 (1913)

Copyright information Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Funded by SCOAP 3
The analemmatic sundial is unique in its ability to use a gnomon (the shadow-casting element) of any height. This leads to the popular use of the analemmatic sundial as a "human sundial", where people use their own shadow to tell time. Unlike garden horizontal dials or vertical dials on buildings, the analemmatic sundial requires a moveable gnomon: the position of the gnomon, or where the person stands, must be changed depending upon the season. A central walkway or guide indicates that a person should stand closer to the sun's noon hour mark at the summer solstice and, conversely, much further south at the winter solstice. At the equinoxes the gnomon or person stands on an east-west line connecting the solar hours of 6am and 6pm. Time on the analemmatic sundial is usually read simply from hour marks made on an ellipse. We recommend an ellipse with semi-major axis (the distance from the center of the walkway to the 6am or to the 6pm solar mark) of 6 to 8 feet (2 to 2.5 meters). The semi-minor axis and the date marks on the walkway depend upon the sundial's latitude. Using the symbol \(\phi\) for the sundial latitude and \(M\) for the sundial's semi-major axis, the semi-minor axis \(m\) is given by the simple equation:\[ \begin{equation} m = M \sin(\phi) \end{equation} \] The semi-major axis is oriented exactly east-west and the semi-minor axis is aligned exactly north-south. We'll call the point where they cross the origin. At the sundial site our first task is to draw out these two perpendicular lines, ensuring that the east-west (x-axis) and north-south (y-axis) directions are correctly oriented. Next we need to draw the ellipse. Here we take advantage of the fact that the distance from one focus to a point on the ellipse and back to the other focus is constant everywhere around the ellipse, and this constant distance equals twice the semi-major axis. Therefore we need to find the positions of the focal points along the east-west line (x-axis).
There are two focal points: \[ \begin{equation} F_1 = -\sqrt{M^2 - m^2} \quad \text{ and } \quad F_2 = +\sqrt{M^2 - m^2}\end{equation}\] For example, at latitude 40° N with M = 6 feet, m = 3.86 feet (3 ft 10 in), giving \(F = \pm 4.6 \text{ ft}\). In practice we can draw the ellipse by making a length of string going from one focus \(F_1\) to the solar noon point (the end of the semi-minor axis) to the other focus \(F_2\). Then, holding the string firm at the two foci, use a piece of chalk to pull the string taut. As you move the chalk while keeping the string taut, you will trace out the sundial's ellipse. Once you have the ellipse, the hours can be easily marked. Let \(H\) be the angle of the sun away from solar noon; we call this the hour angle. Remember that there are 15 degrees per hour, so, for example, 10am is 2 hours before noon, or H = −30°; likewise 3pm is 3 hours after noon, or H = +45°. We compute the hour marks on the ellipse from 6am (H = −90°) to 6pm (H = +90°). The x (east-west) and y (north-south) distances of each hour mark from the origin are: \[ \begin{equation} x = M \sin(H)\end{equation}\] and \[ \begin{equation} y = M\sin(\phi)\cos(H)\end{equation}\] The walkway is a bit more complicated since it depends upon the distance of the sun from the celestial equator. The sun follows an annual path from 23.4° south of the equator at the winter solstice to 23.4° north of the equator at the summer solstice. This angle is called the sun's declination, denoted by the symbol \(\delta\). At both the spring and fall equinox the sun is on the celestial equator, at 0° declination.
Let's call Z the distance from the origin up or down the y-axis that represents the place where we should stand: \[ \begin{equation} Z = M\tan(\delta)\cos(\phi)\end{equation} \]

Date    Declination   Date    Declination
Jan 1   −23.1°        Jul 1   +22.0°
Feb 1   −17.3°        Aug 1   +18.2°
Mar 1   −7.9°         Sep 1   +8.5°
Mar 22  0.0°          Sep 22  0.0°
Apr 1   +4.2°         Oct 1   −2.9°
May 1   +14.8°        Nov 1   −14.2°
Jun 1   +22.0°        Dec 1   −21.7°
Jun 22  +23.4°        Dec 22  −23.4°

There are two other important points on the east-west line (x-axis). They are the Bailey Points, which indicate the time and direction of sunrise or sunset. For example, when you stand at a particular date on the walkway and look toward the west Bailey point, the sight line tells you the time of sunrise. (During spring and summer your dial needs to mark hours before 6am.) Look 180° in the opposite direction from the west Bailey Point, and that is the direction of sunrise. Likewise, if you look toward the east Bailey point, the sight line tells you the time of sunset. (During spring and summer your dial needs to mark hours after 6pm.) Look 180° in the opposite direction from the east Bailey Point, and that is the direction of sunset. The Bailey Points lie at a distance from the origin along the east-west (x-axis) line of: \[ \begin{equation} \text{Bailey Point} = \pm M \cos^2(\phi) \sqrt{1-\tan^2(\delta)\tan^2(23.4^\circ)} \end{equation} \] Now that you have the theory of the analemmatic dial, here are two attachments that show, step by step, the layout of the analemmatic sundial, and a spreadsheet with all the mathematics for determining the ellipse shape, the location of hour marks, the Bailey Points, and the monthly distance marks on the walkway. The NASA Photograph of the Day for 27 June 2019 is a beautiful photograph by Gianluca Belgrado taken with a pinhole camera.
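The layout formulas above can be bundled into a short calculator. This is only a sketch (the function name and the sample hour angles are ours), using $m = M\sin\phi$, foci at $\pm\sqrt{M^2-m^2}$, hour marks at $(M\sin H,\ m\cos H)$, and walkway distance $Z = M\tan\delta\cos\phi$:

```python
import math

def analemmatic_layout(lat_deg, M, hour_angles_deg, decl_deg):
    """Ellipse, hour-mark, and walkway geometry for an analemmatic dial.
    Angles in degrees; distances in the same units as M."""
    phi = math.radians(lat_deg)
    m = M * math.sin(phi)                        # semi-minor axis
    F = math.sqrt(M**2 - m**2)                   # focal distance (foci at +/- F)
    marks = [(M * math.sin(math.radians(H)),     # x: east-west
              m * math.cos(math.radians(H)))     # y: north-south
             for H in hour_angles_deg]
    Z = [M * math.tan(math.radians(d)) * math.cos(phi) for d in decl_deg]
    return m, F, marks, Z

# the latitude-40 example: 6 am, 9 am, noon, 3 pm, 6 pm; solstices and equinox
m, F, marks, Z = analemmatic_layout(40, 6, [-90, -45, 0, 45, 90],
                                    [-23.4, 0, 23.4])
print(round(m, 2), round(F, 2))   # 3.86 4.6 -- matches the example above
```

The noon mark comes out at $(0, m)$, the north end of the semi-minor axis, and the equinox walkway position at $Z = 0$, as expected.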
https://apod.nasa.gov/apod/ap190627.html As explained by NASA, "This persistent six month long exposure compresses the time from solstice to solstice (December 21, 2018 to June 16, 2019) into a single point of view....Fixed to a single spot at Casarano, Italy for the entire exposure, the simple [pinhole] camera continuously records the Sun's daily path as a glowing trail burned into the photosensitive paper. Breaks and gaps in the trails are caused by cloud cover. At the end of the exposure, the paper was scanned to create the digital image...." In 2011 Art Paque explained the art of solargraphy to members of the North American Sundial Society at their annual conference in Seattle. The construction steps involve creating a pinhole in thin foil, then taping the foil onto a tin can that has photographic paper inside and opposite the pinhole. The lid on the can is sealed and, most importantly, the can is pointed at the sky with firm support to prevent movement. The rest is up to nature as the sun crosses the sky each day. Beautiful solargraphs such as Gianluca's can be obtained with patience, tracking the sun for three to six months. In the end your solargraph will be a day-by-day time capsule of solar observation. Type "solargraphy" into your web search engine and you will discover a host of sites showing the details of making your pinhole camera. For example: http://www2.uiah.fi/%7ettrygg/camera.html and http://www.pinholephotography.org/Solargraph%20instructions%202.htm Grimm & Parker Architects sponsored a "Green Apple Day" on October 15, 2016 to help two Baltimore City Schools - Graceland Park ES/MS and Holabird Academy ES/MS - receive analemmatic sundials on their front sidewalks. The weather was perfect as teachers and volunteers from G&P chalked out and then painted simple 16 x 5 foot analemmatic sundials. The sidewalks were aligned true North-South, making dial lay-out easy.
With tape measures in hand, they marked out the focal points and north point of the analemmatic ellipse. Then, using the time-honored constant-distance principle, they ran a chalk line between those 3 points to maneuver a piece of chalk along the shape of an ellipse. For the sundial, the ellipse stretched from 5am to 7pm. The hour marks were made using two tape measures to check positions, quickly followed by the drawing of the hour circles with a plastic lid. While volunteers painted the hour circles, others chalked out the walkway, whose monthly lines and solstices were quickly painted as well. The final touch was the inclusion of the East and West Bailey points that determine the direction of the rising and setting sun. With a lot of support and good organization, both dials were finished in 3 hours! Students at the University of Waterloo School of Architecture in Cambridge, Ontario are experimenting with the benefits of 3D design and printing. In particular, Joanne Yau created a set of hexagonal hollow bricks called sundial arches that let in sunlight from different portions of the arch as the sun travels across the sky. We expect that the length-to-width ratio of the bricks can tailor sunlight for specific times of the year (summer, spring/fall, or winter). Joanne Yau was one of three teams challenged to learn how to operate a new industrial 3D printer capable of squirting out clay. Professor Correa, interviewed by 3Dprint.com, said, “There is no other way to make these kinds of façades without enormous cost and time. They are completely unique.” Correa has been involved in 3D printing research on an even more advanced level, studying how such objects respond when exposed to varying degrees of moisture and temperature. “The printer allows us to make much more complex geometry,” said Joanne Yau, part of the team that 3D printed bricks for the ambitious arch/sundial.
“To make this by hand or to extrude it would be virtually impossible.” See a video of how the 3D clay bricks are created in an article by Bridget O'Neal, June 5, 2019: https://3dprint.com/245698/whistling-walls-sundial-arches-ontario-architecture-students-3d-print-clay/ You can lay out a sundial using only a compass and straight edge (and yes, a ruler and book of tangents so that you can set out the gnomon lines for your latitude). Clem Rutter has created a graphical set of instructions for making horizontal sundials, taken from the challenge of Fred Sawyer of the North American Sundial Society to find how many different ways you can graphically lay out the lines of a sundial. Here's where the fun begins. Clem Rutter in short order presents eleven different approaches. Do you want to follow the method of Dürer (1525), Benedetti (1574), Clavius (1586), or the more modern methods of Leybourn (1660) or Ozanam (1673)? All these methods are shown graphically. Join the centuries of gnomonists and begin your own Art of Dialing at https://en.wikipedia.org/wiki/Schema_for_horizontal_dials
Rocky Mountain Journal of Mathematics, Volume 48, Number 3 (2018), 905-912.

Invariant means and property $T$ of crossed products

Abstract

Let $\Gamma $ be a discrete group that acts on a semi-finite measure space $(\Omega , \mu )$ such that there is no $\Gamma $-invariant function in $L^1(\Omega , \mu )$. We show that the absence of the $\Gamma $-invariant mean on $L^\infty (\Omega ,\mu )$ is equivalent to the property $T$ of the reduced $C^*$-crossed product of $L^\infty (\Omega ,\mu )$ by $\Gamma $. In particular, if $\Lambda $ is a countable group acting ergodically on an infinite $\sigma $-finite measure space $(\Omega , \mu )$, then there exists a $\Lambda $-invariant mean on $L^\infty (\Omega , \mu )$ if and only if the corresponding crossed product does not have property $T$. Moreover, if $\Gamma $ is an ICC group, then $\Gamma $ is inner amenable if and only if $\ell ^\infty (\Gamma \setminus \{e\})\rtimes _{\mathbf {i},r} \Gamma $ does not have property $T$, where $\mathbf {i}$ is the conjugate action. On the other hand, a non-compact locally compact group $G$ is amenable if and only if $L^\infty (G)\rtimes _{\mathbf {lt}, r} G_\mathrm {d}$ does not have property $T$, where $G_\mathrm {d}$ is the group $G$ equipped with the discrete topology and $\mathbf {lt}$ is the left translation.

Article information

Source: Rocky Mountain J. Math., Volume 48, Number 3 (2018), 905-912.
Dates First available in Project Euclid: 2 August 2018 Permanent link to this document https://projecteuclid.org/euclid.rmjm/1533230831 Digital Object Identifier doi:10.1216/RMJ-2018-48-3-905 Mathematical Reviews number (MathSciNet) MR3835578 Zentralblatt MATH identifier 06917353 Subjects Primary: 37A15: General groups of measure-preserving transformations [See mainly 22Fxx] 37A25: Ergodicity, mixing, rates of mixing 46L05: General theory of $C^*$-algebras 46L55: Noncommutative dynamical systems [See also 28Dxx, 37Kxx, 37Lxx, 54H20] Citation Meng, Qing; Ng, Chi-Keung. Invariant means and property $T$ of crossed products. Rocky Mountain J. Math. 48 (2018), no. 3, 905--912. doi:10.1216/RMJ-2018-48-3-905. https://projecteuclid.org/euclid.rmjm/1533230831
I looked at the proof of the Archimedean Property in several places and, in all of them, it is proven using the following structure (proof by contradiction), without much variation: If $\space x \in \mathbb{R} \space,\space y \in \mathbb{R},$ and $x > 0$, then there is at least one natural number $n$ such that $nx > y$. $\bf{(*)}$ Proof: Let $A$ be the set of all $nx$, where $n$ runs through the positive integers. $$A=\left\{nx \space| \space n \in \mathbb{N}\right\}$$ If $\bf{(*)}$ were false, which amounts to supposing that $$nx \leq y, \space \forall n \in \mathbb{N},$$ then $y$ would be an upper bound of $A$. But then $A$ has a least upper bound in $\mathbb{R}$. Put $\alpha = \sup A$. Since $x > 0$, $\alpha - x < \alpha$, and $\alpha - x$ is not an upper bound of $A$. Hence $\alpha - x < mx$ for some positive integer $m$. But then $\alpha < (m+1)x \in A$, which is impossible, since $\alpha$ is an upper bound of $A$. Then, by contradiction, there exists a natural number $n$ such that $nx > y$. $\tag*{$\blacksquare$}$ All of the sources I found prove the property this way, by contradiction. So, my question is: How can one prove this theorem directly, without using proof by contradiction?
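As a numeric illustration of the direct (constructive) idea — this is my own sketch, not taken from any of the cited sources — one can exhibit an explicit witness such as $n = \lfloor y/x \rfloor + 1$. (Whether this counts as "direct" depends on what one takes as given, since the existence of the floor is itself usually justified via the well-ordering of $\mathbb{N}$.)

```python
import math

# Constructive witness for the Archimedean property: for x > 0,
# n = floor(y/x) + 1 satisfies n*x > y. We clamp to at least 1 so
# that n is a natural number even when y <= 0.
def archimedean_witness(x, y):
    assert x > 0
    return max(math.floor(y / x) + 1, 1)

x, y = 0.3, 10.0
n = archimedean_witness(x, y)
print(n, n * x > y)  # 34 True
```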
Division (mathematics) [math]6/3\,[/math] or [math]\frac 63[/math] or [math]6 \div 3.[/math] Each of those three means "6 divided by 3", giving 2 as the answer. The first number is the dividend (6), and the second number is the divisor (3). The result (or answer) is the quotient. When dividing whole numbers, any left-over amount is called the "remainder" (for example, 14/4 gives 3 with a remainder of 2, written as the number 3 2⁄4, the same as 3 1⁄2 or 3.5). With multiplication If c times b equals a, written as: [math]c \times b = a[/math] where b is not zero, then a divided by b equals c, written as: [math]\frac ab = c[/math] For instance, [math]\frac 63 = 2[/math] since [math]2 \times 3 = 6[/math]. In the above expression, a is called the dividend, b the divisor and c the quotient. Division by zero, as in [math]\frac x0,[/math] is not defined. Notation Division is most often shown by placing the dividend over the divisor with a horizontal line, also called a vinculum, between them. For example, a divided by b is written [math]\frac ab.[/math] This can be read out loud as "a divided by b" or "a over b". A way to express division all on one line is to write the dividend, then a slash, then the divisor, like this: [math]a/b.\,[/math] This is the usual way to specify division in most computer programming languages, since it can easily be typed as a simple sequence of characters. A typographical variation which is halfway between these two forms uses a slash but elevates the dividend and lowers the divisor: a⁄b. Any of these forms can be used to display a fraction. A fraction is a division expression where both dividend and divisor are integers (although typically called the numerator and denominator). A fraction is an accepted way of writing numbers. It is not always expected that the result of the division is written in decimals.
A less common way to show division is to use the obelus (or division sign) in this manner: [math]a \div b.[/math] But in elementary arithmetic this form is used rather often. The obelus is also used alone to represent the division operation itself, for instance as a label on a key of a calculator.
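As a quick illustration of quotient and remainder (a sketch of mine, not part of the original page), Python's built-in `divmod` returns both at once, and division by zero raises an error, matching the "not defined" rule above:

```python
# Whole-number division: quotient and remainder.
# 14 divided by 4 gives quotient 3 and remainder 2, since 14 = 3*4 + 2.
quotient, remainder = divmod(14, 4)
print(quotient, remainder)  # 3 2

# Division by zero is undefined; Python refuses to compute it.
try:
    result = 14 / 0
except ZeroDivisionError:
    print("14/0 is not defined")
```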
Semilinear nonlocal elliptic equations with critical and supercritical exponents Department of Mathematics, Indian Institute of Science Education and Research, Dr. Homi Bhaba Road, Pune-411008, India $\left\{ \begin{align} &{{(-\Delta )}^{s}}u={{u}^{p}}-{{u}^{q}}\ \text{in}\ \text{ }{{\mathbb{R}}^{N}}, \\ &u\in {{{\dot{H}}}^{s}}({{\mathbb{R}}^{N}})\cap {{L}^{q+1}}({{\mathbb{R}}^{N}}), \\ &u>0\ \ \text{in}\ \ {{\mathbb{R}}^{N}}, \\ \end{align} \right.$ where $s\in(0,1)$, $(-\Delta)^s$ is the fractional Laplacian on $\mathbb{R}^N$, $N>2s$, and $q>p\geq \frac{N+2s}{N-2s}$, covering both the critical case $p=\frac{N+2s}{N-2s}$ and the supercritical case $p>\frac{N+2s}{N-2s}$. Keywords: Super-critical exponent, fractional Laplacian, Pohozaev identity, nonexistence, entire solution, decay estimate, gradient estimate, radial symmetry, critical exponent, nonlocal. Mathematics Subject Classification: Primary: 35B08, 35B40, 35B44. Citation: Mousomi Bhakta, Debangana Mukherjee. Semilinear nonlocal elliptic equations with critical and supercritical exponents. Communications on Pure & Applied Analysis, 2017, 16 (5): 1741-1766. doi: 10.3934/cpaa.2017085
Omega, $\omega$ The smallest infinite ordinal, often denoted $\omega$ (omega), has the order type of the natural numbers. As a von Neumann ordinal, $\omega$ is in fact equal to the set of natural numbers. Since $\omega$ is infinite, it is not equinumerous with any smaller ordinal, and so it is an initial ordinal, that is, a cardinal. When considered as a cardinal, the ordinal $\omega$ is denoted $\aleph_0$. So while these two notations are intensionally different---we use the term $\omega$ when using this number as an ordinal and $\aleph_0$ when using it as a cardinal---nevertheless in the contemporary treatment of cardinals in ZFC as initial ordinals, they are extensionally the same and refer to the same object. Countable sets A set is countable if it can be put into bijective correspondence with a subset of $\omega$. This includes all finite sets, and a set is countably infinite if it is countable and also infinite. Some famous examples of countable sets include: The natural numbers $\mathbb{N}=\{0,1,2,\ldots\}$. The integers $\mathbb{Z}=\{\ldots,-2,-1,0,1,2,\ldots\}$ The rational numbers $\mathbb{Q}=\{\frac{p}{q}\mid p,q\in\mathbb{Z}, q\neq 0\}$ The real algebraic numbers $\mathbb{A}$, consisting of all zeros of nontrivial polynomials over $\mathbb{Q}$ The union of countably many countable sets remains countable, although in the general case this fact requires the axiom of choice. A set is uncountable if it is not countable.
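As a concrete illustration (my own sketch, not part of the original article), countability claims like these can be witnessed by explicit enumerations; for instance, the interleaving $0, -1, 1, -2, 2, \ldots$ shows that $\mathbb{Z}$ is countable:

```python
# A bijection from the natural numbers onto the integers:
# 0, 1, 2, 3, 4, ...  ->  0, -1, 1, -2, 2, ...
def nat_to_int(n):
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

print([nat_to_int(n) for n in range(7)])  # [0, -1, 1, -2, 2, -3, 3]

# Its inverse, showing the correspondence really is one-to-one and onto.
def int_to_nat(z):
    return 2 * z if z >= 0 else -2 * z - 1

assert all(int_to_nat(nat_to_int(n)) == n for n in range(1000))
```

A similar zig-zag through pairs $(p, q)$ enumerates the rationals, though there one must also skip repeated fractions.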
2019-07-18 17:03 Precision measurement of the $\Lambda_c^+$, $\Xi_c^+$ and $\Xi_c^0$ baryon lifetimes / LHCb Collaboration We report measurements of the lifetimes of the $\Lambda_c^+$, $\Xi_c^+$ and $\Xi_c^0$ charm baryons using proton-proton collision data at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3.0 fb$^{-1}$, collected by the LHCb experiment. The charm baryons are reconstructed through the decays $\Lambda_c^+\to pK^-\pi^+$, $\Xi_c^+\to pK^-\pi^+$ and $\Xi_c^0\to pK^-K^-\pi^+$, and originate from semimuonic decays of beauty baryons. [...] arXiv:1906.08350; LHCb-PAPER-2019-008; CERN-EP-2019-122.- 2019-08-02 - 12 p. - Published in : Phys. Rev. D 100 (2019) 032001 2019-07-02 10:45 Observation of the $\Lambda_b^0\rightarrow \chi_{c1}(3872)pK^-$ decay / LHCb Collaboration Using proton-proton collision data, collected with the LHCb detector and corresponding to 1.0, 2.0 and 1.9 fb$^{-1}$ of integrated luminosity at the centre-of-mass energies of 7, 8, and 13 TeV, respectively, the decay $\Lambda_b^0\to \chi_{c1}(3872)pK^-$ with $\chi_{c1}\to J/\psi\pi^+\pi^-$ is observed for the first time. The significance of the observed signal is in excess of seven standard deviations. [...] arXiv:1907.00954; CERN-EP-2019-131; LHCb-PAPER-2019-023.- 2019-09-03 - 21 p.
- Published in : JHEP 1909 (2019) 028 2019-06-21 17:31 Updated measurement of time-dependent CP-violating observables in $B^0_s \to J/\psi K^+K^-$ decays / LHCb Collaboration The decay-time-dependent {\it CP} asymmetry in $B^{0}_{s}\to J/\psi K^{+} K^{-}$ decays is measured using proton-proton collision data, corresponding to an integrated luminosity of $1.9\,\mathrm{fb^{-1}}$, collected with the LHCb detector at a centre-of-mass energy of $13\,\mathrm{TeV}$ in 2015 and 2016. Using a sample of approximately 117\,000 signal decays with an invariant $K^{+} K^{-}$ mass in the vicinity of the $\phi(1020)$ resonance, the {\it CP}-violating phase $\phi_s$ is measured, along with the difference in decay widths of the light and heavy mass eigenstates of the $B^{0}_{s}$-$\overline{B}^{0}_{s}$ system, $\Delta\Gamma_s$. [...] arXiv:1906.08356; LHCb-PAPER-2019-013; CERN-EP-2019-108.- Geneva : CERN, 2019-08-22 - 42 p. - Published in : Eur. Phys. J. C 79 (2019) 706 2019-06-21 17:07 Measurement of $C\!P$ observables in the process $B^0 \to DK^{*0}$ with two- and four-body $D$ decays / LHCb Collaboration Measurements of $C\!P$ observables in $B^0 \to DK^{*0}$ decays are presented, where $D$ represents a superposition of $D^0$ and $\bar{D}^0$ states. The $D$ meson is reconstructed in the two-body final states $K^+\pi^-$, $\pi^+ K^-$, $K^+K^-$ and $\pi^+\pi^-$, and, for the first time, in the four-body final states $K^+\pi^-\pi^+\pi^-$, $\pi^+ K^-\pi^+\pi^-$ and $\pi^+\pi^-\pi^+\pi^-$. [...] arXiv:1906.08297; LHCb-PAPER-2019-021; CERN-EP-2019-111.- Geneva : CERN, 2019-08-07 - 30 p.
- Published in : JHEP 1908 (2019) 041 2019-05-16 14:31 Measurement of $CP$-violating and mixing-induced observables in $B_s^0 \to \phi\gamma$ decays / LHCb Collaboration A time-dependent analysis of the $B_s^0 \to \phi\gamma$ decay rate is performed to determine the $CP$-violating observables $S_{\phi\gamma}$ and $C_{\phi\gamma}$, and the mixing-induced observable $\mathcal{A}^{\Delta}_{\phi\gamma}$. The measurement is based on a sample of $pp$ collision data recorded with the LHCb detector, corresponding to an integrated luminosity of 3 fb$^{-1}$ at center-of-mass energies of 7 and 8 TeV. [...] arXiv:1905.06284; LHCb-PAPER-2019-015; CERN-EP-2019-077.- 2019-08-28 - 10 p. - Published in : Phys. Rev. Lett. 123 (2019) 081802 2019-04-10 11:16 Observation of a narrow pentaquark state, $P_c(4312)^+$, and of two-peak structure of the $P_c(4450)^+$ / LHCb Collaboration A narrow pentaquark state, $P_c(4312)^+$, decaying to $J/\psi p$ is discovered with a statistical significance of $7.3\sigma$ in a data sample of $\Lambda_b^0\to J/\psi p K^-$ decays which is an order of magnitude larger than that previously analyzed by the LHCb collaboration. The $P_c(4450)^+$ pentaquark structure formerly reported by LHCb is confirmed and observed to consist of two narrow overlapping peaks, $P_c(4440)^+$ and $P_c(4457)^+$, where the statistical significance of this two-peak interpretation is $5.4\sigma$. [...] arXiv:1904.03947; LHCb-PAPER-2019-014; CERN-EP-2019-058.- Geneva : CERN, 2019-06-06 - 11 p. - Published in : Phys. Rev. Lett.
122 (2019) 222001 2019-04-03 11:16 Measurements of $CP$ asymmetries in charmless four-body $\Lambda^0_b$ and $\Xi_b^0$ decays / LHCb Collaboration A search for $CP$ violation in charmless four-body decays of $\Lambda^0_b$ and $\Xi_b^0$ baryons with a proton and three charged mesons in the final state is performed. To cancel out production and detection charge-asymmetry effects, the search is carried out by measuring the difference between the $CP$ asymmetries in a charmless decay and in a decay with an intermediate charmed baryon with the same particles in the final state. [...] arXiv:1903.06792; LHCb-PAPER-2018-044; CERN-EP-2019-13.- 2019-09-07 - 30 p. - Published in : Eur. Phys. J. C 79 (2019) 745 2019-04-01 11:42 Observation of an excited $B_c^+$ state / LHCb Collaboration Using $pp$ collision data corresponding to an integrated luminosity of $8.5\,\mathrm{fb}^{-1}$ recorded by the LHCb experiment at centre-of-mass energies of $\sqrt{s} = 7$, $8$ and $13\mathrm{\,Te\kern -0.1em V}$, the observation of an excited $B_c^+$ state in the $B_c^+\pi^+\pi^-$ invariant-mass spectrum is reported. The state has a mass of $6841.2 \pm 0.6 {\,\rm (stat)\,} \pm 0.1 {\,\rm (syst)\,} \pm 0.8\,(B_c^+) \mathrm{\,MeV}/c^2$, where the last uncertainty is due to the limited knowledge of the $B_c^+$ mass. [...] arXiv:1904.00081; CERN-EP-2019-050; LHCb-PAPER-2019-007.- Geneva : CERN, 2019-06-11 - 10 p. - Published in : Phys. Rev. Lett.
122 (2019) 232001 2019-04-01 09:46 Near-threshold $D\bar{D}$ spectroscopy and observation of a new charmonium state / LHCb Collaboration Using proton-proton collision data, corresponding to an integrated luminosity of 9 fb$^{-1}$, collected with the LHCb detector between 2011 and 2018, a new narrow charmonium state, the $X(3842)$ resonance, is observed in the decay modes $X(3842)\rightarrow D^0\bar{D}^0$ and $X(3842)\rightarrow D^+D^-$. The mass and the natural width of this state are measured to be \begin{eqnarray*} m_{X(3842)} & = & 3842.71 \pm 0.16 \pm 0.12~ \text {MeV}/c^2\,, \\ \Gamma_{X(3842)} & = & 2.79 \pm 0.51 \pm 0.35 ~ \text {MeV}\,, \end{eqnarray*} where the first uncertainty is statistical and the second is systematic. [...] arXiv:1903.12240; CERN-EP-2019-047; LHCb-PAPER-2019-005.- Geneva : CERN, 2019-07-08 - 23 p. - Published in : JHEP 1907 (2019) 035
As he described it some time later the situation was as follows. You have a spectrometer. The point of spectrometry is to find the frequency of light (or electromagnetic radiation more generally — but for convenience I’ll just say “light” from now on). Given a light source, spectrometry aims to find which frequencies (or colours) of light occur in it, and how they are distributed across the optical spectrum. The spectrometer Golay had in mind was a cleverly designed “multislit” one. As the name suggests, it had many slits. Each slit could be open or closed. Light would come in on one side, pass through the contraption, and then exit on the other side, where detectors would be placed to record the output. Both the entrance side and the exit side had many slits — the same number \(4N\) on either side. (Why a multiple of 4? It’s all part of the clever design, read on…) Moreover, each entrance slit had a natural pathway through to an exit slit. The slits were designed so that light entering a particular entrance slit would pass through to a specific exit slit. The entrance and exit slits were thus matched up in a one-to-one fashion. This “matching up” in fact “inverted” the light: light coming in through the top slit on the left would exit through the bottom slit on the right; light entering through the second-from-top slit on the left would exit through the second-from-bottom slit on the right; and so on. At least, that’s what would happen for one particular colour of light, i.e. one particular frequency — let’s say pure crimson red. The point of the spectrometer is to pick out distinct frequencies, and so this contraption is “tuned” to perfectly align the slits for crimson red light. What about other frequencies? They get shifted. When light of another frequency, let’s say green, passed through an entrance slit, it did not end up in the same place as crimson red light, opposite to where it came in; rather, it ended up shifted across by some number \(j\) of slits.
In other words, if red light and green light enter through the same slit, they exit through slits which are \(j\) spots apart from each other. Golay’s idea was to arrange the slits and detectors in a clever way, so as to eliminate all the light of other frequencies, and isolate the preferred (red) light. By an ingenious arrangement of detectors and open and closed slits, the red light would be greatly enhanced, with other colours (frequencies) completely filtered out. How did this arrangement go? In a slightly complicated way. The entrance slits would be split into four equal length sections, each of length \(N\), as would the exit slits. Light entering through a slit in a particular section would go out a slit in the corresponding (opposite) section of exit slits. These sections were separated from each other. In particular, non-red light could be shifted across slits within one section, but it could not cross over to another section. Golay imagined there to be two detectors. The first detector \(D_1 \) would cover the bottom two exit sections, measuring the total amount of light exiting the bottom half of the slits, i.e. the bottom \( 2N \) exit slits. The other detector \(D_2\) would cover the top half of the exit slits, i.e. the top two sections, the top \( 2N \) exit slits. The detectors \( D_1 \) and \( D_2 \) simply capture the amount of light coming out of the bottom and top \( 2N \) slits respectively, or equivalently for our purposes, the number of those slits through which light emerges. So in effect the whole contraption is in four separated parts, and there are two detectors, each detecting the output from two of the parts. Now, how to arrange the open and closed slits? Let’s denote open slits by a \( +1 \) (or just \( + \) for short), and closed slits by a \( -1 \) (or just \( – \) for short). So a sequence of open and closed slits can be denoted by a sequence of \( + \) and \( – \) symbols.
(You might think \( 1 \)s and \( 0 \)s are more appropriate for open and closed slits than \( +1\)s and \( -1 \)s. You could indeed use \(1\)s and \(0\)s; in that case I’ll leave it up to you to adjust the mathematics below.) Now Golay suggested taking two sequences \(a\) and \(b\) of \(+\)s and \(–\)s, each of length \(N\). They would be used to configure the slits. Let’s write \( a = (a_1, \ldots, a_N) \) and \(b = (b_1, \ldots, b_N)\), where every \(a_i \) or \( b_i \) is either a \( +1 \) or a \( -1 \). Now, sequences \(a \) and \( b \) each have length \( N\), but there are \( 4N \) entrance slits and \( 4N \) exit slits. What to do? Golay said what to do. Golay said also to take the negatives of \( a \) and \( b\). The negative of a sequence is given by multiplying all its terms by \(-1 \) (just like how you take the negative of a number). In other words, to take the negative of the sequence \( a\), you replace each \( + \) with a \( – \), and each \(– \) with \(+\). We can write \(-a\) for the negative of \(a\), and \(-b\) for the negative of \(b\). Golay suggested, very cleverly, that the \(4N \) entrance slits, from top to bottom, should be arranged using \(a \) (for the top \( N \) slits), then \( -a \) (for the next \( N\)), then \( b \) (for the next \(N\)), and finally \(-b\) (for the bottom \(N\) slits). So as we read down the slits we read the sequences \( a,-a,b,-b\). On the exit side, because the light is “inverted”, we now read bottom to top. Golay suggested that, as we read up the slits, we use the sequences \( a,-a,-b,b\). That’s not quite the same as what we did on the entrance side. The top \(N\) entrance slits, set according to the sequence \(a\), correspond to the bottom \(N \) exit slits, also set according to the sequence \(a\). The next \(N\) entrance slits are set according to \(-a\), as are the next \( N \) exit slits.
But after that, the entrance slits set according to \( b \) correspond to the exit slits set to \( -b \); and the final \( N \) entrance slits are set to \( -b\), with corresponding exit slits set to \( b\). So the \(a \) and \( -a \) slits “match”, but the \( b \) and \( -b \) “anti-match”. We can now see what the a’s and b’s mean in Golay’s diagram. (Golay writes \( a’ \) and \( b’ \) rather than \(-a\) and \(-b\).) One final twist: the output of the contraption is measured by the two detectors \( D_1 \) and \( D_2\). But Golay proposed not to add their results, but to subtract them. So the final number we want to look at is not \(D_1 + D_2\), but \( D_1 – D_2\). Anyway, that was Golay’s prescription. So what happens to light going through this spectroscopic contraption, now with its numerous slits configured in this intricate way? First let’s consider red light — which, recall, means the light goes straight from entrance slit to opposite exit slit. We’ll take the four sections separately, which, we recall, are labelled \(a,-a,b,-b \) at the entrance, and \( a,-a,-b,b \) at the exit. For light hitting one of the top \( N \) entrance slits, one encoded by \( a_i\), it is blocked if \( a_i = -1\). But if \(a_i = 1 \) then the light sails through the open slit, out to the corresponding exit slit, which is also labelled \( a_i = 1\), and through to the detector \( D_1\). Similarly, consider one of the entrance slits in the next section, encoded by some \( -a_i\). Light is blocked if \(-a_i = -1 \) but if \( -a_i = 1 \) then the light sails over to the corresponding exit slit, also labelled \( -a_i = 1 \), through to the detector \( D_1\). Now consider the third section, where entrance slits are encoded by \( b \) but exit slits are encoded by \( -b\). Light hits a slit encoded by some \(b_i\). If \(b_i = -1\), the entrance slit is closed, and the light is blocked there.
If \( b_i = 1\), the entrance slit is open, and the light enters, but then the exit slit is encoded by \( -b_i = -1\), so is closed, and the light is blocked here. Either way, the light is blocked. The final section is similar. The entrance slit is labelled by some \( -b_i\), and the exit slit by \(b_i\). If \( -b_i = -1\), the entrance slit is closed and light is blocked; if \( -b_i = 1\), then the entrance slit is open, but as \( b_i = -1\), the exit slit is blocked. Either way the light is blocked. Now detector \(D_1 \) counts the number of slits in the first two sections from which light emerges. In the first section, those slits are the ones encoded by \( a_i \) such that \( a_i = 1\). In the second section, those slits are the ones encoded by \( -a_i \) such that \( -a_i = 1 \) , i.e. \( a_i = -1\). On the other hand, \(D_2 \) detects nothing, as everything is blocked. So we have \[ D_1 = ( \# i \text{ such that } a_i = 1) + ( \#i \text{ such that } a_i = -1), \\ D_2 = 0. \] The expression for \( D_1 \) simplifies rather dramatically, because every \( a_i \) is either \( +1 \) or \( -1\). If you add up the number of \(+1\)s and the number of \(-1\)s, you simply get the number of terms in the sequence, which is \(N\). Thus in fact \[ D_1 = N, \quad D_2 = 0, \] and the final result (remember we subtract the results of the two detectors) is \[ D_1 – D_2 = N. \] So, we end up with a nice result, when we feed Golay’s spectroscope light of the colour it’s designed to detect (i.e. red). Now, what happens with other colours? Let’s now feed Golay’s spectroscope some other colour (i.e. frequency, i.e. wavelength) of light, which means that the light gets shifted across \( j \) slots. Let’s say the light is green. Consider green light hitting one of the top \( N \) entrance slits, encoded by \( a_i\). The light is blocked if \( a_i = -1\). 
But if \(a_i = 1 \) then the light sails through the open slit, over to the corresponding exit slits, which are also encoded by the sequence \( a\). The light is shifted across \(j \) slots in the process, and so arrives at the exit slit encoded by \( a_{i+j}\). If \(a_{i+j} = 1\), the light proceeds to detector \( D_1\); otherwise, the light is blocked. In other words, the green light gets to the detector if and only if \( a_i = a_{i+j} = 1\). (Note also that if \( i+j > N \) or \( i+j < 1\), then the light beam gets shifted so far across that it hits the end of the section of the machine; and the sections are separated from each other. So we only need to consider those \( i \) (which are between \( 1 \) and \( N \)) such that \( i+j \) is also between \( 1 \) and \( N\). In other words, (assuming \( j \) is positive) \( i \) only goes from \( 1 \) up to \( N-j\).) Now consider green light hitting the second section, where entrance and exit slits are labelled by \( -a_i\). If \( -a_i = -1\), then light is blocked at the entrance. If \( -a_i = 1\), light enters, and proceeds with a shift over to an exit slit encoded by \( -a_{i+j}\). If \( -a_{i+j} = -1\), light is blocked at the exit, but if \( -a_{i+j} = 1\), then the light proceeds to detector \( D_1\). In other words, light gets to the detector \( D_1 \) if and only if \( -a_i = -a_{i+j} = 1\), or equivalently, \( a_i = a_{i+j} = -1\). In the third section, entrance slits are encoded by \( b \) and exit slits by \( -b\). For light to get through, we must have \( b_i = 1 \) and \( -b_{i+j} = 1\). Finally, in the fourth section, entrance slits are encoded by \( -b \) and exit slits by \( b\). Light gets through when \( -b_i = 1 \) and \( b_{i+j} = 1 \).
Putting these together, we have \[ D_1 = ( \# i \text{ such that } a_i = 1 \text{ and } a_{i+j} = 1) + ( \# i \text{ such that } a_i = -1 \text{ and } a_{i+j} = -1), \\ D_2 = ( \# i \text{ such that } b_i = 1 \text{ and } b_{i+j} = -1) + ( \# i \text{ such that } b_i = -1 \text{ and } b_{i+j} = 1). \] Now let’s manipulate these sums a little. Note that, for any \(i\), \( a_i = \pm 1 \) and \( a_{i+j} = \pm 1\). Thus the product \( a_i a_{i+j} = \pm 1\). But note that \( a_i a_{i+j} = 1 \) precisely when \( a_i = a_{i+j} = 1 \) or \( a_i = a_{i+j} = -1\), i.e. when \( a_i \) and \( a_{i+j} \) are equal. These are precisely the cases counted in the sum for \( D_1 \) above. When \( a_i \) and \( a_{i+j} \) are not equal, they multiply to \( -1 \) instead. Similarly, consider \( b_i \) and \( b_{i+j}\). The product \( b_i b_{i+j} \) is equal to \( -1 \) precisely when \( b_i = 1 \) and \( b_{i+j} = -1 \), or when \( b_i = -1 \) and \( b_{i+j} = 1 \). And these are precisely the cases counted above for \( D_2 \). So we have \[ D_1 = ( \# i \text{ such that } a_i a_{i+j} = 1 ),\\ D_2 = ( \# i \text{ such that } b_i b_{i+j} = -1). \] Now, as we’ve said, for each \(i\), \( a_i a_{i+j} \) is \( 1 \) or \( -1\). For how many \( i \) do we get \( +1\)? Precisely \(D_1 \) times! Because that’s exactly what the equation above for \( D_1 \) says. All the other terms must be \( -1\). And we said above that \( i \) goes from \( 1 \) up to \( N-j\). So there are \( N-j-D_1 \) times that \( a_i a_{i+j} = -1\). Let’s now just add up all the terms \( a_i a_{i+j}\), all the way from \( i=1\), i.e. the term \( a_1 a_{1+j}\), to \( i=N-j\), i.e. the term \(a_{N-j} a_{N-j+j}\). We get \(+1 \) sometimes — precisely \( D_1 \) times — and \( -1 \) sometimes — precisely \( N-j-D_1 \) times. It follows that \[ a_1 a_{1+j} + \cdots + a_{N-j} a_{N-j+j} = 1 \cdot D_1 + (-1) \cdot (N-j-D_1) \] or if we tidy up, \[ \sum_{i=1}^{N-j} a_i a_{i+j} = 2D_1 - N + j.
\] We can do the same for the terms \( b_i b_{i+j}\). We get \( -1 \) precisely \( D_2 \) times, as the equation for \( D_2 \) says above. And we get \( +1 \) all the other times, but there are \( N-j \) terms overall, so we get \( +1 \) precisely \( N-j-D_2 \) times. Hence \[ b_1 b_{1+j} + \cdots + b_{N-j} b_{N-j+j} = 1 \cdot (N-j-D_2) + (-1) \cdot D_2, \] or equivalently, \[ \sum_{i=1}^{N-j} b_i b_{i+j} = -2D_2 + N - j. \] We want to get the final result of the detectors, which is \( D_1 - D_2\). So let’s rearrange the equations above to obtain \( D_1 \) and \( D_2\), \[ D_1 = \frac{N-j}{2} + \frac{1}{2} \sum_{i=1}^{N-j} a_i a_{i+j}, \\ D_2 = \frac{N-j}{2} - \frac{1}{2} \sum_{i=1}^{N-j} b_i b_{i+j}, \] and subtract. When we do so, things simplify considerably! \[ D_1 - D_2 = \frac{1}{2} \sum_{i=1}^{N-j} \left( a_i a_{i+j} + b_i b_{i+j} \right) \] This is a very nice result. And it reduces what Golay wanted to a very interesting maths problem. Two sequences \( a = (a_1, \ldots, a_N) \) and \( b = (b_1, \ldots, b_N) \) of \( \pm 1\)s are called a complementary pair or a Golay pair if, for all \( j \neq 0\), this sum is zero: \[ \sum_{i=1}^{N-j} \left( a_i a_{i+j} + b_i b_{i+j} \right) = 0. \] Sums like these are often called autocorrelations. So the property we are looking for is a property of autocorrelations. Golay pairs are all about autocorrelations. Hence the title of this post. If you can find a pair of Golay complementary sequences, then you can configure all the slits in the multislit spectrometer according to the sequences, and for any colour except the one you are looking for (red), the detectors will perfectly cancel out that colour! So your spectrometry will be greatly enhanced. Now you might wonder, do any such pairs exist? Yes, they do. Oh yes, they do. And that is also a very interesting question — not yet completely solved, with lots of ongoing research. Stay tuned for more. P.S. Yes, the title of this blog post is based on a song by Chumbawamba. It’s a very excellent song.
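To make the definition concrete, here is a short Python sketch (my own illustration, not from the post; the pair used is the standard length-4 example) that checks the complementarity condition directly:

```python
def autocorr(s, j):
    # aperiodic autocorrelation: sum of s_i * s_{i+j} over the valid range of i
    return sum(s[i] * s[i + j] for i in range(len(s) - j))

def is_golay_pair(a, b):
    # complementary pair: the two autocorrelations cancel for every shift j != 0
    return len(a) == len(b) and all(
        autocorr(a, j) + autocorr(b, j) == 0 for j in range(1, len(a))
    )

a = [1, 1, 1, -1]
b = [1, 1, -1, 1]
print(is_golay_pair(a, b))  # True
```

Note that at shift \( j = 0 \) the combined sum is \( 2N \) rather than \( 0 \) — that is exactly the red-light case, where the detectors report the full signal.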
Lethbridge Number Theory and Combinatorics Seminar: Nathan Ng Date: 01/29/2018 University of Lethbridge Mean values of long Dirichlet polynomials A Dirichlet polynomial is a function of the form $A(t)=\sum_{n \le N} a_n n^{-it}$ where $a_n$ is a complex sequence, $N \in \mathbb{N}$, and $t \in \mathbb{R}$. For $T \ge 1$, the mean values $$\int_{0}^{T} |A(t)|^2 \, dt$$ play an important role in the theory of L-functions. I will discuss work of Goldston and Gonek on how to evaluate these integrals in the case that $T < N < T^2$. This will depend on the correlation sums \[ \sum_{n \le x} a_n a_{n+h} \text{ for } h \in \mathbb{N}. \] If time permits, I will discuss a conjecture of Conrey and Keating in the case that $a_n$ corresponds to a generalized divisor function and $N > T$. Time: 12:00-12:50pm Location: B543 University Hall
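For short polynomials ($N$ small compared to $T$) the mean value above is dominated by its diagonal, $\int_0^T |A(t)|^2\,dt \approx T\sum_{n\le N}|a_n|^2$. As a quick illustration (my own, not part of the abstract; it takes $a_n \equiv 1$), the integral can be evaluated exactly from the closed form of $\int_0^T (n/m)^{it}\,dt$ and compared with the diagonal term:

```python
import cmath
import math

def mean_value(a, T):
    # exact value of the integral of |sum_n a_n n^{-it}|^2 over [0, T], expanded as
    # sum over m, n of a_m * conj(a_n) * integral of (n/m)^{it} dt, where the inner
    # integral is T when m == n, and (e^{iT log(n/m)} - 1) / (i log(n/m)) otherwise
    total = 0j
    for m, am in enumerate(a, start=1):
        for n, an in enumerate(a, start=1):
            if m == n:
                piece = complex(T)
            else:
                L = math.log(n / m)
                piece = (cmath.exp(1j * T * L) - 1) / (1j * L)
            total += am * complex(an).conjugate() * piece
    return total.real

a = [1.0] * 10            # a_n = 1 for n <= N = 10
T = 5000.0
diagonal = T * sum(abs(x) ** 2 for x in a)
print(abs(mean_value(a, T) / diagonal - 1) < 0.05)  # off-diagonal terms are small
```

With $N$ this small the off-diagonal terms are bounded independently of $T$, so the ratio tends to $1$; the interesting regime of the talk, $N > T$, is precisely where this naive diagonal approximation breaks down.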
Supercompact cardinal Supercompact cardinals are best motivated as a generalization of measurable cardinals, particularly the characterization of measurable cardinals in terms of elementary embeddings and strong closure properties. The notion of supercompactness and its consequences was initially developed by Solovay and Reinhardt and further elaborated on by Magidor and Gitik, among many others. The existence of a supercompact cardinal is a very strong assumption, and the large cardinal strength of supercompact cardinals is seen in a wide (and bewildering) array of set-theoretic contexts, especially the development of strong forcing axioms and establishing regularity properties of sets of reals. The inner model program has yet to reach the level of a supercompact cardinal and this is considered a prominent open problem in the program itself. Curiously, by results of Woodin, should the inner model program reach the level of a supercompact, there is a sense in which it will have reached all greater large cardinals, a startling contrast to previous advances in the program. Contents Formal definition and equivalent characterizations Generalizing the elementary embedding characterization of measurable cardinals, a cardinal $\kappa$ is $\theta$-supercompact if there is an elementary embedding $j:V\to M$ with $M$ a transitive class, such that $j$ has critical point $\kappa$ and $M^\theta\subset M$, i.e. $M$ is closed under arbitrary sequences of length $\theta$. Under the axiom of choice, one may assume without loss of generality that $j(\kappa)\gt\theta$. $\kappa$ is then said to be supercompact if it is $\theta$-supercompact for all $\theta$. It is worth noting that, using this formulation, $H_{\theta^+}$ must be contained in the transitive class $M$.
There is an alternative formulation that is expressible in $\text{ZFC}$ using certain ultrafilters with somewhat technical properties: for $\theta\geq\kappa$, $\kappa$ is $\theta$-supercompact if there is a normal fine measure on $\mathcal{P}_\kappa(\theta)$. $\kappa$ is supercompact if for every set $A$ with $|A|\geq\kappa$, there is a normal fine measure on $\mathcal{P}_\kappa(A)$. One can see the equivalence of the two formulations by first considering the ultrafilter $U$ arising from the seed $j''\theta$, so that $X\in U\iff j''\theta\in j(X)$. It is easy to check that $U$ is a normal fine measure on $\mathcal{P}_\kappa(\theta)$. Conversely, the ultrapower by a normal fine measure $U$ on $\mathcal{P}_\kappa(\theta)$ gives rise to an embedding $j:V\to M$ (here $M$ is identified with the transitive collapse of the ultrapower by $U$). It is then straightforward to check that $\kappa$ is the critical point of this embedding and that $M$ is sufficiently closed, thus witnessing $\theta$-supercompactness of $\kappa$. A third characterization was given by Magidor [Mag71] in terms of elementary embeddings from initial segments of $V$ into other (larger) initial segments of $V$, but in this characterization, the supercompact cardinal $\kappa$ is the image of the critical point of this embedding, rather than the critical point itself: $\kappa$ is supercompact if and only if for every $\eta>\kappa$ there is $\alpha<\kappa$ such that there exists a nontrivial elementary embedding $j:V_\alpha\to V_\eta$ such that $j(\mathrm{crit}(j))=\kappa$. (Remarkable cardinals could be called virtually supercompact, because one of their definitions is an exact analogue of this one, with the embedding existing in a forcing extension.)[1]
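For the reader's convenience, the "somewhat technical properties" can be spelled out; the standard definitions (stated here as a reminder, following the usual conventions) are:

```latex
% U is a normal fine measure on P_kappa(theta) when:
U \text{ is a } \kappa\text{-complete ultrafilter on } \mathcal{P}_\kappa(\theta); \\
\textit{fineness: } \{\, x \in \mathcal{P}_\kappa(\theta) : \alpha \in x \,\} \in U
  \quad \text{for every } \alpha < \theta; \\
\textit{normality: } \text{every } f \text{ with } f(x) \in x \text{ for } U\text{-almost all } x
  \text{ is constant on a set in } U.
```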
Every supercompact cardinal $\kappa$ has Mitchell order $(2^\kappa)^+\geq\kappa^{++}$. If $\kappa$ is $\lambda$-supercompact then it is also $\mu$-supercompact for every $\mu<\lambda$. If $\lambda\geq\kappa$ is regular and $\kappa$ is $\lambda$-supercompact, then every $\alpha<\kappa$ that is $\gamma$-supercompact for all $\gamma<\kappa$ (if any exists) is also $\lambda$-supercompact. In the same vein, for all cardinals $\kappa<\lambda$, if $\lambda$ is supercompact and $\kappa$ is $\gamma$-supercompact for all $\gamma<\lambda$, then $\kappa$ is also supercompact. Laver's theorem asserts that if $\kappa$ is supercompact, there exists a function $f:\kappa\to V_\kappa$ such that for every $x$ and $\lambda\geq\kappa$ with $|tc(x)|\leq\lambda$ there exists a normal fine measure $U$ on $\mathcal{P}_\kappa(\lambda)$ such that $j_U(f)(\kappa)=x$, where $j_U$ is the elementary embedding generated from $U$. Here $tc(x)$ is the transitive closure of $x$ (i.e. the smallest transitive set containing $x$), and $f$ is called a Laver function.
By combining results of Magidor, Shelah and Gitik, one can show that the existence of a supercompact also implies the existence of a generic extension in which $2^{\aleph_\alpha}<\aleph_{\omega_1}$ for all $\alpha<\omega_1$, but also $2^{\aleph_{\omega_1}}>\aleph_{\omega_1+\alpha+1}$ for any prescribed $\alpha<\omega_2$. Similarly, one can have a generic extension in which the $\text{GCH}$ holds below $\aleph_\omega$ but $2^{\aleph_\omega}>\aleph_{\omega+\alpha+1}$ for any prescribed $\alpha<\omega_1$. Woodin and Cummings furthermore showed that if there exists a supercompact, then there is a generic extension in which $2^\kappa=\kappa^{++}$ for every cardinal $\kappa$, i.e. the $\text{GCH}$ fails everywhere(!). The ultrapower axiom, if consistent with a supercompact, implies that the $\text{GCH}$ holds above the least supercompact. Laver preparation Indestructibility, including the Laver diamond. Proper forcing axiom Baumgartner proved that if there is a supercompact cardinal, then the proper forcing axiom holds in a forcing extension. $\text{PFA}$'s strengthening, $\text{PFA}^{+}$, is also consistent relative to the existence of a supercompact cardinal. Martin's Maximum Relation to other large cardinals Every cardinal $\kappa$ that is $2^\kappa$-supercompact is a stationary limit of superstrong cardinals, but need not be superstrong itself. In fact $2^\kappa$-supercompact cardinals are stationary limits of quasicompacts, themselves stationary limits of 1-extendibles. If $\theta=\beth_\theta$ then every $\theta$-supercompact cardinal is $\theta$-strong. This is because $H_{\theta^+}\in M$ so $H_{\theta^+}\subset M$ by transitivity and $V_\theta\subset H_\theta\in M$ so $V_\theta\subset M$, as desired. If a cardinal is $\theta$-supercompact then it is also $\theta$-strongly compact. Consequently, every supercompact cardinal is also strongly compact.
It is consistent with $\text{ZFC}$ that every strongly compact cardinal is also supercompact, but it is not currently known whether the existence of a strongly compact cardinal is equiconsistent with the existence of a supercompact cardinal. The ultrapower axiom gives a positive answer to this, but itself isn't known to be consistent with the existence of a supercompact in the first place. If $\kappa$ is supercompact, then there is a forcing extension in which $\kappa$ remains supercompact and is also the least strongly compact cardinal. If there exists a measurable cardinal that is a limit of strongly compact cardinals, then the least such cardinal is strongly compact but not supercompact, in fact not even $2^\kappa$-supercompact. Under the axiom of determinacy, $\omega_1$ is $<\Theta$-supercompact, where $\Theta$ is at least an aleph fixed point, and under $V=L(\mathbb{R})$ is even weakly hyper-Mahlo. The existence of a supercompact cardinal also implies the axiom $\text{AD}^{L(\mathbb{R})}$. If $\kappa$ is $|V_{\kappa+\eta}|$-supercompact with $\eta<\kappa$ then it is preceded by a stationary set of $\eta$-extendible cardinals. If $\kappa$ is $(\eta+2)$-extendible then it is $|V_{\kappa+\eta}|$-supercompact. The least supercompact is not 1-extendible, in fact any cardinal that is both supercompact and 1-extendible is preceded by a stationary set of cardinals that are both supercompact and limits of supercompact cardinals. The least supercompact is larger than the least huge cardinal (if such a cardinal exists). It is also larger than the least n-huge cardinal, for all n. If $\kappa$ is supercompact and there is an n-huge cardinal above $\kappa$, then there are $\kappa$-many n-huge cardinals below $\kappa$. From [2]: If κ is $2^κ$-supercompact and belongs to $C^{(n)}$, then there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $κ$ belongs to $U$.
$VP(Π_1) \iff VP(κ, Σ_2)$ for some $κ \iff$ There exists a supercompact cardinal. ($VP$ — Vopěnka's principle) $VP(\mathbf{Π_1}) \iff VP(κ, \mathbf{Σ_2})$ for a proper class of cardinals $κ \iff$ There is a proper class of supercompact cardinals. This article is a stub. Please help us to improve Cantor's Attic by adding information. References Gitman, Victoria and Schindler, Ralf. Virtual large cardinals.www bibtex Bagaria, Joan. $C^{(n)}$-cardinals.Archive for Mathematical Logic 51(3--4):213--240, 2012. www DOI bibtex
If I have a volatility smile quoted with respect to the delta of an option on the forward, how can I convert this delta into the moneyness or strike of the option? Is there any built-in function of the Matlab financial toolbox? The call delta in a Black framework is: $$\Delta = N(d_1)$$ with $d_1=\frac{\ln(F_t(T)/K)+(T-t)\frac{\sigma^2}{2}}{\sigma\sqrt{T-t}}$. Inverting for the strike of the option gives: $$K=F_t(T)\, e^{-N^{-1}(\Delta)\,\sigma\sqrt{T-t}+\frac{\sigma^2}{2}(T-t)}$$ The same computation for a put, whose delta is $\Delta = N(d_1)-1$, gives: $$K=F_t(T)\, e^{-N^{-1}(\Delta+1)\,\sigma\sqrt{T-t}+\frac{\sigma^2}{2}(T-t)}$$ In Matlab it can be solved by doing: fzero(@(Strike) blsdelta(Price,Strike,Rate,Time,Volatility,Yield)-Delta, K0) where the initial guess can be K0 = Price
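If Matlab is not at hand, the same inversion is easy to write directly; here is a Python sketch (my own, using the undiscounted Black forward delta; function names are mine) with a round trip back to delta as a consistency check:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf
N_inv = NormalDist().inv_cdf

def black_call_delta(F, K, sigma, tau):
    # undiscounted Black call delta N(d1) on the forward F
    d1 = (log(F / K) + 0.5 * sigma ** 2 * tau) / (sigma * sqrt(tau))
    return N(d1)

def strike_from_call_delta(F, delta, sigma, tau):
    # invert delta = N(d1): log(F/K) = N^{-1}(delta)*sigma*sqrt(tau) - sigma^2*tau/2
    d1 = N_inv(delta)
    return F * exp(-d1 * sigma * sqrt(tau) + 0.5 * sigma ** 2 * tau)

F, sigma, tau = 100.0, 0.20, 1.0
K = strike_from_call_delta(F, 0.25, sigma, tau)   # 25-delta call strike
print(round(black_call_delta(F, K, sigma, tau), 10))  # 0.25
```

Note this ignores discounting; if the quoted delta is a premium-adjusted or discounted (spot) delta, the formula must be adjusted accordingly.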
from the POV of sinusoidal modeling or identifying sinusoids, two very good basic reasons why a Gaussian window is good are: The Fourier Transform of a Gaussian is a Gaussian. (and Gaussians have essentially no side lobes.) $$ \mathscr{F} \{ e^{-\pi t^2} \} = e^{-\pi f^2} $$ The Gaussian function is just like the linearly-swept chirp, except for an imaginary unit, so they share lotsa stuff in common and can be modeled together elegantly in the math. just FYI, the exact definition of (unitary) Fourier Transform used is: $$ X(f) \triangleq \mathscr{F} \{ x(t) \} = \int\limits_{-\infty}^{\infty} x(t) \, e^{-j 2 \pi f t} \ dt $$ and inverse Fourier Transform: $$ x(t) \triangleq \mathscr{F}^{-1} \{ X(f) \} = \int\limits_{-\infty}^{\infty} X(f) \, e^{+j 2 \pi f t} \ df $$ i am taking advantage of the symmetry of the forward and inverse Fourier Transforms. they are identical except one replaces $j$ with $-j$, but $-j$ and $j$ are qualitatively identical. they both have equal claim to squaring to be $-1$ and to calling themselves "the imaginary unit". this old paper of mine spells out some of this but i might add to this answer a little mathematical expression for reasons 1 and 2. Gaussian window of width $\sqrt{\frac{1}{\alpha}}$ : $$ \mathscr{F} \{ e^{-\pi \alpha t^2} \} = \sqrt{\frac{1}{\alpha}} e^{-(\pi/\alpha) f^2} $$ we gotta restrict $\alpha > 0$.
Linear-swept chirp with sweep rate of $\beta$ : $$ \mathscr{F} \{ e^{j \pi \beta t^2} \} = \sqrt{\frac{j}{\beta}} e^{-j (\pi/\beta) f^2} $$ so here's a linearly-swept chirp windowed with a Gaussian window: $$\begin{align}\mathscr{F} \{ e^{-\pi \alpha t^2} e^{j \pi \beta t^2} \} &= \mathscr{F} \{ e^{-\pi (\alpha - j \beta) t^2} \} \\\\ &= \sqrt{\tfrac{1}{\alpha - j \beta}} e^{-\pi \frac{1}{\alpha - j \beta} f^2} \\\\ &= \sqrt{\tfrac{\alpha + j \beta}{\alpha^2 + \beta^2}} e^{-\pi \frac{\alpha + j \beta}{\alpha^2 + \beta^2} f^2} \\\end{align}$$ now here's a linearly-swept chirp windowed with a Gaussian window that has, in the center of the window, a specific frequency $f_0$ for the sinusoid: $$\begin{align}\mathscr{F} \{ e^{-\pi \alpha t^2} e^{j \pi \beta t^2} e^{j 2 \pi f_0 t} \} &= \mathscr{F} \{ e^{-\pi \alpha t^2} e^{j \pi \beta t^2} \} \Bigg|_{f \leftarrow f-f_0} \\\\ &= \sqrt{\tfrac{\alpha + j \beta}{\alpha^2 + \beta^2}} e^{-\pi \frac{\alpha + j \beta}{\alpha^2 + \beta^2} (f-f_0)^2} \\\end{align}$$ finally we can generalize it a little more by adding a ramp in the amplitude in addition to linearly swept frequency. we can think of it as sorta a linear ramp, but we're really gonna use an exponential ramp because it makes the math so much easier.
$$ 1 + 2 \pi \lambda t \ \approx \ e^{2 \pi \lambda t} \qquad \text{for } |\lambda t| \ll 1 $$ $$\begin{align}\mathscr{F} \{ e^{-\pi \alpha t^2} e^{j \pi \beta t^2} e^{j 2 \pi f_0 t} e^{2 \pi \lambda t}\} &= \mathscr{F} \{ e^{-\pi \alpha t^2} e^{j \pi \beta t^2} e^{j 2 \pi (f_0 - j \lambda) t} \} \\\\ &= \mathscr{F} \{ e^{-\pi \alpha t^2} e^{j \pi \beta t^2} \} \Bigg|_{f \leftarrow f-(f_0-j\lambda)} \\\\ &= \sqrt{\tfrac{\alpha + j \beta}{\alpha^2 + \beta^2}} e^{-\pi \frac{\alpha + j \beta}{\alpha^2 + \beta^2} (f-f_0 + j\lambda)^2} \\\\ &= \sqrt{\tfrac{\alpha + j \beta}{\alpha^2 + \beta^2}} e^{-\pi \frac{\alpha + j \beta}{\alpha^2 + \beta^2} (f-f_0)^2} e^{\pi \frac{\alpha + j \beta}{\alpha^2 + \beta^2} \lambda (\lambda - j 2 (f-f_0) )} \\\end{align}$$ so, if you use a Gaussian window, you can model each sinusoidal component with frequency $f_0$ and sweep rate of $\beta$ and ramp rate of $2 \pi \lambda$. and you have a function of the very same form in the frequency domain. the paper pointed to above says how you can extract $f_0$ and $\beta$ and $\lambda$ out of the $\log(\cdot)$ of each Gaussian lobe in the frequency domain data. this is why you might consider using the Gaussian window with the Short Time Fourier Transform. what's really tits is that this is basically true for any exponential raised to a quadratic power: $$ \mathscr{F} \{ e^{a t^2 + b t + c} \} = e^{A f^2 + B f + C} $$ where $A, B, C$ are constants that are some deterministic functions of $a, b, c$.
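reason 1 above is also easy to check numerically. the following sketch (my own, not from the paper) approximates $\mathscr{F}\{e^{-\pi t^2}\}$ by the trapezoid rule and compares it with $e^{-\pi f^2}$:

```python
import cmath
import math

def ft_gaussian(f, h=1e-3, T=10.0):
    # trapezoid-rule approximation of the Fourier integral of exp(-pi t^2)
    # over [-T, T]; the truncated tails are negligibly small
    n = int(2 * T / h)
    s = 0j
    for k in range(n + 1):
        t = -T + k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * cmath.exp(-math.pi * t * t - 2j * math.pi * f * t)
    return s * h

for f in (0.0, 0.5, 1.0):
    err = abs(ft_gaussian(f) - math.exp(-math.pi * f * f))
    print(f, err < 1e-9)  # the transform really is the same Gaussian
```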
I am reading about Dihedral Groups and I have the following questions: Elements of $D_n$ act as linear transformations of the plane. My thought: I know that $D_n=\langle a,b \mid a^n=b^2=1,\ bab=a^{-1}\rangle$, which comprises the rotations and reflections of the n-gon. But then how do I prove that rotations and reflections are linear transformations of the plane? Matrices for elements of $D_n$ have the form: $r_k$=\begin{bmatrix} \cos{\frac{2\pi k}{n}}&-\sin{\frac{2\pi k}{n}}\\\sin{\frac{2\pi k}{n}}&\cos{\frac{2\pi k}{n}}\end{bmatrix} which is obtained by rotation of an n-gon by $\frac{2k\pi}{n}$, and $s_k$=\begin{bmatrix} \cos{\frac{2\pi k}{n}}&\sin{\frac{2\pi k}{n}}\\\sin{\frac{2\pi k}{n}}&-\cos{\frac{2\pi k}{n}}\end{bmatrix} is a reflection about a line which makes an angle $\frac{k\pi}{n}$ with the x-axis. My thought: I know that the rotation matrix is given by \begin{bmatrix} \cos{\theta}&-\sin{\theta}\\\sin{\theta}&\cos{\theta}\end{bmatrix}. But I don't know what a reflection matrix looks like. How can I prove that the elements of $D_n$ can be represented like this? NOTE: The two statements above have been adapted from Wikipedia, but I need a proof of these facts, which I can't prove using my knowledge.
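On the second question, the shape of the reflection matrix can be derived directly (a standard computation, sketched here rather than a full proof): reflecting across the line through the origin at angle $\varphi$ sends a point at polar angle $\theta$ to polar angle $2\varphi-\theta$, so on unit vectors,

```latex
\begin{pmatrix}\cos\theta\\ \sin\theta\end{pmatrix}
\longmapsto
\begin{pmatrix}\cos(2\varphi-\theta)\\ \sin(2\varphi-\theta)\end{pmatrix},
\qquad\text{giving the matrix}\qquad
\begin{pmatrix}\cos 2\varphi & \sin 2\varphi\\ \sin 2\varphi & -\cos 2\varphi\end{pmatrix}.
```

Taking $\varphi=\frac{k\pi}{n}$ recovers exactly $s_k$ above, and since rotations and reflections are realized by matrices, they are in particular linear transformations of the plane.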
Your observation is correct! The issue is one that is regrettably rather commonly left unexplained in physics texts: A single coordinate system on a manifold does not define the spacetime. Often, the coordinates useful for computations in physics do not even cover the entire space, i.e. they are not defined everywhere on the object of study: the (spacetime) manifold. This is the case with the Schwarzschild coordinates, as well as the Eddington-Finkelstein coordinates. In your particular case, there does happen to exist a set of globally defined coordinates, known as Kruskal-Szekeres coordinates, but it is important to note that this does not have to be possible: some spaces just do not admit a global chart. One particularly simple example is the sphere, $S^2$. The "largest" chart that one can construct is that obtained by "stereographic projection" (see the picture below) from one of the poles. However, it is clear that such a construction leaves a single point "uncharted" (can you see which?) There is a lot more that can be said about this, because it ties into a larger issue: reliance on coordinate-dependent expressions versus invariant objects, or local vs. global techniques. Physicists love to work locally. For instance, it is very normal for physicists to deal with equations involving "tensors" with indices, such as Einstein's famous field equations $G_{\mu\nu}=8\pi T_{\mu\nu}$. They come with transformation rules for switching between different coordinate systems, and are quite handy to deal with. Historically speaking, this was the approach taken by the first differential geometers (such as Levi-Civita). However, the modern (mathematical) understanding of tensors is a bit different: They are understood as abstract objects that do not depend on any coordinate system and do not have any indices attached to them.
Only their components (think of vector components) do, when expressed in terms of an---as we saw, generally speaking only locally defined---coordinate system, carry indices: For example, a tensor $\mathbf T$ could be expressed in terms of local coordinates $\{e_i\}$ as $\mathbf{T}=\sum_i T_i e_i$. However, it is not accurate to identify $T_i$ with $\mathbf T$. Nowadays, most mathematicians prefer working with abstractly defined, coordinate-independent objects such as $\mathbf T$. The reason why physicists keep on working with coordinates is probably a combination of (1) history and (2) convenience: It is quite easy to manipulate tensor components, as most of us know very well. However, as you rightly noted, one must take care not to get confused about what is what: The coordinates only show one of the many faces of geometric objects such as manifolds or tensors. A single coordinate expression does not define the manifold (although the collection of all compatible coordinate systems taken together, known as the atlas, does fix the manifold!).
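The stereographic-projection chart mentioned above can be made concrete; here is a minimal Python sketch (my own illustration) of the chart from the north pole and its inverse, showing that every point of the sphere except the pole itself receives coordinates:

```python
import math

def stereographic(x, y, z):
    # chart on S^2 minus the north pole (0, 0, 1): project onto the z = 0 plane
    if math.isclose(z, 1.0):
        raise ValueError("the north pole is the one uncharted point")
    return x / (1 - z), y / (1 - z)

def inverse_stereographic(u, v):
    # map plane coordinates (u, v) back to the unit sphere
    d = 1 + u * u + v * v
    return 2 * u / d, 2 * v / d, (u * u + v * v - 1) / d

u, v = stereographic(0.6, 0.0, 0.8)   # approximately (3.0, 0.0)
print(inverse_stereographic(u, v))    # recovers approximately (0.6, 0.0, 0.8)
```

Points near the pole land arbitrarily far out in the plane, which is exactly the sense in which the single chart "runs out" of the sphere.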
In Griffiths's Introduction to Elementary Particles (p. 251, eq. 7.125) we derive Casimir's trick $$ \sum_{s_1,s_2}[\bar{v}(s_1,p_1)\Gamma_1 v(s_2,p_2)][\bar{v}(s_a,p_a)\Gamma_2v(s_b,p_b)]^* = \text{Tr}[\Gamma_1(\gamma_\mu p_b^\mu-m_bc)\gamma^0\Gamma_2^\dagger\gamma^0(\gamma_\nu p_a^\nu-m_ac)] $$ wherein $\bar{v}=v^\dagger\gamma^0$ is the Dirac adjoint spinor and $\Gamma\in\mathbb{C}^{4\times4}$. The theorem is useful if we want to evaluate the average over spin configurations. I have some questions with regard to the derivation used in the book. I read $$\bar{v} \Gamma v=\langle v\gamma^0,\Gamma v\rangle=\langle v,\gamma^0\Gamma v\rangle$$ as the complex scalar product. However if I do so, it does not make sense to use the complex conjugate $*$ on the second square bracket in Casimir's trick equation, as the scalar product is real anyway. Am I wrong to interpret $\bar{v}\Gamma v$ as a scalar product? In the derivation Griffiths uses $\bar{v}=v\gamma^0$ but usually the Dirac adjoint is defined as $\bar{v}=\gamma^0 v^\dagger$. Do $\gamma^0$ and $v^\dagger$ commute?
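A remark on the $\gamma^0\Gamma_2^\dagger\gamma^0$ sandwich in the trace: the identity doing the work is the standard relation $\gamma^0(\gamma^\mu)^\dagger\gamma^0=\gamma^\mu$, which holds in any representation with $(\gamma^0)^\dagger = \gamma^0$ and $(\gamma^i)^\dagger = -\gamma^i$. It can be checked numerically; the sketch below (my own check, in the Dirac representation, using plain Python lists rather than any physics library) verifies it for all four matrices:

```python
def mm(A, B):
    # 4x4 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def dagger(A):
    # conjugate transpose
    return [[A[j][i].conjugate() for j in range(4)] for i in range(4)]

def block(a, b, c, d):
    # assemble a 4x4 matrix from four 2x2 blocks [[a, b], [c, d]]
    return [a[0] + b[0], a[1] + b[1], c[0] + d[0], c[1] + d[1]]

def neg(m):
    return [[-x for x in row] for row in m]

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sigma = ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])  # Pauli matrices

g0 = block(I2, Z2, Z2, neg(I2))
gammas = [g0] + [block(Z2, s, neg(s), Z2) for s in sigma]  # Dirac representation

print(all(mm(mm(g0, dagger(g)), g0) == g for g in gammas))  # True
```

So $\gamma^0$ and $v^\dagger$ need not commute as such; rather, conjugating the dagger of a gamma matrix by $\gamma^0$ undoes the dagger, which is what lets $[\bar v\Gamma_2 v]^*$ be rewritten with $\bar\Gamma_2=\gamma^0\Gamma_2^\dagger\gamma^0$.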
Rocky Mountain Journal of Mathematics Rocky Mountain J. Math. Volume 48, Number 3 (2018), 1019-1030. Orthogonal rational functions on the extended real line and analytic on the upper half plane Abstract Let $\{\alpha _k\}_{k=1}^\infty$ be an arbitrary sequence of complex numbers in the upper half plane. We generalize the orthogonal rational functions $\phi _n$ based upon those points and obtain the Nevanlinna measure, together with the Riesz and Poisson kernels, for Caratheodory functions $F(z)$ on the upper half plane. Then, we study the relation between ORFs and their functions of the second kind as well as their interpolation properties. Further, by using a linear transformation, we generate a new class of rational functions and state the necessary conditions for guaranteeing their orthogonality. Article information Source Rocky Mountain J. Math., Volume 48, Number 3 (2018), 1019-1030. Dates First available in Project Euclid: 2 August 2018 Permanent link to this document https://projecteuclid.org/euclid.rmjm/1533230838 Digital Object Identifier doi:10.1216/RMJ-2018-48-3-1019 Mathematical Reviews number (MathSciNet) MR3835585 Zentralblatt MATH identifier 06917361 Subjects Primary: 30C15: Zeros of polynomials, rational functions, and other analytic functions (e.g. zeros of functions with bounded Dirichlet integral) {For algebraic theory, see 12D10; for real methods, see 26C10} 30C20: Conformal mappings of special domains 41A20: Approximation by rational functions 42C05: Orthogonal functions and polynomials, general theory [See also 33C45, 33C50, 33D45] Citation Xu, Xu; Zhu, Laiyi. Orthogonal rational functions on the extended real line and analytic on the upper half plane. Rocky Mountain J. Math. 48 (2018), no. 3, 1019--1030. doi:10.1216/RMJ-2018-48-3-1019. https://projecteuclid.org/euclid.rmjm/1533230838
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
How many power series of the form $1+\sum_{k=1}^{\infty} a_{k}x^{k}$ with $a_{k}\in \{-1,0,1 \}$ are there that have a double zero $f(x)=f'(x)=0$ in $(0,1)$? Ok, there are many ways to understand the question: set theoretical, topological, measure theoretical. I would be especially interested in the Bernoulli measures of the coefficient space $C\subseteq \{-1,0,1\}^{\mathbb{N}}$ of such series. At least the set-theoretical question can be answered: there are continuum many such series, as can be deduced from the results in this paper (not all of them attributed by the authors to themselves): MR2293600 (2007k:30003) Reviewed Shmerkin, Pablo(FIN-JVS-MS); Solomyak, Boris(1-WA) Zeros of {−1,0,1} power series and connectedness loci for self-affine sets. (English summary) Experiment. Math. 15 (2006), no. 4, 499–511. Some more examples with polynomials: $$\matrix{\left( {z}^{6}+{z}^{5}-{z}^{3}+z+1 \right) \left( z+{z}^{4}-1 \right) ^{2}\cr \left( {z}^{8}+{z}^{7}-{z}^{5}-{z}^{4}-{z}^{3}+z+1 \right) \left( z+ {z}^{6}-1 \right) ^{2}\cr \left( {z}^{9}+{z}^{8}-{z}^{6}-{z}^{5}-{z}^{4}-{z}^{3}+z+1 \right) \left( z+{z}^{7}-1 \right) ^{2}\cr \left( {z}^{4}-{z}^{3}+{z}^{2}-z+1 \right) \left( {z}^{2}+{z}^{5}-1 \right) ^{2}\cr \left( {z}^{6}-{z}^{5}+{z}^{4}-{z}^{3}+{z}^{2}-z+1 \right) \left( {z }^{2}+{z}^{7}-1 \right) ^{2}\cr \left( {z}^{6}-{z}^{5}+{z}^{3}-z+1 \right) \left( {z}^{3}+{z}^{4}-1 \right) ^{2}\cr \left( {z}^{7}-{z}^{5}+{z}^{4}+{z}^{3}-{z}^{2}+1 \right) \left( {z}^ {3}+{z}^{5}-1 \right) ^{2}\cr \left( -{z}^{10}+{z}^{8}-{z}^{7}+{z}^{6}+{z}^{5}-2\;{z}^{4}+{z}^{3}-z +1 \right) \left( {z}^{3}+{z}^{7}-1 \right) ^{2}\cr \left( {z}^{4}-{z}^{3}+{z}^{2}-z+1 \right) \left( {z}^{4}+{z}^{5}-1 \right) ^{2}\cr \left( {z}^{4}-{z}^{2}+1 \right) \left( {z}^{4}+{z}^{6}-1 \right) ^{ 2}\cr \left( {z}^{6}-{z}^{5}+{z}^{4}-{z}^{3}+{z}^{2}-z+1 \right) \left( {z }^{4}+{z}^{7}-1 \right) ^{2}\cr \left( {z}^{8}-{z}^{7}+{z}^{5}-{z}^{4}+{z}^{3}-z+1 \right) \left( {z
}^{5}+{z}^{6}-1 \right) ^{2}\cr \left( {z}^{6}-{z}^{5}+{z}^{4}-{z}^{3}+{z}^{2}-z+1 \right) \left( {z }^{6}+{z}^{7}-1 \right) ^{2}\cr \left( -{z}^{14}-{z}^{13}-2\;{z}^{12}-{z}^{11}+{z}^{9}+2\;{z}^{8}-2\;{z}^{5}+2\;{z}^{2}+z+1 \right) \left( {z}^{5}-{z}^{3}+{z}^{2}+z-1 \right) ^{2}\cr \left( {z}^{5}+{z}^{4}-{z}^{3}-{z}^{2}+z+1 \right) \left( {z}^{5}+{z }^{3}-{z}^{2}+z-1 \right) ^{2}\cr \left( {z}^{15}+{z}^{14}-{z}^{11}-{z}^{10}+{z}^{9}+{z}^{8}+{z}^{7}+{z }^{6}-{z}^{5}-{z}^{4}+z+1 \right) \left( {z}^{5}-{z}^{4}+{z}^{3}+z-1 \right) ^{2}\cr }$$ I am going to address the question for $\mathrm{Bernoulli}(1/2)$ measures, using probabilistic language. This is not a complete answer, but I am trying to relate your question to the properties of the distribution of $f(x)$. Clearly, for $x<1/2$ we never even reach zero, but my guess is that for $x>1/2$ this distribution is absolutely continuous, though I am unable to prove this at the moment. So formally, at least, $$\displaystyle \mathsf{E} \, \sum_{f(x)=0} \mathsf{1}\{|f^\prime(x)| < \epsilon\} = \intop_0^1 \mathsf{E} \, \delta(f(x)) \mathsf{1}\{|f^\prime(x)|<\epsilon\} |f^\prime(x)| dx \le \epsilon \intop_0^1 \mathsf{E} \, \delta(f(x)) dx.$$ $\mathsf{E} \, \delta$ is the density at zero, and it can be made perfect sense of, provided that the law of $f(x)$ has continuous density at zero. I don't know whether it has a continuous density, but if we manage to prove that $f(x)$ has at least bounded density for $x>1/2$, then we can write inequalities with approximations of $\delta$ to get the same results...
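The first polynomial example in the list above can be verified mechanically: expanding $(z^6+z^5-z^3+z+1)(z+z^4-1)^2$ must give coefficients in $\{-1,0,1\}$ with constant term $1$, and the double zero is the root of $z^4+z-1$ in $(0,1)$. A short Python sketch (my own check) confirms both:

```python
def polymul(p, q):
    # multiply polynomials given as coefficient lists in ascending degree
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

A = [1, 1, 0, -1, 0, 1, 1]   # 1 + z - z^3 + z^5 + z^6
B = [-1, 1, 0, 0, 1]         # -1 + z + z^4
P = polymul(A, polymul(B, B))
print(P[0] == 1 and all(c in (-1, 0, 1) for c in P))  # True

# locate the double zero: the root of -1 + z + z^4 in (0, 1), by bisection
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mid + mid ** 4 - 1 < 0 else (lo, mid)
print(lo)  # roughly 0.72449
```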
In Fluid Mechanics we often see the term inertial force when discussing the Reynolds number. The problem is, I didn't really get what this inertial force is. Basically, the notion of inertia I have is that given by Newton's laws, where we think of inertia as the resistance of a body to changes in its state of motion. This inertial force, in Fluid Mechanics, seems to be associated with the left hand side of the Navier-Stokes equation: $$\rho \left(\dfrac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot \nabla )\mathbf{u}\right) = -\nabla p+(\lambda + \mu)\nabla(\nabla\cdot\mathbf{u}) + \mu \nabla^2\mathbf{u}$$ But I don't really get why that is. So, what really is an inertial force in a more general context? And how, in Fluid Mechanics, do we associate the inertial force with the left hand side of the Navier-Stokes equation? EDIT: there's the following piece of text in Wikipedia's article: A fictitious force, also called a pseudo force,[1] d'Alembert force[2][3] or inertial force,[4][5] is an apparent force that acts on all masses whose motion is described using a non-inertial frame of reference, such as a rotating reference frame. This makes clearer what an inertial force is in general. But I can't see how this relates to that term in the Navier-Stokes equation. I don't get why one would consider that term to have any relationship with describing the motion using a non-inertial frame.
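One way to connect the two ideas numerically: with a velocity scale $U$ and length scale $L$, the left-hand ("inertial") term of the Navier-Stokes equation scales like $\rho U^2/L$ and the viscous term like $\mu U/L^2$, and their ratio is the Reynolds number. A tiny Python sketch (illustrative numbers only, my own choice):

```python
def reynolds(rho, U, L, mu):
    inertial_scale = rho * U ** 2 / L   # magnitude of rho * (u . grad) u
    viscous_scale = mu * U / L ** 2     # magnitude of mu * laplacian(u)
    return inertial_scale / viscous_scale  # equals rho * U * L / mu

# water (rho = 1000 kg/m^3, mu = 1e-3 Pa s) at 1 m/s in a 0.1 m pipe:
print(reynolds(1000.0, 1.0, 0.1, 1e-3))  # about 1e5: inertia dominates viscosity
```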
This will be a talk for the CUNY Set Theory Seminar, March 6, 2015. I shall describe the current state of knowledge concerning the question of whether there can be an embedding of the set-theoretic universe into the constructible universe. Question.(Hamkins) Can there be an embedding $j:V\to L$ of the set-theoretic universe $V$ into the constructible universe $L$, when $V\neq L$? The notion of embedding here is merely that $$x\in y\iff j(x)\in j(y),$$ and such a map need not be elementary nor even $\Delta_0$-elementary. It is not difficult to see that there can generally be no $\Delta_0$-elementary embedding $j:V\to L$, when $V\neq L$. Nevertheless, the question arises very naturally in the context of my previous work on the embeddability phenomenon, Every countable model of set theory embeds into its own constructible universe, where the title theorem is the following. Theorem.(Hamkins) Every countable model of set theory $\langle M,\in^M\rangle$, including every countable transitive model of set theory, has an embedding $j:\langle M,\in^M\rangle\to\langle L^M,\in^M\rangle$ into its own constructible universe. The methods of proof also established that the countable models of set theory are linearly pre-ordered by embeddability: given any two models, one of them embeds into the other; or equivalently, one of them is isomorphic to a submodel of the other. Indeed, one model $\langle M,\in^M\rangle$ embeds into another $\langle N,\in^N\rangle$ just in case the ordinals of the first $\text{Ord}^M$ order-embed into the ordinals of the second $\text{Ord}^N$. (And this implies the theorem above.) In the proof of that theorem, the embeddings $j:M\to L^M$ are defined completely externally to $M$, and so it was natural to wonder to what extent such an embedding might be accessible inside $M$. And I realized that I could not generally refute the possibility that such a $j$ might even be a class in $M$. 
Currently, the question remains open, but we have some partial progress, and have settled it in a number of cases, including the following, on which I'll speak:

- If there is an embedding $j:V\to L$, then for a proper class club of cardinals $\lambda$, we have $(2^\lambda)^V=(\lambda^+)^L$.
- If $0^\sharp$ exists, then there is no embedding $j:V\to L$, and indeed no embedding $j:P(\omega)\to L$.
- If there is an embedding $j:V\to L$, then the GCH holds above $\aleph_0$.
- In the forcing extension $V[G]$ obtained by adding $\omega_1$ many Cohen reals (or more), there is no embedding $j:V[G]\to L$, and indeed, no $j:P(\omega)^{V[G]}\to V$. More generally, after adding $\kappa^+$ many Cohen subsets to $\kappa$, for any regular cardinal $\kappa$, then in $V[G]$ there is no $j:P(\kappa)\to V$.
- If $V$ is a nontrivial set-forcing extension of an inner model $M$, then there is no embedding $j:V\to M$. Indeed, there is no embedding $j:P(\kappa^+)\to M$, if the forcing has size $\kappa$. In particular, if $V$ is a nontrivial forcing extension, then there is no embedding $j:V\to L$.
- Every countable set $A$ has an embedding $j:A\to L$.

This is joint work with W. Hugh Woodin and Menachem Magidor, with contributions also by David Aspero, Ralf Schindler and Yair Hayut. See my related MathOverflow question: Can there be an embedding $j:V\to L$ from the set-theoretic universe $V$ to the constructible universe $L$, when $V\neq L$?
An important consideration in the implementation of any practical numerical algorithm is numerical accuracy: how quickly do floating-point roundoff errors accumulate in the course of the computation? Fortunately, FFT algorithms for the most part have remarkably good accuracy characteristics. In particular, for a DFT of length \(n\) computed by a Cooley-Tukey algorithm with finite-precision floating-point arithmetic, the worst-case error growth is \(O(\log n)\) and the mean error growth for random inputs is only \(O(\sqrt{\log n})\). This is so good that, in practical applications, a properly implemented FFT will rarely be a significant contributor to the numerical error. The amazingly small roundoff errors of FFT algorithms are sometimes explained incorrectly as simply a consequence of the reduced number of operations: since there are fewer operations compared to a naive \(O(n^2)\) algorithm, the argument goes, there is less accumulation of roundoff error. The real reason, however, is more subtle than that, and has to do with the ordering of the operations rather than their number. For example, consider the computation of only the output \(Y[0]\) in the radix-2 algorithm, ignoring all of the other outputs of the FFT. \(Y[0]\) is the sum of all of the inputs, requiring \(n-1\) additions. The FFT does not change this requirement; it merely changes the order of the additions so as to re-use some of them for other outputs. In particular, this radix-2 DIT FFT computes \(Y[0]\) as follows: it first sums the even-indexed inputs, then sums the odd-indexed inputs, then adds the two sums; the even- and odd-indexed inputs are summed recursively by the same procedure. This process is sometimes called cascade summation, and even though it still requires \(n-1\) total additions to compute \(Y[0]\) by itself, its roundoff error grows much more slowly than simply adding \(X[0], X[1], X[2]\), and so on in sequence.
Specifically, the roundoff error when adding up \(n\) floating-point numbers in sequence grows as \(O(n)\) in the worst case, or as \(O(\sqrt{n})\) on average for random inputs (where the errors grow according to a random walk), but simply reordering these \(n-1\) additions into a cascade summation yields \(O(\log n)\) worst-case and \(O(\sqrt{\log n})\) average-case error growth. However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula; this saves memory (and cache), but almost all recurrences have errors that grow as \(O(\sqrt{n})\), \(O(n)\), or even \(O(n^2)\), which lead to corresponding errors in the FFT. For example, one simple recurrence is \(e^{i(k+1)\theta }=e^{ik\theta }e^{i\theta }\), multiplying repeatedly by \(e^{i\theta }\) to obtain a sequence of equally spaced angles, but the errors when using this process grow as \(O(n)\). A common improved recurrence is \(e^{i(k+1)\theta }=e^{ik\theta }+e^{ik\theta }(e^{i\theta }-1)\), where the small quantity \(e^{i\theta }-1=\cos (\theta )-1+i\sin (\theta )\) is computed using \(\cos (\theta )-1=-2\sin ^2(\theta/2 )\); unfortunately, the error using this method still grows as \(O(\sqrt{n})\), far worse than logarithmic.12 There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of \(\Theta (\log n)\) values be stored and updated as the recurrence progresses.
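The effect of reordering alone is easy to see in a few lines: the same \(n-1\) additions, performed sequentially versus as a cascade, give very different errors against a correctly rounded reference. A sketch (the uniform random inputs and sizes are illustrative, not from the text):

```python
import math
import random

def seq_sum(xs):
    # naive left-to-right accumulation: worst-case error O(n)
    s = 0.0
    for x in xs:
        s += x
    return s

def cascade_sum(xs):
    # recursive pairwise ("cascade") summation: worst-case error O(log n)
    n = len(xs)
    if n <= 2:
        return sum(xs)
    m = n // 2
    return cascade_sum(xs[:m]) + cascade_sum(xs[m:])

random.seed(0)
data = [random.uniform(0.0, 1.0) for _ in range(1 << 15)]
exact = math.fsum(data)                 # correctly rounded reference sum
err_seq = abs(seq_sum(data) - exact)
err_cas = abs(cascade_sum(data) - exact)
```

With double precision and \(2^{15}\) terms, the sequential error is typically several orders of magnitude larger than the cascade error, exactly the reordering effect described above.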
Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number of complex multiplications. For example, instead of a twiddle table with \(n\) entries \(\omega _{n}^{k}\), FFTW can use two tables with \(\Theta (\sqrt{n})\) entries each, so that \(\omega _{n}^{k}\) is computed by multiplying an entry in one table (indexed with the low-order bits of \(k\)) by an entry in the other table (indexed with the high-order bits of \(k\)). There are a few non-Cooley-Tukey algorithms that are known to have worse error characteristics, such as the “real-factor” algorithm, but these are rarely used in practice (and are not used at all in FFTW). On the other hand, some commonly used algorithms for type-I and type-IV discrete cosine transforms have errors that we observed to grow as \(\sqrt{n}\) even for accurate trigonometric constants (although we are not aware of any theoretical error analysis of these algorithms), and thus we were forced to use alternative algorithms. Footnote 12: In an FFT, the twiddle factors are powers of \(\omega _n\), so \(\theta\) is a small angle proportional to \(1/n\) and \(e^{i\theta }\) is close to 1. Contributor: C. Sidney Burrus
Recall that the category $\sigma Set$ of symmetric simplicial sets is the category of presheaves on $\Sigma$, the category of finite nonempty sets and all functions. The inclusion $v: \Delta \to \Sigma$ transfers the Kan-Quillen model structure via a Quillen equivalence with $sSet$. In this "canonical" model structure on $\sigma Set$, not every object is cofibrant (the cofibrant objects are the ones where the $\Sigma_n$ action on the nondegenerate $n$-simplices is free). Nevertheless, Cisinski has shown that $v_!$ preserves all weak equivalences (in fact, $v_!$ is also a left Quillen equivalence for the Cisinski model structure on $\sigma Set$, which has the same weak equivalences and whose cofibrations are the monomorphisms), and moreover that $v^\ast$ also preserves all weak equivalences (in fact $v^\ast$ itself has a right adjoint $v_\ast$, and is also a left Quillen equivalence, in both model structures I believe). By composition with the usual geometric realization, there is a "geometric realization" $|v^\ast-|: \sigma Set \to Top$, which computes the correct homotopy type for all symmetric simplicial sets. The only drawback is that $|v^\ast-|$ is not the most economical geometric realization one could imagine. For example, $|v^\ast[1]|$ has two 1-cells and, I believe, is infinite-dimensional. A more economical and "natural" geometric realization may be obtained by bypassing $sSet$ altogether. That is, let $\|-\|: \sigma Set \to Top$ be induced by the functor $\Sigma \to Top$, $[n] \mapsto \Delta^n$. Then $\|[n]\| = \Delta^n$ with the obvious CW structure. Question 1: Is $\|-\|$ a left Quillen equivalence with respect to the canonical model structure? It's clear that the Serre cofibrations and acyclic Serre cofibrations are generated by the image of the canonical cofibrations and canonical acyclic cofibrations, so if the answer is "yes", then the model structure on $Top$ is even transferred along $\|-\|$ from $\sigma Set$.
Question 2: Is $\|-\|$ a left Quillen equivalence with respect to the Cisinski model structure? Question 3: Does $\|-\|$ preserve weak equivalences between arbitrary objects? Of course, an affirmative answer to Question 2 would imply an affirmative answer to Question 3.
Question: For which of the following matrices $A_i$ is there a complex matrix $B$ such that $B^2 = A_i$; a self-adjoint complex matrix $B$ such that $B^2 = A_i$; a real matrix $B$ such that $B^2 = A_i$? $A_1 = \begin{pmatrix} 2 & 1\\1 & 2\end{pmatrix}$, $A_2 = \begin{pmatrix} 1 & 2\\2 & 1\end{pmatrix}$, $A_3 = \begin{pmatrix} 1 & 4\\1 & 1\end{pmatrix}$. Working: $A_1$ and $A_2$ are both real symmetric matrices, so by the Real Spectral Theorem, there exist orthogonal matrices $P_1$ and $P_2$ such that $P_1^TA_1P_1$ and $P_2^TA_2P_2$ are both diagonal. A self-adjoint matrix must have real eigenvalues. The spectra of $A_1$, $A_2$ and $A_3$ are $\{1,3\}$, $\{-1,3\}$ and $\{-1,3\}$ respectively. If we can find an invertible matrix $M_i$ such that $M_i^{-1}A_iM_i=D_i$ for some diagonal matrix $D_i$, then $B = \pm M_i \sqrt{D_i}M_i^{-1}$.
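The diagonalization recipe in the last line is easy to check numerically. A sketch assuming NumPy: the principal square root of $A_1$ (positive eigenvalues, so a real symmetric, hence self-adjoint, root exists), and a complex root of $A_2$, whose negative eigenvalue rules out a self-adjoint one:

```python
import numpy as np

# A1 = [[2,1],[1,2]] is real symmetric with eigenvalues {1, 3}, so
# B = P sqrt(D) P^T is a real symmetric square root.
A1 = np.array([[2.0, 1.0], [1.0, 2.0]])
evals, P = np.linalg.eigh(A1)          # orthogonal P with A1 = P diag(evals) P^T
B = P @ np.diag(np.sqrt(evals)) @ P.T

# A2 = [[1,2],[2,1]] has eigenvalues {-1, 3}; a self-adjoint square root
# would force nonnegative eigenvalues, so we must allow a complex B.
A2 = np.array([[1.0, 2.0], [2.0, 1.0]])
evals2, P2 = np.linalg.eigh(A2)
B2 = P2 @ np.diag(np.sqrt(evals2.astype(complex))) @ P2.T
```

Here `B` squares back to $A_1$ and is symmetric, while `B2` squares back to $A_2$ but has an imaginary part coming from $\sqrt{-1}$.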
I've already found the fundamental group of the connected sum $P^2\#T$, by the labelling scheme $aabcb^{-1}c^{-1}$, to be $F_3/\langle aabcb^{-1}c^{-1}\rangle$. How would I find the first homology group? The first homology group is defined to be $H_1(X) = \pi_1(X,x_0)/[\pi_1(X,x_0),\pi_1(X,x_0)]$. I believe I have to use the following theorem: Let $F$ be a group; let $N$ be a normal subgroup of $F$; let $q: F \to F/N$ be the projection. The projection homomorphism $p: F \to F/[F,F]$ induces an isomorphism $\phi: q(F)/[q(F),q(F)] \to p(F)/p(N)$. But I can't seem to figure out a nicer representation for the first homology group with this theorem. As Lee Mosher has pointed out in the comments, the correct representation for the fundamental group should be the quotient by the normal closure of $aabcb^{-1}c^{-1}$, or in other words, $\langle a, b, c \mid aabcb^{-1}c^{-1} = 1\rangle$. $H_1$ is precisely the abelianization of this group, and the abelianization just attaches the relators $[a, b] = [b, c] = [a, c] = 1$ to the presentation, i.e., $\langle a, b, c \mid aabcb^{-1}c^{-1} = [a, b] = [b, c] = [a, c] = 1\rangle$. As $[b, c] = 1$, $aabcb^{-1}c^{-1}$ reduces to $aa$. Thus, your group is $\langle a, b, c \mid a^2 = [a, b] = [b, c] = [a, c] = 1\rangle$, which is isomorphic to $\Bbb Z/2\Bbb Z \oplus \Bbb Z^2$. Hence, $H_1(P^2 \# T) \cong \Bbb Z^2 \oplus \Bbb Z/2\Bbb Z$.
So I want a function that is zero on the reals only at the prime integers and which doesn't depend on knowing the primes. I construct: $$f(x) = e^{-x^2} - \sum\limits_{n=2}^\infty e^{-n^2} \frac{ \sin(\pi x)^2 }{ n^2\sin(\pi x/n)^2}$$ which has zeros on the real line only at the positive and negative prime integers. ($c_n(x)=\frac{\sin(\pi x)^2}{n^2\sin(\pi x/n)^2}$ has a well-defined Taylor series and can be defined everywhere. At the integers it is $1$ when $x$ is a multiple of $n$ and $0$ otherwise. The exponentials are just there to make the whole thing converge.) So my question is, is a function like this useful? As in, would it tell us anything about the primes? Edit: if it is analytic, as well as zeros on the real axis it will have many complex zeros. I also note that it is quite "easy" to convert this into a series of the form $f(x)=\sum\limits_{k=0}^\infty a_{k} x^{k}$, where the coefficients are $a_{2k} =\frac{(-1)^k}{k!} - \sum\limits_{n=2}^\infty c^{(2k)}_n(0)e^{-n^2}$. It begins $f(x)=0.981-0.97722x^2+...$, although you would need a lot of terms when $x$ is big! But we could say $f^{-1}(0)\subset$ primes. Edit 2: I think the function actually also has zeros at points very close to the primes, and only the zeros where $f'(x)<0$ are primes. (Basically every second zero.) This works equally well replacing the exponentials with $1/x^2$.
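For what it's worth, at integer arguments the series collapses to a finite sum over divisors, since $c_n(k)$ is $1$ exactly when $n \mid k$ and $0$ at other integers. That makes the claimed zero set easy to check numerically (a sketch; the divisor-sum shortcut is valid only at integer points):

```python
import math

def f_at_integer(k):
    # At an integer k, c_n(k) = sin(pi k)^2 / (n^2 sin(pi k / n)^2) equals 1
    # (as a limit of 0/0) exactly when n divides k, and 0 otherwise, so the
    # infinite sum reduces to a finite sum over divisors n >= 2 of |k|.
    k = abs(k)
    s = sum(math.exp(-n * n) for n in range(2, k + 1) if k % n == 0)
    return math.exp(-k * k) - s

values = {k: f_at_integer(k) for k in range(1, 13)}
```

For a prime $p$ the only divisor $n\ge 2$ is $p$ itself, so the two terms cancel exactly; composites pick up extra (much larger) divisor terms and come out strictly negative, and $f(1)=e^{-1}>0$.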
Let $k$ be a field, and $f:k[x_1,\ldots,x_n]\to k[y_1,\ldots,y_m]$ a $k$-algebra homomorphism. Given $r_1,\ldots,r_k\in k[y_1,\ldots,y_m]$, is there an algorithm for producing a finite generating set for the ideal $f^{-1}((r_1,\ldots,r_k))$? The answer is yes. The question is asking to compute the kernel of the composite map $$f : k[x_1, \dots, x_n] \rightarrow S,$$ where $S = k[y_1,\ldots,y_m]/(r_1,\ldots,r_k)$ is a quotient of a polynomial ring. Macaulay2 can calculate such a kernel.
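The standard algorithm behind this is elimination: $\ker f$ equals the intersection of the ideal $(x_1 - f(x_1), \ldots, x_n - f(x_n), r_1, \ldots, r_k)$ in $k[y_1,\ldots,y_m,x_1,\ldots,x_n]$ with $k[x_1,\ldots,x_n]$, which a Gröbner basis for an elimination order computes. A sketch using SymPy rather than Macaulay2, on a toy map of my own choosing ($x_1 \mapsto y^2$, $x_2 \mapsto y^3$, no $r_i$), whose kernel is $(x_1^3 - x_2^2)$:

```python
from sympy import symbols, groebner

# Elimination order: lex with y ranked above x1, x2. The basis elements
# free of y generate the kernel of x1 -> y^2, x2 -> y^3.
y, x1, x2 = symbols('y x1 x2')
G = groebner([x1 - y**2, x2 - y**3], y, x1, x2, order='lex')
kernel_gens = [g for g in G.exprs if y not in g.free_symbols]
```

For a map into a genuine quotient, one would append the $r_j$ (rewritten in the $y$ variables) to the generator list before taking the basis.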
Codeforces Round #526 (Div. 1) The Fair Nut is going to travel to the Tree Country, in which there are $$$n$$$ cities. Most of the land of this country is covered by forest. Furthermore, the local road system forms a tree (connected graph without cycles). Nut wants to rent a car in the city $$$u$$$ and go by a simple path to city $$$v$$$. He hasn't determined the path, so it's time to do it. Note that the chosen path can consist of only one vertex. A filling station is located in every city. Because of a strange law, Nut can buy only $$$w_i$$$ liters of gasoline in the $$$i$$$-th city. We can assume that he has infinite money. Each road has a length, and as soon as Nut drives through this road, the amount of gasoline decreases by the length. Of course, Nut can't choose a path which contains a road where he runs out of gasoline. He can buy gasoline in every visited city, even in the first and the last. He also wants to find the maximum amount of gasoline that he can have at the end of the path. Help him: count it. The first line contains a single integer $$$n$$$ ($$$1 \leq n \leq 3 \cdot 10^5$$$) — the number of cities. The second line contains $$$n$$$ integers $$$w_1, w_2, \ldots, w_n$$$ ($$$0 \leq w_{i} \leq 10^9$$$) — the maximum amounts of liters of gasoline that Nut can buy in the cities. Each of the next $$$n - 1$$$ lines describes a road and contains three integers $$$u$$$, $$$v$$$, $$$c$$$ ($$$1 \leq u, v \leq n$$$, $$$1 \leq c \leq 10^9$$$, $$$u \ne v$$$), where $$$u$$$ and $$$v$$$ are the cities connected by this road and $$$c$$$ is its length. It is guaranteed that the graph of road connectivity is a tree. Print one number — the maximum amount of gasoline that he can have at the end of the path.

Input
3
1 3 3
1 2 2
1 3 2

Output
3

Input
5
6 3 2 5 0
1 2 10
2 3 3
2 4 1
1 5 1

Output
7

The optimal way in the first example is $$$2 \to 1 \to 3$$$. The optimal way in the second example is $$$2 \to 4$$$.
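A standard approach is a tree DP: for each vertex keep the best gasoline surplus of a path going downward from it, clip negative branches at zero, and combine the two best child branches at every vertex. A sketch (function and variable names are mine, and the DFS is iterative to respect the $$$3 \cdot 10^5$$$ bound):

```python
def max_gasoline(n, w, edges):
    # w[i] is the gasoline at city i+1; edges are (u, v, c), 1-based cities.
    adj = [[] for _ in range(n + 1)]
    for u, v, c in edges:
        adj[u].append((v, c))
        adj[v].append((u, c))

    # iterative DFS from city 1: record parents and a preorder
    parent = [0] * (n + 1)
    order = []
    seen = [False] * (n + 1)
    stack = [1]
    seen[1] = True
    while stack:
        u = stack.pop()
        order.append(u)
        for v, c in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                stack.append(v)

    down = [0] * (n + 1)   # best surplus of a path going down from u
    best = 0
    for u in reversed(order):          # children before parents
        top1 = top2 = 0                # two best child branches, clipped at 0
        for v, c in adj[u]:
            if parent[v] == u:
                g = down[v] - c
                if g > top1:
                    top1, top2 = g, top1
                elif g > top2:
                    top2 = g
        down[u] = w[u - 1] + top1
        best = max(best, w[u - 1] + top1 + top2)
    return best
```

On the two samples this returns 3 and 7 respectively, matching the expected outputs.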
Salts, when placed in water, will often react with the water to produce \(H_3O^+\) or \(OH^-\). This is known as a hydrolysis reaction. Based on how strongly the ion acts as an acid or base, it will produce varying pH levels. When water and salts react, there are many possibilities due to the varying structures of salts. A salt can be made of either a weak acid and strong base, a strong acid and weak base, a strong acid and strong base, or a weak acid and weak base. The reactant side is composed of the salt and the water, and the product side is composed of the conjugate base (from the acid of the reactant side) or the conjugate acid (from the base of the reactant side). In this section of chemistry, we discuss the pH values of salts based on several conditions. When is a salt solution basic or acidic? There are several guiding principles that summarize the outcome:

Salts that are from strong bases and strong acids do not hydrolyze. The pH will remain neutral at 7. Halides and alkali metal cations dissociate and do not affect the \(H^+\): the cation does not alter the \(H^+\) concentration and the anion does not attract the \(H^+\) from water. This is why NaCl is a neutral salt. In general: salts containing halides (except \(F^-\)) and an alkali or alkaline earth metal (except \(Be^{2+}\)) will dissociate into spectator ions.

Salts that are from strong bases and weak acids do hydrolyze, which gives a pH greater than 7. The anion in the salt is derived from a weak acid, most likely organic, and will accept a proton from the water in the reaction. This has the water act as an acid, in this case leaving behind a hydroxide ion (\(OH^-\)). The cation will be from a strong base, meaning from either the alkali or alkaline earth metals, and, as before, it will dissociate into a spectator ion and not affect the \(H^+\).

Salts of weak bases and strong acids do hydrolyze, which gives a pH less than 7.
This is due to the fact that the anion will become a spectator ion and fail to attract the \(H^+\), while the cation from the weak base will donate a proton to the water, forming a hydronium ion.

Salts from a weak base and weak acid also hydrolyze as the others do, but the situation is a bit more complex and requires both the \(K_a\) and the \(K_b\) to be taken into account. Whichever is stronger, the acid or the base, will be the dominant factor in determining whether the solution is acidic or basic. The cation will be the acid and the anion will be the base, and the solution will form either a hydronium ion or a hydroxide ion depending on which ion reacts more readily with the water.

Salts of Polyprotic Acids

Do not be intimidated by the salts of polyprotic acids. Yes, they're bigger and "badder" than most other salts, but they can be handled the exact same way as other salts, just with a bit more math. First of all, we know a few things: It's still just a salt, so all of the rules from above still apply. Luckily, since we're dealing with the anions of weak acids here, the pH of a salt of a polyprotic acid will always be greater than 7. The same way that polyprotic acids lose \(H^+\) stepwise, salts of polyprotic acids gain \(H^+\) in the same manner, but in reverse order of the polyprotic acid. Take for example the dissociation of \(H_2CO_3\), carbonic acid:

\[H_2CO_{3(aq)} + H_2O_{(l)} \rightleftharpoons H_3O^+_{(aq)} + HCO^-_{3(aq)} \;\;\; K_{a1} = 2.5 \times 10^{-4}\]

\[HCO^-_{3(aq)} + H_2O_{(l)} \rightleftharpoons H_3O^+_{(aq)} + CO^{2-}_{3(aq)} \;\;\; K_{a2} = 5.61 \times 10^{-11}\]

This means that when calculating the values for \(K_b\) of \(CO_3^{2-}\), the \(K_b\) of the first hydrolysis reaction will be \(K_{b1} = \dfrac{K_w}{K_{a2}}\), since the hydrolysis proceeds in the reverse order.
Type of Solution | Cations | Anions | pH
Acidic | from weak bases (e.g., \(NH_4^+\)) | from strong acids (e.g., \(Cl^-\)) | < 7
Basic | from strong bases (Group 1 and Group 2, but not \(Be^{2+}\)) | from weak acids (e.g., \(F^-\)) | > 7
Neutral | from strong bases (Group 1 and Group 2, but not \(Be^{2+}\)) | from strong acids (e.g., \(Cl^-\)) | = 7

Questions

Predict whether the pH of each of the following salts placed into water is acidic, basic, or neutral: NaOCl (s), KCN (s), \(NH_4NO_3\) (s).

Find the pH of a 0.200 M solution of \(NH_4NO_3\), where \(K_b\) of \(NH_3\) is \(1.8 \times 10^{-5}\).

Find the pH of a 0.200 M solution of \(Na_3PO_4\), where \(K_{a1} = 7.25 \times 10^{-3}\), \(K_{a2} = 6.31 \times 10^{-8}\), \(K_{a3} = 3.98 \times 10^{-13}\).

Answers

1. The ions present are \(Na^+\) and \(OCl^-\), as shown by the following reaction: \(NaOCl_{(s)} \rightarrow Na^+_{(aq)} + OCl^-_{(aq)}\) While \(Na^+\) will not hydrolyze, \(OCl^-\) will (remember that it is the conjugate base of HOCl). It acts as a base, accepting a proton from water: \(OCl^-_{(aq)} + H_2O_{(l)} \rightleftharpoons HOCl_{(aq)} + OH^-_{(aq)}\) \(Na^+\) is excluded from this reaction since it is a spectator ion. Therefore, with the production of \(OH^-\), the solution will be basic and the pH will rise above 7. \(pH>7\)

2. The KCN (s) will dissociate into \(K^+_{(aq)}\) and \(CN^-_{(aq)}\) by the following reaction: \[KCN_{(s)}\rightarrow K^+_{(aq)} + CN^-_{(aq)}\] \(K^+\) will not hydrolyze, but the \(CN^-\) anion will attract an \(H^+\) away from the water: \[CN^-_{(aq)} + H_2O_{(l)}\rightleftharpoons HCN_{(aq)} + OH^-_{(aq)}\] Because this reaction produces \(OH^-\), the resulting solution will be basic, with pH > 7. \(pH>7\)

3. The \(NH_4NO_3\) (s) will dissociate into \(NH_4^+\) and \(NO_3^-\) by the following reaction: \[NH_4NO_{3(s)} \rightarrow NH^+_{4(aq)} + NO^-_{3(aq)}\] Now, \(NO_3^-\) won't attract an \(H^+\), because it comes from a strong acid; this means its \(K_b\) will be very small.
However, \(NH_4^+\) will donate a proton and act as an acid (\(NH_4^+\) is the conjugate acid of \(NH_3\)) by the following reaction: \[NH^+_{4(aq)} + H_2O_{(l)} \rightleftharpoons NH_{3(aq)} + H_3O^+_{(aq)}\] This reaction produces a hydronium ion, making the solution acidic and lowering the pH below 7. \(pH<7\)

For the 0.200 M \(NH_4NO_3\) solution: \(NH^+_{4(aq)} + H_2O_{(l)} \rightleftharpoons NH_{3(aq)} + H_3O^+_{(aq)}\) \(\dfrac{x^2}{0.200-x} = \dfrac{1\times 10^{-14}}{1.8 \times 10^{-5}}\) \(x = 1.05\times 10^{-5}\ M = [H_3O^+]\) \(pH = 4.98\)

For the 0.200 M \(Na_3PO_4\) solution: \(PO^{3-}_{4(aq)} + H_2O_{(l)} \rightleftharpoons HPO^{2-}_{4(aq)} + OH^-_{(aq)}\) The majority of the hydroxide ion will come from this first step, so only the first step is completed here. To complete the other steps, follow the same manner of calculation. \[\dfrac{x^2}{0.200-x}=\dfrac{1\times 10^{-14}}{3.98 \times 10^{-13}}\] \[x = 0.0594 = [OH^-]\] \[pH = 12.77\]

Practice Questions

Why does a salt containing a cation from a strong base and an anion from a weak acid form a basic solution? Why does a salt containing a cation from a weak base and an anion from a strong acid form an acidic solution? How do the \(K_a\) or \(K_b\) values help determine whether a weak acid or weak base will be the dominant driving force of a reaction? The answers to these questions can be found in the attached files section at the bottom of the page.

Outside Links

Here is a link to solubility rules. This will tell you whether or not to rule out certain spectator ions: http://www.csudh.edu/oliver/chemdata/solrules.htm

Here are links for refreshers on acid-base equilibria: http://en.wikipedia.org/wiki/Le_Chat...%27s_principle http://www.utc.edu/Faculty/Gretchen-...p/acidbase.htm
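The arithmetic in the two worked answers can be checked with a short script; it solves the exact quadratic \(x^2/(c_0 - x) = K\) rather than assuming \(x \ll c_0\) (the numbers are the ones from the answers above):

```python
import math

def hydrolysis_x(K, c0):
    # Solve x^2 / (c0 - x) = K exactly: x^2 + K*x - K*c0 = 0, positive root.
    return (-K + math.sqrt(K * K + 4 * K * c0)) / 2

Kw = 1e-14

# NH4+ (0.200 M): Kb(NH3) = 1.8e-5, so Ka(NH4+) = Kw / Kb, x = [H3O+]
x_acid = hydrolysis_x(Kw / 1.8e-5, 0.200)
pH_acid = -math.log10(x_acid)

# PO4^3- (0.200 M), first hydrolysis step: Kb1 = Kw / Ka3, x = [OH-]
x_base = hydrolysis_x(Kw / 3.98e-13, 0.200)
pH_base = 14 + math.log10(x_base)
```

Note that for the ammonium case the approximation \(x \ll 0.200\) is excellent, while for phosphate the exact quadratic matters: \(x \approx 0.0594\) is a sizable fraction of 0.200.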
Contributors Christopher Wu (UCD), Christian Dowell (UCD), Nicole Hooper (UCD)
3.1: Generalize Construction 3.2.1 to be an n-out-of-n secret-sharing scheme, and prove that your scheme is correct and secure.

3.2: Prove Theorem 3.2.2.

3.3: Fill in the details of the following alternative proof of Theorem 3.2.1: Starting with \(\mathscr{L}_{\text{tsss-L}}\), apply the first step of the proof as before, to duplicate the main body into 3 branches of a new if-statement. Then apply Exercise 2.8 to the second branch of the if-statement. Argue that \(m_L\) can be replaced with \(m_R\) and complete the proof.

3.4: Suppose \(T\) is a fixed (publicly known) invertible \(n \times n\) matrix over \(\mathbb{Z}_p\), where \(p\) is a prime. (a). Show that the following two libraries are interchangeable: \[\begin{array}{l|l} \mathscr{L}_{\text{left}} & \mathscr{L}_{\text{right}} \\ \hline \underline{\text{QUERY}():} & \underline{\text{QUERY}():} \\ \quad r\leftarrow (\mathbb{Z}_p)^n & \quad r\leftarrow (\mathbb{Z}_p)^n \\ \quad \text{return}\ r & \quad \text{return}\ T\times r \end{array}\] (b). Show that the following two libraries are interchangeable:

3.5: The text gives a proof of Lemma 3.4.1 for the special case where the calling program always calls \(\text{QUERY}\) with \(|U| = t - 1\). This exercise shows one way to complete the proof. Define the following “wrapper” library: (a). Argue that \(\mathscr{L}_{\text{sss-real}} \equiv \mathscr{L}_{\text{wrap}} \diamondsuit \mathscr{L}^{\prime}_{\text{sss-real}}\), where on the right-hand side \(\mathscr{L}^{\prime}_{\text{sss-real}}\) refers to \(\mathscr{L}_{\text{sss-real}}\) but with its QUERY subroutine renamed to QUERY’. (b). Argue that \(\mathscr{L}_{\text{sss-rand}} \equiv \mathscr{L}_{\text{wrap}} \diamondsuit \mathscr{L}^{\prime}_{\text{sss-rand}}\), with the same interpretation as above. (c). Argue that for any calling program \(\mathcal{A}\), the combined program \(\mathcal{A}
\diamondsuit \mathscr{L}_{\text{wrap}}\) only calls QUERY’ with \(|U| = t - 1\). Hence, the proof presented in the text applies when the calling program has the form \(\mathcal{A} \diamondsuit \mathscr{L}_{\text{wrap}}\). (d). Combining the previous parts, show that \(\mathscr{L}_{\text{sss-real}} \equiv \mathscr{L}_{\text{sss-rand}}\) (i.e., the two libraries are interchangeable with respect to arbitrary calling programs).

3.6: Let \(S\) be a TSSS with \(\mathcal{M} = \{0,1\}^{\ell}\) and where each share is also a string of bits. Prove that if \(S\) satisfies security then every user’s share must be at least \(\ell\) bits long. Hint: Prove the contrapositive. Suppose the first user’s share is less than \(\ell\) bits (and that this fact is known to everyone). Show how users 2 through \(t\) can violate security by enumerating all possibilities for the first user’s share.

3.7: \(n\) users have shared two secrets using Shamir secret sharing. User \(i\) has a share \(s_i = (i,y_i)\) of the secret \(m\), and a share \(s'_i = (i,y'_i)\) of the secret \(m'\). Both sets of shares use the same prime modulus \(p\). Suppose each user \(i\) locally computes \(z_i = (y_i + y'_i) \% p\). (a). Prove that if the shares of \(m\) and the shares of \(m'\) had the same threshold, then the resulting \(\{(i,z_i) \mid i \le n\}\) are a valid secret-sharing of the secret \(m + m'\). (b). Describe what the users get when the shares of \(m\) and \(m'\) had different thresholds (say, \(t\) and \(t'\), respectively).

3.8: Suppose there are 5 people on a committee: Alice (president), Bob, Charlie, David, Eve. Suggest how they can securely share a secret so that it can only be opened by: Alice and any one other person; any three people. Describe in detail how the sharing algorithm works and how the reconstruction works (for all authorized sets of users).

3.9: Suppose there are 9 people on an important committee: Alice, Bob, Carol, David, Eve, Frank, Gina, Harold, & Irene.
Alice, Bob & Carol form a subcommittee; David, Eve & Frank form another subcommittee; and Gina, Harold & Irene form another subcommittee. Suggest how a dealer can share a secret so that it can only be opened when a majority of each subcommittee is present. Describe why a 6-out-of-9 threshold secret-sharing scheme does not suffice. Hint:

3.10: Generalize the previous exercise. A monotone formula is a boolean function \(\Phi : \{0,1\}^{n} \rightarrow \{0,1\}\) that when written as a formula uses only AND and OR operations (no NOTs). For a set \(A \subseteq \{1,\ldots,n\}\), let \(\chi_A\) be the bitstring whose \(i\)th bit is 1 if and only if \(i \in A\). For every monotone formula \(\phi : \{0,1\}^{n} \rightarrow \{0,1\}\), construct a secret-sharing scheme whose authorized sets are \(\{A \subseteq \{1,\ldots,n\} \mid \phi(\chi_A) = 1\}\). Prove that your scheme is secure. Hint: express the formula as a tree of AND and OR gates.

3.11: Prove that share \(s_2\) in Construction 3.5.1 is distributed independently of the secret \(m\).

3.12: Construct a 3-out-of-3 visual secret sharing scheme. Any two shares should together reveal nothing about the source image, but all three reveal the source image when stacked together.

3.13: Using actual transparencies or with an image editing program, reconstruct the secret shared in these two images:
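The additivity claim in Exercise 3.7(a) is easy to sanity-check with a toy implementation of Shamir sharing over \(\mathbb{Z}_p\) (all names and parameters below are illustrative, not from the text):

```python
import random

p = 2_147_483_647  # a prime modulus (2^31 - 1)

def share(m, t, n):
    # random polynomial of degree t-1 with constant term m; share i is (i, f(i))
    coeffs = [m] + [random.randrange(p) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0, with inverses via Fermat's little theorem
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

random.seed(1)
m1, m2, t, n = 1234, 5678, 3, 5
s1, s2 = share(m1, t, n), share(m2, t, n)

# Each user adds their two y-values locally, as in Exercise 3.7(a):
zshares = [(i, (y1 + y2) % p) for (i, y1), (_, y2) in zip(s1, s2)]
recovered = reconstruct(zshares[:t])   # any t of the summed shares suffice
```

Summing the shares sums the underlying polynomials, so the result is a valid degree-\((t-1)\) sharing of \(m + m' \bmod p\), which is exactly what reconstruction recovers.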
What are some conditions that ensure that a function $f : \mathbb{R} \to \mathbb{R}$ which is in $L^1_{loc}$ and almost everywhere differentiable (in the classical sense), with derivative in $L^1_{loc}$, has its classical derivative equal to its weak derivative (its derivative in the sense of distributions), i.e.: $$ \forall \phi \in C^{\infty}_c(\mathbb{R}) : \int f'\phi = - \int f \phi'$$ For example, this is false for the characteristic function $\xi_{[0,1]}$, whose weak derivative is the difference of two Dirac measures while its classical derivative is almost everywhere 0. It works, on the other hand, for $C^1$ functions. Does it work for Lipschitz functions? Are there some necessary conditions? Edit: It does work for Lipschitz functions, thanks to the dominated convergence theorem.
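The Lipschitz case can also be sanity-checked numerically: for $f(x)=|x|$ with a.e. derivative $\operatorname{sign}(x)$, both sides of $\int f'\phi = -\int f\phi'$ agree. A sketch using the trapezoid rule; the test function $\phi(x)=x e^{-x^2}$ is not compactly supported, but its rapid decay plays the same role here:

```python
import math

def phi(x):
    return x * math.exp(-x * x)

def dphi(x):
    # phi'(x) = (1 - 2x^2) e^{-x^2}, computed analytically
    return (1 - 2 * x * x) * math.exp(-x * x)

def trapezoid(g, a, b, n):
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

sign = lambda x: (x > 0) - (x < 0)

lhs = trapezoid(lambda x: sign(x) * phi(x), -10, 10, 20000)    # int f' phi
rhs = -trapezoid(lambda x: abs(x) * dphi(x), -10, 10, 20000)   # -int f phi'
```

Both integrals come out close to 1 (the exact common value of $\int_{\mathbb{R}} |x|e^{-x^2}\,dx$), in contrast with $\xi_{[0,1]}$, where the classical-derivative side would miss the two Dirac contributions entirely.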
Nonstandard Constraints and the Power of Weak Contributions Have you ever wanted to add a certain boundary or domain condition to a physics problem but couldn’t find a built-in feature? Today, we will show you how to implement nonstandard constraints using the so-called weak contributions. Weak contributions are, in fact, what the software internally uses to apply the built-in domain and boundary conditions. They provide a flexible and physics-independent way to extend the applicability of the COMSOL Multiphysics® software. Introduction to Weak Contributions Many of the problems solved in COMSOL Multiphysics can be thought of as finding functions that minimize some quantity. In equilibrium problems of elasticity, for example, we look for displacements that minimize the total strain energy. In our blog series on variational problems and constraints, we showed how to use the Weak Form PDE interface to solve both constrained and unconstrained variational problems. We used a generalized constraint framework to deal with all kinds of restrictions on the solution. There, we showed you how to implement both the unconstrained and constrained problem. Often, the quantity that has to be minimized is well understood and what we have to prescribe in our specific situations are the constraints. Ideally, we should not reinvent the wheel on the unconstrained problem. Frequently, constraints are boundary conditions, but sometimes they can be requirements to be satisfied at every point or by an integral of the solution. Several options for boundary conditions and other constraints are built into COMSOL Multiphysics, but from time to time, you may want to add a novel constraint or two. Today, we will see how to do so using weak contributions. 
In this blog post, we:

- Give a quick recap of adding constraints to variational problems
- Use weak contributions to add a nonstandard constraint to a rather well-known equation
- Compare this strategy with a more physically motivated implementation

Adding Constraints to an Extremization Problem

In our blog series on variational problems and constraints, we discussed in detail the analytical and numerical aspects of the subject as well as the COMSOL® software implementation. Readers unfamiliar with the subject will benefit from going over that series. In this section, we summarize the main ideas needed to work through today’s examples. The method of Lagrange multipliers is used to recast constrained variational problems as equivalent unconstrained problems. Consider the constrained variational problem (1) (2) The feasible critical points of this constrained problem are the stationary points of the augmented functional (3) Let’s take the variational derivative of this functional. (4) Say the unconstrained part of the problem is already taken care of and we just want to add what is necessary to enforce a constraint. Our responsibility then is only the second term in the above equation. In COMSOL Multiphysics, when we add a physics interface, it can be thought of as adding an unconstrained variational problem. Afterward, constraints on boundaries and domains can be added through one or several built-in standard boundary conditions. What if we have a nonstandard constraint that is not built in? Using weak contributions gives great flexibility to add such conditions. Going back to the functional above, let us focus on the contributions coming from the constraint. For the distributed constraint above, the Lagrange multiplier $\lambda(x)$ is a function defined over the geometric entity subject to the constraint. For a global constraint such as an integral or average constraint, on the other hand, the Lagrange multiplier is one number.
Say we want to impose the global integral constraint

$$\int_\Omega g(u)\, d\Omega = 0.$$

The augmented functional is

$$L[u,\lambda] = E[u] + \lambda \int_\Omega g(u)\, d\Omega,$$

and its variation is

$$\delta L = \delta E + \lambda \int_\Omega \frac{\partial g}{\partial u}\, \delta u\, d\Omega + \delta\lambda \int_\Omega g(u)\, d\Omega. \tag{5}$$

Thus, if the boundary condition or other constraint you want to enforce on your solution is not built in, but there is a built-in physics interface for the physics, all you need to do is add the last two terms in the above equation using the Weak Contribution node. Let's demonstrate this with an example.

Constraining the Average Vertical Displacement of a Spring

In this example, a spring is rigidly fixed at the bottom end and we want the top end (boundary 4 in the model below) to have an average vertical displacement of 2 cm. This is a linear elasticity problem and this physics is built in. Also, rigidly fixing a face is a standard boundary condition. On the other hand, specifying an average displacement on a face is not. Note that we are not asking for the vertical displacement of each point on the face to be 2 cm. That could have been specified with the built-in Prescribed Displacement node. What we have is the global constraint

$$\frac{1}{A}\int_A w\, dA = dh, \tag{6}$$

where A is the face in question, w is the vertical displacement, and dh is the desired average vertical displacement (2 cm in our case). Here, dh is not a differential of any quantity. It is just the name of a parameter used for the average vertical displacement on the face. We could directly have written 2 cm in its place. With all but this constraint implemented using standard features, our variational problem becomes finding the stationary point of the augmented functional

$$L[\mathbf{u},\lambda] = E[\mathbf{u}] + \lambda \left( \frac{1}{A}\int_A w\, dA - dh \right).$$

The corresponding stationary condition is

$$\delta E + \frac{\lambda}{A}\int_A \delta w\, dA + \delta\lambda \left( \frac{1}{A}\int_A w\, dA - dh \right) = 0. \tag{7}$$

Let us add these two contributions using a boundary weak contribution and a global weak contribution. In the Model Builder, we can distinguish boundary from global weak contributions by their icons: boundary contributions have the same icons as boundary conditions, whereas global contributions have icons with an integral sign (\int).
Additionally, the Settings window for boundary contributions contains a boundary selection section, whereas there is no geometric entity selection for a global contribution.

Boundary and global weak contributions to enforce a constraint on the average displacement over a surface.

Finally, the variable \lambda is an auxiliary global variable we defined in our Lagrange multiplier method. Any new variable related to the constraint has to be defined either in the Auxiliary Variable subnode of a Weak Contribution node or in the Global Equations node, based on the nature of the constraint. In our example, we have a global constraint and, as such, we have to define it using a Global Equations node. Often, an equation will be entered in the Global Equations Settings window as well. This is not necessary here, as we have included in the weak contribution a term containing the variation of the Lagrange multiplier. An alternative, using the Global Equations node to define both the global degree of freedom and its equation, will be discussed later. A global equation adds one degree of freedom to our problem.

Defining an auxiliary global unknown.

If we solve this problem, we get the solution shown below. We can see the value of the Lagrange multiplier and the average displacements in Results > Derived Values. If we look at the vertical displacement on the constrained surface, it is not uniform; it just averages to 2 cm as per the constraint.

We would like to clarify two items about the above implementation. Both the second and third terms in (Eq. 7) contain integrals. In the boundary weak contribution, Weak Contribution 1, we add just the integrand. The integral in the global weak contribution, on the other hand, needs the integration operator to be called explicitly. Alternatively, we could have added the integrand test(lam)*w/Area to Weak Contribution 1 and kept only -test(lam)*dh in Weak Contribution 2. The constraint in this example is global.
For a distributed constraint, the Lagrange multiplier is a function of location, and it has to be defined as an auxiliary variable under the boundary weak contribution. See our blog post on variational constraints for more on this distinction.

Alternative Implementation

The term multiplying the variation of the Lagrange multiplier, \delta \lambda, can be specified in the Global Equations node, where the Lagrange multiplier itself is defined. The screenshots below show how to do so. This only replaces the third term in (Eq. 7). The term not containing \delta \lambda still has to be specified as a boundary weak contribution. Note that intop1() is an integration operator defined over boundary 4 and Area is the area of that boundary, given by intop1(1).

Alternative specification of the weak term containing a variation of a global Lagrange multiplier.

Physical Interpretation of the Lagrange Multiplier

The above solution gives us the displacements and stresses induced by moving boundary 4 by an average vertical displacement of 2 cm. The question is: How do we physically force the structure to conform to our wish? You guessed it: we apply a force. The Lagrange multiplier is related to the force (flux) needed to enforce a constraint. The operative word here is related. Let us see what we mean in detail.

First, let's try an alternative formulation of the constraint. The constraint in (Eq. 6) is mathematically equivalent to

$$\int_A \left( w - dh \right) dA = 0. \tag{8}$$

The augmented functional corresponding to this form of the constraint is

$$L[\mathbf{u},\lambda] = E[\mathbf{u}] + \lambda \int_A \left( w - dh \right) dA,$$

and the corresponding stationary condition is

$$\delta E + \lambda \int_A \delta w\, dA + \delta\lambda \int_A \left( w - dh \right) dA = 0. \tag{9}$$

If we enter the last two terms in this equation as weak contributions and solve, we get a Lagrange multiplier much different from the one obtained in our first implementation. The displacements and stresses nevertheless remain the same. So we can suspect that the Lagrange multiplier in and of itself is not a physical quantity and, as such, cannot tell us what to physically do to enforce a desired constraint.
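This scaling sensitivity is easy to reproduce in a finite-dimensional sketch (my own illustration, not from the original post): multiplying the constraint equation through by a constant leaves the solution untouched but rescales the multiplier inversely.

```python
import numpy as np

def solve_constrained(K, f, c, d):
    """Stationary point of 0.5*u@K@u - f@u + lam*(c@u - d) (assumed sketch)."""
    n = len(f)
    A = np.block([[K, c[:, None]], [c[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(A, np.concatenate([f, [d]]))
    return sol[:n], sol[n]

K = np.array([[2.0, -1.0], [-1.0, 2.0]])
f = np.array([1.0, 0.0])
c = np.array([0.5, 0.5])        # "average displacement" form of the constraint
d = 2.0

u_a, lam_a = solve_constrained(K, f, c, d)
u_b, lam_b = solve_constrained(K, f, 10 * c, 10 * d)   # equivalent constraint

assert np.allclose(u_a, u_b)           # displacements are identical
assert np.isclose(lam_a, 10 * lam_b)   # the multiplier absorbs the scaling
```

The two constraints describe the same feasible set, so the physical solution cannot depend on which form is used; only the bookkeeping quantity lam does.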
One reliable way to find out what we should do physically is to postprocess the results to see reaction forces (fluxes). To rigorously establish what the Lagrange multiplier is physically, we have to look at the unconstrained part of the equation that we have been hiding so far. In today's example, that means looking at the weak form of the solid mechanics equation. For a deformable solid in equilibrium, the weak form, also known as the virtual work equation, is given by

$$\int_\Omega \sigma : \delta\varepsilon\, dV = \int_\Omega \mathbf{b}\cdot\delta\mathbf{u}\, dV + \int_{\partial\Omega} \mathbf{\Gamma}\cdot\delta\mathbf{u}\, dA, \tag{10}$$

where \mathbf{u} = (u,v,w) is the displacement vector and \sigma, \varepsilon, \mathbf{b}, and \mathbf{\Gamma} are respectively the stress, strain, body load per unit volume, and boundary load. The weak form for any COMSOL Multiphysics physics interface can be viewed by enabling the Equation View.

If we compare (Eq. 10) with (Eq. 9) and (Eq. 7), we see that the Lagrange multipliers in today's example appear in the same place as the boundary load on the constrained surface. One difference is that the Lagrange multiplier appears outside the surface integral, whereas the boundary load is inside the integral. This stems from the global nature of our constraint. A second difference is that the Lagrange multiplier in our example goes with the vertical displacement w, whereas the surface load in the solid mechanics equation is dot-multiplied by the variation of the displacement vector. Let us reconcile these items one at a time. If the boundary load acts only vertically, \mathbf{\Gamma} = t\,\hat{\mathbf{e}}_z, the boundary term in (Eq. 10) reduces to

$$\int_A t\, \delta w\, dA.$$

Now we see that our Lagrange multiplier is related to the vertical component of a boundary load. Finally, if the boundary load is constant over the face, we have

$$\int_A t\, \delta w\, dA = t \int_A \delta w\, dA = \frac{tA}{A}\int_A \delta w\, dA.$$

Comparing this with (Eq. 7), we see that in our first implementation the Lagrange multiplier corresponds to \lambda = tA, the total vertical boundary load. Now that we know that, for this specific physics and this specific form of the constraint equation, the Lagrange multiplier is the total vertical load on a face, we can use the built-in Boundary Load node to enforce the constraint instead of the weak contribution.
This process is shown in the loaded spring example in the Application Gallery.

Alternative implementation when we know what the Lagrange multiplier physically corresponds to.

This last implementation can be thought of as asking the software to apply whatever total vertical force is required to enforce an average vertical displacement. We could do that because, by looking at the weak form of the solid mechanics equation, we identified the correspondence between the Lagrange multiplier and the total force. It is not always possible to make such connections with a standard force (flux) term. Note that we could have used the default distributed boundary load instead, which would only have changed the number of extra degrees of freedom used internally.

Generally speaking, for dimensional consistency, in (Eq. 3) the product of the Lagrange multiplier and the constraint g should give the density of the "energy" E per unit volume, area, or length, depending on the geometric entity the constraint is applied on. In many engineering problems, E literally represents energy and, as such, we say the Lagrange multiplier is energetically conjugate with the constraint. This means, for example, that if we scale, square, or do any operation that mathematically changes the unit of the constraint equation, then the unit, and thus the physical meaning, of the Lagrange multiplier changes. What does not change, however, is the unit of \lambda \frac{\partial g}{\partial u}, as this product is always energetically conjugate with u (Eq. 5). It is this product of the Lagrange multiplier and the constraint Jacobian that is a force (flux) density in a generalized sense. If \frac{\partial g}{\partial u}=1, which is often the case with linear constraints, the Lagrange multiplier is itself the generalized force (flux). The beauty of the weak contribution is that you can enforce the constraint without having to go through the weak form of the built-in physics.
Then, you can postprocess the result to find out the physical course of action. The implementation is physics independent.

Concluding Thoughts on Weak Contributions

Today, we have discussed how the COMSOL Multiphysics software facilitates the implementation of nonstandard boundary conditions. Using weak contributions, we have a flexible and physics-independent strategy for adding constraints that are not used frequently enough to be standard features in the software. The mathematical roots of this method are in problems where the solution minimizes some quantity, but the strategy can be used for problems given by partial differential equations that do not have a corresponding variational formulation. For more background information on this topic, we recommend our blog series on the weak form and on variational problems.

Next Steps

If you have any questions on weak contributions or another topic, or want to learn more about how the features and functionality in COMSOL Multiphysics suit your modeling needs, you're welcome to contact us. Check out the following Application Gallery examples and blog posts for more demonstrations of using weak contributions and extra equations in various physics areas:
In some book about continuum mechanics I read that the balance of rotational momentum follows from the principle of virtual work when $\delta \boldsymbol{r} = \boldsymbol{\delta \varphi} \times \boldsymbol{r}$, $\boldsymbol{\delta \varphi} = \boldsymbol{\mathsf{const}}$ (here $\boldsymbol{r}$ is the location vector, $\delta \boldsymbol{r}$ is its variation, and $\boldsymbol{\delta \varphi}$ is not itself a variation; it is merely written that way to suggest a rotation small enough to make $\delta \boldsymbol{r}$ infinitesimal). Then, without any explanation, the book states $\boldsymbol{\nabla} \delta \boldsymbol{r} = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$. I know that $\boldsymbol{E}$ is the bivalent "metric unit identity" tensor (the one which is neutral with respect to the dot product), and that $\boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{E}$. I also know that $\boldsymbol{a} \times \boldsymbol{E} = \boldsymbol{E} \times \boldsymbol{a} \:\: \forall\boldsymbol{a}$, with no minus sign. To get a minus sign, transposition is needed: $\left( \boldsymbol{E} \times \boldsymbol{\delta \varphi} \right)^{\mathsf{T}} \! = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$. Thus I can't see why $\boldsymbol{\nabla} \delta \boldsymbol{r} = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$ has a minus sign. For constant $\boldsymbol{\delta \varphi}$, $\boldsymbol{\nabla} \boldsymbol{\delta \varphi} = {^2\boldsymbol{0}}$ (the bivalent zero tensor). Isn't it true that $\boldsymbol{\nabla} \! \left( \boldsymbol{\delta \varphi} \times \boldsymbol{r} \right) = \boldsymbol{\delta \varphi} \times \boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{\delta \varphi} \times \boldsymbol{E} = \boldsymbol{E} \times \boldsymbol{\delta \varphi}$? Searching for how to take the gradient of a cross product of two vectors turns up the gradient of a dot product, the divergence ($\boldsymbol{\nabla} \cdot$) of a cross product, and many other relations, but no gradient of a cross product $\boldsymbol{\nabla} \!
\left( \boldsymbol{a} \times \boldsymbol{b} \right) = \ldots$ Is it impossible, or is it simply unknown how to find it? At least for the case when the first vector is constant.

update

As "gradient" I mean the tensor product with "nabla" $\boldsymbol{\nabla}$: $\operatorname{^{+1}grad} \boldsymbol{A} \equiv \boldsymbol{\nabla} \! \boldsymbol{A}$, where $\boldsymbol{A}$ may be a tensor of any valence (and I don't use "$\otimes$" or any other symbol for the tensor product). Nabla (Hamilton's differential operator) is $\boldsymbol{\nabla} \equiv (\sum_i)\, \boldsymbol{r}^i \partial_i$, with $(\sum_i)\, \boldsymbol{r}^i \boldsymbol{r}_i = \boldsymbol{E} \,\Leftrightarrow\, \boldsymbol{r}^i \cdot \boldsymbol{r}_j = \delta^{i}_{j}$ (Kronecker's delta), $\boldsymbol{r}_i \equiv \partial_i \boldsymbol{r}$ (basis vectors), $\partial_i \equiv \frac{\partial}{\partial q^i}$, where $\boldsymbol{r}(q^i)$ is the location vector and $q^i$ $(i = 1, 2, 3)$ are coordinates.
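A quick Cartesian sanity check (my own numerics, independent of the book's dyadic conventions) shows the crux: the gradient tensor $G_{ij} = \partial_i (\boldsymbol{\delta\varphi} \times \boldsymbol{r})_j$ is antisymmetric, so contracting it from the left and from the right gives results differing by exactly the sign in question. Whether one writes the tensor as $\boldsymbol{E} \times \boldsymbol{\delta\varphi}$ or its negative therefore depends on the book's convention for which slot of the dyadic is contracted.

```python
import numpy as np

dphi = np.array([0.3, -1.2, 0.7])     # constant vector playing delta-phi
r0 = np.array([1.0, 2.0, -0.5])
h = 1e-6

def delta_r(r):
    return np.cross(dphi, r)          # delta r = dphi x r

# G[i, j] = d(delta_r_j)/d(r_i) by central finite differences
G = np.empty((3, 3))
for i in range(3):
    e = np.zeros(3); e[i] = h
    G[i] = (delta_r(r0 + e) - delta_r(r0 - e)) / (2 * h)

a = np.array([0.5, 0.1, 2.0])
assert np.allclose(G, -G.T, atol=1e-8)                     # antisymmetric
assert np.allclose(a @ G, np.cross(dphi, a), atol=1e-6)    # left contraction
assert np.allclose(G @ a, np.cross(a, dphi), atol=1e-6)    # right contraction
```

In index form, $G_{ij} = \varepsilon_{jli}\,\delta\varphi_l$, so the two contractions give $\boldsymbol{\delta\varphi} \times \boldsymbol{a}$ and $\boldsymbol{a} \times \boldsymbol{\delta\varphi}$ respectively; the transposition identity quoted in the question is what carries the minus sign between the two readings.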
First, note that an isomorphism is a map between two objects which preserves the structure. The more structure, the "fewer" isomorphisms you might have. Isomorphism is an equivalence relation: the identity map gives us reflexivity; the fact that the inverse of an isomorphism is also an isomorphism gives symmetry; and by composing isomorphisms we have transitivity. This means that "locally" isomorphism is an equivalence relation on a set. What does locally mean? If you take a collection of structures which forms a set, then the isomorphism classes are equivalence classes, which are sets.

Example: If we have two countable sets without any structure, then any bijection is an isomorphism of sets. However, if we give them an ordering we may have fewer such maps, and if we insist on well-ordering the sets then there is a unique isomorphism (if one exists to begin with).

When we say that $X$ is a topological space, or rather that $(X,\tau)$ is a topological space, we endow the set $X$ with some structure; in this case, a family of subsets of $X$ which has certain properties. So what is an isomorphism between two topological spaces? Firstly, it has to be a bijection (as any isomorphism is), but it also has to preserve open sets. It is an open map, as well as a continuous map (since we want the inverse map to be open as well). If so, an isomorphism between topological spaces is exactly a homeomorphism.

Of course, if the topology allows us, we can ask for more. If the topology comes from a metric, then we can ask for an isometric map, which preserves not only open sets but also distances. We can ask for a differentiable or measurable homeomorphism if the structure allows us.

For your second point, here are three possible solutions. Firstly, if we agree that sets of the same cardinality may carry the same topologies, we can choose a representative set of each cardinality and consider only topologies defined on this set.
This allows us simply to take the set of all topologies defined on $X$, which is a set since it is a subset of $P(P(X))$. For example, if we wish to work with finite-dimensional real vector spaces, for the sake of argument we can assume that the underlying set is always the same. It is a set-theoretical cheat, since it prevents us from having $\mathbb R\subseteq\mathbb R^2$. However, the latter is already an abuse of notation, since $\mathbb R^2$ is a set of pairs, while $\mathbb R$ is not.

Secondly, we can use Scott's trick (named after Dana Scott), which is to use the axiom of foundation (known as the axiom of regularity in some places) and define the equivalence classes as sets in the following way: $$[(X,\tau)] = \{(Y,\rho)\mid (Y,\rho)\cong(X,\tau)\land\operatorname{rank}(Y)\text{ is minimal}\}$$ That is, we use the fact that every set can be given a rank, and that the collection of sets of a given rank is indeed a set. Now we can take all the topological spaces homeomorphic to $(X,\tau)$ whose rank is the least possible.

Lastly, we can stay with classes (or move to a set theory which allows classes, such as Von Neumann–Bernays–Gödel set theory). Classes are syntactical objects. They are defined by a formula, perhaps with parameters. In this case, the equivalence class of topological spaces homeomorphic to $(X,\tau)$ can be defined using $(X,\tau)$ as a parameter.
Operator product expansion says that the product of two primary fields (of the same dimension, in this case) can be expanded as a sum of primaries and their descendants $$\phi_1(x)\phi_2(0) = {\Large \Sigma_\mathcal{O}}\lambda_\mathcal{O}C_\mathcal{O}(x,\partial_y)\mathcal{O}(y)|_{y=0} $$ where the summation $\Sigma_\mathcal{O}$ is over primaries. Descendants appear when acted upon by the derivatives in $C_\mathcal{O}(x,\partial_y)$. Considering the three-point function, and since we know two-point functions are diagonal, we get \begin{equation}\langle\phi_1(x)\phi_2(0)\Phi(z)\rangle = \lambda_\Phi C_\Phi(x,\partial_y)\langle\Phi(y)\Phi(z)\rangle|_{y=0} \hspace{0.2cm} (1) \end{equation} Now using the known forms of the two- and three-point functions below $$\langle\phi_1(x_1)\phi_2(x_2)\Phi(x_3)\rangle = \frac{\lambda_\Phi }{|x_{12}|^{\Delta_1+\Delta_2-\Delta_3}|x_{23}|^{\Delta_2+\Delta_3-\Delta_1}|x_{13}|^{\Delta_1+\Delta_3-\Delta_2}} $$ $$ \langle \Phi(y)\Phi(z)\rangle = \frac{1}{|y-z|^{2\Delta_\Phi}} $$ one is supposed to fix the constants $\alpha, \beta$ in $C_\Phi(x,\partial_y)$ by assuming a form $$ C_\Phi(x,\partial_y) = \frac{1}{|x|^{2\Delta -\Delta_\Phi}}\Big[1+ \frac{1}{2}x^\mu\partial_\mu + \alpha x^\mu x^\nu \partial_\mu\partial_\nu + \beta x^2\partial^2 + ...\Big] $$ Here, the dimensions of $\phi_1$ and $\phi_2$ are each $\Delta$ and that of $\Phi$ is $\Delta_\Phi$. Now I can see that the LHS of (1), i.e. the three-point function with insertions at $x,0,z$, is $$\langle\phi_1(x)\phi_2(0)\Phi(z)\rangle = \frac{\lambda_\Phi }{|x|^{2\Delta -\Delta_\Phi}|z|^{\Delta_\Phi}|z-x|^{\Delta_\Phi}} $$ and the leading term when expanding about $x = 0$, namely $\frac{\lambda_\Phi }{|x|^{2\Delta -\Delta_\Phi}|z|^{2\Delta_{\Phi}}}$, matches the RHS of eq. (1), but I can't figure out how to find the coefficients of the higher-order terms. I find myself trying to evaluate a binomial expansion of $|z-x|^{-\Delta_\Phi}$ where now the points are in $D$-dimensional space, and that I am not able to do.
I am only trying to go up to second order. Any help is appreciated.
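For the expansion of the $|z-x|^{-\Delta_\Phi}$ factor in $D$ dimensions, no closed form for $|z-x|$ is needed: writing $|z-x|^{-a} = (z\cdot z - 2\,z\cdot x + x\cdot x)^{-a/2}$ reduces everything to an ordinary Taylor expansion in the scalars $z\cdot x$ and $x\cdot x$. A numeric sketch of the second-order result (my own check; $a$ stands in for the generic exponent):

```python
import numpy as np

def second_order(z, x, a):
    # |z - x|^(-a) ≈ s^(-a/2) [1 + a v/s + (a(a+2)/2) v²/s² - (a/2) u/s],
    # with s = z·z, v = z·x, u = x·x; valid in any dimension D
    s, v, u = z @ z, z @ x, x @ x
    return s**(-a/2) * (1 + a*v/s + a*(a + 2)/2 * v**2/s**2 - a/2 * u/s)

z = np.array([1.0, 0.5, -0.3, 2.0])   # D = 4, arbitrary insertion point
x = np.array([0.7, -1.0, 0.2, 0.4])   # direction of the small separation
a = 1.7                               # generic exponent (plays Delta_Phi)

for eps in (1e-2, 1e-3):
    exact = np.sum((z - eps * x)**2)**(-a / 2)
    approx = second_order(z, eps * x, a)
    # truncation error is third order in the separation
    assert abs(exact - approx) / exact < 100 * eps**3
```

Matching these $x^\mu$ and $x^\mu x^\nu$ terms against the action of $C_\Phi(x,\partial_y)$ on $|y-z|^{-2\Delta_\Phi}$ is then an order-by-order comparison in the same two scalars.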
Definition:Natural Logarithm/Complex

Definition

Let $z = r e^{i \theta}$ be a complex number expressed in exponential form such that $z \ne 0$.

Definition 1: The complex natural logarithm of $z \in \C_{\ne 0}$ is the multifunction defined as:

$\map \ln z := \set {\map \ln r + i \paren {\theta + 2 k \pi}: k \in \Z}$

Definition 2: The complex natural logarithm of $z$ is the multifunction defined as:

$\map \ln z := \set {w \in \C: e^w = z}$

Two conventions for the principal branch are in common use:

$\map \Ln z = \map \ln r + i \theta$ for $\theta \in \hointr 0 {2 \pi}$

$\map \Ln z = \map \ln r + i \theta$ for $\theta \in \hointl {-\pi} \pi$

It is important to specify which is in force during a particular exposition.

The notation for the natural logarithm function is misleadingly inconsistent throughout the literature. It is written variously as:

$\ln z$

$\log z$

$\log_e z$

The first of these is commonly encountered, and is the preferred form on $\mathsf{Pr} \infty \mathsf{fWiki}$. However, many who consider themselves serious mathematicians believe this notation to be unsophisticated. The second is ambiguous (it does not tell you which base of logarithm is meant). The third, while more verbose than the others, leaves no confusion about exactly what is meant.

Examples:

$\ln \paren {-1} = \paren {2 k + 1} \pi i$ for all $k \in \Z$.

$\ln \paren {-2} = \ln 2 + \paren {2 k + 1} \pi i$ for all $k \in \Z$.

$\ln \paren i = \paren {4 k + 1} \dfrac {\pi i} 2$ for all $k \in \Z$.

$\ln \paren {1 - i \tan \alpha} = \ln \sec \alpha + i \paren {-\alpha + 2 k \pi}$ for all $k \in \Z$.

Also see

Results about logarithms can be found here.
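As a side note (my own illustration, not part of the page), most programming languages implement the second principal-branch convention, with $\theta \in \hointl {-\pi} \pi$; Python's cmath reproduces the examples above for $k = 0$:

```python
import cmath
import math

# principal branch: Python returns Im(log z) in (-pi, pi]
assert cmath.isclose(cmath.log(-1), complex(0, math.pi))            # (2k+1) pi i, k = 0
assert cmath.isclose(cmath.log(-2), complex(math.log(2), math.pi))  # ln 2 + (2k+1) pi i, k = 0
assert cmath.isclose(cmath.log(1j), complex(0, math.pi / 2))        # (4k+1) pi i / 2, k = 0

# every branch value w = Ln z + 2 pi i k satisfies e^w = z (Definition 2)
z = -2 + 0j
for k in range(-2, 3):
    w = cmath.log(z) + 2j * math.pi * k
    assert cmath.isclose(cmath.exp(w), z)
```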
You actually do recover the convolution but, as discussed in the comments, there is a normalization issue due to discretization. According to the documentation, fft is implemented like this: $$ A_k = \sum_{m=0}^{n-1} a_m \exp \{ - 2\pi i \frac{mk}{n} \} $$ with $A_k$ being the Fourier coefficients, $a_m$ the $m$-th element of your signal vector and $n$ the length of the signal. Squaring this gives you $$ A_k^2 = \sum_{m=0}^{n-1} \sum_{m'=0}^{n-1} a_m a_{m'} \exp\{ - 2\pi i \frac{(m+m')k}{n} \} $$ Now, applying ifft to the squared Fourier transform gives you, using the ifft definition from the documentation: $$ \text{ifft}(A_k^2)_{m''} = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{m=0}^{n-1} \sum_{m'=0}^{n-1} a_m a_{m'} \exp\{ - 2\pi i \frac{(m+m'-m'')k}{n} \} $$ With the observation that $$ \frac{1}{n} \sum_{k=0}^{n-1} \exp\{ - 2\pi i \frac{(m+m'-m'')k}{n} \} = \delta_{m+m', m''} $$ (with the Kronecker delta understood modulo $n$), you end up with $$ \text{ifft}(A_k^2)_{m''} = \sum_{m=0}^{n-1} \sum_{m'=0}^{n-1} a_m a_{m'} \delta_{m+m', m''} = \sum_{m=0}^{n-1} a_m a_{m'' - m} $$ This is actually how np.convolve is defined (except for some padding, and with the indices here wrapping around modulo $n$, i.e. a circular convolution). If you use np.convolve on your data, you end up with the same result (except for some padding), so within the numpy world you did exactly what you set out to do, i.e. verify the convolution property of the Fourier transform. As noted in the comments, however, neither fft nor convolve "knows" anything about your discretization, so you have to take care of that manually by multiplying the results with dt.
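The chain above is easy to verify numerically (my own check): the plain FFT route gives the circular self-convolution, and zero-padding to length $2n-1$ removes the wrap-around so that np.convolve is recovered exactly.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
n = len(a)

# FFT route: this is the *circular* self-convolution sum_m a_m a_{(k-m) mod n}
circ = np.real(np.fft.ifft(np.fft.fft(a)**2))
direct = np.array([sum(a[m] * a[(k - m) % n] for m in range(n)) for k in range(n)])
assert np.allclose(circ, direct)

# zero-padding to length 2n-1 removes the wrap-around and matches np.convolve
lin = np.real(np.fft.ifft(np.fft.fft(a, 2*n - 1)**2))
assert np.allclose(lin, np.convolve(a, a))
```

For a sampled approximation of the continuous convolution, multiply the result by the sample spacing dt, as the comments note.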
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...

That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time?

Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?

@tpg2114 To reduce the number of data points for calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all the snapshots and spatial points.

@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis.

So what I don't understand:

1) Why is the correlation value negative when they look pretty positively correlated to me?

2) Why is the result from the correlation function 400 time steps long?

3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time.

Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...

So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
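Points 2 and 3 above can be checked on a synthetic trace (my own example; for 1-D real signals np.correlate uses the same 'full' output convention as scipy.signal.correlate): the 'full' output has length len(x) + len(y) - 1, which is where the roughly 400 samples come from, and the lag is the argmax index shifted by len(x) - 1.

```python
import numpy as np

n = 200                                   # 200 samples, as in the plot
idx = np.arange(n)
d = 7                                     # hypothetical lag in samples
x = np.exp(-((idx - 60) / 4.0)**2)        # reference pulse ("strain")
y = np.exp(-((idx - 60 - d) / 4.0)**2)    # same pulse, delayed by d samples

c = np.correlate(y, x, mode='full')       # length 2n - 1 = 399
lags = np.arange(-(n - 1), n)             # lag axis matching c
assert len(c) == 2 * n - 1
assert lags[np.argmax(c)] == d            # argmax recovers the lead/lag
```

On point 1, in practice one usually subtracts each signal's mean before correlating, so the sign of the peak reflects the correlation rather than the signals' offsets; multiply each lag by the 1e-9 s sample spacing to convert to a time delay.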
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \... Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. 
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
I want to know the hybridization of the central atom in $\ce{(SiH3)3N}$. I think it should be $\mathrm{sp^3}$, because $\ce{N}$ is attached to three silicon atoms and one lone pair. But actually it is supposedly $\mathrm{sp^2}$. How is this so?

Ordinarily and according to Bent’s rule, we would expect nitrogen’s lone pair to be in an $\mathrm s$ orbital and nitrogen using its three $\mathrm p$ orbitals to form three bonds to the three silicon atoms. This configuration would allow for the greatest stabilisation. However, due to nitrogen’s small size, this perfect world already falls apart for ammonia ($\ce{NH3}$), where nitrogen is bound to three otherwise tiny hydrogen atoms. Because an electronically perfect angle of $90^\circ$ would generate much too much steric strain between the hydrogen atoms, $\mathrm s$ contribution is mixed into the bonding $\mathrm p$ orbitals to a certain extent; for ammonia, this extent happens to be almost perfect $\mathrm{sp^3}$ — results for other amines will vary. This electronic situation is not ideal, however it is clearly better than having $\mathrm{sp^2}$ hybridisation and the lone pair in a $\mathrm p$ type orbital. An $\mathrm{sp^2}$ hybridisation of nitrogen in ammonia can be reached, but only as the transition state of nitrogen inversion. Carrying on to the compound $\ce{N(SiH3)3}$, we would be inclined to again assume a hybridisation of $\mathrm{sp^3}$ in line with the previous paragraph.
However, Beagley and Conrad performed electron diffraction studies on $\ce{N(SiH3)3}$ and found the molecule to be practically planar within experimental error. [1,2] A planar molecule without doubt means that nitrogen is $\mathrm{sp^2}$-configured in $\ce{N(SiH3)3}$. The question remains why. There must be some kind of stabilising interaction of nitrogen’s remaining $\mathrm p$ orbital with something else to keep that molecule planar. Beagley and Conrad suggest — in line with what was thought at the time — that this be due to π bonds with silicon’s remote $\mathrm d$ orbitals. [1] Much evidence, a good deal of which is collected on this site, speaks to the contrary (namely, that $\mathrm d$ orbitals do not play any significant role in the bonding of main-group elements). Instead, I think we are dealing with something you may call ‘inverse hyperconjugation’. Remember that $\chi(\ce{Si}) = 1.9$, which is less than that of hydrogen, meaning that the $\ce{Si-H}$ bonds are polarised towards hydrogen. This in turn means that $\sigma^*_{\ce{Si-H}}$ is a silicon-centred orbital with its primary lobe pointing towards nitrogen. Therefore, nitrogen’s $\mathrm p$ orbital can favourably interact with the antibonding $\sigma_{\ce{Si-H}}^*$ orbital, increasing the $\ce{Si-N}$ bond order and decreasing the $\ce{Si-H}$ bond order. The effects are the same as with hyperconjugative stabilisation of secondary or tertiary carbocations, but the electronic demand is reversed. We could attempt to draw the following resonance structures in Lewis formalism to explain this: $$\ce{H-SiH2-N(SiH3)2 <-> \overset{-}{H}\bond{...}SiH2=\overset{+}{N}(SiH3)2}\tag{1}$$ In this Lewis formalism, the double bond would be generated from a $\mathrm p$ orbital on both silicon and nitrogen. Note and references: [1]: B. Beagley, A. R. Conrad, Trans. Faraday Soc. 1970, 66, 2740–2744. DOI: 10.1039/TF9706602740. [2]: Actually, $\angle(\ce{Si-N-Si}) \approx 119.5^\circ < 120^\circ$.
The authors state: [1]

The apparent slight deviation from planarity is associated with a shrinkage effect [11] on the $\ce{Si\dots Si}$ distance of about $\require{mediawiki-texvc}\pu{0.007 \AA}$ (see [table]). Spectroscopic results [12] are entirely in agreement that the molecule is planar.

[11] A. Allmenningen, O. Bastiansen and T. Munthe-Kaas, Acta Chem. Scand., 1956, 10,261. [sic!]
[12] E. A. V. Ebsworth, J. R. Hall, M. J. Mackillop, D. C. McKean, N. Sheppard and L. A. Woodward, Spectrochim Acta, 1958, 13,202. [sic!]

In order to test Jan's argument, I did an NBO analysis of your structure (optimised at PBE-D3/def2-SVP with NWChem 6.6, using a conformational search with MMFF94s and Avogadro as the starting point; a frequency calculation determined it was a true minimum).

Figure 1: optimised geometry (angles in degrees and distances in angstrom)

The obtained geometry is in perfect agreement with Jan's answer, showing a $\ce{Si-N-Si}$ angle of 120°. The most significant NBO second-order stabilisation energies are:

Donor NBO (i) | Acceptor NBO (j) | E(2) / kcal/mol | E(j)-E(i) / a.u. | F(i,j) / a.u.
--------------|------------------|-----------------|------------------|--------------
LP ( 1) N 1 | BD*( 1)Si 2- H 6 | 5.08 | 0.43 | 0.043
LP ( 1) N 1 | BD*( 1)Si 2- H 7 | 5.08 | 0.43 | 0.043
LP ( 1) N 1 | BD*( 1)Si 3- H 8 | 5.08 | 0.43 | 0.043
LP ( 1) N 1 | BD*( 1)Si 3- H 9 | 5.08 | 0.43 | 0.043
LP ( 1) N 1 | BD*( 1)Si 4- H12 | 5.08 | 0.43 | 0.043
LP ( 1) N 1 | BD*( 1)Si 4- H13 | 5.08 | 0.43 | 0.043

That is, a six-fold $\ce{n_\ce{N}} \rightarrow \sigma^*(\ce{Si-H})$ donation, worth 5.08 kcal/mol each, seems to be the most significant delocalisation.
Figure 2: $\ce{n_\ce{N}} \rightarrow \sigma^*(\ce{Si-H})$ delocalisation scheme

On the other hand, the natural electron configurations are as follows:

Atom | Natural Electron Configuration
-----|-------------------------------
N | [core] 2s(1.53) 2p(5.12)
Si | [core] 3s(1.01) 3p(1.96) 3d(0.02)
H | 1s(1.15)

Thus, the electron configuration of $\ce{N}$, according to NBO analysis, is $\ce{1s^{2} 2s^{1.53} 2p^{5.12}}$. Furthermore, the nitrogen lone pair is of pure $\pi$ character. Looking closer at the $\ce{N-Si}$ bond we see:

(Occupancy) Bond orbital / Coefficients / Hybrids
1. (1.97614) BD ( 1) N 1 - Si 2
   (81.00%) 0.9000* N 1 s(33.32%) p2.00(66.66%) d0.00(0.02%)
   (19.00%) 0.4359* Si 2 s(23.84%) p3.16(75.42%) d0.03(0.74%)

That is, the nitrogen hybrids forming the $\ce{N-Si}$ bonds are essentially $\mathrm{sp^2}$ (33% s, 67% p character), in agreement with Jan's answer.

We call this weird thing back bonding. The lone pair kind of delocalises, or seeks refuge, in the empty d orbitals of Si, basically providing each $\ce{N-Si}$ bond, on average, a third of an extra bond.
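As a cross-check on the geometry alone, Coulson's directionality theorem ties the angle $\theta$ between two equivalent $\mathrm{sp}^{\lambda^2}$ hybrids to $\lambda^2$ via $1+\lambda^2\cos\theta=0$; the $120^\circ$ angle above translates directly into $\mathrm{sp^2}$ hybrids. A small sketch (the script and its names are mine, not part of the answer):

```python
import math

def hybridization_index(theta_deg):
    """Coulson's directionality theorem: two equivalent sp^(lambda^2)
    hybrids at interorbital angle theta satisfy 1 + lambda^2*cos(theta) = 0."""
    return -1.0 / math.cos(math.radians(theta_deg))

lam2 = hybridization_index(120.0)   # Si-N-Si angle in N(SiH3)3
print(round(lam2, 6))               # -> 2.0, i.e. sp2 hybrids

# Consistent with the NBO output above: s fraction = 1/(1 + lambda^2)
s_percent = 100 / (1 + lam2)
print(round(s_percent, 2))          # -> 33.33, matching s(33.32%)
```

For comparison, plugging in the tetrahedral angle $109.47^\circ$ returns $\lambda^2 \approx 3$, i.e. $\mathrm{sp^3}$, the configuration the question originally expected.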
Let me look at the Hamiltonian of a charged particle in a plane in a constant magnetic field ($\vec{B}$) pointing upwards - then in usual notation it is, $$\hat{H} = \frac{1}{2m}\biggl(\hat{p} + \frac{e}{c}\hat{A}(\hat{r})\biggr)^2$$ To convert this into the Feynman-path-integral language, I pick, say, the gauge $\vec{A}=(-\frac{B}{2}y,\frac{B}{2}x)$, and then in this gauge $\hat{p}$ and $\hat{A}$ commute, which makes rewriting in the path-integral language much easier. If I put this through the usual process of "deriving" a Feynman path integral then I would get the expression, $$\int \bigl[\mathcal{D}\vec{r}\bigr]\bigl[\mathcal{D}\vec{p}\bigr] \exp\biggl[i \int dt \biggl(\vec{p}\cdot\dot{\vec{r}} - \frac{1}{2m}\Bigl(\vec{p}+\frac{e}{c}\vec{A}\Bigr)^2\biggr)\biggr]$$ Is it obvious (or true) that the above expression is independent of the gauge I chose for the calculation? Is there a way to write the system in the path-integral language without explicitly choosing a gauge? (I have worked through calculations in Yang-Mills theory in the Faddeev-Popov approach, which does exactly that in those cases, but I can't see a way out here...) Couldn't I have written down the above path integral without even going through the usual process of finding infinitesimal transition amplitudes and then collecting them together? I mean, how often is it safe to say that for a Hamiltonian $H(p,q)$ the path-integral representation of the transition amplitude will be $\int [\mathcal{D}p][\mathcal{D}q]e^{i\int dt (p\dot{q}-H(q,p)) }$?
Now Heisenberg's equation of motion tells us that, $$\frac{d\vec{r}}{dt} = \frac{1}{m} \biggl(\vec{p} + \frac{e}{c}\vec{A}\biggr)$$ In the Feynman path integral the position and the momentum vectors are treated as independent variables, and hence it would be wrong to substitute the above expression into the path integral, but outside it one can, I guess, do this substitution, and one would get for the action (whatever sits in the exponent), $$S = \int dt \biggl(\frac{p^2}{2m} - \frac{e^2}{2mc^2}A^2\biggr)$$ But the above expression doesn't look right!? The integrand isn't what the Lagrangian should be, right? Now, in the derivation of the Feynman path integral, if one integrates out the momentum for every infinitesimal transition amplitude and then reconstitutes the path integral, then one gets the expression, $$\int [\mathcal{D}\vec{r}] \exp\biggl[i\int dt \biggl(\frac{m}{2}\dot{\vec{r}}^2 - \frac{e}{c}\dot{\vec{r}}\cdot\vec{A}\biggr)\biggr]$$ Now what sits in the exponent is the "correct" Lagrangian, I would think. Why did the answer differ in the two different ways of looking at it? I wonder if, given a Hamiltonian, its corresponding Lagrangian can be "defined" as whatever pops out in the exponent if that system is put through this Feynman rewriting. In this case, keeping to just classical physics, I am not sure how to argue that $\frac{m}{2} \dot{\vec{r}}^2 - \frac{e}{c} \dot{\vec{r}}\cdot\vec{A}$ is the Lagrangian for the system with the Hamiltonian $\frac{1}{2m}\bigl(\vec{p} + \frac{e}{c}\vec{A}(\vec{r})\bigr)^2$ I think I have seen examples on curved space-time where the "classical" Lagrangian differs from what pops out in the exponent when the system is path-integrated.
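For what it's worth, the two routes can be reconciled by completing the square in $\vec p$ before integrating it out; the following rearrangement (my own, not from the question) shows where the naive substitution goes wrong:

```latex
\vec p\cdot\dot{\vec r}-\frac{1}{2m}\Bigl(\vec p+\frac{e}{c}\vec A\Bigr)^2
  \;=\; -\frac{1}{2m}\Bigl(\vec p+\frac{e}{c}\vec A-m\dot{\vec r}\Bigr)^2
        \;+\;\frac{m}{2}\dot{\vec r}^2\;-\;\frac{e}{c}\,\dot{\vec r}\cdot\vec A
```

The first term on the right is a shifted Gaussian in $\vec p$ that integrates to a field-independent constant at each time slice, leaving exactly the exponent $\frac{m}{2}\dot{\vec r}^2-\frac{e}{c}\dot{\vec r}\cdot\vec A$. Substituting the stationarity condition $\dot{\vec r}=\frac{1}{m}(\vec p+\frac{e}{c}\vec A)$ into the Hamiltonian alone, without tracking the corresponding change in the $\vec p\cdot\dot{\vec r}$ term, is what produced the odd-looking action.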
I am attempting to calculate the unconditional variance of an E-GARCH model: $$\log(h_{t+1}) = \beta_{0} + \beta_{1}\log(h_{t}) + \beta_{2}\left[|\varepsilon_{t} - \lambda| + \gamma(\varepsilon_{t} - \lambda) \right]$$ where $\varepsilon_{t} \sim \mathcal{N}(0,1)$ under the LRNVR measure, $\mathcal{Q}$. I can calculate the unconditional variance of the GARCH and the GJR-GARCH models quite easily, though I am completely stumped on this one. I manipulated it down to: $$h_{t+1} = h_{t}^{\beta_{1}}e^{\beta_{0} - \gamma\beta_{2}\lambda}e^{\beta_{2}(|\varepsilon - \lambda| + \gamma\varepsilon)}$$ though I may be wrong in this manipulation (Thank you for pointing out my error, Quantuple). If I am correct, I do not know how to continue from here. I eventually want to use this result for option pricing under an E-GARCH model, hence the need to calculate this parameter. EDIT: After reading Daniel Nelson's paper, and Option Pricing Using GARCH Models: An Empirical Examination by Caroline Sasseville, I have an expression for $\mathbb{E}[h_{t}]$. It is quite large and it has to be estimated, but the start of it is as follows: $$\mathbb{E}[h_{t}] = (h_{1})^{\beta_{1}^{i-1}}\left(e^{\beta_{0}\frac{\beta_{1}^{i-1}-1}{\beta_{1}-1}}\right)2^{1-i}\prod_{k=1}^{i-1} f(\lambda, \gamma, k)$$ I understand how to compute this, but I am perplexed as to what $h_{1}$ is. I interpret $h_{1}$ as the initial conditional variance, which is usually set to the unconditional variance. However, my thought is that as $i$ tends towards infinity, $(h_{1})^{\beta_{1}^{i-1}}$ would tend towards one, since, based off Sasseville's paper, $\beta_{1}$ is less than one and hence $\beta_{1}^{i-1}$ would tend towards zero. This is not a rigorous answer, just intuitive thinking, but I would love it if someone had some clarification on this. Possibly I am missing something incredibly simple as to what $h_{1}$ is :).
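One building block that can be pinned down exactly is the one-step expectation factor $\mathbb{E}\bigl[e^{\beta_2(|\varepsilon-\lambda|+\gamma(\varepsilon-\lambda))}\bigr]$, which has a closed form obtained by splitting the Gaussian integral at $\varepsilon=\lambda$. A sketch checking it against Monte Carlo (the parameter values are illustrative, not from the question):

```python
import math
import numpy as np

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def one_step_factor(beta2, gamma, lam):
    """E[exp(beta2 * (|Z - lam| + gamma*(Z - lam)))] for Z ~ N(0,1),
    by splitting the integral at Z = lam.  With Y = Z - lam ~ N(-lam, 1):
    E[e^{cY} 1{Y>0}] = e^{c^2/2 - c*lam} * Phi(c - lam), and
    E[e^{cY} 1{Y<0}] = e^{c^2/2 - c*lam} * Phi(lam - c)."""
    a = beta2 * (gamma + 1)   # exponent slope on {Z > lam}
    b = beta2 * (gamma - 1)   # exponent slope on {Z < lam}
    return (math.exp(a**2 / 2 - a * lam) * Phi(a - lam)
            + math.exp(b**2 / 2 - b * lam) * Phi(lam - b))

# Monte Carlo sanity check with illustrative parameter values
beta2, gamma, lam = 0.1, -0.3, 0.2
z = np.random.default_rng(0).standard_normal(1_000_000)
mc = np.exp(beta2 * (np.abs(z - lam) + gamma * (z - lam))).mean()
assert abs(mc / one_step_factor(beta2, gamma, lam) - 1) < 0.01
```

Iterating the exponentiated recursion and taking expectations then produces products of such factors with $\beta_2$ scaled by powers of $\beta_1$, which is presumably where Sasseville's $\prod_k f(\lambda,\gamma,k)$ term comes from.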
Why don't we consider them as linear? I don't understand. You just have to check for factorization up to sqrt of n. So it's even faster than linear. I assume it's not linear only if we compare the number of operations relative to the input in terms of binary representation. But why would we do so? It seems to me wrong. The growth in calculation should be calculated compared to the number itself. Why do we compare it to the binary representation? It seems that the main sticking point of the question here is: Why express runtime in terms of the size of the input, rather than the numeric value that the input represents? And indeed in some cases it doesn't make much difference which way you choose to express it. For instance, we could say that the time to read all values in an $N\times N$ matrix is quadratic in the number of columns, or we could say it is linear in the number of cells, and the meaning of these is the same, just with different conventions. So let's look at some reasons why it is conventional to express operations on numbers in terms of the length of the number rather than its numeric value: It is more easily comparable to operations on other kinds of data, since every possible form of input has a length but not all forms of input have a numeric value. By consistently using the length of the input as our reference point across a variety of problem types, we also get some nice properties like "the runtime of an algorithm that reads the entire input can never be better than linear." It provides more useful time complexities in context. For instance, a common use for primality testing is for cryptography. When we're doing cryptography, we might use say a 512-bit number as a key. We would like to have algorithms that scale proportional to the length of the number (512), rather than its numeric value (about $2^{512}$), since $2^{512}$ is such an astronomically large number that even a "linear" time algorithm would never realistically finish. 
It relates better to the actual operations performed by the computer. Many people are accustomed to implicitly treating all numbers as capped at some fairly large constant like $2^{64}$, and thus all arithmetic operations are constant-time and the actual internal representation of the number is irrelevant. But when we are analyzing the big-O performance of operations on the number itself we cannot assume that numbers are always small enough to ignore their internal representation. Ultimately, these operations are performed on bits so the number of bits is a good reference point to use for describing the performance. As a thought experiment, try analyzing the performance of the addition operation. You may have always considered it a constant-time operation, but what happens if the numbers in question get arbitrarily large? Ultimately, you'll need to sum each digit one-by-one, carrying as necessary. It makes sense to describe this as a linear-time operation based on the length of the input, rather than logarithmic time based on the numeric value of the input. Simple. When you give the number one trillion as input to your algorithm, do you give it as 1'000'000'000'000, or as a terabyte large string of ones? And by all means, you are free to choose whichever representation you feel comfortable with. We analyze the runtime as a function of the size of the input, not as the magnitude of the number represented by the input were the input to be a number. There are two sensible ways to define a variable that can be used in the runtime complexity. $m$ is the value of the input number (your definition). $n$ is the number of bits required to represent the input (the input size). Neither is better than the other, because there's a 1-to-1 correspondence between the two: $n = O(\log m)$, or equivalently $m = O(2^{n})$ Most scientists use $n$. Using that definition, sorting is e.g. $O(n \log n)$ and there's no known $O(n)$ algorithm for primality. 
You showed that there is an $O(\sqrt m)$ algorithm (sublinear in $m$), which is $O(\sqrt{2^n})$ (exponential in $n$). Using the definition of $m$, we're not looking for a (sub)linear algorithm for primality; we're looking for a polylogarithmic algorithm, $O((\log m)^c)$ for some constant $c$. I just got a couple of students curious about this confusion and I tried answering it as follows (differently from the above responses). First, note that if we were not concerned about the magnitude of the input value but only its $n$-bit representation, then the same would apply to analysing sorting algorithms too. But irrespective of the magnitude or bit length of the input numbers, sorting is $n\lg n$ in the array length. Agreed? So why bother with $n$-bit representations when dealing with primality or GCD? To me (and I can be wrong) this is done so as to have a worst-case complexity analysis. Notice that this part is where my response is different from the ones above. So going back to sorting: when we say to analyse an input array of size $10^6$, we see different cases, and give our big-O analysis for the worst case of size $10^6$. Stay with me! If we were to plot the $n\lg n$ time complexity on x-y axes, what we would do is take arrays of size 1, 2, 3, ..., 100, 101, ..., 1000, ..., $10^5$, ... and for each of them find the count of basic operations. The size of the array goes on the x-axis and the count of basic operations on the y-axis. Right? We don't do that too often, but that's how we would do it. This would give us an increasing function. I hope you are still reading. So now back to primality testing. If I plot the same x-y time-complexity function, taking magnitude on the x-axis and the count of steps on the y-axis, can you imagine what will happen? (Raise of hands: how many of you have guessed the next part?) For the above plot we get the following values:

x | y
2 | 1
3 | 2
4 | 1
5 | 4
...
1000 | 2 (why? it's divisible by 2)

So we get a zig-zag-shaped efficiency function.
That's not right: orders of growth are supposed to be monotonically increasing; I mean, they should GROW, not SHRINK. So what's actually needed in this case is that we take the number of digits, or the number of bits, of the input as the size of the input. Now our analysis is meaningful, like the sorting example. Why? Because for every $n$-bit (or $n$-digit) number we take the worst case from the variety of cases. What we can say is that any even number of size $n$ is a best case for primality, and any prime number of size $n$ is its worst case. So we plot the time complexity as a function of the bits of the input, and not its magnitude.
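The trial-division test under discussion can be sketched as follows; the loop runs at most $\sqrt m$ times, which is about $2^{n/2}$ in the bit length $n$ of the input, so it is sublinear in the value but exponential in the input size:

```python
import math

def is_prime(m: int) -> bool:
    """Trial division up to sqrt(m): sublinear in the value m,
    but ~2**(n/2) in the bit length n of m, i.e. exponential in
    the size of the input."""
    if m < 2:
        return False
    for d in range(2, math.isqrt(m) + 1):
        if m % d == 0:
            return False
    return True

# Worst case is a prime: for m = 1_000_003 (a 20-bit number) the loop
# performs about 1000 divisions -- sqrt(m), or 2**(20/2), of them.
print(is_prime(1_000_003), (1_000_003).bit_length())
```

Doubling the bit length to 40 bits pushes the worst case to around a million divisions, which is the zig-zag's upper envelope growing exponentially in $n$.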
Let $n \in \mathbb{N}$ be a fixed positive integer and let $B \in \mathbb{R}_+$ also be fixed. For a fixed $M>0$, let $f:[-B,B]^n \to \mathbb{R}$ be given by $f(x_1,\ldots,x_n)=\sum_{i=1}^n x_i M^i$. My aim is to prove the following: Prove that one can always find a fixed positive real number $M$ (dependent only on $B$ and $n$) such that $f$ is an injective map. If such an $M$ does not exist in the above question, one can assume that each $x_i$ can take only a fixed set of countably many values in $[-B,B]$. In other words, the map $f$ is defined only on a countable subset of $[-B,B]^n$. In this reduced case, can $f$ be injective? My attempt: Let $f(x_1,\ldots,x_n)=f(y_1,\ldots,y_n)$ for some $(x_1,\ldots,x_n),(y_1,\ldots,y_n) \in [-B,B]^n$. Thus, $\sum_{i=1}^n (x_i-y_i)M^i=0$. After this step, I am stuck and unable to proceed further with how to choose $M$ such that the above equation doesn't have any nontrivial solution.
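For the reduced, discrete case a quick experiment illustrates why a large enough $M$ works: if each $x_i$ is restricted to, say, the integers in $[-B,B]$, then $M = 2B+1$ turns $f$ into a balanced base-$M$ expansion, which is injective. (The parameter choices below are mine, for illustration only.)

```python
from itertools import product

B, n = 3, 3
M = 2 * B + 1   # with integer "digits" in [-B, B], base M = 2B+1 avoids collisions

def f(x):
    # f(x_1, ..., x_n) = sum_i x_i * M**i, as in the question
    return sum(xi * M**i for i, xi in enumerate(x, start=1))

values = [f(x) for x in product(range(-B, B + 1), repeat=n)]
assert len(set(values)) == len(values)   # no two tuples share a value
print(len(values))  # -> 343 distinct values
```

The same idea extends to digits with a minimum nonzero gap $\delta$ by scaling $M$ accordingly. By contrast, for the full continuum case no $M$ can work when $n\ge 2$: $f$ is continuous, and a continuous injection $[-B,B]^n\to\mathbb{R}$ does not exist for $n\ge 2$.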
We know that $H_A\otimes H_B\neq H_B\otimes H_A$ (in general). Theoretically, we know the formalism and what observables to construct from the two compositions possible, but we never talk about both the possibilities. I wish to know how, experimentally, measurements or evolutions are done on such composite systems (let's just assume a bipartition as above). How does the experimentalist know whether he is working in the $A\otimes B$ or $B\otimes A$ composite Hilbert space? For many questions that appear on this site, and about quantum information and computation in general, it is possible to ask a completely classical version of the question, and often the (sometimes obvious) answer that one finds in the more familiar classical setting translates directly to the quantum setting. In this case, a reasonable classical version of the question asks what role the non-commutativity of the Cartesian product plays in experimental classical computing (or, let's say, in practical implementations of classical computation). Suppose we have a system $A$ that can be in any classical state drawn from a set $\mathcal{A}$, and a system $B$ that can be in any classical state drawn from the set $\mathcal{B}$. If we put system $A$ and system $B$ next to each other on the table, then we can represent the classical state of the two systems together as an element of the Cartesian product $\mathcal{A}\times\mathcal{B}$. Note that there is an implicit assumption here, which is that the two systems are distinguishable, and we're deciding more or less arbitrarily that when we talk about a state $(a,b)\in\mathcal{A}\times\mathcal{B}$, the state $a$ of system $A$ is listed first and the state $b$ of system $B$ is listed second. We could just as easily have decided to represent the classical state of the two systems together as an element of the Cartesian product $\mathcal{B}\times\mathcal{A}$, with the understanding that the state of system $B$ now gets listed first.
As an aside, if the two systems were indistinguishable, implying that $\mathcal{A} = \mathcal{B}$, and further we placed the two systems in a bag rather than on the table, then I guess there would really be no difference between $(a,b)$ and $(b,a)$. For this reason we would probably not use the Cartesian product to represent states of the bagged systems -- maybe we would use the set of all multisets of size 2 instead -- but let us forget about this situation and assume $A$ and $B$ are distinguishable for simplicity. Now, what role does this play in experiments or practical applications of classical computing? How does an experimenter or programmer know he or she is working in the $\mathcal{A}\times\mathcal{B}$ or $\mathcal{B}\times\mathcal{A}$ state space? When you think about the question this way, I believe it may come into focus. My answer, which is consistent with the other answers that concern the quantum setting, is that it really doesn't play any role at all, and the experimenter/programmer knows because it was his or her decision which order to use. We know the difference between the systems $A$ and $B$, and the decision to represent states of the two systems together by elements of $\mathcal{A}\times\mathcal{B}$ or $\mathcal{B}\times\mathcal{A}$ is totally arbitrary -- but once the decision is made we stick with it to avoid confusion. The decision will not affect any calculations we do, so long as the calculations are consistent with the decision of which order to use. To my eye, at a fundamental level there is no difference between the classical version of this question and the quantum version. We decide whether to represent states of the compound quantum system using the space $H_A\otimes H_B$ or $H_B\otimes H_A$, and that's all there is to it. You'll get exactly the same results of any calculations you perform, so long as your calculations are consistent with the choice to use $H_A\otimes H_B$ or $H_B\otimes H_A$. 
When you say $\neq$ I presume you are talking about the implied basis in the usual ordering (00, 01, 02, 10, etc.). Otherwise you would have an isomorphism of Hilbert spaces rather than an equality statement. That is, $AB$ implies a certain ordered basis and $BA$ a different one. The experiment has its observables on the combined system in a basis-independent way. If the experimentalist wants to write their results down, they can choose whatever basis they like. The distinction goes into the question being asked. "What is the second entry of the vector $v$ in the Hilbert space that combines $A$ and $B$?" is not a well-defined question. "What is the second entry with respect to a given ordered basis?" is. The experimentalist has to ask the second question in order to get an answer. You have to ask a sensible question if you want a sensible answer. The order in the tensor product is a convention and has nothing to do with experiments. As an example, if I have a cavity (with photons in it, $H_A$) and an atom (with internal states, $H_B$), it is clear which is the atom and which is the cavity, regardless of the order one chooses for their Hilbert spaces in the tensor product when describing the setup theoretically. The two spaces $A$ and $B$ are just labels, with arbitrary ordering. For distinguishable qubits (or more generally), the experimentalist can just say "this one's $A$, and this other one's $B$". If you swap the labels, you need to swap the labels everywhere - in both the Hamiltonian and the state (including eigenvectors, density matrix, etc.). In other words, if I define a swap operator $S$ such that $$ S(H_A\otimes H_B)S=H_B\otimes H_A, $$ then evolution of states can be calculated either using $$ e^{-i H_A\otimes H_B t}|\psi_{AB}\rangle \quad\text{or}\quad e^{-i H_B\otimes H_A t}|\psi_{BA}\rangle $$ where $|\psi_{BA}\rangle=S|\psi_{AB}\rangle$. Or, if you're working with a density matrix, you have $\rho_{BA}=S\rho_{AB}S$.
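The relabelling argument is easy to check numerically for two qubits: with a product-form Hamiltonian (an arbitrary choice of mine, for concreteness), evolving in the $B\otimes A$ convention gives exactly the swap of the $A\otimes B$ result.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(d):
    """A random d x d Hermitian matrix (values are arbitrary)."""
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

def evolve(H, psi, t):
    """exp(-i H t) @ psi via the eigendecomposition of Hermitian H."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

# Product-form two-qubit Hamiltonian, written in both orderings
A, B = rand_herm(2), rand_herm(2)
H_AB, H_BA = np.kron(A, B), np.kron(B, A)

# Swap operator: S (u tensor v) = v tensor u, so H_BA = S H_AB S
S = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=complex)

psi_AB = rng.normal(size=4) + 1j * rng.normal(size=4)
psi_AB /= np.linalg.norm(psi_AB)

out_AB = evolve(H_AB, psi_AB, t=0.7)        # work in the A tensor B convention
out_BA = evolve(H_BA, S @ psi_AB, t=0.7)    # same state, B tensor A convention
assert np.allclose(S @ out_AB, out_BA)      # identical physics, relabelled
```

Any prediction (expectation values, probabilities) computed consistently in either convention comes out the same; only the component ordering of the vectors differs.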
Given a discrete-time (DT) sequence $g[n]$, I want to represent it as a continuous-time (CT) signal. I can do this by representing the sequence as a weighted sum of Dirac delta impulses. Would it make a difference if I pass the DT signal through a DT filter first and then represent it as a weighted sum of Dirac impulses, or represent it as a CT signal first and pass that through a CT filter? The two cases are as follows: Case 1: DT sequence converted first to a CT signal and passed through a CT filter. The DT signal can be represented as a CT signal as: $$g(t)=\sum_kg[k]\delta(t-kT)\tag{1}$$ If $g(t)$ in $(1)$ is the input to a continuous-time LTI system with impulse response $h(t)$, the output is given by $$y(t)=\int h(\tau)g(t-\tau)d\tau = \int h(\tau)\left(\sum_kg[k]\delta(t-\tau-kT)\right)d\tau$$ or $$y(t) =\sum_kg[k]\int h(\tau)\delta(t-\tau-kT) d\tau$$ or $$y(t) = \sum_kg[k]h(t-kT)\tag{2}$$ Case 2: DT sequence passed through a DT filter and then converted to a CT signal. The DT signal is passed through a DT filter $h[n] = h(nT)$ to obtain a DT signal $z[n]$: $$z[n]=\sum_kg[k]h[n-k]\tag{3}$$ This can be represented as a CT signal as $$z(t)=\sum_kz[k]\delta(t-kT)\tag{4}$$ Is $z(t)$ the same as $y(t)$? Can I express one in terms of the other? Any help would be greatly appreciated. -ryan
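It may help to note that the two cases agree exactly at the sample instants: setting $t=nT$ in $(2)$ gives $y(nT)=\sum_k g[k]\,h((n-k)T)=z[n]$, so $z(t)$ is the impulse-train version of the samples of $y(t)$, while $y(t)$ also takes values between the sample instants. A quick numeric check (the sequence and impulse response below are arbitrary examples of mine):

```python
import numpy as np

T = 0.5
g = np.array([1.0, -2.0, 3.0, 0.5])       # arbitrary DT sequence g[n]
h = lambda t: np.exp(-t) * (t >= 0)       # arbitrary causal CT impulse response

# Case 2: DT filter with h[n] = h(nT), eq. (3)
N = 12
h_n = h(np.arange(N) * T)
z = np.convolve(g, h_n)[:N]

# Case 1: CT output y(t) from eq. (2), sampled at t = nT
y_samples = np.array([sum(g[k] * h((n - k) * T) for k in range(len(g)))
                      for n in range(N)])

assert np.allclose(z, y_samples)          # z[n] = y(nT) for every n
```

Between the sample instants the two signals genuinely differ: $y(t)$ is an ordinary function of $t$, whereas $z(t)$ is zero except for the impulses at $t=nT$.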
Background: I have seen lots of people asking whether multiplication and pseudo-random sequences can be approximated by a NN without specifying whether the inputs and outputs are bounded or not, and people have answered (with lots of upvotes) based on conventional NN knowledge, without taking the aforementioned fact into consideration. TL;DR: How well can a neural network approximate an unbounded function, if at all, provided it is trained on a subset of the number line and the test inputs lie significantly outside that subset? Can a neural network do regression for an unbounded function? To me it is impossible if the output function is a sigmoid, since the best approximation (the basis of all signal decomposition and reconstruction schemes) of a function, the Fourier series ($\star$), demands that Dirichlet's conditions be satisfied, one of which more or less states that the function should be absolutely integrable ($\int_{-\infty}^{\infty}|f(x)|\,dx < \infty$). The sigmoid can more or less be thought of in terms of a sinusoidal function, as its value is bounded like a sinusoid's. Now, if the output function used is ReLU, then the output is unbounded. But it is still just some linear combination of weights passed through some non-linear functions (in the previous layers, which at best might be linearly unbounded if the previous layers are ReLU). So one can ask: even though a neural net can approximate an unbounded linear function, can it approximate an unbounded polynomial function or an exponential function? $\star$ Although the regression problem might seem more suited to a Fourier-transform analogy than a Fourier-series one, I have used the FS analogy based on the fact that the FT output is a continuous function as opposed to the FS (in NN regression we are adding the outputs of several nodes, similar to what we do in FS, where the number of nodes $\ll \infty$).
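The ReLU point can be made concrete: a finite ReLU network is piecewise linear, so far enough out along any direction its output becomes exactly affine, which rules out tracking, e.g., $x^2$ outside the training range. A tiny sketch with fixed, made-up weights (no training involved):

```python
import numpy as np

# A one-hidden-layer ReLU net with arbitrary fixed weights
W1 = np.array([[1.0], [-1.5], [0.7], [2.0]])
b1 = np.array([0.5, 1.0, -2.0, 0.3])
W2 = np.array([1.0, -0.5, 2.0, 0.8])
b2 = 0.1

def relu_net(x):
    h = np.maximum(W1 @ np.array([x]) + b1, 0.0)
    return float(W2 @ h + b2)

# For x >= 3 every hidden unit's on/off pattern is frozen, so the net
# is exactly affine there (here: 4*x - 3.16) -- no curvature remains.
ys = [relu_net(x) for x in (10.0, 11.0, 12.0)]
second_diff = ys[0] - 2 * ys[1] + ys[2]
assert abs(second_diff) < 1e-9
print(round(ys[0], 2))  # -> 36.84
```

So whatever the network learns inside the training interval, its extrapolation along any ray is eventually a straight line; the error against $x^2$ or $e^x$ grows without bound outside that interval.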
Bored with 2048 ? Try this one 2584. >>>> Click Here <<<< It is the Fibonacci version of 2048. Some say it is even more difficult than the regular 2048. Post your high score here! Note by Nelvson Shine 5 years, 4 months ago Sort by: My first try!!! I think this game is much easier than 2048. Although I have a score in excess of 50,000 in 2048 now, in my first try in 2048 I got around 6000 only. It took a while to get a hang of it initially, especially as it gets more confusing if u play 2048 often. Still it seems easier. Nice game..I got 4115: Hey! I would say that game theory is important and hence I would prefer that everyone should tell about their favorite games in the comment section of my note Games-a good way to learn(just search it) BTW, I haven't tried 2584 but my high score on 2048 is 5188. New score : 10957 can you please tell me how to add images ? Read This @Siddhartha Srivastava – thanks.. How do you add images? I got 19144. :D Nice game. Pretty confusing once you get used to 2048. Exactly! The only ones that felt natural were
the ones. D: cool game , i got up to the 1597 tile & highscore 26433 http://www.crazygames.com/game/243 is also an incredible game My Highscore Is 23000 ,but this game is actually easier than 2048 as a number can be added with 2 different numbers ex:- 8 can be added to 5 and 13.In 2048 8 can be added only with 8 http://www.crazygames.com/game/2048-backwards This game is also SUPER COOL. Check it out guys! Ya Its a good game i reached till tile 4 I reached tile 8 ! :) I got 7671 on this game on first try!!! HAHAHA I beat you by 8 on my first try. :D So close! It was difficult at first, wasn't it? :D Great game. I got to 2584 on my first try. High score 31245. I got it in the first try too! (Narrowly missed your score) Its easier than 2048, isn't it? I think it is easier than 2048 on first try because we have played 2048 so much already and transfer the experience over to this new variation. By the way, how do you include an image in your post here? I can't seem to get it to work. @Tong Choo – First I go to tinypic.com & upload my screenshot there. After uploading it , they provide 4 links. Copy the text in Direct link for layouts. Then upload it by ![anytexthere]_(pastethelinkyoujustcopied) Just remove the underscore. I really don't know what to write in the square brackets, anything seems to work. And yes, I leave a space after the open rounded bracket.(don't know if its required) sorry I don't know much I also learnt it from somebody. It works for me. Tell me if u have any problem uploading it. Edit: I don't know if its just a problem in my mobile view, but there is no enter after the exclamation sign. The entire thing's in 1 line. @Siddharth Brahmbhatt – I tried your format but it did not work. But after a few tweaks, it now works. Thanks!! cool I got 10000 My high score is 35,810. I also got to 2,584 on this game.
If \(A\) and \(B\) are numbers such that the polynomial \(x^{2017} + Ax + B\) is divisible by \((x + 1)^2\), what is the value of \(B\)? This probably isn't how this problem is intended to be done but it's all I can come up with. \(\text{Let }p(x) = x^{2017}+Ax+B\\ \text{We'll expand }p(x) \text{ as a Taylor series in powers of }(x+1)\\ p(x) = \sum \limits_{k=0}^\infty~\dfrac{p^{(k)}(-1)(x+1)^k}{k!}\\ p(x) \text{ being divisible by }(x+1)^2 \text{ means that the first two terms must be 0:}\\ p^{(0)}(-1) = p(-1) = -1 -A +B = 0\\ p^{(1)}(-1) = 2017(-1)^{2016} + A = 2017+A = 0\\ A=-2017\\ B = 1 + A = -2016\) \(\text{Note: }p^{(k)}(x) = \dfrac{d^kp}{dx^k}(x)\).
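The result is easy to machine-check; a quick sketch with sympy confirms that $A=-2017$, $B=-2016$ make the polynomial divisible by $(x+1)^2$:

```python
import sympy as sp

x = sp.symbols('x')
p = x**2017 - 2017*x - 2016          # A = -2017, B = -2016

# Divisibility by (x+1)^2 means the polynomial remainder vanishes
assert sp.rem(p, (x + 1)**2, x) == 0

# Equivalently: x = -1 is a double root, i.e. p(-1) = 0 and p'(-1) = 0
assert p.subs(x, -1) == 0
assert sp.diff(p, x).subs(x, -1) == 0
```

The double-root check is exactly the "first two Taylor coefficients vanish" condition used in the solution above.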
In the last post I showed how to use purrr to perform a simple Monte Carlo simulation. Since simulation studies are usually computationally expensive, it is beneficial to write efficient code and make use of parallelization. The latter is even more important when working on a modern computer. My PC has a Ryzen 3700X CPU with 8 cores and 16 threads. For longer computations it would be a waste of resources not to go parallel when possible. If there is code using purrr::map_*(), it is extremely simple to do so by replacing it with furrr::future_map_*(). I will use an example from econometrics where we will compare heteroscedasticity-robust with non-robust standard errors when testing the hypothesis \(H_0: \beta_i = 0\) in the simple linear regression model \[y_i = \beta_1 + \beta_2 x_i + u_i\] where \(u_i \sim N(0, \sigma_i^2)\). Similar to the last post, I start by writing a function which…

…generates some data (according to the simple linear regression model with either homoscedastic or heteroscedastic errors).

…does some statistical computations (here fitting a linear model with OLS, computing standard errors, t-statistics and p-values).

…returns a data frame with the results and the used parameters.
```r
library(tidyverse) # contains purrr and some other packages I will use
library(furrr)

sample_t_stat <- function(n = 100, beta = c(0.5, 0.5), beta_0 = c(0, 0),
                          error_dist = "homoscedastic",
                          standard_error = "normal"){
  # generate data
  X <- cbind(rep(1, n), runif(n, -4, 4))
  u <- switch(error_dist,
              homoscedastic   = rnorm(n, sd = 2),
              # same unconditional variance as for homoscedasticity
              heteroscedastic = rnorm(n, sd = sqrt(abs(X[ ,2]))),
              stop("Unknown distribution")
  )
  y <- X %*% matrix(beta) + u

  # fit the model with OLS
  lin_reg <- lm.fit(X, y)

  # compute standard errors
  se <- switch(standard_error,
               normal = sqrt((1/(n-2) * sum((lin_reg$residuals)^2) *
                                solve(t(X) %*% X)[diag(T, 2, 2)])),
               robust = sqrt((solve(t(X) %*% X) %*% t(X) %*%
                                diag((lin_reg$residuals)^2) %*% X %*%
                                solve(t(X) %*% X))[diag(T, 2, 2)]),
               stop("Unknown distribution")
  )

  # compute t-statistics and p-values
  t_stat <- (lin_reg$coefficients - beta_0) / se
  p_value <- 2 * (1 - pt(abs(t_stat), df = n - 2))

  data.frame(t_stat_beta1 = t_stat[1], t_stat_beta2 = t_stat[2],
             p_value_beta1 = p_value[1], p_value_beta2 = p_value[2],
             beta1 = beta[1], beta2 = beta[2],
             beta1_0 = beta_0[1], beta2_0 = beta_0[2],
             n, error_dist, standard_error,
             stringsAsFactors = FALSE)
}
```

We will look at how the test performs with different sample sizes, error variances, standard errors and values for \(\beta_2\), keeping the rest of the possible input values fixed at their default values.

```r
# considered parameter combinations
parameter_grid <- expand.grid(
  n = c(10, 50, seq(100, 500, 100)),
  beta2 = seq(-0.5, 0.5, 0.1),
  error_dist = c("homoscedastic", "heteroscedastic"),
  standard_error = c("normal", "robust"),
  stringsAsFactors = FALSE
)
```

The simulation for a given set of parameters is performed by mc_t_stat(), which runs sample_t_stat() 1000 times.
mc_t_stat <- function(n, beta2, error_dist, standard_error){
  map_df(1:1000, ~ sample_t_stat(n = n, beta = c(0.5, beta2),
                                 error_dist = error_dist,
                                 standard_error = standard_error))
}

With the function purrr::pmap_dfr() we can iterate over the rows of the parameter grid and run mc_t_stat() for each set of parameters.

system.time(res <- pmap_dfr(parameter_grid, mc_t_stat))
##    user  system elapsed
##  391.59   25.03  435.35

This takes a while. However, if I simply add the line plan(multiprocess) and switch from purrr::pmap_dfr() to furrr::future_pmap_dfr(), the computation time on my computer is reduced significantly.

library(furrr)
plan(multiprocess)
system.time(res2 <- future_pmap_dfr(parameter_grid, mc_t_stat))
##    user  system elapsed
##    0.28    0.03   39.69

I think this is one of the easiest ways to parallelize code in R. The future.apply package does the same for the apply functions, in case you like those more. Finally, a quick look at the results.

res2 %>%
  group_by(n, error_dist, standard_error, beta2) %>%
  summarise(rejection_rate = mean(p_value_beta2 < 0.05)) %>%
  ggplot(aes(x = beta2, y = rejection_rate, col = standard_error)) +
  facet_grid(n ~ error_dist) +
  geom_line() +
  geom_abline(intercept = 0.05, slope = 0, linetype = "dashed") +
  geom_vline(aes(xintercept = 0), linetype = "dashed")

We would expect the test to reject in 5 percent of the cases if the null hypothesis is true, which is the case when \(\beta_2 = 0\). However, we find that under heteroscedasticity the test with the non-robust standard errors rejects too often (the curves should go through the point where the two dashed lines intersect). This does not change even when the sample size increases. The test with robust standard errors is doing a much better job here. If the errors are homoscedastic, both tests perform almost equally well for sample sizes of approximately 100 and more.
Description A technique introduced by Indyk and Woodruff [STOC 2005] has inspired several recent advances in data-stream algorithms. We show that a number of these results follow easily from the application of a single probabilistic method called {\em Precision Sampling}. Using this method, we obtain simple data-stream algorithms that maintain a randomized sketch of an input vector $x=(x_1,\ldots x_n)$, which is useful for the following applications: \begin{itemize} \item Estimating the $F_k$-moment of $x$, for $k>2$. \item Estimating the $\ell_p$-norm of $x$, for $p\in[1,2]$, with small update time. \item Estimating cascaded norms $\ell_p(\ell_q)$ for all $p,q>0$. \item $\ell_1$ sampling, where the goal is to produce an element $i$ with probability (approximately) $|x_i|/\|x\|_1$. It extends to similarly defined $\ell_p$-sampling, for $p\in [1,2]$. \end{itemize} For all these applications the algorithm is essentially the same: pre-multiply the vector $x$ entry-wise by a well-chosen random vector, and run a heavy-hitter estimation algorithm on the resulting vector. Our sketch is a linear function of $x$, thereby allowing general updates to the vector $x$. Precision Sampling itself addresses the problem of estimating a sum $\sum_{i=1}^n a_i$ from weak estimates of each real $a_i\in[0,1]$. More precisely, the estimator first chooses a desired precision $u_i\in(0,1]$ for each $i\in[n]$, and then it receives an estimate of every $a_i$ within additive $u_i$. Its goal is to provide a good approximation to $\sum a_i$ while keeping a tab on the cost $\sum_i (1/u_i)$. Here we refine previous work [Andoni, Krauthgamer, and Onak, FOCS 2010] which shows that as long as $\sum a_i=\Omega(1)$, a good multiplicative approximation can be achieved using total precision of only $O(n\log n)$.
Could you please check my proof of the following exercise: Consider the topologist's sine curve $X$ defined by: $$X = \{(0,0)\} \cup \{(x,\sin(\tfrac{1}{x})) \in \mathbb{R}^2 : x > 0\}$$ We will prove that $X$ is connected. We start by assuming that $X = U_1 \cup V_1$, where $U_1$ and $V_1$ are open in $X$ and $U_1 \cap V_1 = \emptyset$. a) Explain why there exist open sets $U,V \subset \mathbb{R}^2$ such that $$U_1 = U \cap X \text{ and } V_1 = V \cap X$$ b) W.l.o.g. we assume that $(0,0) \in U_1$. Prove that there is an $x_0 > 0$ such that $(x_0,\sin(\frac{1}{x_0})) \in U_1$. c) Explain why $X \setminus \{(0,0)\}$ is connected. d) Define $U_2 = U_1 \setminus \{(0,0)\}$. Prove that $U_2$ is open in $X \setminus \{(0,0)\}$ and $X \setminus \{(0,0)\} = U_2 \cup V_1.$ Conclude from that that $X \setminus \{(0,0)\} = U_2$ and therefore $X = U_1$. My take on the exercise: a) This is precisely the definition of the subspace topology. b) If $U_1$ were the singleton $\{(0,0)\}$ it could not be open, since every ball around the origin contains points of the curve; therefore there have to be additional elements in $U_1$. By definition of $X$ these have the form $(x_0,\sin(1/x_0))$ with $x_0 > 0$. c) It is the image of the connected set $(0,\infty)$ under the continuous function $$f: x \in (0,\infty) \mapsto (x,\sin(\tfrac{1}{x}))$$ and therefore connected. d) Punctured open sets are still open, therefore $U_2$ is open in $X \setminus \{(0,0)\}$. By the assumption in b), $(0,0)$ is in $U_1$, and $U_1$ and $V_1$ are assumed to be disjoint, therefore $X \setminus \{(0,0)\} = U_2 \cup V_1$. Since by b) $U_2$ is non-empty, by c) $X \setminus \{(0,0)\}$ is connected, and $U_2$ and $V_1$ are disjoint open sets, we must have $V_1 = \emptyset$, so $X \setminus \{(0,0)\} = U_2$ and clearly $X = U_1$. Thanks in advance.
Mathematics Mathematics, sometimes shortened to maths (in England, Australia, New Zealand and France) or math (in the United States, Canada and Germany), is the study of numbers, shapes and patterns. Mathematicians are people who learn about and discover such things in mathematics. Mathematics is useful for solving problems that occur in the real world, so many people besides mathematicians study and use mathematics. Today, mathematics is needed in many jobs. Business, science, engineering, and construction need some knowledge of mathematics. Mathematicians solve problems by using logic. Mathematicians often use deduction. Deduction is a special way of thinking to discover and prove new truths using old truths. To a mathematician, the reason something is true is just as important as the fact that it is true. Using deduction is what makes mathematical thinking different from other kinds of thinking. Contents About Mathematics includes the study of: Numbers (for example, $3+6=9$) Structure: how things are organized. Place: where things are and their arrangement. Change: how things become different over time. Mathematics often uses logic, paper, and a calculator. These things are used to create general rules, which are an important part of mathematics. These rules leave out information that is not important so that a single rule can cover many situations. By finding general rules, mathematics solves many problems at the same time. A proof gives a reason why a rule in mathematics is correct. This is done by using certain other rules that everyone agrees are correct, which are called axioms. A rule that has a proof is sometimes called a theorem. Experts in mathematics perform research to create new theorems. Sometimes experts find an idea that they think is a theorem but cannot find a proof for it. That idea is called a conjecture until they find a proof.
Sometimes, mathematics finds and studies rules or ideas in the real world that we don't understand yet. Often in mathematics, ideas and rules are chosen because they are considered simple or beautiful. On the other hand, sometimes these ideas and rules are found in the real world after they are studied in mathematics; this has happened many times in the past. In general, studying the rules and ideas of mathematics can help us understand the world better. Branches Mathematics has the following main branches. Arithmetic Algebra Geometry Trigonometry Calculus Statistics Number Mathematics includes the study of number, or quantity: Natural numbers: $1, 2, 3, \ldots$ Integers: $\ldots, -1, 0, 1, \ldots$ Rational numbers: $\frac{1}{2}, \frac{2}{3}, 0.125, \ldots$ Real numbers: $\pi, e, \sqrt{2}, \ldots$ Complex numbers: $1+i, 2e^{i\pi/3}, \ldots$ Ordinal numbers: $0, 1, \ldots, \omega, \omega + 1, \ldots, 2\omega, \ldots$ Cardinal numbers: $\aleph_0, \aleph_1, \ldots$ Arithmetic operations: $+, -, \times, \div$ Arithmetic relations: $\gt, \ge, =, \le, \lt$ Functions: $f(x) = \sqrt{x}$ Structure Some areas of mathematics study the structure that an object has. Shape Some areas of mathematics study the shapes of things. Change Some areas of mathematics study the way things change. Applied mathematics Applied mathematics uses mathematics to solve problems of other areas such as engineering, physics, and computing. Numerical analysis – Optimization – Probability theory – Statistics – Mathematical finance – Game theory – Mathematical physics – Fluid dynamics – computational algorithms Famous theorems These theorems have interested mathematicians and people who are not mathematicians.
Pythagorean theorem – Fermat's last theorem – Goldbach's conjecture – Twin Prime Conjecture – Gödel's incompleteness theorems – Poincaré conjecture – Cantor's diagonal argument – Four color theorem – Zorn's lemma – Euler's Identity – Church-Turing thesis These are theorems and conjectures that have greatly changed mathematics. Riemann hypothesis – Continuum hypothesis – P Versus NP – Pythagorean theorem – Central limit theorem – Fundamental theorem of calculus – Fundamental theorem of algebra – Fundamental theorem of arithmetic – Fundamental theorem of projective geometry – classification theorems of surfaces – Gauss-Bonnet theorem – Fermat's last theorem Foundations and methods Progress in understanding the nature of mathematics also influences the way mathematicians study their subject. Philosophy of mathematics – Mathematical intuitionism – Mathematical constructivism – Foundations of mathematics – Set theory – Symbolic logic – Model theory – Category theory – Logic – Reverse Mathematics – Table of mathematical symbols History and the world of mathematicians Mathematics in history, and the history of mathematics. History of mathematics – Timeline of mathematics – Mathematicians – Fields medal – Abel Prize – Millennium Prize Problems (Clay Math Prize) – International Mathematical Union – Mathematics competitions – Lateral thinking – Maths and gender Name Often, the word "mathematics" is made shorter into maths (in British English) or math (in American English). The short words math or maths are often used for arithmetic, geometry or simple algebra by young students and their schools. Awards in mathematics Mathematical tools Tools that are used to do mathematics or find answers to mathematics problems. Old: Abacus Order of Operations Calculator Napier's bones, slide rule Ruler and Compass Mental calculation New:
The beth numbers, $\beth_\alpha$ The beth numbers $\beth_\alpha$ are defined by transfinite recursion: $\beth_0=\aleph_0$ $\beth_{\alpha+1}=2^{\beth_\alpha}$ $\beth_\lambda=\sup_{\alpha\lt\lambda}\beth_\alpha$, for limit ordinals $\lambda$ Thus, the beth numbers are the cardinalities arising from iterating the power set operation. It follows by a simple recursive argument that $|V_{\omega+\alpha}|=\beth_\alpha$. Beth one The number $\beth_1$ is $2^{\aleph_0}$, the cardinality of the power set $P(\aleph_0)$, which is the same as the continuum. The continuum hypothesis is equivalent to the assertion that $\aleph_1=\beth_1$. The generalized continuum hypothesis is equivalent to the assertion that $\beth_\alpha=\aleph_\alpha$ for all ordinals $\alpha$. Beth omega The cardinal $\beth_\omega$ is the smallest uncountable cardinal exhibiting the interesting property that whenever a set $X$ has cardinality less than $\beth_\omega$, then the power set $P(X)$ also has size less than $\beth_\omega$. Strong limit cardinal More generally, a cardinal $\kappa$ is a strong limit cardinal if whenever $\gamma\lt\kappa$, then $2^\gamma\lt\kappa$. Thus, the strong limit cardinals are those cardinals closed under the exponential operation. The strong limit cardinals are precisely the cardinals of the form $\beth_\lambda$ for a limit ordinal $\lambda$. Beth fixed point A cardinal $\kappa$ is a $\beth$-fixed point when $\kappa=\beth_\kappa$. Just as in the construction of aleph fixed points, we may similarly construct beth fixed points: begin with any cardinal $\beta_0$ and let $\beta_{n+1}=\beth_{\beta_n}$; it follows that $\kappa=\sup_n\beta_n$ is a $\beth$-fixed point, since $\beth_\kappa=\sup_n\beth_{\beta_n}=\sup_n\beta_{n+1}=\kappa$.
One may similarly construct $\beth$-fixed points of any desired cardinality, and indeed, the class of $\beth$-fixed points are precisely the closure points of the function $\alpha\mapsto\beth_\alpha$ and therefore form a closed unbounded proper class of cardinals. Every $\beth$-fixed point is an $\aleph$-fixed point as well. Since every model of ZFC satisfies the existence of a $\beth$-fixed point, it follows that no model of ZFC satisfies $\forall\alpha >0(\beth_\alpha>\aleph_\alpha)$.
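The recursion defining the beth numbers mirrors the finite levels of the cumulative hierarchy, where $|V_0|=0$ and $|V_{n+1}|=2^{|V_n|}$. A small illustrative computation of this finite analogue (the transfinite values are of course not computable):

```python
# |V_0| = 0 and |V_{n+1}| = 2^{|V_n|}: the finite analogue of the
# recursion beth_{alpha+1} = 2^{beth_alpha}
sizes = [0]
for _ in range(4):
    sizes.append(2 ** sizes[-1])

assert sizes == [0, 1, 2, 4, 16]  # |V_5| = 2^16 = 65536, and so on
```

The doubly exponential growth already visible here is what makes $\beth_\omega$ a strong limit cardinal in the transfinite setting.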
By Schaefer's dichotomy theorem, this is NP-complete. Consider the case where all clauses have 2 or 3 literals in them; then we can consider this as a constraint satisfaction problem over a set $\Gamma$ of relations of arity 3. In particular, the relations $R(x,y,z)$ are the following: $x \lor y$, $x \lor \neg y$, $\neg x \lor \neg y$, $x \oplus y \oplus z$, $x \oplus y \oplus \neg z$. Now apply Schaefer's dichotomy theorem, in its modern form. Check each of the six operations to see whether it is a polymorphism: Unary 0: Not a polymorphism of $x \lor y$. Unary 1: Not a polymorphism of $\neg x \lor \neg y$. Binary AND: Not a polymorphism of $x \lor y$. (Consider $(0,1,0)$ and $(1,0,0)$; they both satisfy the relation, but their pointwise-AND $(0,0,0)$ doesn't.) Binary OR: Not a polymorphism of $\neg x \lor \neg y$. (Consider $(0,1,0)$ and $(1,0,0)$; they satisfy the relation, but $(1,1,0)$ doesn't.) Ternary majority: Not a polymorphism of $x \oplus y \oplus z$. (Consider $(0,0,1)$ and $(0,1,0)$ and $(1,0,0)$; they satisfy the relation, but their majority $(0,0,0)$ doesn't.) Ternary minority: Not a polymorphism of $x \lor y$. (Consider $(0,1,0)$, $(1,0,0)$, and $(1,1,0)$; they satisfy the relation, but their minority $(0,0,0)$ doesn't.) It follows that this problem is NP-complete, even if you restrict all the XOR clauses to be of length at most 3. On the other hand, if all the XOR clauses are restricted to be of length at most 2, then this is in P. In particular $(x \oplus y)$ is equivalent to $(x \lor y) \land (\neg x \lor \neg y)$, so any such formula is equivalent to a 2SAT formula, whose satisfiability can be determined in polynomial time.
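The polymorphism checks above are all finite and can be verified mechanically. A minimal brute-force sketch (helper names are mine, not standard), encoding each relation as its set of satisfying triples and applying each operation coordinate-wise:

```python
from itertools import product

# relations over {0,1}^3, given as sets of satisfying assignments (x, y, z)
def rel(pred):
    return {t for t in product([0, 1], repeat=3) if pred(*t)}

relations = [
    rel(lambda x, y, z: x or y),                   # x OR y
    rel(lambda x, y, z: x or not y),               # x OR NOT y
    rel(lambda x, y, z: (not x) or (not y)),       # NOT x OR NOT y
    rel(lambda x, y, z: x ^ y ^ z),                # x XOR y XOR z
    rel(lambda x, y, z: x ^ y ^ (1 - z)),          # x XOR y XOR NOT z
]

# the six Schaefer operations, as (arity, function) pairs
ops = {
    "const 0":  (1, lambda a: 0),
    "const 1":  (1, lambda a: 1),
    "AND":      (2, lambda a, b: a & b),
    "OR":       (2, lambda a, b: a | b),
    "majority": (3, lambda a, b, c: (a + b + c) >= 2),
    "minority": (3, lambda a, b, c: (a + b + c) % 2),  # XOR of the three
}

def is_polymorphism(arity, op, R):
    # op applied coordinate-wise to any `arity` satisfying tuples
    # must yield another satisfying tuple
    for rows in product(R, repeat=arity):
        image = tuple(int(op(*col)) for col in zip(*rows))
        if image not in R:
            return False
    return True

# as claimed, every operation fails on at least one of the relations
for name, (arity, op) in ops.items():
    assert not all(is_polymorphism(arity, op, R) for R in relations)
```

By Schaefer's criterion, a Boolean CSP whose relation set admits none of these six polymorphisms is NP-complete, which is exactly the case checked here.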
Equivalence of Definitions of Algebraically Closed Field Contents Theorem Let $K$ be a field. The following definitions of $K$ being algebraically closed are equivalent: $(1)$: The only algebraic field extension of $K$ is $K$ itself. $(2)$: Every irreducible polynomial over $K$ has degree $1$. $(3)$: Every non-constant polynomial over $K$ has a root in $K$. Proof Definition $(1)$ implies Definition $(2)$ Let $K$ be algebraically closed by definition 1. Let $f$ be an irreducible polynomial over $K$. By Principal Ideal of Principal Ideal Domain is of Irreducible Element iff Maximal, the ideal $\left\langle{f}\right\rangle$ generated by $f$ is maximal. Hence $L = K \left[{X}\right] / \left\langle{f}\right\rangle$ is a field, and $L$ is a field extension of $K$. Now: $L = \left\{{g + \left\langle{f}\right\rangle: g \in K \left[{X}\right]}\right\}$ By polynomial division: $\forall g \in K \left[{X}\right]: \exists q, r \in K \left[{X}\right]: g = q f + r, \operatorname{deg} r < \operatorname{deg} f =: n$ Therefore: $L = \left\{{r + \left\langle{f}\right\rangle: r \in K \left[{X}\right],\ \operatorname{deg} r < n}\right\}$ so $1 + \left\langle{f}\right\rangle, \ldots, X^{n-1} + \left\langle{f}\right\rangle$ span $L$ over $K$. Thus $L$ is a finite, hence algebraic, extension of $K$. Also $K \subseteq L$. So by hypothesis $K = L$. This implies: $\left[{L : K}\right] = 1$ where $\left[{L : K}\right]$ is the degree of $L$ over $K$. Hence $n = \operatorname{deg} f = 1$. Thus $K$ is algebraically closed by definition 2. $\Box$ Definition $(2)$ implies Definition $(3)$ Let $K$ be algebraically closed by definition 2. Let $f \in K \left[{X}\right]$ be a non-constant polynomial. From Polynomial Forms over Field form Principal Ideal Domain, $K \left[{X}\right]$ is a principal ideal domain. From Principal Ideal Domain is Unique Factorization Domain, $K \left[{X}\right]$ is a unique factorization domain. So $f$ can be factorized as $f = u g_1 \cdots g_r$ such that: $u$ is a unit and: $g_i$ is irreducible for $i = 1, \ldots, r$. By hypothesis, $g_1$ has degree $1$, say $g_1 = a X + b$ with $a \ne 0$. Then $-b/a$ is a root of $g_1$, and hence of $f$. Thus $K$ is algebraically closed by definition 3. $\Box$ Definition $(3)$ implies Definition $(1)$ Let $K$ be algebraically closed by definition 3. Let $L / K$ be an algebraic field extension of $K$. Let $\alpha \in L$.
Let $\mu_\alpha$ be the minimal polynomial of $\alpha$ over $K$. By hypothesis, $\mu_\alpha$ has a root $\beta \in K$. Therefore by the Polynomial Factor Theorem: $\mu_\alpha = \left({X - \beta}\right) g$ for some $g \in K \left[{X}\right]$. Since $\mu_\alpha$ is irreducible and monic, $g = 1$, so: $\mu_\alpha = X - \beta$ Also: $\mu_\alpha \left({\alpha}\right) = \alpha - \beta = 0$ so $\alpha = \beta$. Therefore $\alpha \in K$. Therefore $L = K$ as required. Thus $K$ is algebraically closed by definition 1. $\blacksquare$
Infinity The Greeks had already noted that there are two ways of considering infinity. Potential infinity is what we consider when we say that counting never ends. Whatever natural number you can think of, there is a bigger number. Formally $$(\forall x\in\mathbb{N})(\exists y) (y>x),$$ and this is not really deniable. Actual infinity is what happens when we switch the order: there is a number which is bigger than any natural number you can think of. Formally $$(\exists y)(\forall x\in\mathbb{N})(y>x),$$ and this naturally implies that $y$ cannot be a member of $\mathbb{N}$. The existence of an actual infinity is philosophically nontrivial and not accepted by all mathematicians. Its existence cannot be proven; it is given axiomatically (see ZFC). The real question, however, is: given a collection, how does one determine whether it is finite or not? Certainly, if one can count all the elements of a collection, then the collection is finite, but who can count up to a googol? Yet it is finite. Galileo noticed that there are as many even numbers as there are positive integers. To see this without formal machinery: imagine the collection of all positive integers. Don't try to imagine them individually; imagine them as a completed collection. Now multiply them all by 2. There are no leftovers. This violates the Greek principle that the whole is greater than the part. This leads to Dedekind's characterisation: A finite set cannot be in one-to-one correspondence with a proper subset. An infinite set is a set that can be in one-to-one correspondence with a proper subset. In this sense, once the existence of the set of all natural numbers is accepted, $\mathbb{N}$ is infinite, since it is possible to map $n$ to $n+1$, thus providing a one-to-one correspondence between $\mathbb{N}$ and $\mathbb{N}\setminus\{0\}$.
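Galileo's pairing can be made concrete: the map $n \mapsto 2n$ matches the positive integers with the even positive integers, a proper subset, with no leftovers. A small Python sketch checking this on an initial segment (illustration only; a finite check cannot of course prove the infinite statement):

```python
# n -> 2n pairs the positive integers with the even positive integers
N = 1000
naturals = list(range(1, N + 1))
evens = [2 * n for n in naturals]

# the map is injective: no two naturals share an image
assert len(set(evens)) == len(naturals)

# every even number up to 2N is hit exactly once: no leftovers
assert evens == list(range(2, 2 * N + 1, 2))
```

The same idea with $n \mapsto n+1$ pairs $\mathbb{N}$ with $\mathbb{N}\setminus\{0\}$, which is Dedekind's criterion for infinitude.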
The Bernstein operator maps $f\in C[0,1]$ to its Bernstein polynomial $B_n f$. The eigenvalues and eigenfunctions of the Bernstein operator on $C[0,1]$ have been described in [1]. A similar description has been obtained for the $q$-Bernstein polynomials in [2]. The study of $q$-Bernstein polynomials in the case $0<q<1$ leads to the following definition. Definition. Let $0<q< 1.$ The limit $q$-Bernstein operator on $C[0,1]$ is given by:$$ B_{\infty,q}:f\mapsto B_{\infty,q}f,$$where$$(B_{\infty,q}f)(x) = \left\{\begin{array}{ll} \displaystyle \prod_{j=0}^{\infty}(1-q^jx)\cdot \sum_{k=0}^{\infty}\frac{f(1-q^k)\,x^k}{(1-q)\dots(1-q^k)},& x\in [0,1),\\f(1), & x=1.\end{array}\right.$$ Problem. Find all $f\in C[0,1]$ such that$$B_{\infty,q}f=\lambda f,\;\;\lambda \in {\bf C}\setminus \{0\}.$$ Conjecture: If $B_{\infty,q}f=\lambda f,\;\lambda \neq 0,$ then $f$ is a polynomial and $\lambda\in \{q^{m(m-1)/2}\}_{m=0}^{\infty}.$ Remark. The conjecture has been proved under some additional conditions on the smoothness of $f$ at $1$ (for example, for $f\in {\rm Lip}\,\alpha$) in [3], Corollary 5.6. [1] S. Cooper, S. Waldron, The Eigenstructure of the Bernstein Operator, J. Approx. Theory, 105, 2000, 133-165. [2] S. Ostrovska, M. Turan, On the eigenvectors of the $q$-Bernstein operators, Mathematical Methods in the Applied Sciences, Vol. 37, Issue 4 (2014), pp. 562-570. [3] S. Ostrovska, On the improvement of analytic properties under the limit $q$-Bernstein operator, J. Approx. Theory, 138, 2006, 37-53.
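The operator reproduces constant and linear functions, which correspond to the eigenvalue $q^{m(m-1)/2} = 1$ for $m = 0, 1$ in the conjectured spectrum (this follows from Euler's $q$-series identity). A numerical sanity check of the definition, truncating the infinite product and series (my own illustrative sketch, not from the cited papers):

```python
def limit_q_bernstein(f, x, q, terms=200):
    """Evaluate (B_{infty,q} f)(x) for 0 <= x < 1 by truncating
    the infinite product and series in the definition."""
    if not (0 <= x < 1 and 0 < q < 1):
        raise ValueError("need 0 <= x < 1 and 0 < q < 1")
    # prefactor: prod_{j>=0} (1 - q^j x)
    prod = 1.0
    for j in range(terms):
        prod *= 1 - q**j * x
    # series: sum_k f(1 - q^k) x^k / ((1-q)(1-q^2)...(1-q^k))
    total, coef = 0.0, 1.0  # coef = x^k / ((1-q)...(1-q^k)), starting at k = 0
    for k in range(terms):
        total += f(1 - q**k) * coef
        coef *= x / (1 - q**(k + 1))
    return prod * total

q, x = 0.5, 0.3
# B_{infty,q} fixes constants and linear functions (eigenvalue 1)
assert abs(limit_q_bernstein(lambda t: 1.0, x, q) - 1.0) < 1e-9
assert abs(limit_q_bernstein(lambda t: t, x, q) - x) < 1e-9
```

The truncation converges quickly because the series coefficient ratio tends to $x < 1$ and $q^j \to 0$ in the product.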
Let $R$ be a ring with identity (not necessarily commutative) and $R[x]$ be a ring of polynomials over $R$. We say that a ring $S$ is an extension of $R$ if there is a subring $\tilde{R}$ in $S$ isomorphic to $R$. Let $S$ be an extension of $R$, and $$\phi: R\to \tilde{R}\subset S$$ be a ring isomorphism. We say that a polynomial $f(x) = \sum\limits_{j\geq 0}f_jx^j\in R[x]$ has a root $\alpha\in S$ if$$\sum\limits_{j\geq 0}\phi(f_j)\alpha^j = 0.$$ In the case where $R$ is a commutative ring, every monic polynomial $f(x)\in R[x]$ has a root $[x]_f$ in the extension $S = R[x]/R[x]f(x)$ of $R$. In the case where $R$ is not commutative, the set $R[x]/R[x]f(x)$ is a left $R[x]$-module but not a ring, because the ideal $R[x]f(x)$ is not a two-sided ideal, but only one-sided. Also, in the non-commutative case there are examples where the two-sided ideal generated by $f(x)$, that is $R[x]f(x)R[x]$, is equal to $R[x]$, and in this case $R[x]/R[x]f(x)R[x]$ is isomorphic to the zero ring. I want to prove that for every ring with identity $R$ and every monic polynomial $f(x)$ over $R$ there exists an extension $S$ of $R$ such that $f(x)$ has a root in $S$.
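For the commutative case mentioned above, the root in an extension can be represented concretely: the companion matrix of a monic $f$ lives in the matrix ring over $R$ (an extension of $R$ via scalar matrices) and is annihilated by $f$. An illustrative sketch over $R = \mathbb{Z}$ (this does not address the noncommutative question):

```python
def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + c_{n-1} x^{n-1} + ... + c_0, given [c_0, ..., c_{n-1}]."""
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1           # subdiagonal of ones
    for i in range(n):
        C[i][n - 1] = -coeffs[i]  # last column: -c_i
    return C

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(coeffs, C):
    """Evaluate x^n + c_{n-1} x^{n-1} + ... + c_0 at the matrix C."""
    n = len(C)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    result = [[coeffs[0] * I[i][j] for j in range(n)] for i in range(n)]
    P = I
    for k in range(1, n + 1):
        P = mat_mul(P, C)
        c = 1 if k == n else coeffs[k]
        result = [[result[i][j] + c * P[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# f(x) = x^2 - x - 1: its companion matrix [[0, 1], [1, 1]] is a root of f
f = [-1, -1]  # c_0 = -1, c_1 = -1
C = companion(f)
assert poly_at_matrix(f, C) == [[0, 0], [0, 0]]
```

This is the same root as $[x]_f$ in $R[x]/R[x]f(x)$, written in the basis $1, x, \ldots, x^{n-1}$.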
2019-09-20 08:41 Search for the $^{73}\mathrm{Ga}$ ground-state doublet splitting in the $\beta$ decay of $^{73}\mathrm{Zn}$ / Vedia, V (UCM, Madrid, Dept. Phys.) ; Paziy, V (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Swierk) ; Walters, W B (Maryland U., Dept. Chem.) ; Aprahamian, A (Notre Dame U.) ; Bernards, C (Cologne U. ; Yale U. (main)) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Bucher, B (Notre Dame U. ; LLNL, Livermore) ; Chiara, C J (Maryland U., Dept. Chem. ; Argonne, PHY) et al. The existence of two close-lying nuclear states in $^{73}$Ga has recently been experimentally determined: a 1/2$^−$ spin-parity for the ground state was measured in a laser spectroscopy experiment, while a J$^{\pi} = 3/2^−$ level was observed in transfer reactions. This scenario is supported by Coulomb excitation studies, which set a limit for the energy splitting of 0.8 keV. [...] 2017 - 13 p. - Published in : Phys. Rev. C 96 (2017) 034311 2019-09-20 08:41 Search for shape-coexisting 0$^+$ states in $^{66}$Ni from lifetime measurements / Olaizola, B (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Warsaw) ; Poves, A (Madrid, Autonoma U.) ; Nowacki, F (Strasbourg, IPHC) ; Aprahamian, A (Notre Dame U.) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Cal-González, J (UCM, Madrid, Dept. Phys.) ; Ghiţa, D (Bucharest, IFIN-HH) ; Köster, U (Laue-Langevin Inst.) et al. The lifetime of the 0$_3^+$ state in $^{66}$Ni, two neutrons below the $N=40$ subshell gap, has been measured. The transition $B(E2;0_3^+ \rightarrow 2_1^+)$ is one of the most hindered E2 transitions in the Ni isotopic chain and it implies that, unlike $^{68}$Ni, there is a spherical structure at low excitation energy. [...] 2017 - 6 p. - Published in : Phys. Rev.
C 95 (2017) 061303 2019-09-17 07:00 Laser spectroscopy of neutron-rich tin isotopes: A discontinuity in charge radii across the $N=82$ shell closure / Gorges, C (Darmstadt, Tech. Hochsch.) ; Rodríguez, L V (Orsay, IPN) ; Balabanski, D L (Bucharest, IFIN-HH) ; Bissell, M L (Manchester U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Garcia Ruiz, R F (Leuven U. ; CERN ; Manchester U.) ; Georgiev, G (Orsay, IPN) ; Gins, W (Leuven U.) ; Heylen, H (Heidelberg, Max Planck Inst. ; CERN) et al. The change in mean-square nuclear charge radii $\delta \left \langle r^{2} \right \rangle$ along the even-A tin isotopic chain $^{108-134}$Sn has been investigated by means of collinear laser spectroscopy at ISOLDE/CERN using the atomic transitions $5p^2\,{}^1S_0 \rightarrow 5p6s\,{}^1P_1$ and $5p^2\,{}^3P_0 \rightarrow 5p6s\,{}^3P_1$. With the determination of the charge radius of $^{134}$Sn and corrected values for some of the neutron-rich isotopes, the evolution of the charge radii across the $N=82$ shell closure is established. [...] 2019 - 7 p. - Published in : Phys. Rev. Lett. 122 (2019) 192502 2019-09-17 07:00 Radioactive boron beams produced by isotope online mass separation at CERN-ISOLDE / Ballof, J (CERN ; Mainz U., Inst. Kernchem.) ; Seiffert, C (CERN ; Darmstadt, Tech. U.) ; Crepieux, B (CERN) ; Düllmann, Ch E (Mainz U., Inst. Kernchem. ; Darmstadt, GSI ; Helmholtz Inst., Mainz) ; Delonca, M (CERN) ; Gai, M (Connecticut U. LNS Avery Point Groton) ; Gottberg, A (CERN) ; Kröll, T (Darmstadt, Tech. U.) ; Lica, R (CERN ; Bucharest, IFIN-HH) ; Madurga Flores, M (CERN) et al. We report on the development and characterization of the first radioactive boron beams produced by the isotope mass separation online (ISOL) technique at CERN-ISOLDE.
Despite the long history of the ISOL technique which exploits thick targets, boron beams have up to now not been available. [...] 2019 - 11 p. - Published in : Eur. Phys. J. A 55 (2019) 65 2019-09-17 07:00 Inverse odd-even staggering in nuclear charge radii and possible octupole collectivity in $^{217,218,219}\mathrm{At}$ revealed by in-source laser spectroscopy / Barzakh, A E (St. Petersburg, INP) ; Cubiss, J G (York U., England) ; Andreyev, A N (York U., England ; JAEA, Ibaraki ; CERN) ; Seliverstov, M D (St. Petersburg, INP ; York U., England) ; Andel, B (Comenius U.) ; Antalic, S (Comenius U.) ; Ascher, P (Heidelberg, Max Planck Inst.) ; Atanasov, D (Heidelberg, Max Planck Inst.) ; Beck, D (Darmstadt, GSI) ; Bieroń, J (Jagiellonian U.) et al. Hyperfine-structure parameters and isotope shifts for the 795-nm atomic transitions in $^{217,218,219}$At have been measured at CERN-ISOLDE, using the in-source resonance-ionization spectroscopy technique. Magnetic dipole and electric quadrupole moments, and changes in the nuclear mean-square charge radii, have been deduced. [...] 2019 - 9 p. - Published in : Phys. Rev. C 99 (2019) 054317 2019-09-17 07:00 Investigation of the $\Delta n = 0$ selection rule in Gamow-Teller transitions: The $\beta$-decay of $^{207}$Hg / Berry, T A (Surrey U.) ; Podolyák, Zs (Surrey U.) ; Carroll, R J (Surrey U.) ; Lică, R (CERN ; Bucharest, IFIN-HH) ; Grawe, H ; Timofeyuk, N K (Surrey U.) ; Alexander, T (Surrey U.) ; Andreyev, A N (York U., England) ; Ansari, S (Cologne U.) ; Borge, M J G (CERN ; Madrid, Inst. Estructura Materia) et al. Gamow-Teller $\beta$ decay is forbidden if the number of nodes in the radial wave functions of the initial and final states is different.
This $\Delta n=0$ requirement plays a major role in the $\beta$ decay of heavy neutron-rich nuclei, affecting the nucleosynthesis through the increased half-lives of nuclei on the astrophysical $r$-process pathway below both $Z=50$ (for $N>82$) and $Z=82$ (for $N>126$). [...] 2019 - 5 p. - Published in : Phys. Lett. B 793 (2019) 271-275 2019-09-14 06:30 Precision measurements of the charge radii of potassium isotopes / Koszorús, Á (KU Leuven, Dept. Phys. Astron.) ; Yang, X F (KU Leuven, Dept. Phys. Astron. ; Peking U., SKLNPT) ; Billowes, J (Manchester U.) ; Binnersley, C L (Manchester U.) ; Bissell, M L (Manchester U.) ; Cocolios, T E (KU Leuven, Dept. Phys. Astron.) ; Farooq-Smith, G J (KU Leuven, Dept. Phys. Astron.) ; de Groote, R P (KU Leuven, Dept. Phys. Astron. ; Jyvaskyla U.) ; Flanagan, K T (Manchester U.) ; Franchoo, S (Orsay, IPN) et al. Precision nuclear charge radii measurements in the light-mass region are essential for understanding the evolution of nuclear structure, but their measurement represents a great challenge for experimental techniques. At the Collinear Resonance Ionization Spectroscopy (CRIS) setup at ISOLDE-CERN, a laser frequency calibration and monitoring system was installed and commissioned through the hyperfine spectra measurement of $^{38–47}$K. [...] 2019 - 11 p. - Published in : Phys. Rev. C 100 (2019) 034304 2019-09-12 09:23 Evaluation of high-precision atomic masses of A ∼ 50-80 and rare-earth nuclides measured with ISOLTRAP / Huang, W J (CSNSM, Orsay ; Heidelberg, Max Planck Inst.) ; Atanasov, D (CERN) ; Audi, G (CSNSM, Orsay) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cakirli, R B (Istanbul U.) ; Herlert, A (FAIR, Darmstadt) ; Kowalska, M (CERN) ; Kreim, S (Heidelberg, Max Planck Inst. ; CERN) ; Litvinov, Yu A (Darmstadt, GSI) ; Lunney, D (CSNSM, Orsay) et al.
High-precision mass measurements of stable and beta-decaying nuclides $^{52-57}$Cr, $^{55}$Mn, $^{56,59}$Fe, $^{59}$Co, $^{75, 77-79}$Ga, and the lanthanide nuclides $^{140}$Ce, $^{140}$Nd, $^{160}$Yb, $^{168}$Lu, $^{178}$Yb have been performed with the Penning-trap mass spectrometer ISOLTRAP at ISOLDE/CERN. The new data are entered into the Atomic Mass Evaluation and improve the accuracy of masses along the valley of stability, strengthening the so-called backbone. [...] 2019 - 9 p. - Published in : Eur. Phys. J. A 55 (2019) 96 2019-09-05 06:35 Nuclear charge radii of $^{62−80}$Zn and their dependence on cross-shell proton excitations / Xie, L (Manchester U.) ; Yang, X F (Peking U., SKLNPT ; Leuven U.) ; Wraith, C (Liverpool U.) ; Babcock, C (Liverpool U.) ; Bieroń, J (Jagiellonian U.) ; Billowes, J (Manchester U.) ; Bissell, M L (Manchester U. ; Leuven U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Filippin, L (U. Brussels (main)) et al. Nuclear charge radii of $^{62−80}$Zn have been determined using collinear laser spectroscopy of bunched ion beams at CERN-ISOLDE. The subtle variations of observed charge radii, both within one isotope and along the full range of neutron numbers, are found to be well described in terms of the proton excitations across the $Z=28$ shell gap, as predicted by large-scale shell model calculations. [...] 2019 - 5 p. - Published in : Phys. Lett. B 797 (2019) 134805 2019-09-04 06:18 Electromagnetic properties of low-lying states in neutron-deficient Hg isotopes: Coulomb excitation of $^{182}$Hg, $^{184}$Hg, $^{186}$Hg and $^{188}$Hg / Wrzosek-Lipska, K (Warsaw U., Heavy Ion Lab ; Leuven U.) ; Rezynkina, K (Leuven U. ; U. Strasbourg) ; Bree, N (Leuven U.) ; Zielińska, M (Warsaw U., Heavy Ion Lab ; IRFU, Saclay) ; Gaffney, L P (Liverpool U. ; Leuven U. ; CERN ; West Scotland U.)
; Petts, A (Liverpool U.) ; Andreyev, A (Leuven U. ; York U., England) ; Bastin, B (Leuven U. ; GANIL) ; Bender, M (Lyon, IPN) ; Blazhev, A (Cologne U.) et al. The neutron-deficient mercury isotopes serve as a classical example of shape coexistence, whereby at low energy near-degenerate nuclear states characterized by different shapes appear. The electromagnetic structure of even-mass $^{182-188}$Hg isotopes was studied using safe-energy Coulomb excitation of neutron-deficient mercury beams delivered by the REX-ISOLDE facility at CERN. [...] 2019 - 23 p. - Published in : Eur. Phys. J. A 55 (2019) 130
Suppose $A\subseteq X$. Prove that the boundary $\partial A$ of $A$ is closed in $X$. My knowledge: $A^{\circ}$ is the interior, and $A^{\circ}\subseteq A \subseteq \overline{A}\subseteq X$. My proof was as follows: To show $\partial A = \overline{A} \setminus A^{\circ}$ is closed, we have to show that the complement $(\partial A)^C = X\setminus\partial A = X \setminus (\overline{A} \setminus A^{\circ})$ is open in $X$. This is the set $A^{\circ}\cup (X \setminus\overline{A})$. Then I claim that $A^{\circ}$ is open by definition ($a\in A^{\circ} \implies \exists \epsilon>0: B_\epsilon(a)\subseteq A$). As this is true for all $a$, by definition of open sets, $A^{\circ}$ is open. My next claim is that $X \setminus \overline{A}$ is open. This is true because $\overline{A}$ is closed in $X$, hence its complement $X \setminus \overline{A}$ is open in $X$. My concluding claims are: we have a union of two open sets in $X$, and by a proposition in my textbook, this set is open in $X$. Therefore its complement is closed, which is what we had to show. What about this?
If $A\subseteq B$ are affine domains over an algebraically closed field $k$ of characteristic zero, such that $Q(A)$ is algebraically closed in $Q(B)$, how can one show that $Q(A)$ is also algebraically closed in the field of fractions of $Q(A)\otimes_kB$? The history behind this problem: Starting from the fact that $Q(A)$ is algebraically closed in $Q(B)$, I intend to conclude that a general fiber of the morphism Spec $B \rightarrow$ Spec $A$ is irreducible. To do so, by first Bertini Theorem, as in Shafarevich's Basic Algebraic Geometry, vol. 1, it suffices to show that $Q(A)\otimes_k B$ is geometrically irreducible over $Q(A)$, which, in turn, by Zariski-Samuel's Commutative Algebra, vol. 2 (see page 230, thm. 39), is equivalent to showing that $Q(A)$ is algebraically closed in the field of fractions of $Q(A)\otimes_kB$. Now, for the proof it's easy to see that $Q(A)$ is alg. closed in $C:=Q(A)\otimes_kB$. But I cannot settle the case where some $x/y\in Q(C)$ might be algebraic over $Q(A)$, where both $x$ and $y$ go to 0 under the natural map $Q(A)\otimes_k B \rightarrow Q(B)$. By the way, if $B=k[x_1,...,x_n]/\mathfrak{p}$, with $\mathfrak{p}\in$ Spec $k[x_1,...,x_n]$, then the field of fractions of $Q(A)\otimes_k B$ is equal to the residue field of $Q(A)[x_1,...,x_n]$ at the point $\mathfrak{p}Q(A)[x_1,...,x_n]$.
The lower attic From Cantor's Attic Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent. $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic stable ordinals The ordinals of infinite time Turing machines, including admissible ordinals and relativized Church-Kleene $\omega_1^x$ Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals the omega one of chess $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ = the supremum of the game values for white of all positions in infinite chess $\omega_1^{\mathfrak{Ch},c}$ = the supremum of the game values for white of the computable positions in infinite chess $\omega_1^{\mathfrak{Ch}}$ = the supremum of the game values for white of the finite positions in infinite chess the Takeuti-Feferman-Buchholz ordinal the Bachmann-Howard ordinal the large Veblen ordinal the small Veblen ordinal the Extended Veblen function the Feferman-Schütte ordinal $\Gamma_0$ $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers indecomposable ordinal the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$ Hilbert's hotel and other toys in the playroom $\omega$, the smallest infinity down to the parlour, where large finite numbers dream
Let $u$ be an element of $\mathbb{Z}[\sqrt 5]$ of norm 1, i.e. $u = r + s \sqrt 5$ with $r^2-5s^2 = 1$. Multiplication by $u$ in $\mathbb{Z}[\sqrt 5]$ turns any element $y$ of norm $44$ into another element $uy$ of norm $44$. View this multiplication operation on $\mathbb{Z}[\sqrt 5]$ as the transformation of the plane $f : (p,q) \mapsto (pr+5qs,ps+qr)$, and look for its eigenvalues: $f(\sqrt5,1) = (r\sqrt5+5s,r+\sqrt5s) = u(\sqrt5,1)$, and we have $f(- \sqrt5,1) = \frac 1u (- \sqrt5,1)$ as well. If $u>1$, this means that $f$ is an operation that, when iterated, takes elements near the line $(p = - \sqrt5 q)$ and moves them over to the line $(p = \sqrt5 q)$. Now you want to find a sector of the plane such that you can reach the whole plane by taking its images under the iterates of $f$ and $f^{-1}$. Define $g(p,q) = \frac {p + \sqrt5 q}{p - \sqrt5 q}$, which is the ratio of the coordinates of $(p,q)$ in the eigenbasis of $f$. Then $g(f(p,q)) = \frac {pr+5qs + \sqrt5 (ps+qr)}{pr+5qs - \sqrt5 (ps+qr)} = \frac{(r+\sqrt5 s)(p + \sqrt5 q)}{(r-\sqrt5 s)(p - \sqrt5 q)} = (r+\sqrt5 s)^2 g(p,q)$. Or alternately, define $g(y) = y/\overline{y}$, so that $g(uy) = uy/\overline{uy} = u^2 g(y)$. Thus if you look at any point $(p,q)$, you know you can apply $f$ or $f^{-1}$ to turn it into $(p',q')$ such that $g(p',q') \in [1 ; u^2[$. Thus, a suitable sector of the plane is the set of points $(p,q)$ such that $g(p,q) \in [1 ; u^2[$: if you find all the elements $y$ of norm $44$ such that $g(y) \in [1 ; u^2[$, then the $u^ky$ will cover all the elements of norm $44$. Finally, the good thing is that $\{y \in \mathbb{Z}[ \sqrt 5] : g(y) \in [1; u^2[, y\overline{y} \in [0; M] \}$ is a finite set, so a finite computation can give you all the elements of norm $44$ you need. In the case of $p^2-10q^2=9$, a fundamental unit is $u = 19+6\sqrt{10}$, so replace $\sqrt 5,r,s$ with $\sqrt {10},19,6$ in everything I wrote above.
In order to find all the solutions, you only need to check potential solutions in the sector of the plane between the lines $g(p,q) = 1$ and $g(p,q) = u^2$. You can look at the intersection of the line $g(p,q)=1$ with the curve $p^2-10q^2 = 9$. $g(p,q)=1$ implies that $p+\sqrt{10}q = p- \sqrt{10}q$, so $q=0$, and then the second equation has two solutions, $p=3$ and $p= -3$. It so happens that the intersection points have integer coordinates, so they give solutions to the original equation. Next, the intersection of the line $g(p,q) = u^2$ with the curve will be $u \times (3,0) = f(3,0) = (19\cdot3+60\cdot0, 6\cdot3+19\cdot0) = (57,18)$ and $u \times (-3,0) = (-57,-18)$. So you only have to look for points on the curve $p^2-10q^2=9$ with integer coordinates in the section of the curve between $(3,0)$ and $(57,18)$ (and the one between $(-3,0)$ and $(-57,-18)$, but it is essentially the same thing). You can write a naïve program:

for q = 0 to 17 do:
    let square_of_p = 9 + 10*q*q
    if square_of_p is a square, then add (sqrt(square_of_p), q) to the list of solutions

This will give you the list $\{(3,0) ; (7,2) ; (13,4)\}$. These three solutions, together with their opposites, will generate, using the forward and backward iterations of the function $f$, all the solutions in $\mathbb{Z}^2$. If you only want solutions with positive coordinates, the forward iterations of $f$ on those three solutions are enough. Also, as Gerry points out, the conjugate of $(7,2)$ generates $(13,4)$ because $f(7,-2) = (13,4)$. Had we picked a sector of the plane symmetric around the $x$-axis, we could have halved the search space thanks to that symmetry, and we would have obtained $\{(7,-2),(3,0),(7,2)\}$ instead.
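The naïve search described above can be written as a short Python sketch (the bound $q \le 17$ comes from the endpoint $(57,18)$ of the fundamental section):

```python
from math import isqrt

# Search the fundamental section of p^2 - 10 q^2 = 9 with 0 <= q <= 17, p > 0.
solutions = []
for q in range(18):
    square_of_p = 9 + 10 * q * q
    p = isqrt(square_of_p)
    if p * p == square_of_p:
        solutions.append((p, q))

print(solutions)  # [(3, 0), (7, 2), (13, 4)]
```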
One loop of this hypnotic animation represents one application of the function $f$. Each dot corresponds to one point of the plane with integer coordinates, and is moved to its image by $f$ in the course of the loop. The points are colored according to their norm (and as you can see, each of them stays on the hyperbolic branch of points sharing its norm), and I've made the yellow-ish points of norm 9 (the solutions of $x^2-10y^2 = 9$) a bit bigger. For example, the point at (3,0) is sent outside the graph, and the point at (-7,2) is sent on (13,4) (almost vanishing). You can see that there are three points going through (3,0) during the course of one loop. They correspond to three representatives of the three fundamental solutions of the equation. For each yellowish point on the curve $x^2-10y^2=9$, no matter how far along the asymptote it may be, there is an iterate of $f$ or $f^{-1}$ that sends it to one of those three fundamental solutions. In order to find all fundamental solutions, it is enough to explore only a fundamental portion of the curve (a portion whose iterates by $f$ cover the curve), for example the fundamental portion of the curve between (-7,2) and its image by $f$, (13,4). To find the solutions on that portion, you set $y=-2,-1,0,1,2,3$ and look whether there is an integer $x$ that makes a solution for each of those $y$. Whichever fundamental portion of the curve you choose, you will find 3 solutions inside it, whose images by $f$ are sent to the next three solutions in the next portion of the curve, and so on. Now there is a better procedure than the "brute search" I did to get all the solutions. It is an adaptation of the procedure to obtain a fundamental unit: Start with the equation $x^2-10y^2 = 9$, and suppose we want all the positive solutions. We observe that we must have $x > 3y$, or else $-y^2 \ge 9$, which is clearly impossible. So, replace $x$ with $x_1 + 3y$. We get the equation $x_1^2 + 6x_1 y - y^2 = 9$.
We observe that we must have $y > 6x_1$, or else $x_1^2 \le 9$. In this case we quickly get the three small solutions $(x_1,y) = (1,2),(1,4),(3,0)$ which correspond to the solutions $(x,y) = (7,2),(13,4),(3,0)$. Otherwise, continue and replace $y$ with $y_1 + 6x_1$. We get the equation $x_1^2 - 6x_1y_1 - y_1^2 = 9$. We observe that we must have $x_1 > 6y_1$, or else $-y_1^2 \ge 9$, which is clearly impossible. So, replace $x_1$ with $x_2 + 6y_1$. We get the equation $x_2^2 + 6x_2y_1 - y_1^2 = 9$. But we already encountered that equation so we know how to solve it.
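As a cross-check on the computations in this answer (my own illustration, not part of the original), the map $f$ and its inverse, multiplication by $u^{-1} = 19 - 6\sqrt{10}$, can be iterated directly on solutions:

```python
# Multiplication by u = 19 + 6*sqrt(10) viewed as a map on (p, q).
def f(p, q):
    return (19 * p + 60 * q, 6 * p + 19 * q)

# Multiplication by u^{-1} = 19 - 6*sqrt(10); valid since 19^2 - 10*6^2 = 1.
def f_inv(p, q):
    return (19 * p - 60 * q, -6 * p + 19 * q)

def norm(p, q):
    return p * p - 10 * q * q

# f maps solutions of p^2 - 10 q^2 = 9 to solutions of the same equation:
sol = (7, 2)
for _ in range(4):
    sol = f(*sol)
    assert norm(*sol) == 9
```

For example, `f(3, 0)` gives `(57, 18)` and `f(7, -2)` gives `(13, 4)`, matching the values quoted above.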
Answer The value of $y$ here is $$y=\frac{5\pi}{6}$$ Work Step by Step $\DeclareMathOperator{\arccot}{arccot}$ $$y=\arccot (-\sqrt 3)$$ First, we see that the domain of the inverse cotangent function is $(-\infty,\infty)$, so in fact when we deal with the inverse cotangent function, we do not need to do this checking step. The range of the inverse cotangent function is $(0,\pi)$. In other words, $y\in(0,\pi)$. We can rewrite $y=\arccot(-\sqrt 3)$ as $\cot y=-\sqrt3$. We know that $$\cot\frac{\pi}{6}=\sqrt 3$$ which means $$-\cot\frac{\pi}{6}=-\sqrt 3$$ $$\cot\left(-\frac{\pi}{6}\right)=-\sqrt 3$$ (If you need a proof here, take $\cot(-X)=\frac{\cos(-X)}{\sin(-X)}$. We know that $\cos(-X)=\cos X$ and $\sin(-X)=-\sin X$. That means $\cot(-X)=\frac{\cos X}{-\sin X}=-\cot X$.) However, $-\frac{\pi}{6}$ does not belong to the range $(0,\pi)$. So we must take another value with the same cotangent that lies in quadrant 1 or 2, i.e. within the range. Since $\cot(X+\pi)=\cot X$, $\frac{5\pi}{6}$ is such a value. Therefore, the exact value of $y$ here is $$y=\frac{5\pi}{6}$$
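As a quick numerical check (not part of the original solution), the inverse cotangent with range $(0,\pi)$ can be written as $\pi/2 - \arctan x$:

```python
import math

def arccot(x):
    # Principal value with range (0, pi), matching the convention above.
    return math.pi / 2 - math.atan(x)

print(arccot(-math.sqrt(3)) / math.pi)  # 0.8333..., i.e. 5*pi/6
```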
Dirac's theorem states that: "For a graph G with N vertices, if the degree of each vertex is at least N/2 then the graph has a Hamilton circuit." Can the converse be said: if a graph has a Hamilton circuit, is the degree of each vertex at least N/2? Given a natural number $n \geq 1$, I am looking for a Boolean circuit over $2n$ variables, $\varphi(x_1, y_1, \dots, x_n, y_n)$, such that the output is true if and only if the assignment that makes it true verifies $$\sum_{i = 1}^{n} (x_i + y_i) \not\equiv n \bmod 3$$ I should specify that I am looking for a Boolean circuit, not necessarily a Boolean formula as it is usually written in Conjunctive Normal Form (CNF). This is because when written in CNF, a formula like the one before has a trivial representation where the number of clauses is approximately $\frac{4^n}{3}$, as it contains a clause for every assignment $(x_1, y_1, \dots, x_n, y_n)$ whose bits sum to a value which is congruent with $n \bmod 3$. Constructing such a formula would therefore take exponential time. I have been told that a Boolean circuit can be found for this formula that admits a representation of size polynomial in $n$. However, so far I have been unable to find it. I would use some help; thanks. I'm trying to design a sequential synchronous circuit which would display on two outputs the cyclic values 0/0, 0/1, 1/0, 0/0, 0/1 and so on. How would I design such a circuit? Thanks! Berkowitz's algorithm provides a polynomial-size circuit with logarithmic depth for the determinant of a square matrix using matrix powers. Is there a polynomial-size Boolean circuit for the $i$th bit of the determinant, and is it still of logarithmic depth? Given the following function: $$f(G,v) = \text{size of the longest simple circuit in a directed graph } G \text{ that contains } v$$ Output: the function returns a natural number or 0, which is the size of the largest simple circuit in the directed graph G.
However, I don't understand the following claim: if it is possible to compute the function $g(G,v)$ in polynomial time and it is guaranteed that $f(G,v)-5 \le g(G,v) \le f(G,v)+5$, then $P=NP$. I don't understand why. As far as I know, the longest simple circuit problem can be proved NP-complete using LongestSimplePath (by adding the remaining vertices to a new graph G' and two directed edges, and then running it) or HamCycle (by constructing an instance of LongestSimpleCycle(H, |V|) based on HamCycle(H), which can be done in polynomial time, and checking whether the longest simple cycle equals |V|, i.e. contains a Hamiltonian cycle) as the basis for the reductions. So if we know that it is possible to compute $g(G,v)$ in polynomial time, I don't understand how the guarantee $f(G,v)-5 \le g(G,v) \le f(G,v)+5$ helps deduce that P=NP. There's probably some trick to it that I cannot see, and I would appreciate your help with it. How is it possible to determine that? Suppose we have an algorithm for a decision problem with $n$-bit inputs that runs in $DTIME[f(n)]$; are there ways to convert it to circuits of $O(f(n))$ size with AND, OR and NOT gates? How about when we go from circuits to programs? I have an existing project with the following microservice architecture. Client -> API Gateway (Spring Cloud, using Hystrix as circuit breaker) -> UploadService. When uploading a small file (POST /upload/video) everything is fine. But when the file is larger, the upload time is very long and Hystrix will be OPEN and return the fallback. Does anyone have a practice for my case, or how can I set up the timeout for only the POST /upload/video request on Hystrix? tl;dr: I have a problem where I have a Boolean circuit and need to implement it with very specific single-thread primitives, such that SIMD computation is significantly cheaper after a threshold. I'm trying to optimize the implementation. Going into detail, the input is a combinational Boolean circuit (so no loops, state, etc.).
I'm implementing it in software with a rather unusual set of primitives, such that:

- logic gates must be computed one at a time, as the "engine" is single-threaded
- NOT gates are free
- AND, OR, XOR have a cost of 1 (e.g. 1 second)
- N identical gates can be evaluated at the same time for a cost of 10 plus some tiny proportional term (e.g. a batch of 20 distinct AND gates can be evaluated in 10 seconds, 50 distinct XOR gates in ~10 seconds, etc.)

The objective is to implement the circuit with the given primitives while minimizing the cost. What I tried: This problem looks vaguely related to the bin packing problem, but the differences (constraints on the order of the items, and a different cost for each "bin" depending on the number of items) make me think it's not particularly applicable. It was suggested that I use integer linear programming, which sounds like the best fit so far, but I'm not sure how to represent the problem. Specifically, I'd use binary variables to represent whether the implementation gate/batch M is used in place of the circuit gate N, but then I don't know how to express the objectives to be maximized/minimized. So I have this issue. My website uses data that is scraped from a different site (sports results). This data can update at relatively random intervals, but I do not care if my data is a bit stale; it does not have to be instant, but it should update on some regular basis. At the same time, I cannot just cache the responses from the external site: I process them and import them into a graph database so that I can do other analytics over them.
I would like to have a system like this:

interface IDataSource {
    public function getData(): array;
}

class ExternalDataSource implements IDataSource {
    // gets data from the external website - the ultimate source of truth
}

class InternalDataSource implements IDataSource {
    // gets data from my own graph database
}

class InternalImportDecorator implements IDataSource {
    private $external;

    public function __construct(ExternalDataSource $external) {
        $this->external = $external;
    }

    public function getData(): array {
        $data = $this->external->getData();
        // import the data into my internal DB
        return $data;
    }
}

class CompositeDataSource implements IDataSource {
    public function __construct(ExternalDataSource $external, InternalDataSource $internal) {
        $this->external = new InternalImportDecorator($external);
        $this->internal = $internal;
    }

    public function getData(): array // HERE I NEED HELP
    {
        if (rand(0, 100) > 95) { // in 95% of the cases, go for internal DB for data - like a weighted load-balancer somewhat
            // here I need something like "chain of responsibility" in case the internal DB is not yet populated
        } else {
            // go for the external data source, so that I can update my internal data
            // what if the external data source is not available? I need a circuit breaker with fallback to internal
            // what if I fall back to internal and the internal DB has not yet been populated
        }
    }
}

I have a general idea about the code and the composition, I just need help with one method implementation. Or maybe just some nomenclature: how is this situation properly called, so that I can google it myself? The circuit rank of a complete graph with n=4 (6 edges) is 3. The circuit rank of a complete graph with n=5 (10 edges) is 6. I think that the circuit rank of a complete graph with n=6 (15 edges) is 10? I think that the circuit rank of a complete graph with n=7 (21 edges) is 15? I don't see the pattern.
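For the circuit-rank question, the values follow from the standard formula for the circuit rank (cyclomatic number) of a connected graph, $m - n + 1$; a quick sketch:

```python
def circuit_rank_complete(n):
    # Circuit rank (cyclomatic number) of a connected graph: m - n + 1,
    # where the complete graph K_n has m = n*(n-1)/2 edges.
    m = n * (n - 1) // 2
    return m - n + 1

print([circuit_rank_complete(n) for n in range(4, 8)])  # [3, 6, 10, 15]
```

So the sequence for $n = 4, 5, 6, 7$ is $3, 6, 10, 15$, i.e. the triangular numbers $\binom{n-1}{2}$.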
Disclaimer In the process of typing up this question, I determined its solution. Since I went through the trouble of typing up the question in its entirety, I will post its answer as well. It may help out others who find themselves in the same predicament. Think of this as a sort of blog-post, if you will. The Goal Consider the mixed boundary value problem \begin{align} \frac{d}{dx}\left(k(x)\frac{du}{dx}\right)=f \text{ in }\Omega\\ u=P \text{ at } x=0\\ \frac{du}{dx}=T \text{ at } x=1 \end{align} where $P$ and $T$ are constants and $f$ is a source term. I'm using finite differences and my goal is to impose the boundary condition at $x=1$ in such a way as to achieve second order accuracy. Assume that the grid has $N+1$ equispaced points (including the boundary points) given as $x_0,x_1,...,x_N$ My Approach At the right boundary, use a 2nd order centered difference for the boundary condition \begin{equation} \frac{u_{N+1}-u_{N-1}}{2\Delta x}=T\end{equation} and the 2nd derivative operator as: \begin{align} \frac{k_{N+\frac{1}{2}}\frac{u_{N+1}-u_{N}}{\Delta x} -k_{N-\frac{1}{2}}\frac{u_{N}-u_{N-1}}{\Delta x}}{\Delta x}=f_N\end{align} We can solve for the ghost point $u_{N+1}$ in the 1st equation, substitute it into the 2nd equation and simplify. The Problem Using this discretization requires the evaluation of $k_{N+\frac{1}{2}}=k(x_N+\frac{\Delta x}{2})$, which is outside of the domain! In general, $k(x)$ is only defined within the domain and I can't/shouldn't use values outside of it. Thus, I don't think this is the right approach to achieve 2nd order accuracy. What else can I do in this case?
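For reference, carrying out the elimination described in the question (a sketch using the question's own notation): the centered boundary condition gives $u_{N+1} = u_{N-1} + 2\Delta x\, T$, and substituting this into the boundary-node operator yields

$$\frac{k_{N+\frac{1}{2}}\left(u_{N-1} + 2\Delta x\, T - u_N\right) - k_{N-\frac{1}{2}}\left(u_N - u_{N-1}\right)}{\Delta x^2} = f_N,$$

which involves only the interior unknowns $u_{N-1}$, $u_N$ and the data, but still requires the out-of-domain coefficient $k_{N+\frac{1}{2}}$, which is exactly the problem raised above.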
The Feferman-Schütte ordinal, $\Gamma_0$ The Feferman-Schütte ordinal, denoted $\Gamma_0$ ("gamma naught"), is the first ordinal fixed point of the Veblen function. It figures prominently in the ordinal analysis of the proof-theoretic strength of several mathematical theories. This page needs additional information. Veblen hierarchy Every increasing continuous ordinal function $f$ has an unbounded set of fixed points; Proof When $f$ is increasing, $f(\alpha)\geq \alpha$ for all $\alpha$; when also continuous, $$ f ( \cup_n f^n (\alpha + 1)) = \cup_n f^n (\alpha + 1) $$ is a fixed point greater than $\alpha$ Since the set of fixed points is an unbounded, well-ordered set, there is an ordinal function $\varphi^{[f]}$ listing these fixed points; it is in turn increasing and continuous. The Veblen Hierarchy is the sequence of functions $\varphi_\alpha$ defined by $\varphi_0(x) = \omega^x$ $\varphi_{\alpha + 1} = \varphi^{[\varphi_\alpha]}$ for $ 0 \lt \beta = \cup \beta $, $\varphi_\beta(x)$ enumerates the fixed points common to all $\varphi_\alpha$ for $\alpha \lt \beta$ (For $\alpha \lt \beta$, the fixed point sets of $\varphi_\alpha$ are all closed sets, and so their intersection is closed; it is unbounded because $\cup_\alpha \varphi_\alpha(t+1)$ is a common fixed point greater than $t$) In particular the function \(\varphi_1\) enumerates the epsilon numbers, i.e. \(\varphi_1(\alpha)=\varepsilon_\alpha\) The Veblen functions have the following properties: if \(\beta<\gamma\) then \(\varphi_\alpha(\beta)<\varphi_\alpha(\gamma)\) if \(\alpha<\beta\) then \(\varphi_\alpha(0)<\varphi_\beta(0)\) if \(\alpha>\gamma\) then \(\varphi_\alpha(\beta)=\varphi_\gamma(\varphi_\alpha(\beta))\) \(\varphi_\alpha(\beta)\) is an additive principal number. An ordinal \(\alpha\) is an additive principal number if \(\alpha>0\) and \(\alpha>\delta+\eta\) for all \(\delta, \eta<\alpha\). Let \(P\) denote the set of all additive principal numbers.
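As a concrete instance of this fixed-point construction (an illustrative aside): taking $f = \varphi_0$, i.e. $f(x) = \omega^x$, and $\alpha = 0$, the proof above produces

$$\varphi_1(0) = \varepsilon_0 = \cup_n \varphi_0^n(1) = \sup\{\omega,\ \omega^\omega,\ \omega^{\omega^\omega},\ \dots\},$$

the least fixed point of $x \mapsto \omega^x$.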
We define the normal form for ordinals \(\alpha\) such that \(0<\alpha<\Gamma_0=\min\{\beta|\varphi(\beta,0)=\beta\}\) \(\alpha=_{NF}\varphi_\beta(\gamma)\) if and only if \(\alpha=\varphi_\beta(\gamma)\) and \(\beta,\gamma<\alpha\) \(\alpha=_{NF}\alpha_1+\alpha_2+\cdots+\alpha_n\) if and only if \(\alpha=\alpha_1+\alpha_2+\cdots+\alpha_n\) and \(\alpha>\alpha_1\geq\alpha_2\geq\cdots\geq\alpha_n\) and \(\alpha_1,\alpha_2,...,\alpha_n\in P\) Let \(T\) denote the set of all ordinals which can be generated from the ordinal number 0 using the Veblen functions and the operation of addition \(0 \in T\) if \(\alpha=_{NF}\varphi_\beta(\gamma)\) and \(\beta,\gamma \in T\) then \(\alpha\in T\) if \(\alpha=_{NF}\alpha_1+\alpha_2+\cdots+\alpha_n\) and \(\alpha_1,\alpha_2,...,\alpha_n\in T\) then \(\alpha\in T\) For each limit ordinal number \(\alpha\in T\) we assign a fundamental sequence i.e. a strictly increasing sequence \((\alpha[n])_{n<\omega}\) such that the limit of the sequence is the ordinal number \(\alpha\) if \(\alpha=\alpha_1+\alpha_2+\cdots+\alpha_k\) then \(\alpha[n]=\alpha_1+\alpha_2+\cdots+(\alpha_k[n])\) if \(\alpha=\varphi_0(\beta+1)\) then \(\alpha[n]=\varphi_0(\beta)\times n\) if \(\alpha=\varphi_{\beta+1}(0)\) then \(\alpha[0]=0\) and \(\alpha[n+1]=\varphi_\beta(\alpha[n])\) if \(\alpha=\varphi_{\beta+1}(\gamma+1)\) then \(\alpha[0]=\varphi_{\beta+1}(\gamma)+1\) and \(\alpha[n+1]=\varphi_\beta(\alpha[n])\) if \(\alpha=\varphi_{\beta}(\gamma)\) and \(\gamma\) is a limit ordinal then \(\alpha[n]=\varphi_{\beta}(\gamma[n])\) if \(\alpha=\varphi_{\beta}(0)\) and \(\beta\) is a limit ordinal then \(\alpha[n]=\varphi_{\beta[n]}(0)\) if \(\alpha=\varphi_{\beta}(\gamma+1)\) and \(\beta\) is a limit ordinal then \(\alpha[n]=\varphi_{\beta[n]}(\varphi_{\beta}(\gamma)+1)\) The Feferman-Schütte ordinal, \(\Gamma_0\) is the least ordinal not in \(T\). 
Gamma function The Gamma function is a function enumerating ordinal numbers \(\alpha\) such that \(\varphi(\alpha,0)=\alpha\) if \(\alpha=\Gamma_0\) then \(\alpha[0]=0\) and \(\alpha[n+1]=\varphi(\alpha[n],0)\) if \(\alpha=\Gamma_{\beta+1}\) then \(\alpha[0]=\Gamma_{\beta}+1\) and \(\alpha[n+1]=\varphi(\alpha[n],0)\) if \(\alpha=\Gamma_{\beta}\) and \(\beta\) is a limit ordinal then \(\alpha[n]=\Gamma_{\beta[n]}\) References Oswald Veblen. Continuous Increasing Functions of Finite and Transfinite Ordinals. Transactions of the American Mathematical Society (1908) Vol. 9, pp.280–292
We can multiply $a$ and $n$ by adding $a$ a total of $n$ times. $$ n \times a = a + a + a + \cdots +a$$ Can we define division similarly using only addition or subtraction? To divide $60$ by $12$ using subtraction: $$\begin{align*} &60-12=48\qquad\text{count }1\\ &48-12=36\qquad\text{count }2\\ &36-12=24\qquad\text{count }3\\ &24-12=12\qquad\text{count }4\\ &12-12=0\qquad\;\text{ count }5\;. \end{align*}$$ Thus, $60\div 12=5$. You can even handle remainders: $$\begin{align*} &64-12=52\qquad\text{count }1\\ &52-12=40\qquad\text{count }2\\ &40-12=28\qquad\text{count }3\\ &28-12=16\qquad\text{count }4\\ &16-12=4\qquad\;\text{ count }5\;. \end{align*}$$ $4<12$, so $64\div 12$ is $5$ with a remainder of $4$. If $n$ is divisible by $b$ ($\frac{n}{b}$ is a whole number), then keep doing $n - b - b - b - b - b - \cdots - b$ until the value of that is $0$. The number of times you subtract $b$ is the answer. For example, $\frac{20}{4} \rightarrow 20 - 4 - 4 - 4 - 4 - 4$. We subtracted '$4$' five times, so the answer is $5$. You can also use additions. One should use results from intermediate calculations to speed up. Let us divide 63 by 12. $$ \begin{split} 12+12=24,&\qquad\textrm{count }1+1=2\\ 24+24=48,&\qquad\textrm{count }2+2=4\\ 48+24=72,&\qquad\textrm{count }4+2=6\textrm{ (exceeded 63)}\\ 48+12=60,&\qquad\textrm{count }4+1=5\textrm{ (so we try adding less)}\\ 63-60=3,&\qquad\textrm{(calculation of the remainder)}\\ \end{split} $$ You can define division as repeated subtraction: $${72\over 9}=72-9-9-9-9-9-9-9-9$$ Subtracting by $9$ eight times is the same as subtracting by $72$ since $9\cdot8=72$.
So, the answer is $8$. In general, ${n\over a}$ is the number of times you can subtract $a$ from $n$ in this way, for any whole number $a$ other than zero. If you have a remainder, then you just do this: $${13\over 2}=13-2-2-2-2-2-2-1$$ As you just saw, subtracting by $2$ six times is the same as subtracting by $12$ since $2\cdot6=12$, but there's a remainder of $1$ being subtracted, so it's the same as subtracting by $13$ since $2\cdot6+1=13$; the answer is $6$ R $1$, or $6.5$.
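The repeated-subtraction procedure in these answers is easy to sketch in Python (the quotient is the number of subtractions, the remainder is what is left over):

```python
def divide(n, b):
    # Divide positive integer n by positive integer b using only subtraction.
    count = 0
    while n >= b:
        n -= b
        count += 1
    return count, n  # (quotient, remainder)

print(divide(60, 12))  # (5, 0)
print(divide(64, 12))  # (5, 4)
```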
Your problems are with algebraic manipulation. You have a linear equation to solve for $\frac{dy}{dx}$. If you were trying to solve $3(5+T)=7$ for $T$, you could divide by $3$ to get $5+T=\frac{7}{3}$, then subtract $5$ to get $T=\frac{7}{3}-5$ (which you might rewrite as $T=-\frac{8}{3}$). Or, you could distribute the multiplication on the left first, to get $15+3T=7$, then subtract $15$ to get $3T=-8$, then divide by $3$ to get $T=-\frac{8}{3}$. Your equation has the same general form, in that $\dfrac{dy}{dx}$ plays the role of $T$ above, and the other parts can be thought of as numbers to subtract, add, multiply, or divide according to correct algebraic rules. "From here I'm lost, do I subtract 18 from both sides and then multiply it by 8 so I get: $-144\cos(8x+y)(\frac{dy}{dx})=0$?" Here you are saying that you got $\cos(8x+y)(8+\frac{dy}{dx})-18=-144\cos(8x+y)(\frac{dy}{dx})$, and that would be like saying $3(5+T)-7 = -35\cdot3(T)$; it does not follow from principles of arithmetic that numbers can be rearranged in such ways. You can use principles like $(ab)/a = b$, $(a+b)-a = b$, and $a(b+c)= ab+ac$. You want to isolate $\dfrac{dy}{dx}$; in order to do so, you can divide by $\cos(8x+y)$ to get $8+\frac{dy}{dx}=\dfrac{18}{\cos(8x+y)}$. Then you can subtract $8$ to get $\frac{dy}{dx}=\dfrac{18}{\cos(8x+y)}-8$. André Nicolas points out in a comment how you can solve entirely in terms of $x$, similar to Pocho la pantera's answer, except that the sign of $\cos(8x+y)$ is not clear, hence which root to take is not clear.
Analysis. Since both factors are very close in value, we can place a lower bound on $M$ with the square root of the smallest valid 6-digit number: $\sqrt{123456} \approx 351.36$ gives $M \ge 3$. Since $D \times N$ must end in $M$, we know all 3 digits must be different. This means that neither $D$ nor $N$ can be in $\{0, 1, 5\}$, because $0 \times x = 0$, $1 \times x = x$, $5 \times \text{odd}$ ends in $5$, and $5 \times \text{even}$ ends in $0$. By extension, $M \ne 5$. We also have $M \ne 9$, because that would require an illegal product from $\{1\times9, 3\times3, 7\times7\}$. We can enumerate all valid $(D, N)$ pairs. To simplify our work, we assume $D \gt N$; for any solution found we have a pair of solutions where we can switch them around. As mentioned above, we also require $M = D \times N \pmod{10} \ge 3$. And since we require all three letters to be different, if one letter is $6$, the other can't be $2$, $4$ or $8$. By brute-forcing the pairs we can determine $M$. Then, the values for $ASYLUM$ can range from $M1D \times M1N$ to $M9D \times M9N$, which restricts the values $A$ can take. We can simplify our calculations by approximating $M00^2 \lt ASYLUM \lt X00^2$, where $X = M + 1$. For example, for $M = 6$, $360000 \lt ASYLUM \lt 490000$, so $A \in \{3, 4\}$. However, we also note that the smallest product for $A = 3$ is $632 \times 634 \gt 400000$, which creates the contradiction that $A = 3$ in the factors and $A = 4$ in the result. With this reasoning we can narrow down $A$ to a single value for each value of $M$. - $M = 3; A \in \{0, 1\} \Rightarrow A = 1$ - $M = 4; A \in \{1, 2\} \Rightarrow A = 1$ - $M = 6; A \in \{3, 4\} \Rightarrow A = 4$ - $M = 7; A \in \{4, 5, 6\} \Rightarrow A = 5$ - $M = 8; A \in \{6, 7, 8\} \Rightarrow A = 7$ D = 2 - N = 3; M = 6, A = 4, ASYLUM = 412806, Y = 2 = D. Contradiction. - N = 4; M = 8, A = 7, ASYLUM = 762128, Y = 2 = D. Contradiction. - N = 7; M = 4, A = 1, ASYLUM = 171804, Y = 1 = A. Contradiction. - N = 8; M = 6, A = 4, ASYLUM = 416016, Y = 6 = M. Contradiction.
- N = 9; M = 8, A = 7, ASYLUM = 766488, S = 6 = Y. Contradiction. D = 3 - N = 6; M = 8, A = 7, ASYLUM = 764748, S = 6 = N. Contradiction. - N = 8; M = 4, A = 1, ASYLUM = 172634, U = 3 = D. Contradiction. - N = 9; M = 7, A = 5, ASYLUM = 571527, S = 5 = M. Contradiction. D = 4 - N = 7; M = 8, A = 7 = N. Contradiction. - N = 9; M = 6, A = 4 = D. Contradiction. D = 6 - N = 9; M = 4, A = 1, ASYLUM = 174304, Y = 4 = M. Contradiction. D = 7 - N = 8; M = 6, A = 4, ASYLUM = 419256. Solution. - N = 9; M = 3, A = 1, ASYLUM = 101123, Y = 1 = A. Contradiction.
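The case analysis above can be cross-checked by brute force. This sketch assumes the puzzle is the alphametic $MAD \times MAN = ASYLUM$ with all eight letters denoting distinct digits (an assumption consistent with the products listed above):

```python
from itertools import permutations

solutions = set()
for M, A, D, N in permutations(range(10), 4):
    if M == 0:
        continue  # MAD and MAN are 3-digit numbers
    prod = (100 * M + 10 * A + D) * (100 * M + 10 * A + N)
    if not 100000 <= prod <= 999999:
        continue
    a, S, Y, L, U, m = (int(c) for c in str(prod))
    # Leading digit must be A, last digit must be M, all letters distinct.
    if (a, m) == (A, M) and len({M, A, D, N, S, Y, L, U}) == 8:
        solutions.add(prod)

print(solutions)  # {419256}, i.e. 647 * 648
```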
I saw this question, but was unable to answer because of my nonexistent knowledge of Rust, so I decided to try to write an algorithm on my own and put it up for review. There is the casual way to compute factorials: $n! = \prod\limits_{k=1}^n k$

public static BigInteger fact(BigInteger n) => n == 0 ? 1 : n * fact(n - 1);

The problem with this one is that with big factorials, it gets slow (or, in this case, throws a StackOverflowException). Now, reading this "paper", I figured I'd try to implement a faster algorithm to compute factorials. From what I understood, the equation would look something like this: $$n! = \left(\prod\limits_{i=1}^{\lfloor n/2 \rfloor} \sum\limits_{z=0}^{i-1} (n - 2z)\right) \cdot \left\lceil \frac{n}{2} \right\rceil \text{ (the last factor only if } n \text{ is odd)}$$ The idea is to reduce the number of multiplications by half.

public static BigInteger Factorial(int n)
{
    BigInteger sum = n;
    BigInteger result = n;
    for (int i = n - 2; i > 1; i -= 2)
    {
        sum = (sum + i);
        result *= sum;
    }
    if (n % 2 != 0)
        result *= (BigInteger)Math.Round((double)n / 2, MidpointRounding.AwayFromZero);
    return result;
}

I decided to keep a sum variable so I don't need to redo the sum computation at each iteration of the multiplication. I'd like to see if I missed something or if this can be optimized. (I'm doing this as a mathematical exercise, so there's no need for performance; I'm looking at ways to make it more performant just because I want to understand the maths behind it.)
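For reference, here is the same algorithm transcribed to Python (my transcription, checked against math.factorial). The running sum after $j$ loop steps equals $j(n+1-j)$, the product of the paired factors $j$ and $n+1-j$, which is why the trick works:

```python
import math

def fast_factorial(n):
    # Pair the factors j and (n + 1 - j): each running sum
    # n + (n - 2) + ... equals j * (n + 1 - j), so only about
    # n/2 big-integer multiplications are needed.
    if n < 2:
        return 1
    total, result = n, n
    for i in range(n - 2, 1, -2):
        total += i
        result *= total
    if n % 2:
        result *= (n + 1) // 2  # the unpaired middle factor ceil(n/2)
    return result

assert all(fast_factorial(n) == math.factorial(n) for n in range(20))
```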
I do not completely understand how rotating pulleys with ropes (that do not slip) work. Besides the equations of motion, which are clear to me, some conditions on the accelerations are needed to close the system of equations, and I'm having trouble finding these conditions. Consider the following situations. Here, since the rope doesn't slip, I wrote: $\begin{cases} \ddot{y_3}=R_2 \ddot{\theta}_2 \\ R_2 \ddot{\theta}_2=R_1 \ddot{\theta}_1 \\ \ddot{y_1}=R_1 \ddot{\theta}_1 \end{cases}$ But I don't think this is right. In particular, pulley 1 is accelerating downward, so maybe in the second equation I should include $\ddot{y_1}$ too. Furthermore, I'm not at all sure about the plus and minus signs in the equations I wrote. Is this the right way to solve the problem? Here, since the rope is getting longer but does not slip, the following equation must always be satisfied: $x+y= l_0 +R_1 \theta_1+R_2 \theta_2\implies \ddot{x}+\ddot{y}=R_1 \ddot{\theta}_1+R_2 \ddot{\theta}_2$ But this doesn't help much, so I tried to find other equations: $\begin{cases} a_1= \ddot{y}= R \ddot{\theta}+R_1 \ddot{\theta}_1\\ a_2=\ddot{x}= - R \ddot{\theta}+R_2 \ddot{\theta}_2\end{cases}$ But again I'm not sure about the signs, since this way I'm assuming that the central pulley rotates clockwise or counterclockwise, while that is unknown. In general, is there a rule to follow when writing such equations? Moreover, is this the right way to think about the problem, i.e. that the objects and the points of the pulleys in contact with the ropes must have the same acceleration as the ropes?
Electrochemical Impedance Spectroscopy: Experiment, Model, and App Electrochemical impedance spectroscopy is a versatile experimental technique that provides information about an electrochemical cell's different physical and chemical phenomena. By modeling the physical processes involved, we can constructively interpret the experiment's results and assess the magnitudes of the physical quantities controlling the cell. We can then turn this model into an app, making electrochemical modeling accessible to more researchers and engineers. Here, we will look at three different ways of analyzing EIS: experiment, model, and simulation app. Electrochemical Impedance Spectroscopy: The Experiment Electrochemical impedance spectroscopy (EIS) is a widely used experimental method in electrochemistry, with applications such as electrochemical sensing and the study of batteries and fuel cells. This technique works by first polarizing the cell at a fixed voltage and then applying a small additional voltage (or occasionally, a current) to perturb the system. The perturbing input oscillates harmonically in time to create an alternating current, as shown in the figure below. An oscillating perturbation in cell voltage gives an oscillating current response. For a given amplitude and frequency of applied voltage, the electrochemical cell responds with a particular amplitude of alternating current at the same frequency. In real systems, the response may also contain components at other frequencies; we'll return to this point below. EIS experiments typically vary the frequency of the applied perturbation across a range from millihertz to kilohertz. The relative amplitude of the response and the time shift (or phase shift) between the input and output signals change with the applied frequency. These factors depend on the rates at which physical processes in the electrochemical cell respond to the oscillating stimulus.
Different frequencies separate out different processes with different timescales. At lower frequencies, there is time for diffusion or slow electrochemical reactions to proceed in response to the alternating polarization of the cell. At higher frequencies, the applied field changes direction faster than the chemistry can respond, so the response is dominated by capacitance from the charge and discharge of the double layer. The time-domain response is not the simplest or most succinct way to interpret these frequency-dependent amplitudes and phase shifts. Instead, we define a quantity called an impedance. Like resistance in a static system, impedance is the ratio of voltage to current. However, it uses the real and imaginary parts of a complex number to represent the relation of both amplitude and phase between the input signal and output response. The mathematical tool that relates the impedance to the time-domain response is a Fourier transform, which represents the frequency components of the oscillating signal. To explain the idea of impedance more fully for a simple case, consider the input voltage as a cosine wave oscillating at an angular frequency (ω):

V(t) = V_0 \cos(\omega t)

Then the response is also a cosine wave, but with a phase offset (φ):

I(t) = I_0 \cos(\omega t + \phi)

Compared to the time shift in the image above, the phase offset is given as \phi = -\omega \,\delta t . The magnitude of the current and its phase offset depend on the physics and chemistry in the cell. Now, let's consider the resistance from Ohm's law:

R(t) = \frac{V(t)}{I(t)} = \frac{V_0 \cos(\omega t)}{I_0 \cos(\omega t + \phi)}

This quantity varies in time with the same frequency as the perturbing signal. It equals zero whenever the numerator equals zero and becomes singular whenever the denominator equals zero. So unlike the resistance in a DC system, it's not a very useful quantity!
Instead, using Euler's formula, let's express the time-varying quantities as the real parts of complex exponentials, so that:

V(t) = \mathrm{Re}\left[ V_0 \exp(i \omega t) \right]

and

I(t) = \mathrm{Re}\left[ I_0 \exp(i\phi) \exp(i \omega t) \right]

We denote the coefficients V_0 and I_0\,\exp(i\phi) as quantities \bar{V} and \bar{I}, respectively. These are complex amplitudes that can be understood in terms of the Fourier transformation of the original time-domain sinusoidal signals. They express the distinct amplitudes and phase difference of the voltage and current. Because all of the quantities in the system are oscillating sinusoidally, we understand the physical effects by comparing these complex quantities, rather than the time-domain quantities. To describe the oscillating problem (often called phasor theory), we define a complex analogue of resistance as:

Z = \frac{\bar{V}}{\bar{I}}

This is the impedance of the system and, as the name suggests, it's the quantity we measure in electrochemical impedance spectroscopy. It's a complex quantity with a magnitude and phase, representing both resistive and capacitive effects. Resistance contributes the real part of the complex impedance, which is in-phase with the applied voltage, while capacitance contributes the imaginary part of the complex impedance, which is precisely out-of-phase with the applied voltage. EIS specialists look at the impedance in the form of a spectrum, normally with a Nyquist plot. This plots the imaginary component of impedance against the real component, with one data point for every frequency at which the impedance has been measured. Below is an example from a simulation; we'll discuss how it's modeled in the next section. Simulated Nyquist plot from an electrochemical impedance spectroscopy experiment. Points toward the top right are at lower frequencies (mHz), while those toward the bottom left are at higher frequencies (>100 Hz).
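As a quick numerical illustration of these complex amplitudes (a toy sketch with assumed values of V_0, I_0, and \phi, not data from an experiment), projecting the time-domain signals onto \exp(i\omega t) recovers \bar{V}, \bar{I}, and hence Z:

```python
import numpy as np

# Toy values (assumed): 10 mV amplitude, 0.2 mA response, phase offset -0.6 rad.
omega, V0, I0, phi = 2 * np.pi * 1.0, 0.01, 2e-4, -0.6
t = np.linspace(0.0, 1.0, 1000, endpoint=False)   # exactly one period
V = V0 * np.cos(omega * t)
I = I0 * np.cos(omega * t + phi)
# The first Fourier component of each signal gives its complex amplitude:
Vbar = 2 * np.mean(V * np.exp(-1j * omega * t))
Ibar = 2 * np.mean(I * np.exp(-1j * omega * t))
Z = Vbar / Ibar                                    # the impedance Vbar/Ibar
# |Z| is the amplitude ratio V0/I0, and arg(Z) = -phi is the phase shift.
```

The two comments at the end are exactly the statement in the text: the single complex number Z carries both the amplitude ratio and the phase offset.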
In the figure above, the semicircular region toward the left side shows the coupling between double-layer capacitance and electrode kinetic effects at frequencies faster than the physical process of diffusion. The diagonal "diffusive tail" on the right comes from diffusion effects observed at lower frequencies. EIS experiments are useful because information about many different physical effects can be extracted from a single analysis. There is a quantitative relationship between properties like diffusion coefficients and kinetic rate constants and the dimensions of the features in the Nyquist plot. Often, EIS experiments are interpreted using an "equivalent circuit" of resistors and capacitors that yields a frequency-dependent impedance similar to the one shown in the Nyquist plot above. This idea was discussed in my colleague Scott's blog post on electrochemical resistances and capacitances. For the simple harmonic interpretation of the experiment in terms of impedance, we need the current response to oscillate at the same frequency as the voltage input. This means that the system must respond linearly: when there is a linear relation between the voltage and current, only one frequency appears in the Fourier transform, which simplifies the analysis significantly. For an electrochemical cell, we can usually accomplish this by ensuring that the applied voltage is small compared to the quantity RT/F, the ratio of the gas constant multiplied by the temperature to the Faraday constant. This is the characteristic "thermal voltage" in electrochemistry and is about 25 mV at normal temperatures. Smaller voltage changes usually induce a linear response, while larger voltage changes cause an appreciably nonlinear response. Of course, with simulation to predict the time-domain current, we can always consider a nonlinear case and perform a Fourier transform numerically to study the effect on the impedance.
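To connect the Nyquist features to such an equivalent circuit numerically, here is a minimal sketch (a Randles-type circuit with assumed, illustrative parameter values; not the COMSOL model): a solution resistance in series with a double-layer capacitance, which is in parallel with the charge-transfer resistance plus a Warburg diffusion element.

```python
import numpy as np

# Randles-type equivalent circuit impedance (illustrative values only):
# R_s: solution resistance, R_ct: charge-transfer resistance,
# C_dl: double-layer capacitance, sigma: Warburg coefficient.
R_s, R_ct, C_dl, sigma = 10.0, 100.0, 20e-6, 50.0

def impedance(omega):
    Z_w = sigma * (1 - 1j) / np.sqrt(omega)       # Warburg (diffusion) element
    Z_faradaic = R_ct + Z_w                       # kinetics in series with diffusion
    return R_s + 1.0 / (1j * omega * C_dl + 1.0 / Z_faradaic)

omega = 2 * np.pi * np.logspace(-3, 5, 200)       # ~1 mHz to 100 kHz
Z = impedance(omega)
# A Nyquist plot is -Z.imag against Z.real: the high-frequency end approaches
# R_s, the kinetic semicircle has a width set by R_ct, and the low-frequency
# diffusive tail rises diagonally.
```

Plotting `-Z.imag` against `Z.real` reproduces the qualitative shape of the simulated Nyquist plot above: a semicircle from the coupled kinetics and capacitance, followed by the diagonal diffusive tail.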
In practice, the interpretation in terms of impedance illustrated above relies on the harmonic (linear-response) assumption. Impedance measurements are therefore often used in a complementary manner with transient techniques, such as amperometry or voltammetry, which are better suited for investigating nonlinear or hysteretic effects. Let's look at a simple example of the physical theory that underpins these ideas to see how the impedance spectrum relates to the real controlling physics. Electrochemical Impedance Spectroscopy: The Model To model an EIS experiment, we must describe the key underlying physical and chemical effects, which are the electrode kinetics, double-layer capacitance, and diffusion of the electrochemical reactants. In electroanalytical systems, a large quantity of artificially added supporting electrolyte keeps the electric field low so that solution resistance can be neglected. In this case, we can describe the mass transport of chemical species in the system using the diffusion equation (Fick's laws) with suitable boundary conditions for the electrode kinetics and capacitance. In the COMSOL Multiphysics® software, we use the Electroanalysis interface together with an Electrode Surface boundary feature to describe these equations. For more details about how to set up this model, you can download the Electrochemical Impedance Spectroscopy tutorial example in the Application Library. Model tree for the Electroanalysis interface in an EIS model. Under Transport Properties, we can specify the diffusion coefficients of the redox species under consideration. We need at least the reduced and oxidized species of a single redox couple, such as the common couple ferro/ferricyanide, to use as an analytical reference. The Concentration boundary condition defines the fixed bulk concentrations of these species.
The Electrode Reaction and Double Layer Capacitance subnodes for the Electrode Surface boundary feature contribute Faradaic and non-Faradaic current, respectively. For the double-layer capacitance, we typically use an empirically measured equivalent capacitance and specify the electrode reaction according to a standard kinetic equation like the Butler-Volmer equation. Note that we’re not referring to equivalent circuit properties at all here. In COMSOL Multiphysics, all of the inputs in the description of the electrochemical problem are physical or chemical quantities, while the output is a Nyquist plot. When analyzing the problem in reverse, we’re able to use an observed Nyquist plot from our experiments to make inferences about the real values of these physical and chemical inputs. In the settings for the Electrode Surface feature, we represent the impedance experiment by applying a Harmonic Perturbation to the cell voltage. Settings for the Electrode Surface boundary feature in an EIS model. Here, the quantity V_app is the applied voltage. The harmonic perturbation is applied with respect to a resting steady voltage (or current) on the cell. In this case, we have set this to a reference value of zero volts. With more advanced models, we might consider using the results of another COMSOL Multiphysics model, one that’s significantly nonlinear for example, to find the resting conditions to which the perturbation is applied. If you’re interested in understanding the mathematics of the harmonic perturbation in greater detail, my colleague Walter discussed them in a previous blog post. When studying lithium-ion batteries, for example, we can perform a time-dependent analysis of the cell’s discharge, studying its charge transport, diffusion and migration of the lithium electrolyte, and the electrode kinetics and diffusion of the intercalated lithium atoms. We can pause this simulation at various times to consider the impedance measured from a rapid perturbation. 
For further insight into the physics involved, you can read my colleague Tommy’s blog post on modeling electrochemical impedance in a lithium-ion battery. Electrochemical Impedance Spectroscopy: The Simulation App A frequent demand for electrochemical simulations is that they “fit” experimental data in order to determine unknown physical quantities or, more generally, to interpret the data at all. Even for experienced electroanalytical chemists, it can be difficult to intuitively “see” the physics and chemistry in the underlying graphs like the Nyquist plot. However, by simulating the plots under a range of conditions, the influence of different effects on the overall graph is revealed. Simulation is helpful for analyzing EIS, but it can also be time consuming for the experts involved. As was the case with my old research group, these experts can spend more time writing programs and running models to fit data together with experimental researchers than on the science. Wouldn’t it be nice if all electrochemical researchers could load experimental data into a simple interface, simulate impedance spectra for a given physical model and inputs, and even perform automatic parameter fitting? The good news is that we can! With the Application Builder in COMSOL Multiphysics, we can create an easy-to-use EIS app based on an underlying model. As a model can contain any level of physical detail, the app provides direct access to the physical data and isn’t confined to simple equivalent circuits. To highlight this, we have an EIS demo app based on the model available in the Application Library. The app user can set concentrations for electroactive species and tune the diffusion coefficients as well as the electrode kinetic rate constant and double-layer capacitance. After clicking the Compute button, the app generates results that can be visualized through Nyquist and Bode plots. The EIS simulation app in action. 
As well as enabling physical parameter estimation, this app is very helpful for teaching, since we can quickly change inputs and visualize the results that would occur in the experiment. A natural extension for the app is to import experimental data to the same Nyquist plot for direct comparison. We can also build up the underlying physical model to consider the influence of competing electrochemical reactions or follow-up homogeneous chemistry from the products of an electrochemical reaction. Concluding Thoughts Here, we've introduced electrochemical impedance spectroscopy and discussed some methods used to model it. We also saw how a simulation app built from a simple theoretical model can provide greater insight into the relationship between the theory of an electrochemical system and its behavior as observed in an experiment. Further Reading Explore other topics related to electrochemical simulation on the COMSOL Blog
Abstract: The Xe isotopes are located at a possible phase change in the collective structure from vibrational to rotational. The Xe isotopes were therefore considered potential candidates for \(\gamma\)-soft nuclei or the \(O(6)\) symmetry limit within the IBM. However, testing of the \(O(6)\) symmetry in Xe reveals that their structure is in fact more complicated. The study of the nucleus \(^{122}Xe\) is part of a systematic examination of the development of collectivity in the Xe isotopes, which are located in the \(Z>50\), \(N<82\) region and display an extraordinarily smooth evolution of simple collective signatures. In order to probe the development of collectivity in greater detail, additional information on excited states is required. Specifically, knowledge of excited states that decay to other excited states, resulting in low-energy transitions that can give strong indicators of the underlying structure, needs to be improved. The experiment to study \(^{122}Xe\) via the \(\beta^+/EC\) decay of \(^{122}Cs\) was performed at the TRIUMF-ISAC facility located in Vancouver, B.C., Canada. Radioactive \(^{122}Cs\) beams from the ISAC facility were delivered to the 8\(\pi\) spectrometer, which was composed of 20 HPGe detectors. The isomeric and ground states of \(^{122}Cs\) decay to \(^{122}Xe\) with half-lives of 21 seconds and 3.7 minutes, respectively. Two sets of high-statistics data for the short- and long-half-life decays were collected to gain access to the excited states of \(^{122}Xe\). As evidence of the data quality, the level scheme of \(^{122}Xe\) has been dramatically extended by 177 new levels and 482 new transitions. Using \(\gamma\)-\(\gamma\) angular correlation analysis, 71 spin assignments have been made for 55 levels, since the spins of some levels could be assigned via several different cascades.
Mixing ratios for 44 \(\gamma\)-ray transitions were extracted from the angular correlation analysis via minimized \(\chi^2\) fits. Branching ratios have been determined for all observed transitions independently in the two data sets collected with the short and long cycled runs. The data have been compared with an sd-IBM-1 calculation.
A free-floating planet candidate from the OGLE and KMTNet surveys (2017) Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ... OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary (2017) We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ... OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing (2017) We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ... OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018) We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ... OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018) We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ... OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018) We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
Below I've addressed your specific questions. However, based on your multiple questions about this, I think it might be more useful to give a list of good sources, so I'll do that first. On "gaps" in the constructible universe: Marek/Srebrny, Gaps in the constructible universe. The introduction is very readable and will give you a good sense of what's going on. On the mastercode hierarchy (and what happens when new reals do appear): Hodes' paper Jumping through the transfinite. This is also closely connected with the study of gaps. Like the paper above, the introduction is a very good read. On the general structure of $L$: Devlin's book Constructibility. It has a serious error unfortunately, but that error doesn't affect the important results; see this review by Stanley for a summary of the issue (and if you're interested in how to correct it, this paper by Mathias). Ultimately the error is very limited and easily avoided once you know it exists - basically, doubt anything involving a claim about the (aptly named) set theory "BS," but pretty much everything else is correct. Now, it would seem that every one of these sets could in principle be defined in first order logic without parameters (though I am unsure how this would work in practice) There's no subtlety here: we first define addition and multiplication of finite ordinals, and then we can port the usual definitions of those sets in $(\mathbb{N}; +,\times)$ into the set-theoretic context. Indeed, there's a natural way (the Ackermann interpretation) to pass between $L_\omega$ and $(\mathbb{N};+,\times)$, so definability in $L_\omega$ can be reasoned about by proving things in the more familiar setting of definability in arithmetic; e.g. this lets us argue that the Busy Beaver function is indeed in $L_{\omega+1}$. would a non-constructible real (assuming its existence) be in some sense infinitely complex in that it could not be described in any form whatsoever, either directly, or via some cumulative process?
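To make the Ackermann interpretation concrete (an illustrative sketch with my own helper names, not part of the sources above): hereditarily finite sets, the members of $L_\omega$, are coded by natural numbers so that "$n$ codes a member of the set coded by $m$" holds exactly when bit $n$ of $m$ is 1.

```python
# Ackermann coding of hereditarily finite sets by naturals.
def members(m: int) -> set:
    # The codes of the members of the set coded by m: the set bits of m.
    return {n for n in range(m.bit_length()) if (m >> n) & 1}

def code(s: frozenset) -> int:
    # Ackermann code of a finite set: sum of 2**code(x) over its members x.
    return sum(1 << code(x) for x in s)

empty = frozenset()
one = frozenset({empty})        # {emptyset}, i.e. the ordinal 1
two = frozenset({empty, one})   # {emptyset, {emptyset}}, i.e. the ordinal 2
assert code(empty) == 0 and code(one) == 1 and code(two) == 3
assert members(code(two)) == {code(empty), code(one)}
```

Under this coding, membership becomes an arithmetically definable relation on $\mathbb{N}$, which is what lets definability over $L_\omega$ be transferred to definability in arithmetic.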
Certainly not: e.g. $0^\sharp$ is definitely definable (it's $\Delta^1_3$, and in particular is definable in second-order arithmetic) but is not in $L$ (assuming it exists at all). ZFC can't prove that something matching the definition of $0^\sharp$ exists, but it can prove that if it exists then it's not constructible. Given a particular countable ordinal $\alpha$, can we always find (by which I mean, explicitly describe) a real X with L-rank $\alpha$? No; for many (indeed, club-many) ordinals $<\omega_1^L$, we have no new reals at that level. Indeed, the $L$-hierarchy is "filled with gaps" - even very long gaps. If you google "gaps in $L$-hierarchy" you'll find a lot of information around this; roughly speaking, an ordinal $\alpha<\omega_1^L$ starts a "long" gap if it is "very" similar to $\omega_1^L$. In terms of complexity, the reals clearly become more complex as their $L$-rank increases, but is there a way to formalize this precisely? Well, the obvious one is that if $A$ has $L$-rank greater than that of $B$, then the set $A$ is not definable in the structure $(\mathbb{N}; +,\times, B)$ (that is, arithmetic augmented by a predicate naming the naturals in $B$). In particular $A\not\le_TB$. On the other hand, $A$ might not compute $B$ either (e.g. if $A$ is "sufficiently Cohen generic" over $L_\beta$ then $A$ won't compute any noncomputable real in $L_\beta$ - in particular, it won't compute any real in $L_\beta$ not in $L_{\omega+1}$).
Simplification Quiz for SSC CGL Railways Here is a Simplification Quiz for SSC CGL Railways. This quiz contains important questions matching the exact pattern and syllabus of upcoming exams. Make sure you attempt today's Quant Quiz for Upcoming Exams to check your preparation level.
1. The fourth root of 24010000 is?
2. The value of \(999\frac{995}{999} \times 999\) is
3. Given that \(\sqrt{13} = 3.6\) and \(\sqrt{130} = 11.4\), the value of \(\sqrt{1.3} + \sqrt{1300} + \sqrt{0.013}\) is equal to
4. The value of \((3^2 - 2^2)^2 + (5^2 - 4^2)^2 + (6^2 - 5^2)^2\) is
5. The value of \(\sqrt{\frac{0.324 \times 0.081 \times 4.624}{1.5625 \times 0.0289 \times 72.9 \times 64}}\)
6. The value of \(\frac{(243)^{\frac{n}{5}} \times 3^{2n+1}}{9^n \times 3^{n-1}}\) is
7. If \(\left(\frac{3}{5}\right)^3 \left(\frac{3}{5}\right)^{-6} = \left(\frac{3}{5}\right)^{2x-1}\), then \(x\) is equal to
8. By how much does \(\frac{6}{7/8}\) exceed \(\frac{6/7}{8}\)?
9. The value of \(\sqrt{6 + \sqrt{6 + \sqrt{6 + \ldots \text{ upto } \infty}}}\) is equal to
10. The value of \(\left(\sqrt[n]{x^2}\right)^{\frac{n}{2}}\) is
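A few of these can be sanity-checked numerically (my own verification sketch, not part of the quiz page):

```python
# Q1: the fourth root of 24010000 is 70, since 70**4 = 24010000.
assert 70 ** 4 == 24010000
# Q4: (3^2 - 2^2)^2 + (5^2 - 4^2)^2 + (6^2 - 5^2)^2 = 25 + 81 + 121 = 227.
assert (3**2 - 2**2)**2 + (5**2 - 4**2)**2 + (6**2 - 5**2)**2 == 227
# Q9: x = sqrt(6 + sqrt(6 + ...)) satisfies x**2 = 6 + x; the positive root is 3.
x = 0.0
for _ in range(60):
    x = (6 + x) ** 0.5
assert abs(x - 3) < 1e-9
```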
Let $\mathbb{T}$ be the $1$-torus and define: $$H^1(\mathbb{T}):=\{f\in L^1(\mathbb{T})\ | \ \forall n<0, \hat{f}(n)=0\},$$ where for $f\in L^1(\mathbb{T})$ we have denoted by $\hat{f}$ the Fourier transform of $f$. By the linearity of the Fourier transform, it is clear that $H^1(\mathbb{T})$ is a subspace of $L^1(\mathbb{T})$. Since $\widehat{f*g}=\hat{f}\hat{g}$ for all $f,g \in L^1(\mathbb{T})$, and by Young's inequality for convolutions, it is clear that $\left(H^1(\mathbb{T}),+,*,\|\cdot\|_1\right)$ is a commutative normed algebra. By the continuity of the Fourier transform, $H^1(\mathbb{T})$ is a closed subspace of $\left(L^1(\mathbb{T}),\|\cdot\|_1\right)$, and so $\left(H^1(\mathbb{T}),+,*,\|\cdot\|_1\right)$ is a commutative Banach algebra. I then started wondering what the spectrum (i.e. the set of nonzero multiplicative linear functionals) of the commutative Banach algebra $\left(H^1(\mathbb{T}),+,*,\|\cdot\|_1\right)$ looks like... Clearly, every element of the spectrum of the commutative Banach algebra $(L^1(\mathbb{T}),+,*,\|\cdot\|_1)$ is also an element of the spectrum of $\left(H^1(\mathbb{T}),+,*,\|\cdot\|_1\right)$, provided it does not vanish on all of $H^1(\mathbb{T})$. Since the spectrum of $\left(L^1(\mathbb{T}),+,*,\|\cdot\|_1\right)$ consists of the elements $$\varphi_n: L^1(\mathbb{T})\rightarrow\mathbb{C}, f\mapsto \hat{f}(n)$$ for $n \in \mathbb{Z}$, and since, for each integer $n$, the multiplicative functional $\varphi_n$ does not vanish identically on $H^1(\mathbb{T})$ if and only if $n$ is non-negative, we find that $\varphi_n$ belongs to the spectrum of $\left(H^1(\mathbb{T}),+,*,\|\cdot\|_1\right)$ for all $n\ge0$. So the question: are there any other elements of the spectrum out there?
The aleph numbers, $\aleph_\alpha$ The aleph function, denoted $\aleph$, provides a one-to-one correspondence between the ordinals and the infinite cardinals. In fact, it is the only order-isomorphism between the ordinals and the infinite cardinals, with respect to membership. It is a strictly monotone ordinal function which can be defined via transfinite recursion in the following manner: $\aleph_0 = \omega$ $\aleph_{\alpha+1} = \bigcap \{ x \in \operatorname{On} : | \aleph_\alpha | \lt |x| \}$ $\aleph_a = \bigcup_{x \in a} \aleph_x$ where $a$ is a limit ordinal. To translate the formalism, $\aleph_{\alpha+1}$ is the smallest ordinal whose cardinality is greater than that of the previous aleph, and for a limit ordinal $a$, $\aleph_a$ is the supremum of the earlier alephs $\aleph_x$ for $x \in a$. Contents Aleph one $\aleph_1$ is the first uncountable cardinal. The continuum hypothesis The continuum hypothesis is the assertion that the set of real numbers $\mathbb{R}$ has cardinality $\aleph_{1}$. Gödel showed the consistency of this assertion with ZFC, while Cohen showed using forcing that if ZFC is consistent then ZFC+$\aleph_1<|\mathbb R|$ is consistent. Equivalent Forms The cardinality of the power set of $\aleph_{0}$ is $\aleph_{1}$ There is no set with cardinality $\alpha$ such that $\aleph_{0} < \alpha < 2^{\aleph_0}$ Generalizations The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set S and that of the power set of S, then it has either the same cardinality as the set S or the same cardinality as the power set of S.
That is, for any infinite cardinal \(\lambda\) there is no cardinal \(\kappa\) such that \(\lambda <\kappa <2^{\lambda}.\) GCH is equivalent to:\[\aleph_{\alpha+1}=2^{\aleph_\alpha}\] for every ordinal \(\alpha\) (occasionally called Cantor's aleph hypothesis). For more, see https://en.wikipedia.org/wiki/Continuum_hypothesis Aleph two Aleph hierarchy The $\aleph_\alpha$ hierarchy of cardinals is defined by transfinite recursion: $\aleph_0$ is the smallest infinite cardinal. $\aleph_{\alpha+1}=\aleph_\alpha^+$, the successor cardinal to $\aleph_\alpha$. $\aleph_\lambda=\sup_{\alpha\lt\lambda}\aleph_\alpha$ for limit ordinals $\lambda$. Thus, $\aleph_\alpha$ is the $\alpha^{\rm th}$ infinite cardinal. In ZFC the sequence $$\aleph_0, \aleph_1,\aleph_2,\ldots,\aleph_\omega,\aleph_{\omega+1},\ldots,\aleph_\alpha,\ldots$$ is an exhaustive list of all infinite cardinalities. Every infinite set is bijective with some $\aleph_\alpha$. Aleph omega The cardinal $\aleph_\omega$ is the smallest instance of an uncountable singular cardinal number, since it is larger than every $\aleph_n$, but is the supremum of the countable set $\{\aleph_0,\aleph_1,\ldots,\aleph_n,\ldots\mid n\lt\omega\}$. Aleph fixed point A cardinal $\kappa$ is an $\aleph$-fixed point when $\kappa=\aleph_\kappa$. In this case, $\kappa$ is the $\kappa^{\rm th}$ infinite cardinal. Every inaccessible cardinal is an $\aleph$-fixed point, and a limit of such fixed points and so on. Indeed, every worldly cardinal is an $\aleph$-fixed point and a limit of such. One may easily construct an $\aleph$-fixed point above any ordinal $\beta$: simply let $\beta_0=\beta$ and $\beta_{n+1}=\aleph_{\beta_n}$; it follows that $\kappa=\sup_n\beta_n=\aleph_{\aleph_{\aleph_{\aleph_{\ddots}}}}$ is an $\aleph$-fixed point, since $\aleph_\kappa=\sup_{\alpha\lt\kappa}\aleph_\alpha=\sup_n\aleph_{\beta_n}=\sup_n\beta_{n+1}=\kappa$.
By continuing the recursion to any ordinal, one may construct $\aleph$-fixed points of any desired cofinality. Indeed, the class of $\aleph$-fixed points forms a closed unbounded class of cardinals.
Limit ordinal Properties All limit ordinals are equal to their union. A limit ordinal contains an ordinal $\alpha$ if and only if it contains $\alpha + 1$. $\omega$ is the smallest nonzero limit ordinal, and the smallest ordinal of infinite cardinality. $(\omega + \omega)$, also written $( \omega \cdot 2 )$, is the next limit ordinal. $( \omega \cdot \alpha )$ is a limit ordinal for any nonzero ordinal $\alpha$. Types of Limits A limit ordinal $\alpha$ is called additively indecomposable (or a $\gamma$ number) if it cannot be written as the sum of two ordinals less than $\alpha$. These are the ordinals of the form $\omega^\beta$ for $\beta$ a nonzero ordinal. The smallest is written $\gamma_0$, and the smallest larger than that is $\gamma_1$, etc. A limit ordinal $\alpha$ is called multiplicatively indecomposable (or a $\delta$ number) if it cannot be written as the product of two ordinals less than $\alpha$. These are the ordinals of the form $\omega^{\omega^{\beta}}$. The smallest is written $\delta_0$, and the smallest larger than that is $\delta_1$, etc. Interestingly, this pattern does not continue with the exponentially indecomposable ordinals (or $\varepsilon$ numbers) being $\omega^{\omega^{\omega^\beta}}$; rather, $\varepsilon_0=\sup_{n<\omega}f^n(0)$ with $f(\alpha)=\omega^\alpha$ and $f^n(\alpha)=f(f(\ldots f(\alpha)\ldots))$ with $n$ iterations of $f$. It is the smallest fixed point of $f$. The next $\varepsilon$ number (i.e. the next fixed point of $f$) is then $\varepsilon_1=\sup_{n<\omega}f^n(\varepsilon_0+1)$, and more generally the $(\alpha+1)$th fixed point of $f$ is $\varepsilon_{\alpha+1}=\sup_{n<\omega}f^n(\varepsilon_\alpha+1)$; also $\varepsilon_\lambda=\bigcup_{\alpha<\lambda}\varepsilon_\alpha$ for limit $\lambda$. The tetrationally indecomposable ordinals (or $\zeta$ numbers) are then the ordinals $\zeta$ such that $\varepsilon_\zeta=\zeta$. These are obtained similarly to the $\varepsilon$ numbers by taking $f(\alpha)=\varepsilon_\alpha$.
Pentationally indecomposable ordinals (or $\eta$ numbers) are then obtained by taking $f(\alpha)=\zeta_\alpha$, and so on. This pattern continues through the Veblen hierarchy, up to the Feferman-Schütte ordinal $\Gamma_0$, the smallest ordinal that this process cannot reach from below.
I have a question about functions satisfying a condition. Let $D \subset \mathbb{R}^d$ be a Lipschitz domain. That is, for each $x \in \partial D$, there exists an open neighborhood $U$ of $x$ in $\mathbb{R}^d$ and a bi-Lipschitz function $\psi_{x}:B(1) \to U$ such that $\psi_{x}(0)=x$ and $\psi_{x}(B_{+}(1))=U \cap D$. Here, $B(1)=\{x \in \mathbb{R}^d \mid |x|<1\}$ and $B_{+}(1)=\{x=(x_1,\ldots,x_d) \in B(1) \mid x_d>0\}$. We denote by $H^{1}(D)$ the first-order Sobolev space on $D$: $$H^{1}(D)=\{f \in L^{2}(D,dx)\mid\frac{\partial f}{\partial x_i} \in L^{2}(D,dx),\ 1\le i\le d\}.$$ Here, $\frac{\partial f}{\partial x_i}$ is the distributional derivative of $f$. We also denote by $(L,\mathcal{D}(L))$ the Neumann Laplacian on $D$. We note that \begin{equation*} \mathcal{D}(L):=\{f \in H^{1}(D) \mid H^{1}(D) \ni g \mapsto \mathcal{E}(f,g) \text{ is continuous w.r.t. the } L^{2}(D,dx)\text{-topology}\}, \end{equation*} where $\mathcal{E}(f,g)=\sum_{i=1}^{d}\int_{D}\frac{\partial f}{\partial x_i}\frac{\partial g}{\partial x_i}\,dx$. My question Fix $p,q \in \partial D$ with $p \neq q$. Then, can we find an $f \in \mathcal{D}(L)\cap C(\bar{D})$ such that $f(p) \neq f(q)$? My attempt Fix an open neighborhood $U$ of $p$ in $\mathbb{R}^d$ and a bi-Lipschitz function $\psi:B(1) \to U$ such that $\psi(0)=p$ and $\psi(B_{+}(1))=U \cap D$. We may assume $q \in U$. Since $$\psi^{-1}(p)=0\text{ and }\psi^{-1}(q) \in \{x \in B(1)\mid x_d=0\},$$ it is easy to construct a smooth function $F$ on $B(1)$ such that $\text{supp}[F] \subset B(1)$ and $$F(\psi^{-1}(p))\neq F(\psi^{-1}(q))\text{ and }\left.\frac{\partial F}{\partial x_d}\right|_{x_d=0}=0.$$ That is, $F$ satisfies the Neumann boundary condition in the sense that $\sum_{i=1}^{d}\frac{\partial F}{\partial x_i}n_i=0$. Here, $n=(n_1,\ldots,n_d)$ denotes the outward unit normal to the boundary of the upper half space. Clearly, $\tilde{F}(x)=F(\psi^{-1}(x))$ satisfies $\tilde{F}(p) \neq \tilde{F}(q)$.
Can we show the following? $$(1)\quad(\nu,\nabla \tilde{F}):=\sum_{i=1}^{d}\nu_i\frac{\partial \tilde{F}}{\partial x_i}=0\quad\mathcal{H}^{d-1} \text{-a.e. on }\partial D.$$ Here, $\mathcal{H}^{d-1}$ is the $(d-1)$-dimensional Hausdorff measure and $\nu$ is the outward unit normal to $\partial D$. If (1) holds, I think $\tilde{F} \in \mathcal{D}(L)$ follows by using Green's formula: \begin{equation*}-\int_{D}\Delta \tilde{F}\,f\,dx=\sum_{i=1}^{d}\int_{D}\frac{\partial f}{\partial x_i}\frac{\partial \tilde{F}}{\partial x_i}\,dx-\int_{\partial D}f\, (\nu,\nabla \tilde{F})\,d\mathcal{H}^{d-1}.\end{equation*}
I would like to apply the known version of the conjectural formula (11) on page 10 of the paper Number theory and dynamical Lefschetz trace formula. Disclaimer: I do not have a complete understanding of this formula, but I can give a sketch of it. I just know that both sides of the formula are not numbers but distributions. I also know the meaning of the ingredients of the formula. I wish to find an appropriate application of it in limit cycle theory. To start, I have three precise questions: Question 1: In the right-hand side of the formula (11), is it not necessary to assume that there are only finitely many non-degenerate periodic orbits $\gamma$ (as they appear in the sum $\sum$ on the right side of the formula)? Is this implicitly included in the assumptions of that formula? I think that even the non-degeneracy assumption on periodic orbits does not easily imply the finiteness of such periodic orbits. Is it not possible that a sequence of non-degenerate periodic orbits accumulates on a non-periodic orbit which is a kind of complicated and strange attractor? Question 2: (This question is completely different from the previous one, but there is some motivation from this post and also this post: Lifting a Quadratic System to a non Vanishing vector field on $S^3$.) A polynomial vector field on the plane gives us an analytic vector field $X$ on $S^2$. Put $\tilde{X}$ for the obvious lift of $X$ to $S^2\times S^1$, namely $\tilde{X}=X+\partial/\partial{\theta}$. Is there a quadratic polynomial vector field on $\mathbb{R}^2$ with Poincaré compactification $X$ such that $\tilde{X}$ on $S^2\times S^1$ does not admit a 2-dimensional transversal foliation which is invariant under the flow of $\tilde{X}$? The reason we consider quadratic systems: every quadratic system is a geodesible vector field, but this is not the case for higher-degree polynomial vector fields.
On the other hand, in dimension 2, if a vector field $X$ is geodesible then there is a transversal field $Y$ with $[X,Y] \parallel Y$; this implies that orbits of $Y$ are invariant under the flow of $X$. So this makes us somewhat hopeful that the product vector field $\tilde{X}=X+\partial/\partial \theta$ admits a transversal 2-dimensional foliation which is invariant under the flow of $\tilde{X}$. Existence of such a transversal foliation is the key condition in the paper linked in the first lines of this post. The reason we consider $S^2\times S^1$ rather than $S^3$: the lift of the simplest vector field $X=0$ to the Hopf vector field on $S^3$ does not admit a transversal foliation. Question 3: When we lift a vector field $X$ to a non-vanishing vector field $\tilde{X}$ on $S^3$, $S^2\times S^1$ or $T^1 S^2$, it is possible that the preimage of a closed orbit, which is an invariant torus, does not contain any closed orbit, so we lose our closed orbits; please see the comment by Sebastian Goette in this post. In the terminology of the linked paper, suppose we have a 3-manifold foliated by 2-dimensional leaves compatibly with a flow $X$. Is there an analogue of formula (11) on page 10 of the paper linked in the first lines of this post whose right side depends on invariant tori of $X$ as well as closed orbits of $X$?
A group $G$ by itself is not a group of linear transformations; it is an abstract algebraic object. Only its representations map its elements (injectively if the representation is faithful) to elements of $\mathrm{Aut}(V)$ for some vector space $V$. Now, physics seems to have no need of such abstract language at first. Our "vector space" is pretty much our spacetime, and it's pretty much $\mathbb{R}^4$, so your symmetries are really just matrices on that spacetime. The Lorentz symmetry is just $\mathrm{SO}(1,3)$ in its fundamental representation on Minkowski space $\mathbb{R}^{1,3}$, right? Or, non-relativistically, rotational symmetry is just $\mathrm{SO}(3)$ on $\mathbb{R}^3$, right? ...and then there is angular momentum and spin. If you solve the Schrödinger equation for the energy levels of a hydrogen atom, you find that the energy levels are characterized by "quantum numbers" $(n,l,m,s)$. Now $n$ is boring. But $l$ and $m$ are eigenvalues of the spherical Laplacian, and lead to the beloved spherical harmonics $Y^l_m$ as independent solutions. It turns out that if you rotate the system in space, these harmonics behave differently depending on their $l$! Formally, the space $$H_l := \{\textstyle\sum_m c_m Y^l_m \mid m \in \{-l,-l+1,\dots,l\},\ c_m \in \mathbb{C}\}$$ is a vector space, and it carries a representation of the rotation group $\mathrm{SO}(3)$! But not the fundamental one, if $l > 1$. So there's your non-fundamental representation, arising solely from solving the equations describing a physical system. It gets even weirder for these rotation groups, since it also turns out that there are objects, the fermions, which do not transform in a representation of $\mathrm{SO}(1,3)$ or $\mathrm{SO}(3)$, but in a representation of their universal covers, $\mathrm{Spin}(1,3)$ or $\mathrm{SU}(2)$, respectively. You have no chance to describe the kinds of phenomena you observe for fermions without accepting that they transform that way. And that's not the end of the story.
If you build a gauge theory with gauge group $G$, you will find that the associated field strength of the gauge field must transform as an element of the adjoint representation of $G$. Non-fundamental representations pervade many aspects of (quantum) field theory in that way.
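For concreteness, the transformation law behind that last statement can be written out. This is a standard, convention-dependent sketch, not part of the original answer; here $U(x)\in G$ and $g$ is the coupling constant:

```latex
A_\mu \longrightarrow U A_\mu U^{-1} + \tfrac{i}{g}\,U\,\partial_\mu U^{-1},
\qquad
F_{\mu\nu} \longrightarrow U\,F_{\mu\nu}\,U^{-1}.
```

The field strength transforms homogeneously, i.e. in the adjoint representation, while the gauge field itself picks up an inhomogeneous term; sign and factor conventions vary between texts.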
As anticlimactic as this may be, I'm going to answer my own question here. I found this article that shows the connections between the two models: http://epublications.bond.edu.au/cgi/viewcontent.cgi?article=1126&context=ejsie (mirror). It shows that prices do converge as the number of periods $N$ increases. Also they provide all the Excel formulas to recreate ... The model here is the binomial option pricing model, so the second term in the brackets represents the expected future value of the option (under risk-neutral probabilities). The aim of the option holder is always to maximize the value of his option. He can at any point sell the option at the fair market price $E(V_{n+1})$ or exercise it to get $G_n$. So if ... one of the most fundamental results states that the binomial model converges towards the Black-Scholes model if the step size $\Delta t$ converges to zero. The Black-Scholes model is an option pricing model where the underlying is given by $$S_T = S_0 \cdot \exp \Bigl(\sigma W_T - \frac 12 \sigma^2 T \Bigr).$$ By choosing $$u = \exp(\sigma \sqrt{\... The Black-Scholes price of this option is approximately $14.8$. When I run a Monte Carlo simulation with $10000$ paths and "exact" time stepping, I get results very close to this value. You are simulating the terminal asset price with the first-order Euler approximation over multiple time steps: $$S(t+\Delta t)= S(t) + rS(t)\Delta t + \sigma S(t)\sqrt{\... The 1.04% are used in the calculation because it is a 95% expected shortfall, so you want to calculate the expectation over the 5% worst losses. In your problem there are 3 possible outcomes: a loss of 200, 100 or 0. As the probability of a loss of 200 or 100 is 0.04+3.92 = 3.96% < 5%, you need to take into account the loss of 0$ for the remaining 1.04% to reach the 5%.
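Several of the snippets above assert that the binomial price converges to the Black-Scholes price as the number of steps grows. Here is a minimal numerical sketch of that claim; the parameter values are arbitrary, not taken from any of the quoted answers:

```python
import math

def bs_call(S0, K, r, sigma, T):
    # Black-Scholes price of a European call (standard closed form)
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_call(S0, K, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial price with n steps: u = exp(sigma*sqrt(dt)), d = 1/u
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up-probability
    price = 0.0
    for j in range(n + 1):
        prob = math.comb(n, j) * q**j * (1.0 - q)**(n - j)
        price += prob * max(S0 * u**j * d**(n - j) - K, 0.0)
    return math.exp(-r * T) * price

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
for n in (10, 100, 1000):
    print(n, round(crr_call(S0, K, r, sigma, T, n), 4))
print("BS:", round(bs_call(S0, K, r, sigma, T), 4))
```

The printed CRR prices approach the closed-form value as $n$ grows, illustrating the Donsker-type limit mentioned in the answers.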
From your answer to my comment, here is what I would do. Over the horizon $[0,\Delta t]$, the BS model tells you that the expected log-return is $$ \Bbb{E}\left[ \ln\left(\frac{S_{t+\Delta t}}{S_t}\right) \right] = \left(\mu-\frac{1}{2}\sigma^2\right)\Delta t$$ with a variance $$ \Bbb{V}\left[ \ln\left(\frac{S_{t+\Delta t}}{S_t}\right) \right] = \sigma^2 \... As your code works for the short-maturity case, I assume that it is correct. The volatility of $80 \%$ is simply huge, thus the area covered by the paths is huge too. As you can read e.g. here, the sampling error is proportional to the variance of the process, which is huge in your case. As a brute-force solution you can just enlarge the number of samples. ... It is quite common to see non-smooth convergence in tree models, and this is not specific to digital options. The problem usually is that the tree is constructed independently of the contract to be priced. Thus, the location of the strike relative to the two surrounding nodes might vary widely between two successive step sizes. For European plain vanilla options, ... The pricing of options is married to the concept of a hedging strategy that replicates the effect of the option. If you can only go long or short a stock, that will not replicate the greeks; it only creates delta. It is the commitment to the strategy that achieves it. For example, if the price goes up and you are committed to buying more to increase your delta ... I think you have the correct understanding. The arbitrage is only possible if the risk-neutral probability distribution of the stock is perfectly known, as it is in your simple binomial model. In the real world you can never know the precise distribution, so you cannot create a true arbitrage between an option and its underlying stock in this manner. ... I'm going to hazard a guess that your problem is u**(N-i).
Large exponents are notoriously poor performers; I would first look to restructure that aspect of the code and then isolate other poorly performing sections afterwards. For example you might observe that: S_T[i] = S0 * u**(N-i) * d**(i) is equivalent to: S_T[i] = S0 * u**N * (d/u)**i; then u**N ... This sounds like the first chapter of Björk's book, am I right? It treats a single-stage model. Simply put, if $1+r \leq d$ you buy the stock and have $V_1\geq 0$ with positive probability of making a profit. If $1+r \geq u$ you want to sell the stock short and buy the bond from the proceeds. The result is the same. Edit: To show that the condition is ... Since all your options have the same strike, you do not have any "explicit" skew or smile exposure in your portfolio. If I had to guess, almost all of your P&L can be explained by primary exposures, with some Theta losses offset by your Gamma scalping and Vega gains. An example of a book with an explicit smile exposure would be a vega-neutral fly - you ... First you have to find the $p$, $u$ and $d$ that match the standard deviations in part (I). The problem is a little underdefined in that even if you match mean and standard deviation precisely, there are multiple solutions. Impose an extra condition to get all three. Once this is done compute $q$ via $$q = \frac{e^{r \delta t} - d}{u-d}.$$ The state-price ... You agree that the proposition is proven if the equations have a unique solution. You agree that there is a unique solution if $u>d$. Then we just have to show that $u>d$. But the definition of $u$ and $d$ is that we have a binomial model where there are two possible outcomes for the stock, a higher outcome $su$ and a lower outcome $sd$. Hence $u\geq d$ by assumption, ...
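The restructuring suggested in the first snippet above can be sketched as follows; the variable names and parameter values are illustrative, not from the asker's code:

```python
S0, u, d, N = 100.0, 1.01, 0.99, 20000

# naive version: a fresh large exponentiation for every node
naive = [S0 * u**(N - i) * d**i for i in range(N + 1)]

# restructured: one big power up front, then a cheap running ratio
ratio = d / u
fast = []
acc = S0 * u**N
for i in range(N + 1):
    fast.append(acc)
    acc *= ratio  # acc is now S0 * u**N * (d/u)**(i+1)

# the two arrays agree up to floating-point rounding
assert all(abs(a - b) <= 1e-8 * max(abs(a), abs(b), 1.0)
           for a, b in zip(naive, fast))
```

The restructured loop does a single large exponentiation instead of one per node, which is the point of the suggestion; the price is a little accumulated rounding from the repeated multiplications.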
Otherwise, one can buy one and sell the other... it's arbitrage in the "free ... The general formula to answer this question can be found on pages 105-106 of Introduction to Mathematical Finance by Pliska. In general: $\bar{\mathbb{P}}\left ( M_{4} \geq 4\left ( 2^{i} \right ) \right )$ (or the probability that the maximum-to-date price was 4, 8, 16 etc.) is equal to: the probability of the stock price finishing at $M_{4}$; plus the ... Hmm, Trigeorgis seems to be saying that the value of being able to switch at one of the times 1, 2 and 3 is the same as the sum of being able to switch at each of the times. This seems wrong to me, since if you switch at time 1, the ability to switch at time 2 becomes worthless. Valuing an exchange option on a binomial tree is pretty easy -- ... Let's illustrate with a one-step tree. Take a call option. Without even making a specific assumption about the payout of the option, except that it will be greater in case of an uptick than a downtick: $f_u>f_d$. The price at time 0 for the option will be $f=(1+r)^{-1}f_u$ by the risk-neutral valuation formula, since you assume $1+r=u$. Sell the option ... Black-Scholes can be seen as the continuous limit of a binomial model when the number of steps goes to infinity (this can be seen as a consequence of Donsker's theorem). Thus it is normal that your call price in the one-period model is different from the one in the BS model. If you have $n$ steps in your binomial model to describe the period $[0,T]$ and if your ... I think the proof has already been provided at the end of the proof of Shreve's Theorem 4.4.5. Specifically, note that, since \begin{align*}\frac{1}{(1+r)^{n \wedge \tau^*}}V_{n \wedge \tau^*}\end{align*} is a martingale, \begin{align*}\tilde{\mathbb{E}}\left(\frac{1}{(1+r)^{N \wedge \tau^*}}V_{N \wedge \tau^*}\right) &= V_0 = \max_{\tau \in S_0} \... It is as simple as just taking the max().
The problem is that you took the wrong one. You must consider the max between the intrinsic value of the option on the one hand and its discounted continuation value (which is an expectation in the risk-neutral world) on the other. In your final loop, you should therefore replace the line W(j+1,1) = max(K-W(j+2,1)... Increase the number of paths in your simulation for getting the terminal prices, and at some point your Monte Carlo option price will finally converge to the Black-Scholes option price, as you are using a very long maturity call option, i.e. a 10-year call option. What if you write $$P[R_{n+1} = d|F_n] = 1 - P[R_{n+1} = u|F_n] ?$$ Let us write $P(u) = P[R_{n+1} = u|F_n]$. Then the part to show is $$u \bar{S}_n P(u) + d \bar{S}_n (1-P(u))$$ and this equals $$\bar{S}_n \left(d +(u-d)P(u) \right),$$ where we just expanded terms and then extracted the coefficients. This is not the Taylor expansion with respect to $t$; instead, it is the Taylor expansion with respect to $S$. Moreover, the prices at time $t+\delta t$ are used for the approximation. That is, \begin{align*}\frac{\partial V}{\partial S}\big|_t &\approx \frac{V(uS, t) - V(vS, t)}{uS - vS}\\&\approx \frac{V(uS, t+\delta t) - V(vS, t+\delta t)}{S(u-v)}\\&... In the link you provided, by noting the construction of the array p[], p0 and p1 are respectively the discounted $\texttt{down}$ and $\texttt{up}$ probabilities. Since $d=\frac{1}{u}$, then \begin{align*}p0 &= e^{-r \Delta T}\, \frac{u-e^{(r-q)\Delta T}}{u-d}\\&= \frac{\big(u\,e^{-r \Delta T} -e^{-q\Delta T}\big)u }{u^2-1},\end{align*} and \begin{...
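The fix described in the first snippet, taking the max of intrinsic value and discounted continuation value in the backward induction, can be sketched as follows. This is a minimal CRR tree with illustrative parameters, not the asker's actual code:

```python
import math

def american_put_crr(S0, K, r, sigma, T, n):
    # CRR tree: u = exp(sigma*sqrt(dt)), d = 1/u, risk-neutral probability q
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # option values at maturity (j = number of up-moves)
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction: max of early-exercise value and discounted continuation
    for step in range(n - 1, -1, -1):
        for j in range(step + 1):
            cont = disc * (q * values[j + 1] + (1.0 - q) * values[j])
            intrinsic = K - S0 * u**j * d**(step - j)
            values[j] = max(intrinsic, cont)
    return values[0]

print(round(american_put_crr(100.0, 100.0, 0.05, 0.2, 1.0, 500), 4))
```

Dropping the max() and keeping only the continuation value would reproduce the bug described in the snippet: the result would collapse to the European price and undervalue the American option.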
Answer $s(1.3)\approx4$ After $t=1.3$ seconds, the weight is about 4 inches above the equilibrium position. Work Step by Step We calculate $s(1.3)$ by substituting $t=1.3$ into the equation and solving: $s(t)=-5\cos 4\pi t$ $s(1.3)=-5\cos (4\pi\times1.3)$ $s(1.3)=-5\cos (16.34)$ $s(1.3)=-5(-0.81)$ $s(1.3)\approx4$ We know that the motion starts from $-5$ inches. Since the value of $s$ is positive, this means that the weight is moving upwards and has passed the equilibrium position. Therefore, after $t=1.3$ seconds, the weight is about 4 inches above the equilibrium position.
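The arithmetic can be checked directly; a quick sketch using the model $s(t)=-5\cos 4\pi t$ given in the step-by-step:

```python
import math

def s(t):
    # displacement (inches) of the weight at time t (seconds)
    return -5 * math.cos(4 * math.pi * t)

value = s(1.3)
print(round(value, 2))  # about 4.05, i.e. roughly 4 inches
```

Since the value is positive, the weight is above the equilibrium position, matching the conclusion above.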
When solving equations like $$\begin{align} 4x-4 &=\frac{(2x)^2}{x} \\ -4 &= \frac{4x^2}{x} -4x \\ -4 &= 4x -4x \\[0.2em] -4 &= 0\end{align}$$ using the equality symbol feels like an abuse of notation, since you'll end up with $-4=0$, which is false. For instance, I feel it would be better to write $$\begin{align} 4x-4 &\:\Box\:\frac{(2x)^2}{x} \\ -4 &\:\Box\: \frac{4x^2}{x} -4x \\ -4 &\:\Box\: 4x -4x \\[0.4em] -4 &\:\Box\: 0 \\[0.3em] -4 &\neq 0\end{align}$$ So I was wondering: is there a symbol or any other notation in use when trying to solve such an equation, where you don't yet know whether equality holds?
Euclidean Geometry Definition: Two distinct points \(x\) and \(z\) in a Euclidean space (simply called a space in the following), viewed as a subspace of \(\mathbb{R}^{n}\) with \(n \in {}^{\omega }\mathbb{N}^{*}\) (see Set Theory), are said to be a pair (of points). A line segment is a pair \((x, z)\) together with all inner points \(y\) in the space that are distinct from both the starting point \(x\) and the end point \(z\) and that lie between \(x\) and \(z\), satisfying \(||x - y|| + ||y - z|| = ||x - z||\) with respect to the Euclidean norm \(||\cdot||\). Definition: Two line segments are said to intersect if they have precisely one point in common. This includes the case when the common point is only found after completing each line segment with all other inner points of that segment within \(\mathbb{R}^{n}\). A possibly one-dimensional set of points in the space with the property that each point has at least one and at most two gaplessly neighbouring points is called a line. A maximal two-dimensional subspace is named a plane. Definition: A line segment is said to be a straight line if both its starting point and its end point lie on the boundary of the space, for the time being with the additional requirement that none of its inner points do. Two line segments are said to be parallel if one may be obtained from the other by means of a translation, or if the minimum distances between each point on one line segment and the other line segment are identical. Any line segment in the space parallel to one of the straight lines defined above is also named a straight line. Result: By defining short straight lines, arbitrarily many counterexamples to Pasch's axiom, the axiom of line completeness, and various other axioms and their equivalents can be given on the basis of the above.
If a straight line uniquely defines a parallel straight line running through a given point by their shortest distance, the parallel postulate is redundant in Euclidean Geometry. If two straight lines are only considered to be parallel when they lie in the same plane and do not intersect, then the parallel postulate does not hold: The reciprocal of the distance between the straight line and the given point may be greater than infinity or smaller than \(|{}^{\omega }\mathbb{N}|\), and then infinitely many distinct straight lines can be found that pass through the given point without intersecting the original straight line. The Archimedean axiom must be extended to the case where a segment is marked off an infinite natural number of times without exceeding the starting point or end point of a straight line, or replaced by the Archimedean theorem (in the finite case). Pasch's axiom is also unnecessary, since every straight line must be fully contained in the interior of some triangle due to its maximum length, and hence so must its boundary. Toeplitz' conjecture: Every Jordan curve admits an inscribed square. Counterexamples: The right-angled triangle with two sides of length \(d0\) and the obtuse triangle where we infinitesimally move a vertex of at most one inscribed square within the limits.\(\square\) Theorem: There is a Jordan domain with more than one equichordal point (cf. [931], p. 9 f.). Proof: We infinitesimally juxtapose equichordal points within the limits.\(\square\) Fickett's theorem: For any relative positions of two overlapping congruent rectangular \(n\)-prisms \(Q\) and \(R\) with \(n \in {}^{\omega }\mathbb{N}_{\ge 2}\), we have for the exact standard measure \(\mu\) (see Nonstandard Analysis and [931], p. 
25), where \(\mu\) for \(n = 2\) needs to be replaced by the Euclidean path length \(L\), that:\[1/(2n - 1) < r := \mu(\partial Q \cap R)/\mu(\partial R \cap Q) < 2n - 1.\]Proof: Since the underlying extremal problem has its maximum for rectangles with the side lengths \(s\) and \(s + 2d0\), min \(r = s/(3s - 2d0) \le r \le\) max \(r = (3s - 2d0)/s\) holds. The proof for \(n > 2\) is analogous.\(\square\)
We defined branched coverings in class as follows. Let $\Sigma_1, \Sigma_2$ be two surfaces; $f: \Sigma_1 \longrightarrow \Sigma_2$ is a branched covering if for all $y \in \Sigma_2$ there exists $V\subset \Sigma_2$ containing $y$ so that $f^{-1}(V)= U_1 \cup U_2 \cup \ldots \cup U_n$, where each $f: U_j \longrightarrow V$ is given by $z \longmapsto z^k$ for some $k\geq1$. The points where $k>1$ are called ramification points. Now, here is what I did not understand. When I searched the web, the standard definition of a branched covering is that it is a covering except over some small set. How are these two definitions related? As far as I understand, in the definition I gave we view the surface as a complex manifold (with a complex structure on it), but then there might be many complex structures which are not biholomorphic to each other; is the definition independent of the complex structure we put on the surface? I would appreciate it if you could suggest some source explaining monodromy and branched coverings. To understand the first definition a little, I took the standard sphere in $\mathbb{R}^3= \mathbb{C} \times \mathbb{R}$ and the map taking $(z,t) \longmapsto (z^2,t)$. This is a branched covering, the north and south poles being branch points with respect to the second definition. Now I will try to see that it is a branched covering with these branch points according to the first definition. I need to have some complex coordinates on the sphere and must express the map so that, hopefully, it will be some power of $z$, so that I can decide what kind of point it is. Am I on the right track?
If I interpret the request a bit differently, I would say that the Steenrod operations in the cohomology of a spectrum tell you about the attachments of the cells. If $Sq^1 x = y$, then a cell dual to $y$ is attached by a map of degree 2 mod 4 to a cell dual to $x$. Similarly, $Sq^2 x = y$ tells us the attaching map is $\eta$, $Sq^4$ detects $\nu$ and $Sq^8$ detects $\sigma$. This doesn't go very far, but may help with the need to 'get a real grip on what they're doing'. Next, let's assume you're really interested in homotopy, not just (co)homology. A class dual to a homology class in the image of the Hurewicz homomorphism must be indecomposable under the action of the Steenrod algebra, by naturality w.r.t. the map $S^n \longrightarrow X$. This limits the homotopy of $X$ which can be detected by the homomorphism $\pi_* X \longrightarrow H_* X$: the homomorphism $H^* X \longrightarrow H^* S^n$ can only map indecomposables non-trivially, since all classes in degrees below $n$ must go to $0$. Then there are the relations. The fact that $Sq^n$ is decomposable when $n$ is not a power of two tells us that if $y = Sq^n x$, there must be other classes between $x$ and $y$. E.g., $Sq^3 x = y \neq 0$ tells us that $Sq^2 x \neq 0$ also, since $Sq^3 = Sq^1 Sq^2$. So our spectrum can't have just two cells, dual to $x$ and $y$, but must have a three-cell subquotient with top cell attached by 2 (mod 4) to a cell attached by $\eta$ to the bottom cell. Or, if $Sq^2 Sq^2 x = y \neq 0$ then we must also have nonzero classes $Sq^1 x$ and $Sq^2 Sq^1 x$, since $Sq^2 Sq^2 = Sq^1 Sq^2 Sq^1$, and vice versa, if $Sq^1 Sq^2 Sq^1 x = y \neq 0$ then $Sq^2 x \neq 0$ as well. This leads to an easy proof that the mod 2 Moore spectrum $M$ isn't a ring spectrum, since $2 \pi_0 M = 0$ but $\pi_2 M = \mathbb{Z}/4$, by looking at the obstruction to attaching the top cell of a putative spectrum with nonzero cohomology spanned by $x$, $Sq^1 x$, $Sq^2 Sq^1 x$, and $Sq^1 Sq^2 Sq^1 x$.
Moreover, the fact that you can only add such a top cell if you also have a class $Sq^2 x$, so that the top cell can be attached by the sum of $Sq^1$ on $Sq^2 Sq^1 x$ and $Sq^2$ on $Sq^2 x$, shows that $\eta^2$ (corresponding to the path $Sq^2 Sq^2$ from bottom to top) must lie in the Toda bracket $\langle 2, \eta, 2\rangle$, corresponding to the path $Sq^1$, $Sq^2$, $Sq^1$ from bottom to top. Similarly, $y = Sq^1 Sq^2 x$ tells us that homotopy supported on a cell dual to $x$ can be acted on by $\text{v}_1$ to get $y$, literally if we have a $ku$-module and multiply by $\text{v}_1 \in ku_2$, or as the Toda bracket $\langle 2, \eta, -\rangle$ more generally. The key fact here is that $\text{v}_1 \in ku_2$ is in $\langle 2, \eta, 1_{ku} \rangle$, where $1_{ku} : S \longrightarrow ku$ is the unit. Likewise, $Sq^2 Sq^1 Sq^2 x = y$ corresponds to multiplication by the generator of $ko_4$, literally for $ko$-modules, or as a bracket $\langle \eta, 2, \eta, - \rangle$ more generally. Here you have to be in a situation where $2 \nu = 0$ to form the bracket, since $\langle \eta, 2, \eta \rangle = \{ 2\nu, 6 \nu\}$. This hints that the role of $\nu$ is non-trivial in real K-theory, despite going to $0$ under the homomorphism $\pi_* S \longrightarrow \pi_* ko$ and despite the cohomology of $ko$ being induced up from the subalgebra $A(1)$ generated by $Sq^1$ and $Sq^2$. The Adem relation $Sq^2 Sq^1 Sq^2 = Sq^1 Sq^4 + Sq^4 Sq^1$ shows that $Sq^4$ must act nontrivially if $Sq^2 Sq^1 Sq^2$ does. Also, the fact that $A(1)//A(0)$ is spanned by $1$, $Sq^2$, $Sq^1 Sq^2$, and $Sq^2 Sq^1 Sq^2$ tells us (with a bit more work) that we can build $H\mathbb{Z}$ as a four-cell $ko$-module. A good way to organize all this information is the Adams spectral sequence, which tells you that the mod $p$ cohomology of $X$ gives a decent first approximation, $\text{Ext}_{A}(H^*X,\mathbb{F}_p)$, to the homotopy of the $p$-completion of $X$.
Let me first answer your question in general. The SVM is not a probabilistic model. One reason is that it does not correspond to a normalizable likelihood. For example, in regularized least squares you have the loss function $\sum_i \|y_i - \langle w, x_i\rangle - b\|_2^2$ and the regularizer $\|w\|_2^2$. The weight vector is obtained by minimizing the sum of the two. However, this is equivalent to maximizing the log-posterior of $w$ given the data, $p(w|(y_1,x_1),...,(y_m,x_m)) \propto \frac{1}{Z} \exp(-\|w\|_2^2)\prod_i \exp(-\|y_i - \langle w, x_i\rangle - b\|_2^2)$, which you can see to be the product of a Gaussian likelihood and a Gaussian prior on $w$ ($Z$ makes sure that it normalizes). You get to the Gaussian likelihood from the loss function by flipping its sign and exponentiating it. However, if you do that with the loss function of the SVM, the result is not a normalizable probabilistic model. There are attempts to turn the SVM into one. The most notable one, which is, I think, also implemented in libsvm, is: John Platt: Probabilistic outputs for Support Vector Machines and Comparison to Regularized Likelihood Methods (NIPS 1999): http://www.cs.colorado.edu/~mozer/Teaching/syllabi/6622/papers/Platt1999.pdf To answer your question more specifically: the idea in SVMs indeed is that the further a test vector is from the hyperplane, the more it belongs to a certain class (except when it's on the wrong side, of course). In that sense, support vectors do not belong to their class with high probability, because they are either the ones closest to, or on the wrong side of, the hyperplane. The $\alpha$ value that you get from libsvm has nothing to do with the $\alpha$ in the decision function. It is rather the output of the decision function $\sum_{i \in SV}\alpha_i k(x,x_i) + b$ (and should therefore properly be called $y$).
Since $y = \sum_{i \in SV}\alpha_i k(x,x_i) + b = \langle w, \phi(x) \rangle_{\mathcal H} + b$, where $w$ lives in the reproducing kernel Hilbert space, $y$ is proportional to the signed distance to the hyperplane. It would be the actual signed distance if you divided by the norm of $w$, which in kernel terms is $\|w\|_{\mathcal H} = \sqrt{\sum_{i,j\in SV} \alpha_i \alpha_j k(x_i,x_j)}$.
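A minimal sketch of the relation between the decision value $y$ and the geometric distance, using a hand-picked linear-kernel toy model; the support vectors and dual coefficients below are made up for illustration, not taken from a trained libsvm model:

```python
import math

# toy "trained SVM": hand-picked support vectors and dual coefficients
sv    = [(1.0, 2.0), (-1.0, 0.5), (0.5, -1.0)]
alpha = [0.7, -0.4, -0.3]   # alpha_i (signs already include the labels)
b     = 0.1

def k(x, z):                # linear kernel, so phi is the identity
    return x[0] * z[0] + x[1] * z[1]

def decision(x):            # y = sum_i alpha_i k(x, x_i) + b
    return sum(a * k(x, s) for a, s in zip(alpha, sv)) + b

# ||w||_H computed purely from the kernel, w = sum_i alpha_i phi(x_i)
norm_w = math.sqrt(sum(ai * aj * k(xi, xj)
                       for ai, xi in zip(alpha, sv)
                       for aj, xj in zip(alpha, sv)))

# for a linear kernel we can also form w explicitly and cross-check
w = (sum(a * s[0] for a, s in zip(alpha, sv)),
     sum(a * s[1] for a, s in zip(alpha, sv)))
assert abs(norm_w - math.hypot(*w)) < 1e-12

x = (2.0, 1.0)
signed_distance = decision(x) / norm_w  # geometric margin of x
```

With a real trained model the same computation applies, with the fitted dual coefficients and support vectors in place of the made-up ones.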
Taiwanese Journal of Mathematics Taiwanese J. Math. Volume 19, Number 2 (2015), 505-517. THE (NORMALIZED) LAPLACIAN EIGENVALUE OF SIGNED GRAPHS Abstract A signed graph $\Gamma=(G, \sigma)$ consists of an unsigned graph $G=(V, E)$ and a mapping $\sigma: E \rightarrow \{+, -\}$. Let $\Gamma$ be a connected signed graph and $L(\Gamma), {\cal L}(\Gamma)$ be its Laplacian matrix and normalized Laplacian matrix, respectively. Suppose $\mu_1\geq \cdots \geq \mu_{n-1}\geq \mu_n\geq 0$ and $\lambda_1\geq \cdots \geq \lambda_{n-1}\geq \lambda_n\geq 0$ are the Laplacian eigenvalues and the normalized Laplacian eigenvalues of $\Gamma$, respectively. In this paper, we give two new lower bounds on $\lambda_1$ which are both stronger than Li's bound [8] and obtain a new upper bound on $\lambda_n$ which is also stronger than Li's bound [8]. In addition, Hou [6] proposed a conjecture for a connected signed graph $\Gamma$: $\sum\limits_{i=1}^k\mu_i\gt \sum\limits_{i=1}^k d_i$ $(1\leq k\leq n-1)$. We investigate $\sum\limits_{i=1}^k\mu_i$ $(1\leq k\leq n-1)$ and partly solve the conjecture. Article information Source Taiwanese J. Math., Volume 19, Number 2 (2015), 505-517. Dates First available in Project Euclid: 4 July 2017 Permanent link to this document https://projecteuclid.org/euclid.twjm/1499133643 Digital Object Identifier doi:10.11650/tjm.19.2015.4675 Mathematical Reviews number (MathSciNet) MR3332310 Zentralblatt MATH identifier 1357.05087 Subjects Primary: 05C50: Graphs and linear algebra (matrices, eigenvalues, etc.) Citation Liu, Ying; Shen, Jian. THE (NORMALIZED) LAPLACIAN EIGENVALUE OF SIGNED GRAPHS. Taiwanese J. Math. 19 (2015), no. 2, 505--517. doi:10.11650/tjm.19.2015.4675. https://projecteuclid.org/euclid.twjm/1499133643
I am interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ where the $x$-th bit is set to 1. Moreover, $y$ bits are set to 1 including the $x$-th bit, and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we require that $\overline F$ be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition, do we get a different notion? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. 
Is that often part of the definition or obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to the above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $\lim_{n→∞} n!\,g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$ . Can you give some hint? My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \left|\frac{a_{n+1}}{a_n}\right|<1$, then it converges to zero. 
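The $n!\,g_n(t)\to 0$ claim can also be checked numerically: iterating the bound $|g_n(t)|\le \max|g|\cdot t^{n-1}/(n-1)!$ gives $|n!\,g_n(t)|\le n\max|g|\cdot t^{n-1}$, which tends to $0$ for $t\le\frac12$. A small sketch with the hypothetical choice $g(t)=\cos t$, iterating the integral with a trapezoid rule:

```python
import numpy as np
from math import factorial

# Numerical check that n! * g_n(1/2) -> 0 for g_{n+1}(t) = ∫_0^t g_n(s) ds.
# Hypothetical test function: g(t) = cos(t).
t = np.linspace(0.0, 0.5, 2001)
g = np.cos(t)                      # g_1 = g
vals = []
for n in range(1, 16):
    vals.append(factorial(n) * g[-1])   # record n! * g_n(1/2)
    # g_{n+1}(t) = ∫_0^t g_n(s) ds via cumulative trapezoid rule
    g = np.concatenate(([0.0],
                        np.cumsum((g[1:] + g[:-1]) / 2) * (t[1] - t[0])))

print(vals)  # shrinks toward 0, consistent with the bound n * (1/2)^(n-1)
```

The values track the analytic bound $n\,(1/2)^{n-1}$, which makes the limit plausible before writing the formal proof.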
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero), I get a set of $n$ equations, with $n$ the number of coefficients: a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented and the values of the functional obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, or so to say a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome with digitsum($z$) = digitsum($x$). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function with Fourier series divergent at a point. Afterwards, Hermann Amandus Schwarz gave an easier example.) 
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
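Returning to the variational question above: for a quadratic functional over a linear ansatz, the stationary values of the Rayleigh quotient $R(c) = (c^T H c)/(c^T S c)$ are exactly the roots of the secular determinant $\det(H - \lambda S) = 0$, i.e. the eigenvalues of the generalized problem $Hc = \lambda Sc$. A minimal sketch with hypothetical random matrices (H symmetric, S symmetric positive-definite, standing in for the functional and overlap matrices of a 3-function basis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matrices for a 3-function ansatz:
# H[i,j] = bilinear functional on basis pair, S[i,j] = overlap.
A = rng.standard_normal((3, 3))
H = A + A.T                       # symmetric "functional" matrix
B = rng.standard_normal((3, 3))
S = B @ B.T + 3 * np.eye(3)       # symmetric positive-definite overlap

# Roots of det(H - lam*S) = 0 are the eigenvalues of S^{-1} H;
# these are the stationary values of R(c) = (c^T H c)/(c^T S c).
lams = np.sort(np.linalg.eigvals(np.linalg.solve(S, H)).real)

# Any trial vector's Rayleigh quotient lies between the extreme roots,
# so the smallest root is the best value the ansatz can achieve.
c = rng.standard_normal(3)
R = (c @ H @ c) / (c @ S @ c)
print(lams, R)
```

The "deeper principle" being gestured at is essentially the min-max (Courant-Fischer) characterization: the secular roots are the stationary values of the functional restricted to the ansatz subspace, so no coefficient solve is needed to read off the attainable values.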
I was wondering if anyone had recommendations for papers or resources on Bayesian analysis of frequentist hypothesis testing and the use of p-values? Brad Efron has this quote: One definition says that a frequentist is a Bayesian trying to do well, or at least not too badly, against any possible prior distribution. It's clear to me how this makes sense in the context of decision theory, but often I read things similar to: Cosma Shalizi: If you find a small p-value, yay; you've got enough data, with precise enough measurement, to detect the effect you're looking for, or you're really unlucky. What does this really mean (in a precise mathematical sense)? "Enough" data for what? For low enough errors if we take the estimate to be true? And what does "precise enough...to detect" mean? To me that only seems to make sense in the context of a Bayesian posterior distribution around the estimate. In the simplest case I can think of, consider two hypotheses, $H_0$ and $H_A$, with $\alpha = 0.05$ and $1-\beta = 0.8$. When we "reject the null", the probability that $H_0$ is true is: $\frac{P(H_0)\alpha}{P(H_A)(1-\beta) + P(H_0)\alpha}$ where $P$ is a prior distribution. Using the fact that $P(H_0) = 1-P(H_A)$, we can see how the probability that $H_0$ is true changes given various parameters $\alpha$ and $\beta$. The plot below, for example, shows the problem of low-power testing (see the replication crisis). However, it's not as clear to me how to interpret the use of p-values in the more common, complex practice (in which the parameter is continuous, p-values inform data decisions or the decision to look for more data, etc...)
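The two-hypothesis formula in the question is easy to evaluate directly; a small sketch (the function name is my own) showing how the post-rejection probability of $H_0$ depends on the prior:

```python
def prob_h0_given_reject(p_h0, alpha=0.05, power=0.8):
    """P(H0 is true | test rejects), from the formula
    P(H0)*alpha / (P(HA)*(1-beta) + P(H0)*alpha)."""
    p_ha = 1.0 - p_h0
    return (p_h0 * alpha) / (p_ha * power + p_h0 * alpha)

# With a 50/50 prior a rejection leaves about a 5.9% chance that H0 is true;
# with a skeptical prior P(H0) = 0.9 it is 36% -- the low-power /
# replication-crisis effect the plot illustrates.
for p in (0.5, 0.9):
    print(p, prob_h0_given_reject(p))
```

Lowering the power (e.g. `power=0.2`) makes the post-rejection probability of $H_0$ larger still, which is the point of the plot.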
The Annals of Mathematical Statistics Ann. Math. Statist. Volume 28, Number 1 (1957), 242-246. Consistency of Certain Two-Sample Tests Abstract Let $X_1, \cdots, X_m; Y_1, \cdots, Y_n$ be independently distributed on the unit interval. Assume that the $X$'s are uniformly distributed and that the $Y$'s have an absolutely continuous distribution whose density $g(y)$ is bounded and has at most finitely many discontinuities. Let $Z_0 = 0, Z_{n + 1} = 1$, and let $Z_1 < \cdots < Z_n$ be the values of the $Y$'s arranged in increasing order. For each $i = 1, \cdots, n + 1$ let $S_i$ be the number of $X$'s which lie in the interval $\lbrack Z_{i - 1}, Z_i\rbrack$. For each nonnegative integer $r$, let $Q_n(r)$ be the proportion of values among $S_1, \cdots, S_{n + 1}$ which are equal to $r$. Suppose $m$ and $n$ approach infinity in the ratio $(m/n) = \alpha > 0$. In Section 2 it is shown that $$\lim_{n \rightarrow \infty} \sup_{r \geqq 0} |Q_n(r) - Q(r)| = 0$$ with probability one, where $$Q(r) = \alpha^r \int^1_0 \frac{g^2(y)}{\lbrack\alpha + g(y)\rbrack^{r + 1}}dy.$$ This result may be used to prove consistency of certain tests of the hypothesis that the two samples have the same continuous distribution. Several such examples are given in Section 3. A further property of one of these tests is briefly discussed in Section 4. Article information Source Ann. Math. Statist., Volume 28, Number 1 (1957), 242-246. Dates First available in Project Euclid: 27 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aoms/1177707048 Digital Object Identifier doi:10.1214/aoms/1177707048 Mathematical Reviews number (MathSciNet) MR84956 Zentralblatt MATH identifier 0087.14601 JSTOR links.jstor.org Citation Blum, J. R.; Weiss, Lionel. Consistency of Certain Two-Sample Tests. Ann. Math. Statist. 28 (1957), no. 1, 242--246. doi:10.1214/aoms/1177707048. https://projecteuclid.org/euclid.aoms/1177707048
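The limit in the abstract is easy to see empirically in the simplest case $g \equiv 1$ (both samples uniform), where $Q(r) = \alpha^r/(1+\alpha)^{r+1}$. A simulation sketch, with sample sizes and seed chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 20000, 2.0
m = int(alpha * n)                          # keep m/n = alpha

x = rng.uniform(size=m)                     # the X sample (uniform)
z = np.sort(rng.uniform(size=n))            # Y sample; here g = 1 (uniform)
edges = np.concatenate(([0.0], z, [1.0]))   # Z_0 = 0, Z_1 < ... < Z_n, Z_{n+1} = 1
counts = np.histogram(x, bins=edges)[0]     # S_1, ..., S_{n+1}

for r in range(4):
    q_n = np.mean(counts == r)              # empirical Q_n(r)
    q = alpha**r / (1 + alpha)**(r + 1)     # limit Q(r) for g = 1
    print(r, q_n, q)
```

With $\alpha = 2$ this gives $Q(0) = 1/3$, $Q(1) = 2/9$, $Q(2) = 4/27$, and the empirical proportions match to within sampling error, which is the null-distribution behaviour the consistency argument builds on.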