Pro-p Extensions of Global Fields and pro-p Groups

The meeting \emph{Pro-p Extensions of Global Fields and pro-p Groups} was organised by Nigel Boston (Madison), John Coates (Cambridge) and Fritz Grunewald (Düsseldorf). As the name of the meeting conveys, a primary aim was to bring together group theorists working in the field of pro-p groups and number theorists interested in pro-p extensions of global fields. The workshop consisted of over 25 talks, supplemented by informal discussions. Topics included: Galois groups of extensions with restricted ramification, self-similar and automata groups, non-commutative Iwasawa theory, and groups acting on rooted trees. The meeting was well attended, with over 50 participants; more than 30 of these came from countries other than Germany. The range of topics and the diverse backgrounds of the participants led to a stimulating exchange of recent results, challenging problems and general ideas. The organisers and participants thank the \emph{Mathematisches Forschungsinstitut Oberwolfach} for providing the setting for this successful workshop. The following extended abstracts appear in the chronological order of the talks; they were collected and edited by Benjamin Klopsch (Düsseldorf).

Christian Liedtke, Nigel Boston, John H. Coates, Pro-p Extensions of Global Fields and pro-p Groups. Oberwolfach Rep. 3 (2006), no. 2, pp. 1463–1536
Higher Torsion Invariants in Differential Topology and Algebraic K-Theory

The classical Franz-Reidemeister torsion and its cousins, the Whitehead torsion and the Ray-Singer analytic torsion, are topological invariants of manifolds with local coefficient systems (or flat vector bundles) that can distinguish homotopy equivalent spaces that are not homeomorphic. The purpose of this Arbeitsgemeinschaft was to learn about several natural generalisations of these classical invariants to families of manifolds.

Consider a family $p\colon E\to B$ of compact manifolds $M$, equipped with a flat vector bundle $F\to M$. Then the fibrewise cohomology groups $H^\bullet(E/B;F)$ form flat vector bundles over the base $B$. The starting point for our investigations are analogues of the Atiyah-Singer family index theorem that relate $F$ and $H^\bullet(E/B;F)$. To a flat vector bundle $F\to M$, one associates Kamber-Tondeur characteristic classes $c_\bullet(F)\in H^{\text{odd}}(M;\mathbb R)$, which vanish if $F$ carries a parallel metric. By Bismut-Lott~\cite{BLin}, one has
\[ \sum_i(-1)^i c_\bullet\bigl(H^i(E/B;F)\bigr) = \int_{E/B} e(TM)\, c_\bullet(F) \quad\in H^*(B;\mathbb R)\;, \]
where $e(TM)$ is the Euler class of the vertical tangent bundle, and the right hand side is the Becker-Gottlieb transfer in de Rham cohomology. If one specifies some additional geometric data, then all classes above are naturally represented by specific differential forms. On the level of differential forms, the equation above only holds up to a correction term $d\mathcal T$. Here $\mathcal T$ is the higher analytic torsion, which depends naturally on the fibration and the geometric data. If both $H^\bullet(E/B;F)$ and $F$ admit parallel metrics, then $\mathcal T$ gives rise to a secondary characteristic class $\mathcal T(E/B;F)\in H^{\text{even},\ge2}(B;\mathbb R)$.

Dwyer-Weiss-Williams \cite{DWWin} construct Reidemeister torsion for a smooth fibre bundle $p\colon E \to B$ as a byproduct of a family index theory.
If $p$ is any fibre bundle with fibres compact topological manifolds and base a CW complex, then the family index theorem states that $\chi(p)$, the A-theory Euler characteristic of $p$, is determined by the A-theory Euler class of $\tau_{\mathrm{fib}}(p)$, the tangent bundle along the fibre. Here A-theory is the algebraic K-theory of spaces in the sense of Waldhausen. More precisely, by applying fibrewise Poincaré duality, and then an assembly map, to the A-theory Euler class, one gets the A-theory Euler characteristic. If $p$ is a smooth bundle, then one gets a stronger smooth index theorem, where the A-theory Euler class is replaced by the Becker-Euler class, which lives in the (twisted) stable cohomotopy of $E$. If $B$ is a point, this result is equivalent to the classical Poincaré-Hopf theorem.

The third approach is due to Igusa-Klein~\cite{Ibookin}, and is somewhat different in nature. Here, one considers a generalised fibrewise Morse function on $M\to B$. Together with a flat vector bundle $F\to M$, this gives rise to a classifying map from $B$ to a Whitehead space, and the higher Franz-Reidemeister torsion is the pullback of a universal class on the Whitehead space.

There are conjectural relations between all three definitions of higher torsion. In a special case, Igusa has characterised higher Franz-Reidemeister torsion axiomatically; checking these axioms for either of the other higher torsions would prove equality. For some bundles, equality of higher Franz-Reidemeister torsion and higher analytic torsion can be shown analytically using the Witten deformation. Finally, one expects that higher Franz-Reidemeister torsion can be recovered from Dwyer-Weiss-Williams torsion.

It turns out that higher torsion invariants are somewhat finer than classical Franz-Reidemeister torsion, since they detect higher homotopy classes of the diffeomorphism group of high-dimensional manifolds that vanish under the forgetful map to the homeomorphism group.
In particular, these invariants distinguish differentiable structures on a given topological fibre bundle $M\to B$, where one may even fix differentiable structures on $M$, on $B$ and on the typical fibre. There are also applications of higher torsions to problems in graph theory and to moduli spaces of compact surfaces. Some of these were sketched throughout this Arbeitsgemeinschaft.

The talks were grouped as follows.
\begin{enumerate}
\item The first talk gave a short introduction to classical torsion invariants.
\item In talks 2--7, we discussed the Dwyer-Weiss-Williams homotopy theoretical approach.
\item Parametrised Morse theory, Kamber-Tondeur classes and Igusa-Klein torsion were discussed in talks 8--16, and some applications were given.
\item Finally, based on talks 10 and 11, we introduced analytic torsion in talks 17--19.
\end{enumerate}

The meeting took place from April 2 to April 8, 2006 and was organised by Sebastian Goette (Regensburg), Kiyoshi Igusa (Brandeis) and Bruce Williams (Notre Dame). It was attended by 43 participants, coming mainly from Europe and the USA.

\begin{thebibliography}{99}
\bibitem{BLin} J.-M. Bismut, J. Lott, \textit{Flat vector bundles, direct images and higher real analytic torsion}, J. Amer. Math. Soc. \textbf{8} (1995), 291--363.
\bibitem{DWWin} W.~Dwyer, M.~Weiss, B.~Williams, \textit{A parametrized index theorem for the algebraic K-theory Euler class}, Acta Math. \textbf{190} (2003), 1--104.
\bibitem{Ibookin} K.~Igusa, \textit{Higher Franz-Reidemeister Torsion}, AMS/IP Studies in Advanced Mathematics 31, International Press, 2002.
\end{thebibliography}

Sebastian Goette, Kiyoshi Igusa, E. Bruce Williams, Higher Torsion Invariants in Differential Topology and Algebraic K-Theory. Oberwolfach Rep. 3 (2006), no. 2, pp. 979–1026
Collinearity

Points on a line

Examples in Euclidean geometry

The lines connecting the feet of the altitudes intersect the opposite sides at collinear points.[3]: p.199

A triangle's incenter, the midpoint of an altitude, and the point of contact of the corresponding side with the excircle relative to that side are collinear.[4]: p.120, #78

Menelaus' theorem states that three points P_1, P_2, P_3 on the sides (some extended) of a triangle opposite vertices A_1, A_2, A_3 respectively are collinear if and only if the following products of segment lengths are equal:[3]: p.147

P_1A_2 \cdot P_2A_3 \cdot P_3A_1 = P_1A_3 \cdot P_2A_1 \cdot P_3A_2.

In a convex quadrilateral ABCD whose opposite sides intersect at E and F, the midpoints of AC, BD, and EF are collinear, and the line through them is called the Newton line (sometimes known as the Newton-Gauss line). If the quadrilateral is a tangential quadrilateral, then its incenter also lies on this line.[6]

By Monge's theorem, for any three circles in a plane, none of which is completely inside one of the others, the three intersection points of the three pairs of lines, each externally tangent to two of the circles, are collinear.

Collinearity of points whose coordinates are given

In coordinate geometry, in n-dimensional space, three or more distinct points are collinear if and only if the matrix of the differences of their coordinates from one of the points has rank 1 or less. For example, three points X = (x_1, x_2, ..., x_n), Y = (y_1, y_2, ..., y_n), and Z = (z_1, z_2, ..., z_n) are collinear if and only if the matrix

\begin{bmatrix} y_1 - x_1 & y_2 - x_2 & \dots & y_n - x_n \\ z_1 - x_1 & z_2 - x_2 & \dots & z_n - x_n \end{bmatrix}

is of rank 1 or less. Equivalently, the points are collinear if and only if the matrix

\begin{bmatrix} 1 & x_1 & x_2 & \dots & x_n \\ 1 & y_1 & y_2 & \dots & y_n \\ 1 & z_1 & z_2 & \dots & z_n \end{bmatrix}

is of rank 2 or less. In particular, for three points in the plane (n = 2), the latter matrix is square, and the points are collinear if and only if its determinant is zero; since that 3 × 3 determinant is plus or minus twice the area of a triangle with those three points as vertices, this is equivalent to the statement that the three points are collinear if and only if the triangle with those points as vertices has zero area.

Collinearity of points whose pairwise distances are given

Three points A, B, and C are collinear if and only if their pairwise distances satisfy the Cayley-Menger condition

\det \begin{bmatrix} 0 & d(AB)^2 & d(AC)^2 & 1 \\ d(AB)^2 & 0 & d(BC)^2 & 1 \\ d(AC)^2 & d(BC)^2 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{bmatrix} = 0.

Two numbers m and n are not coprime (that is, they share a common factor other than 1) if and only if, for a rectangle plotted on a square lattice with vertices at (0, 0), (m, 0), (m, n), and (0, n), at least one interior lattice point is collinear with (0, 0) and (m, n).

Usage in statistics and econometrics

In statistics, collinearity refers to a linear relationship between two explanatory variables. Two variables are perfectly collinear if there is an exact linear relationship between the two, so the correlation between them is equal to 1 or −1.
That is, X_1 and X_2 are perfectly collinear if there exist parameters \lambda_0 and \lambda_1 such that, for all observations i, we have

X_{2i} = \lambda_0 + \lambda_1 X_{1i}.

More generally, multicollinearity refers to a situation in which k explanatory variables are exactly or approximately linearly related: exactly, if

X_{ki} = \lambda_0 + \lambda_1 X_{1i} + \lambda_2 X_{2i} + \dots + \lambda_{k-1} X_{(k-1),i}

holds for all observations i, and approximately, if

X_{ki} = \lambda_0 + \lambda_1 X_{1i} + \lambda_2 X_{2i} + \dots + \lambda_{k-1} X_{(k-1),i} + \varepsilon_i

where the variance of \varepsilon_i is relatively small.

Usage in other areas

Photogrammetry

The collinearity equations are a set of two equations, used in photogrammetry and computer stereo vision, to relate coordinates in an image (sensor) plane (in two dimensions) to object coordinates (in three dimensions). In the photography setting, the equations are derived by considering the central projection of a point of the object through the optical centre of the camera to the image in the image (sensor) plane. The three points (object point, image point and optical centre) are always collinear. Another way to say this is that the line segments joining the object points with their image points are all concurrent at the optical centre.[11]

References

- The concept applies in any geometry (Dembowski 1968, p. 26), but is often only defined within the discussion of a specific geometry (Coxeter 1969, p. 178; Brannan, Esplen & Gray 1998, p. 106).
- "Colinear", Merriam-Webster dictionary.
- Johnson, Roger A., Advanced Euclidean Geometry, Dover Publications, 2007 (orig. 1929).
- Scott, J. A., "Some examples of the use of areal coordinates in triangle geometry", Mathematical Gazette 83, November 1999, 472-477.
- Dušan Djukić, Vladimir Janković, Ivan Matić, Nikola Petrović, The IMO Compendium, Springer, 2006, p. 15.
- Myakishev, Alexei (2006), "On Two Remarkable Lines Related to a Quadrilateral", Forum Geometricorum 6: 289-295.
- Bradley, Christopher (2011), "Three Centroids created by a Cyclic Quadrilateral".
- Kock, N.; Lynn, G. S. (2012), "Lateral collinearity and misleading results in variance-based SEM: An illustration and recommendations", Journal of the Association for Information Systems 13 (7): 546-580. doi:10.17705/1jais.00302.
- It is more mathematically natural to refer to these equations as concurrency equations, but the photogrammetry literature does not use that terminology.
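The rank test for coordinate collinearity described above translates directly into a few lines of NumPy. This is an illustrative sketch (the function name `collinear` and the tolerance are choices made here, not from the article):

```python
import numpy as np

def collinear(*pts, tol=1e-9):
    """Rank test: the points are collinear iff the matrix of
    difference vectors P_i - P_1 has rank 1 or less."""
    p = np.asarray(pts, dtype=float)
    return bool(np.linalg.matrix_rank(p[1:] - p[0], tol=tol) <= 1)

print(collinear((0, 0), (1, 1), (2, 2)))   # True
print(collinear((0, 0), (1, 1), (2, 3)))   # False
```

For three points in the plane, the same answer can be obtained from the determinant/area criterion; the rank formulation has the advantage of working for any number of points in any dimension.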
Raul claims that he has a shortcut for deciding what kind of roots a function has. Jolene thinks that a shortcut is not possible; she says you just have to solve the quadratic equation to find out. They are working on y = x^2 - 5x - 14.

Jolene says, “See, I just start out by trying to factor. This one can be factored: (x - 7)(x + 2) = 0, so the equation will have two real solutions and the function will have two real roots.”

“But what if it can’t be factored?” Raul asked. “What about x^2 + 2x + 2 = 0?”

“That’s easy! I just use the Quadratic Formula,” says Jolene. “And I get… let’s see… negative two plus or minus the square root of… two squared… that’s 4… minus… eight…”

“Wait!” Raul interrupted. “Right there, see, you don’t have to finish. 2^2 is 4; subtract 4 · 2 and that gives you -4. That’s all you need to know. You’ll be taking the square root of a negative number, so you will get a complex result.”

“Oh, I see,” said Jolene. “I only have to consider part of the solution, the inside of the square root.”

Use Raul’s method to tell whether each of the following functions has real or complex roots without completely solving the equation. Note: Raul’s method is also summarized in the Math Notes box for this lesson.

y = 2x^2 + 5x + 4 (Hint: Calculate the discriminant, b^2 - 4ac, to see if it is positive or negative.)

y = 2x^2 + 5x - 3 (Hint: The discriminant is positive, so the roots are real.)
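Raul's shortcut amounts to checking only the sign of the discriminant b^2 - 4ac. A short Python sketch of the check (the function name `root_type` is just illustrative):

```python
def root_type(a, b, c):
    """Raul's shortcut: only the sign of the discriminant matters."""
    d = b*b - 4*a*c
    if d > 0:
        return "two real roots"
    if d == 0:
        return "one (repeated) real root"
    return "two complex roots"

print(root_type(1, -5, -14))  # x^2 - 5x - 14 -> "two real roots"
print(root_type(1, 2, 2))     # x^2 + 2x + 2  -> "two complex roots"
print(root_type(2, 5, 4))     # 2x^2 + 5x + 4 -> "two complex roots"
print(root_type(2, 5, -3))    # 2x^2 + 5x - 3 -> "two real roots"
```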
blkdiagbfweights

MIMO channel block diagonalized weights

[wp,wc] = blkdiagbfweights(chanmat,ns)
[wp,wc] = blkdiagbfweights(chanmat,ns,pt)

[wp,wc] = blkdiagbfweights(chanmat,ns) returns precoding weights, wp, and combining weights, wc, derived from the channel response matrices contained in a MATLAB® cell array, chanmat. You can specify multiple user channels by putting each channel in a chanmat cell; chanmat{k} represents the kth channel, from the transmitter to the kth user. For a single frequency, specify a channel cell as a matrix. For multiple frequencies, specify a channel cell as a three-dimensional array whose rows represent different subcarriers. Specify multiple subchannels per channel using the ns argument; subchannels represent different data streams, and ns specifies the number of subchannels for each user channel. Multiply the data streams by the precoding weights, wp. The precoding and combining weights diagonalize the channel into independent subchannels, so that for the kth user the matrix wp*chanmat{k}*wc{k} is diagonal for each subcarrier.

[wp,wc] = blkdiagbfweights(chanmat,ns,pt) also specifies the total transmitted power, pt, per subcarrier.

Spatial Multiplexing with Block Diagonal Weights

Start with a base station consisting of a uniform linear array (ULA) with 16 antennas, and two users having receiver ULA arrays with 8 and 4 antennas, respectively. Show that using block diagonalization-based precoding and combining weights achieves spatial multiplexing, where the received signal at each user can be decoded without interference from the other user. Specify two data streams for each user. Specify the transmitter element positions in txpos and the two user receiver element positions in rxpos1 and rxpos2. Array elements are spaced one-half wavelength apart.
txpos = (0:15)*0.5;
rxpos1 = (0:7)*0.5;
rxpos2 = (0:3)*0.5;

Create the channel matrix cell array using scatteringchanmtx and then compute the beamforming weights wp and wc. Each channel corresponds to a user. Assume that the channels have 10 scatterers. Each channel has two subchannels specified by the vector ns.

chanmat = {scatteringchanmtx(txpos,rxpos1,10), ...
    scatteringchanmtx(txpos,rxpos2,10)};
ns = [2 2];
[wp,wc] = blkdiagbfweights(chanmat,ns);

The weights diagonalize the channel matrices for each user.

disp(wp*chanmat{1}*wc{1})

Next create four subchannels to carry the data streams: two subchannels per channel. Each data stream contains 20 samples of ±1. Precode the input streams, propagate them over the channels, and combine the streams to produce the recovered signals.

x = 2*round(rand([20,4])) - 1;
xp = x*wp;
y1 = xp*chanmat{1} + 0.1*randn(20,8);
y2 = xp*chanmat{2} + 0.1*randn(20,4);
y = [y1*wc{1},y2*wc{2}];

Overlay stem plots of the input and recovered signals to show that the received user signals are the same as the transmitted signals.

for m = 1:4
    subplot(4,1,m)
    s = stem([x(:,m) 2*((real(y(:,m)) > 0) - 0.5)]);
    s(1).LineWidth = 2;
    s(2).MarkerEdgeColor = 'none';
    s(2).MarkerFaceColor = 'r';
    title(sprintf('User %d Stream %d',ceil(m/2),rem(m-1,2) + 1))
    legend('Input','Recovered','Location','best')
end

Spatial Multiplexing with Specified Power

Start with a base station consisting of a uniform linear array (ULA) with 16 antennas, and two users having receiver ULA arrays with 8 and 5 antennas, respectively. Show how to use three-dimensional arrays of channel matrices to handle two subcarriers. Then, the channel matrix for the first user takes the form 2-by-16-by-8 and the channel matrix for the second user takes the form 2-by-16-by-5. Also assume that there are two data streams for each user.

txpos = (0:15)*0.5;
nr1 = 8;
nr2 = 5;
rxpos1 = (0:(nr1-1))*0.5;
rxpos2 = (0:(nr2-1))*0.5;

Create the channel matrices using scatteringchanmtx and put them in a cell array. To create a second subcarrier for each receiver, duplicate each channel matrix. Assume 10 point scatterers in computing the channel matrix.
smtmp1 = scatteringchanmtx(txpos,rxpos1,10);
smtmp2 = scatteringchanmtx(txpos,rxpos2,10);
sm1 = zeros(2,16,8);
sm2 = zeros(2,16,5);
sm1(1,:,:) = smtmp1;
sm1(2,:,:) = smtmp1;
sm2(1,:,:) = smtmp2;
sm2(2,:,:) = smtmp2;
chanmat = {sm1,sm2};

Specify that there are two data streams for each user. Specify the transmitted powers for each subcarrier.

ns = [2 2];
pt = [1.0 1.5];
[wp,wc] = blkdiagbfweights(chanmat,ns,pt);

Show that the channels are diagonalized for the first subcarrier.

ksubcr = 1;
wpx = squeeze(wp(ksubcr,:,:));
chanmat1 = squeeze(chanmat{1}(ksubcr,:,:));
chanmat2 = squeeze(chanmat{2}(ksubcr,:,:));
wc1 = squeeze(wc{1}(ksubcr,:,:));
wc2 = squeeze(wc{2}(ksubcr,:,:));
wpx*chanmat1*wc1

Propagate the signals to each user and then decode. Generate four streams of random data containing -1's and +1's and having two columns for each user. Each stream is a subchannel.

x = 2*(round(rand([20 4]))) - 1;

Precode the data streams and propagate them over the channels.

xp = x*wpx;
y1 = xp*chanmat1 + 0.1*randn(20,8);
y2 = xp*chanmat2 + 0.1*randn(20,5);

Decode the data streams.

y = [y1*wc1,y2*wc2];

chanmat — Channel response matrices
Nu-element cell array

Channel response matrices, specified as an Nu-element cell array. Nu is the number of receive arrays. Each cell corresponds to a different channel and contains a channel response matrix or a three-dimensional MATLAB array. The cell array must contain either all matrices or all arrays. For matrices, the number of rows for all matrices must be the same. For three-dimensional arrays, the number of rows and columns must be the same. If the kth cell is a matrix, the matrix has the size Nt-by-Nr(k). Nt is the number of elements in the transmitting array and Nr(k) is the number of elements in the kth receiving array. If the kth cell is an array, the array has the size L-by-Nt-by-Nr(k). L is the number of subcarriers. Nt is the number of elements in the transmit array and Nr(k) is the number of elements in the kth receive array.

ns — Number of data streams per receive array
Nu-element row vector of positive integers

Number of data streams per receive array, specified as an Nu-element row vector of positive integers. Nu is the number of receive arrays.
pt — Total transmitted power per subcarrier
positive scalar | L-element vector of positive values

Total transmitted power per subcarrier, specified as a positive scalar or an L-element vector of positive values. L is the number of subcarriers. If pt is a scalar, all subcarriers have the same transmitted power. If pt is a vector, each vector element specifies the transmitted power for the corresponding subcarrier. Power is in linear units.

wp — Precoding weights
complex-valued Nst-by-Nt matrix | complex-valued L-by-Nst-by-Nt MATLAB array

Precoding weights, returned as a complex-valued Nst-by-Nt matrix or a complex-valued L-by-Nst-by-Nt MATLAB array. If chanmat contains matrices, wp is a complex-valued Nst-by-Nt matrix, where Nst is the total number of data channels (sum(ns)). If chanmat contains three-dimensional MATLAB arrays, wp is a complex-valued L-by-Nst-by-Nt MATLAB array. Units are dimensionless.

wc — Combining weights
Nu-element cell array

Combining weights, returned as an Nu-element cell array. Units are dimensionless. If chanmat contains matrices, the kth cell in wc contains a complex-valued Nr(k)-by-Ns(k) matrix. Ns(k) is the value of the kth entry of the ns vector. If chanmat contains three-dimensional MATLAB arrays, the kth cell of wc contains a complex-valued L-by-Nr(k)-by-Ns(k) MATLAB array.

References

[1] Heath, Robert W., et al. "An Overview of Signal Processing Techniques for Millimeter Wave MIMO Systems." IEEE Journal of Selected Topics in Signal Processing, Vol. 10, No. 3, April 2016, pp. 436-453. doi:10.1109/JSTSP.2016.2523924.

[4] Spencer, Q. H., et al. "Zero-Forcing Methods for Downlink Spatial Multiplexing in Multiuser MIMO Channels." IEEE Transactions on Signal Processing, Vol. 52, No. 2, February 2004, pp. 461-471. doi:10.1109/TSP.2003.821107.

See Also: diagbfweights | scatteringchanmtx | waterfill
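The zero-forcing block-diagonalization idea behind these weights can be sketched outside MATLAB. The NumPy code below is an illustration of the technique only, not the MathWorks implementation: it assumes the column-vector convention (each channel H[k] is Nr(k)-by-Nt, the transpose of the chanmat layout above), random i.i.d. channels instead of scatteringchanmtx, and no power loading.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, ns = 16, [8, 4], [2, 2]   # tx antennas; rx antennas and streams per user

# i.i.d. complex channels: H[k] maps the nt tx antennas to user k's nr[k] antennas
H = [rng.standard_normal((n, nt)) + 1j * rng.standard_normal((n, nt)) for n in nr]

F, W = [], []                     # per-user precoders and combiners
for k in range(len(H)):
    # 1) constrain user k's precoder to the null space of the other users' channels
    H_other = np.vstack([H[j] for j in range(len(H)) if j != k])
    V = np.linalg.svd(H_other)[2].conj().T
    V0 = V[:, H_other.shape[0]:]              # basis of null(H_other)
    # 2) SVD of the interference-free effective channel splits it into streams
    U1, _, V1h = np.linalg.svd(H[k] @ V0)
    F.append(V0 @ V1h.conj().T[:, :ns[k]])    # nt-by-ns[k] precoder
    W.append(U1[:, :ns[k]])                   # nr[k]-by-ns[k] combiner

# user k's combined channel is free of interference from user j's streams
leak = max(np.linalg.norm(W[k].conj().T @ H[k] @ F[j])
           for k in range(2) for j in range(2) if j != k)
print(leak < 1e-10)
```

With these weights, W[k]ᴴ H[k] F[k] is a diagonal matrix of singular values, so each user sees ns[k] independent subchannels, mirroring the disp(wp*chanmat{1}*wc{1}) check in the first example.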
Automorphic Forms, Geometry and Arithmetic

The theory of automorphic forms has its roots in the early nineteenth century in classical work of Euler, Gauss, Jacobi, Eisenstein, and others. The subject experienced a vast expansion and reformulation following the work of Selberg, Harish-Chandra, and Langlands in the 1970s, and remains the focus of intense current activity. The goal of this meeting was two-fold: first, to provide an overview of the most recent developments in the theory of automorphic forms and automorphic representations, and, second, to provide a glimpse of the many closely related topics involving geometry and arithmetic where automorphic forms play an important role. Thus, one subset of the lectures (Soudry, Waldspurger, Gan, Muic, Moeglin, and Henniart) focused on automorphic forms and automorphic representations, while a second subset ranged quite widely and included geometry (Burger), arithmetic geometry (Pink, Howard, Nekovar, Yang), moduli spaces (Rapoport, van der Geer, Görtz, Ngô), Galois theory (Savin) and L-functions (Harder, Shahidi). Among the many fundamental insights of Langlands are the following:
\begin{enumerate}
\item Automorphic representations of a given reductive group $G$ over a number field should occur in packets (L-packets or Arthur packets), parametrized by representations of the Weil-Deligne group into the Langlands dual group ${}^LG$. A local version of this should describe the (irreducible, admissible) representations of the group $G(F)$ for any local field $F$.
\item It is necessary to consider the automorphic representations of all reductive groups together and, in particular, their relations, the most important of which are predicted by the principle of functoriality.
\end{enumerate}
These insights lie very deep and their complete realization is still a very distant dream.
Nonetheless, they have provided a guide for much of the subsequent research in this area, and a number of the most important techniques that have been brought to bear were discussed at the meeting. These included the Arthur-Selberg trace formula, the fundamental lemma, local and global descent, converse theorems, the local theta correspondence, and Eisenstein series.

The connections of automorphic forms with geometry and arithmetic are many and important. One such set of connections occurs in the theory of Shimura varieties. Here important topics include the interpretation as moduli spaces and period domains, the arithmetic of Heegner points and their higher dimensional generalizations, including their arithmetic intersections and heights, and the structure of Shimura varieties in characteristic $p>0$. Automorphic forms have a deep connection with the geometry of locally symmetric spaces, where, for example, the boundary behavior of cohomology classes and Eisenstein series can be applied to the study of special values of L-functions. Again, all of these aspects were discussed during the program. The meeting revealed, once again, that the theory of automorphic forms continues to be a vibrant subject in which many exciting developments can be expected in the future.

Special event

On Friday afternoon, the Oberwolfach Prize was awarded to Ngô Bảo Châu for his work on the fundamental lemma. The award presentation, by Professor Reinhold Remmert, was followed by a Laudatio given by Michael Rapoport explaining the significance of Ngô's work and describing a basic case of the fundamental lemma. Rapoport's Laudatio is included at the end of this report. Ngô then gave a lecture in which he explained some of the fundamental ideas of his proof, for example, the use of the Hitchin fibration. In the evening, there was a festive dinner.

Wolfgang L. Reiter, Stephen S. Kudla, Automorphic Forms, Geometry and Arithmetic. Oberwolfach Rep. 5 (2008), no. 1, pp. 243–296
Measurement in quantum mechanics

In quantum physics, a measurement is the testing or manipulation of a physical system in order to yield a numerical result. The predictions that quantum physics makes are in general probabilistic. The mathematical tools for making predictions about what measurement outcomes may occur were developed during the 20th century and make use of linear algebra and functional analysis. Quantum physics has proven to be an empirical success and to have wide-ranging applicability. However, on a more philosophical level, debates continue about the meaning of the measurement concept.

What we have learnt from this chapter is that we cannot have a direct evidence of, i.e. directly measure, a quantum state of a single system. Our experience is only connected with the experimental values of observables, and any time we measure an observable we can only have a partial experience of a system under a certain perspective but we can never have a complete experience that would be represented by an observation of the state vector, which is – in a quantum-mechanical sense – a complete description of the system. In other words, the quantum state is not an observable in the classical sense. However, since this feature of the quantum state is not due to subjective ignorance but rather to an intrinsic characteristic of the microscopic world, there are no definitive reasons to deny the reality of a quantum state.
Gennaro Auletta, Mauro Fortunato and Giorgio Parisi, Quantum Mechanics (2009)

By now the reader will have realized that measurement in quantum physics is fundamentally different from that in classical physics. In classical physics, a measurement reveals a pre-existing property of the physical system that is tested. If a car is driving at 180 km/h on the highway, the measurement of its speed by radar determines a property that exists prior to the measurement, which gives the police the legitimacy to give a ticket to the driver.
On the contrary, the measurement of the Sx component of a spin-1/2 particle in the state |+〉 does not reveal a value of Sx existing before the measurement. The spread in the results of measuring Sx in this case is sometimes attributed to “uncontrollable perturbation of the spin due to the measurement process,” but the value of Sx does not exist before the measurement, and that which does not exist cannot be perturbed.
Michel Le Bellac, Quantum physics (2006)

When we measure a real dynamical variable ξ, the disturbance involved in the act of measurement causes a jump in the state of the dynamical system. From physical continuity, if we make a second measurement of the same dynamical variable ξ immediately after the first, the result of the second measurement must be the same as that of the first. Thus after the first measurement has been made, there is no indeterminacy in the result of the second. Hence, after the first measurement has been made, the system is in an eigenstate of the dynamical variable ξ, the eigenvalue it belongs to being equal to the result of the first measurement. This conclusion must still hold if the second measurement is not actually made. In this way we see that a measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured, the eigenvalue this eigenstate belongs to being equal to the result of the measurement. We can infer that, with the dynamical system in any state, any result of a measurement of a real dynamical variable is one of its eigenvalues. Conversely, every eigenvalue is a possible result of a measurement of the dynamical variable for some state of the system, since it is certainly the result if the state is an eigenstate belonging to this eigenvalue. This gives us the physical significance of eigenvalues.
The set of eigenvalues of a real dynamical variable are just the possible results of measurements of that dynamical variable and the calculation of eigenvalues is for this reason an important problem.
P. A. M. Dirac, The Principles of Quantum Mechanics (4th ed., 1958)

Hence coherence survives to the extent to which an experiment fails to distinguish between different eigenvalues of the observable being measured, and distinct final states of the object display interference if the final states of the apparatus overlap. […] When the apparatus ends up in one member of a set of orthogonal states, the experiment unambiguously determines whether or not the object is then in a particular eigenstate of the observable of interest, but the result becomes increasingly ambiguous with increasing overlap between the states of the apparatus; furthermore, states of the object with distinct eigenvalues interfere with a visibility that is a measure of the ambiguity with which the states assumed by the apparatus determine the states of the object.
Kurt Gottfried and Tung-Mow Yan, Quantum Mechanics: Fundamentals (2nd ed., 2003)

We see that the measuring process in quantum mechanics has a "two-faced" character: it plays different parts with respect to the past and future of the electron. With respect to the past, it "verifies" the probabilities of the various possible results predicted from the state brought about by the previous measurement. With respect to the future, it brings about a new state. Thus the very nature of the process of measurement involves a far-reaching principle of irreversibility.
Lev Landau and Evgeny Lifshitz, Quantum Mechanics: Non-relativistic theory (3rd ed., 1977)

As we apply the results obtained in this section, we should remember that common terms like "measurement" and "information" are being used here with a specific technical meaning.
In particular, this is not the place for a detailed analysis of real experimental measurements and their relation to the theoretical framework. We merely note that, in the information theoretic view of quantum mechanics, the probabilities and the related density operators and entropies, which are employed to assess the properties of quantum states and the outcomes of measurement, provide a coherent and consistent basis for understanding and interpreting the theory.
Eugen Merzbacher, Quantum Mechanics (1998)

We now consider measurements of A and B when they are compatible observables. Suppose we measure A first and obtain result a'. Subsequently, we may measure B and get result b'. Finally we measure A again. It follows from our measurement formalism that the third measurement always gives a' with certainty; that is, the second (B) measurement does not destroy the previous information obtained in the first (A) measurement. This is rather obvious when the eigenvalues of A are nondegenerate:

|\alpha\rangle \xrightarrow{\text{A measurement}} |a',b'\rangle \xrightarrow{\text{B measurement}} |a',b'\rangle \xrightarrow{\text{A measurement}} |a',b'\rangle

J. J. Sakurai and Jim J. Napolitano, Modern Quantum Mechanics (2nd ed., 2011)

This strange fact—that the system evolves one way between measurements and another way during a measurement—has been a source of contention and confusion for decades. It raises a question: Shouldn’t the act of measurement itself be described by the laws of quantum mechanics? The answer is yes. The laws of quantum mechanics are not suspended during measurement. However, to examine the measurement process itself as a quantum mechanical evolution, we must consider the entire experimental setup, including the apparatus, as part of a single quantum system.
Leonard Susskind and Art Friedman, Quantum Mechanics: The Theoretical Minimum (2014)
A tangent line is drawn to the hyperbola xy = 1 at the point P = (2, 1/2). Find the midpoint M of the line segment cut from this tangent line by the coordinate axes. Note: It turns out that the triangle formed by the tangent line and the coordinate axes always has the same area, no matter where P is located on the hyperbola. x(M) = y(M) =
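A quick sanity check on this problem: for y = 1/x, the tangent at P = (x0, 1/x0) has slope -1/x0^2, so it crosses the axes at (2*x0, 0) and (0, 2/x0). The sketch below (plain Python with exact rational arithmetic; the function name is ours, not part of the exercise) confirms that the midpoint M equals P itself, so x(M) = 2 and y(M) = 1/2, and that the triangle's area is always 2.

```python
from fractions import Fraction

def tangent_data(x0):
    """Tangent to y = 1/x at P = (x0, 1/x0): y - 1/x0 = -(x - x0)/x0**2."""
    x_int = 2 * x0                 # x-intercept (set y = 0)
    y_int = 2 / x0                 # y-intercept (set x = 0)
    midpoint = (x_int / 2, y_int / 2)
    area = Fraction(1, 2) * x_int * y_int   # right triangle with the axes
    return midpoint, area

M, area = tangent_data(Fraction(2))
# M is (2, 1/2), i.e. the point of tangency itself; area is 2 for every x0
```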
QPSolve - Maple Help
solve a quadratic program
Calling sequence: QPSolve(obj, constr, bd, opts)
obj - algebraic; quadratic objective function
opts - (optional) equation(s) of the form option = value, where option is one of assume, feasibilitytolerance, infinitebound, initialpoint, iterationlimit, maximize, output, or useunits; specify options for the QPSolve command
The QPSolve command solves a quadratic program (QP), which involves computing the minimum (or maximum) of a quadratic objective function, possibly subject to linear constraints. This help page describes the use of the QPSolve command when the QP is specified in algebraic form. A summary of this form is given in the Optimization/AlgebraicForm help page. QPSolve also recognizes the problem in Matrix form (see the QPSolve (Matrix Form) help page). Matrix form leads to more efficient computation, but is more complex. The first parameter obj is the objective function, which must be an algebraic expression, quadratic in the problem variables. If obj is linear, the Optimization[LPSolve] command is automatically called. Bounds are given as equations of the form varname = varrange, where varrange is the variable's range; the range endpoints can include values of type infinity. Non-negativity of the problem variables is not assumed by default, but can be specified with the assume = nonnegative option. An initial point is given as equations of the form varname = value. The QPSolve command uses an iterative active-set method implemented in a built-in library provided by the Numerical Algorithms Group (NAG). An initial point can be provided using the initialpoint option. Otherwise, a default point is used. The computation is performed in floating-point. Therefore, all data provided must have type realcons and all returned solutions are floating-point, even if the problem is specified with exact values. For more information about numeric computation in the Optimization package, see the Optimization/Computation help page.
with(Optimization):
Use QPSolve to minimize a quadratic function of two variables subject to a linear constraint.
QPSolve(2*x + 5*y + 3*x^2 + 3*x*y + 2*y^2, {2 <= x - y})
    [-3.53333333333333, [x = 0.466666666666667, y = -1.60000000000000]]
QPSolve(2*x + 5*y + 3*x^2 + 3*x*y + 2*y^2, {2 <= x - y}, assume = nonnegative)
    [16., [x = 2., y = 0.]]
Bounds can be provided for one or more of the variables.
QPSolve(2*x + 5*y + 3*x^2 + 3*x*y + 2*y^2, {2 <= x - y}, x = 1.5 .. infinity)
    [-1.53125000000000, [x = 1.50000000000000, y = -2.37500000000000]]
The Optimization[QPSolve] command was updated in Maple 2018.
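The first example above can be checked by hand: with no bound active, the minimizer of f(x, y) = 2x + 5y + 3x² + 3xy + 2y² solves the stationarity system ∇f = 0, i.e. 6x + 3y = −2 and 3x + 4y = −5, and that stationary point happens to satisfy the constraint x − y ≥ 2. A plain-Python sketch of this one instance (Cramer's rule on the 2×2 system, not Maple's active-set method):

```python
from fractions import Fraction as F

def f(x, y):
    # Objective from the first QPSolve example above
    return 2*x + 5*y + 3*x**2 + 3*x*y + 2*y**2

# Stationarity conditions df/dx = 0, df/dy = 0:
#   6x + 3y = -2
#   3x + 4y = -5
# Solve by Cramer's rule; determinant = 6*4 - 3*3 = 15.
det = F(6*4 - 3*3)
x = (F(-2)*4 - 3*F(-5)) / det   # = 7/15, about 0.4667
y = (6*F(-5) - F(-2)*3) / det   # = -8/5 = -1.6
assert x - y >= 2               # the constraint 2 <= x - y is inactive here

minimum = f(x, y)               # = -53/15, about -3.5333, matching QPSolve
```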
Mini-Workshop: Aspects of Ricci-Flow | EMS Press The motion of a Riemannian metric along its Ricci curvature, \tfrac{d}{dt}g_{ij} = - 2R_{ij} (g(t)), was proposed in 1982 by Richard S. Hamilton as a geometric version of the heat equation suitable for uniformizing and smoothing the geometry of a given initial Riemannian manifold (M^3, g_0) . Hamilton's work has opened up the whole area of geometric evolution equations, leading to the discovery of new phenomena in these equations and to topological applications such as the classification of 3-manifolds of positive Ricci curvature and certain 4-manifolds. Recent work of G. Perelman has indicated how to approach a proof of the Poincar\'e conjecture and the Thurston geometrization conjecture for 3-manifolds using Hamilton's Ricci flow. The mini-workshop concentrated on a thorough technical investigation of that part of the work of Perelman that is related to Hamilton's Ricci flow with surgeries on a finite time interval. Together with further work of Perelman and also of Colding-Minicozzi, this part of Perelman's work, once confirmed, implies a proof of the Poincar\'e conjecture. The efforts of the workshop were greatly helped by previous work of other mathematicians on Perelman's papers, e.g. the notes of B. Kleiner and J. Lott. The workshop was able to confirm major sections in the two papers of G. Perelman, including the entropy and reduced-volume estimates, the compactness properties of ancient solutions to the flow and the surgery construction. It was also able to reinterpret several arguments involving Alexandrov spaces from the viewpoint of smooth differential geometry. When the workshop had to come to an end, its participants agreed that it would be very desirable to establish self-contained expositions of the following points: \begin{enumerate} \item The boundedness of the curvature \sup_{B_\rho(x)} R \leq c(\rho) in the proof of the approximation theorem I.12.1 of Perelman's first paper.
\item The approximation of mini-max surfaces in the varifold distance for immersed surfaces in the paper of Colding-Minicozzi. \item The survival of the reduced volume estimate past surgeries. \item The uniform control of a fixed scale \rho > 0 past all surgeries on a finite time interval, below which the approximation theorem applies. \end{enumerate} It seems that detailed self-contained expositions of (3) and in particular (4) require more effort than (1) and (2). Gerhard Huisken, Klaus Ecker, Thomas Ilmanen, Mini-Workshop: Aspects of Ricci-Flow. Oberwolfach Rep. 2 (2005), no. 2, pp. 1177–1198
AreSameSolution - Maple Help
verify whether an expression is a solution of a linear difference equation depending on a hypergeometric term; verify whether solutions of a linear difference equation depending on a hypergeometric term are equivalent
Calling sequences:
IsSolution(sol, r, eq, var)
AreSameSolution(sol, r, sol1, r1, n)
sol - solution to be checked
r - hypergeometric term in the solution sol; specified as a list consisting of the name representing the term in the equation and in the solution and the certificate of the term, such as [t, n+1]
eq - difference equation depending on a hypergeometric term
var - function variable for which to solve, such as y(x)
sol1 - solution against which sol is compared
r1 - hypergeometric term in the solution sol1; specified as a list consisting of the name representing the term in the equation and in the solution and the certificate of the term, such as [t, n+1]
The IsSolution(sol, r, eq, var) command returns true if eq is a linear difference equation with polynomial coefficients depending on a hypergeometric term and sol is its solution. Otherwise, false is returned. The IsSolution function substitutes sol for the function variable and checks the result. The AreSameSolution(sol, r, sol1, r1, n) command returns true if the solutions sol and sol1 are equivalent. Otherwise, false is returned. The function transforms sol and sol1 to have the same term and checks that the number of linearly independent solutions and the degrees of corresponding elements in sol and sol1 are the same.
with(LREtools[HypergeometricTerm]):
eq := y(n+2) - (n! + n)*y(n+1) + n*(n! - 1)*y(n)
sol, r := PolynomialSolution(eq, y(n))
    sol, r := t*_C1/n, [t, n+1]
IsSolution(sol, r, eq, y(n))
    true
sol1 := t*_C[1]
r1 := [t, n]
IsSolution(sol1, r1, eq, y(n))
    true
AreSameSolution(sol1, r1, sol, r, n)
    true
eq := (2^n*n! + n^2)*z(n+1) - (2*n*2^n*n! + 2*2^n*n! + n^2 + 2*n + 1)*z(n)
sol, r := PolynomialSolution(eq, z(n))
    sol, r := _C1*n^2 + t*_C1, [t, 2*n+2]
IsSolution(sol, r, eq, z(n))
    true
sol1 := _C[1]*n^2 + t*n*(n-1)*(n-2)*_C[1]
r1 := [t, 2*n-4]
IsSolution(sol1, r1, eq, z(n))
    true
AreSameSolution(sol1, r1, sol, r, n)
    true
sol2 := n^2 + t*n*(n-1)*(n-2)
IsSolution(sol2, r1, eq, z(n))
    true
AreSameSolution(sol2, r1, sol, r, n)
    false
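What IsSolution confirms in the first example can also be checked numerically. The certificate [t, n+1] means t(n+1)/t(n) = n + 1, so t = n!, and the returned solution t*_C1/n is, up to the constant, y(n) = n!/n = (n-1)!. A plain-Python substitution check (this only tests the closed form at sample points; it is not how Maple verifies solutions symbolically):

```python
from math import factorial

def y(n):
    # Candidate solution: t/n with t = n!, i.e. y(n) = (n-1)!
    return factorial(n - 1)

def residual(n):
    # Left-hand side of eq := y(n+2) - (n! + n)*y(n+1) + n*(n! - 1)*y(n)
    return (y(n + 2)
            - (factorial(n) + n) * y(n + 1)
            + n * (factorial(n) - 1) * y(n))

# Exact integer arithmetic: the residual vanishes identically
assert all(residual(n) == 0 for n in range(1, 10))
```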
Grassmann-jl/community - Gitter Grassmann.jl/community Conformal geometric algebra in Julia, https://github.com/chakravala/Grassmann.jl @tkluck julia> 10E30 + -10E30 + 1 julia> 10E30 + (-10E30 + 1) cool! I'll probably drop offline and let you handle it :) hope this was useful want to post on discourse when you found it? yes, I will update discourse if I figure it out, please do keep investigating if you would like to help again, thanks! @tkluck by the way, you will get the one(::Type{Any}) method available if you also using Reduce, Grassmann, that might help resolve some of the other issues with BigFloat in Grassmann, I will try to improve support for it though @dmillard Hi! Please excuse my ignorance, I'm currently learning about Grassmann algebra and certainly don't grasp much of the subtlety. I'm reading Dorst & Mann, Geometric Algebra for Computer Science. There, they define an inner product called the contraction product, "⌋", defined implicitly as (X ∧ A) * B = X * (A ⌋ B). What does this correspond to in Grassmann.jl? I see there is an inner product "⋅" defined, are these the same? As I work through the book, I'd like to play along using Grassmann.jl to build intuition. Thanks! Thank you for joining, indeed Grassmann has that contraction product, which is the < operator. That is also known as the left contraction... alternatively you can use the right contraction > instead (or ⋅ alternatively). So, in Grassmann I have provided both the left and right contractions, but I personally consider the right contraction more fundamental than the left one, while Dorst considers the left more important and fundamental. In the package, I provide both methods, but have made the right contraction default when ⋅ is used at the moment. Thank you for the quick reply! What's a recommended way to explore the available products? Browsing algebra.jl is currently a little overwhelming for me. Let me know if you're interested in docs PRs, I'm happy to contribute as I stumble along.
Thanks for such a cool package, I'm really looking forward to using it more. You're welcome to make documentation PR's if you really want to. Most of it is documented in This is what I was looking for - somehow I missed it. Thanks! Just committed some basic doc strings to initialize the documentation @lukeburns @chakravala can I construct a clifford algebra over a tensor product of vector spaces with a custom inner product? I'm interested in working with Doran/Lasenby's multiparticle spacetime algebra: Cl(\mathbb{R}^n_{1,3}) with the inner product v^2 = v_\mu^i v_\nu^j \eta^{\mu \nu} \delta_{i j} yes, the inner product can be customized based on a metric with the DirectSum package. you can add up to 64 indices, but the product operations have not been fully optimized for sparsity at very high dimensions yet Thanks, realize it's just \mathcal{Cl}(\mathbb{R}_{n,3n}) Direct sum not tensor product Have you tried @basis ℝ'⊕ℝ^(3n) for that yet? Yeah just went with e.g. @basis V"++------" You can also use basis"++------" or @basis ℝ^2⊕(ℝ^6)' there are multiple ways Ah! thx Let me know your feedback, there are lots of improvements planned, although I have taken a break recently I will be very interested in optimizations in higher dimensions. The newer SparseChain and MultiGrade types will help with that, but it takes a significant amount of extra effort to incorporate those in a type stable way, if you'd like to encourage me please consider donating at https://liberapay.com/chakravala One more question: I'm interested in constructing quotient algebras, subject to certain (one-sided) equivalence relations (e.g. MSTA is really \mathcal{Cl}(\mathbb{R}^n_{1,3})/Q ). Is it possible to do this with your library? I imagine this is much more complicated to implement, but thought I'd ask, since the implementation seems quite solid. E.g.
The equivalence relation: for a, b \in \mathcal{Cl}(\mathbb{R}^n_{1,3}), a \sim b \iff (a - b)p = 0 with a - b \neq 0, where p generates the left principal ideal Q Could you give me a more detailed construction of it? \neq ^ Err MSTA is actually a right quotient algebra, in which case a \sim b \iff p(a - b) = 0 with a - b \neq 0 well, I imagine at the very least you would be able to define such an equivalence relation in Julia which checks that condition hmm. does your implementation actually generate geometric algebras by quotienting tensor algebras by a quadratic form? while I have theoretically based my algebras on the foundations of the equivalence relations discussed in my paper, I don't have an actual equivalence relation defined in Julia, this is just a theoretical foundation to help explain how to step-by-step derive the theory i imagine that's not particularly performant in reality, I only ever deal with tensors with ordered indices, so I always have an increasing index like v123 for example, which is equal to v231 by the equivalence relation, but there is no need to create a separate instance of the class I only need one instance of the class, so I only need v123 and don't need a representation for v231 in my calculations, I automatically make sure the indices are sorted, so I always use the increasing order, requiring a single element from the equivalence class for representation so yes, I imagine it would be less performant if I had to also account for the other equivalent representations (for exterior algebra) also, note that if you read my paper, the foundations I use are built up in a (innovative in my opinion) way that is different from the Clifford algebra constructions you may be used to, I am using a system I call "differential geometric algebra" and it is based on a generalization of the Clifford product, which I call the geometric algebraic product, which is a slightly more general definition I've invited people to comment on the foundations of differential
geometric algebra on the forum here https://discourse.bivector.net/t/differential-geometric-algebra-using-leibniz-grassmann/27 @chakravala It looks like the stable link on the announce post is broken, is that possible? https://grassmann.crucialflow.com/stable/ @tkluck yes that is broken, the stable docs never got generated when I pushed v0.4 and I don't know why yet, maybe you can figure out why, but the /dev docs are indeed available @tkluck the stable docs have now been fixed as of v0.5 by the way Cédric Belmant Hi, thank you for this awesome package. I discovered Geometric Algebra recently and I'm glad to see that Julia has an efficient implementation. How likely is it that you would consider a more permissive license in the future, e.g. MIT or similar? I guess it might increase its adoption within the Julia community (since MIT licenses are quite "the norm" AFAIK). I am probably not the first to ask and I guess you have your reasons, which I would totally respect, I'm not trying to (re-)open up a debate about that. Thanks for asking, you are welcome to negotiate a custom license independent of the open source license. Another thing I am considering is a kick starter fundraising campaign to change the license of Grassmann.jl if a large amount of money can be raised. Otherwise, I have already been more than generous to my free users with the AGPL, which should be more than sufficient.
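The julia> lines near the top of this log are a floating-point associativity puzzle rather than a Grassmann.jl bug: at magnitude 10E30 one unit in the last place of a 64-bit float is about 1.1e15, so adding 1 directly to ±10E30 is a no-op, and the grouping of the sums decides the answer. The same behaviour reproduced in Python:

```python
x = 10e30   # binary64; one ulp at this magnitude is 2**50, about 1.1e15

left_to_right = x + -x + 1    # parsed as (x + -x) + 1 = 0.0 + 1 = 1.0
regrouped     = x + (-x + 1)  # -x + 1 rounds back to -x, so the sum is 0.0

assert left_to_right == 1.0
assert regrouped == 0.0
```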
express 42/-63 as a rational number with denominator 3 - Maths - Rational Numbers | Meritnation.com express 42/-63 as a rational number with denominator 3 The given number is 42/(-63), or -42/63. Now, to obtain an equivalent fraction of -42/63 having denominator 3, we divide the numerator and the denominator by 21: 42/(-63) = ((-42) ÷ 21)/(63 ÷ 21) = -2/3 can you explain it in detail
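The same reduction can be checked with Python's fractions module, which divides out gcd(42, 63) = 21 and moves the sign to the numerator automatically:

```python
from fractions import Fraction
from math import gcd

assert gcd(42, 63) == 21        # the common factor divided out above

r = Fraction(42, -63)           # normalized on construction
assert r == Fraction(-2, 3)     # equals -2/3, with denominator 3
```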
Analysis of the 2019 Mw 5.8 Silivri Earthquake Ground Motions: Evidence of Systematic Azimuthal Variations Associated with Directivity Effects | Seismological Research Letters | GeoScienceWorld Elif Türker (corresponding author: etuerker@gfz-potsdam.de), Fabrice Cotton, Marco Pilz, Graeme Weatherill; Analysis of the 2019 Mw 5.8 Silivri Earthquake Ground Motions: Evidence of Systematic Azimuthal Variations Associated with Directivity Effects. Seismological Research Letters 2022; 93 (2A): 693–705. doi: https://doi.org/10.1785/0220210168 The main Marmara fault (MMF) extends for 150 km through the Sea of Marmara and forms the only portion of the North Anatolian fault zone that has not ruptured in a large event (Mw > 7) for the last 250 yr. Accordingly, this portion is potentially a major source contributing to the seismic hazard of the Istanbul region. On 26 September 2019, a sequence of moderate-sized events started along the MMF only 20 km south of Istanbul and was widely felt by the population. The largest three events, 26 September 2019 Mw 5.8 (10:59 UTC), 26 September 2019 Mw 4.1 (11:26 UTC), and 20 January 2020 Mw 4.7, were recorded by numerous strong-motion seismic stations and the resulting ground motions were compared to the predicted means resulting from a set of the most recent ground-motion prediction equations (GMPEs). The estimated residuals were used to investigate the spatial variation of ground motion across the Marmara region. Our results show a strong azimuthal trend in ground-motion residuals, which might indicate systematically repeating directivity effects toward the eastern Marmara region.
The player efficiency rating (PER) is John Hollinger's all-in-one basketball rating, which attempts to collect or boil down all of a player's contributions into one number. Using a detailed formula, Hollinger developed a system that rates every player's statistical performance.[1] PER strives to measure a player's per-minute performance, while adjusting for pace. A league-average PER is always 15.00, which permits comparisons of player performance across seasons. PER takes into account accomplishments, such as field goals, free throws, 3-pointers, assists, rebounds, blocks and steals, and negative results, such as missed shots, turnovers and personal fouls. The formula adds positive stats and subtracts negative ones through a statistical point value system. The rating for each player is then adjusted to a per-minute basis so that, for example, substitutes can be compared with starters in playing time debates. It is also adjusted for the team's pace. In the end, one number sums up the players' statistical accomplishments for that season. Relationship to baseball sabermetrics Hollinger's work has benefitted from the observations of sabermetric baseball analysts, such as Bill James. One of the primary observations is that traditional counting statistics in baseball, like runs batted in and wins, are not reliable indicators of a player's value. For example, runs batted in is highly dependent upon opportunities created by a player's teammates. PER extends this critique of counting statistics to basketball, noting that a player's opportunities to accumulate statistics are dependent upon the number of minutes played as well as the pace of the game. Problems with PER PER largely measures offensive performance.
Hollinger freely admits that two of the defensive statistics it incorporates—blocks and steals (neither of which was tracked as an official stat until the 1973–74 season)—can produce a distorted picture of a player's value and that PER is not a reliable measure of a player's defensive acumen. For example, Bruce Bowen, widely regarded as one of the best defenders in the NBA through the 2006–07 season, routinely posted single-digit PERs. Some have argued that PER gives undue weight to a player's contribution in limited minutes, or against a team's second unit, and it undervalues players who have enough diversity in their game to play starter's minutes. PER has been said to reward inefficient shooting. To quote Dave Berri, the author of The Wages of Wins: Hollinger responded via a post on ESPN's TrueHoop blog: Berri leads off with a huge misunderstanding of PER—that the credits and debits it gives for making and missing shots equate to a "break-even" shooting mark of 30.4% on 2-point shots. He made this assumption because he forgot that PER is calibrated against the rest of the league at the end of the formula. Reference guide Hollinger has set up PER so that the league average, every season, is 15.00, which produces sort of a handy reference guide:[2]
Second offensive option: 18.0–20.0
Third offensive option: 16.5–18.0
Slightly above-average player: 15.0–16.5
Rotation player: 13.0–15.0
Non-rotation player: 11.0–13.0
Fringe roster player: 9.0–11.0
Player who won't stick in the league: 0–9.0
Only 30 times has a player posted a season efficiency rating over 30.0 (with more than 15 games played in that season), with the highest score being 32.85 (Nikola Jokić).
Michael Jordan and LeBron James lead with four 30+ seasons, with Shaquille O'Neal, Wilt Chamberlain and Giannis Antetokounmpo having accomplished three each, Anthony Davis, Nikola Jokić and Joel Embiid having accomplished two each, and David Robinson, Tracy McGrady, Dwyane Wade, Chris Paul, Stephen Curry, Russell Westbrook and James Harden[3] having accomplished one each. Career PER leaders
5. Shaquille O'Neal* 26.43
10. Karl-Anthony Towns 24.82
38. DeMarcus Cousins 22.05
40. Dolph Schayes* 22.00
41. John Stockton* 21.83
43. Andre Drummond 21.81
45. Clyde Lovellette* 21.56
47. Adrian Dantley* 21.51
48. Harry Gallatin* 21.51
53. Dwight Howard 21.30
63. Jonas Valančiūnas 20.77
64. Enes Kanter Freedom 20.76
65. John Drew 20.74
69. Al Jefferson 20.56
71. Nikola Vučević 20.55
73. Elton Brand 20.51
74. Ed Macauley* 20.42
75. Manu Ginóbili 20.22
76. Larry Foust 20.19
80. Greg Monroe 20.07
83. Kevin McHale* 20.02
84. Steve Nash* 19.95
85. Larry Nance 19.92
88. Alex English* 19.87
90. Cliff Hagan* 19.80
92. Paul Pierce* 19.73
93. Terrell Brandon 19.69
95. Isaiah Thomas 19.64
98. Gilbert Arenas 19.57
100. Michael Redd 19.48
Active players are listed in bold. * Indicates member of the Hall of Fame. PER since 1951–52 Career PER – Michael Jordan vs.
LeBron James Prior to the 2013–14 season, LeBron James was on the verge of surpassing Michael Jordan's career PER to take the number one spot.[5][6] As the metric is averaged over the length of a player's entire career, a decrease in efficiency later in a career means a player can move down in the ranking; Jordan's PER took a big hit in the final two years of his career when he returned to the game with the Washington Wizards, posting 20.7 in his penultimate season and 19.3 in his final season, compared to his career high of 31.7 (Jordan's PER was 29.1 without accounting for his Wizards years).[7] The debate was intensified on 1 October 2013, with Jordan stating that he would have liked to have played against LeBron, and believed he would have won a one-on-one encounter.[8] Several news features focus on comparing the two players by using the PER metric.[9][10][11] At the conclusion of the 2012–13 NBA season, Miami Heat head coach Erik Spoelstra stated that comparing players from different generations is the equivalent of comparing apples and oranges, explaining: "You'll never be able to tell [how James stacks up to Jordan or Magic Johnson] because they didn't play against each other. The game is different now than when it was played in the 1980s or even before that."[12] Players from different NBA generations and Career PER Comparing players from different generations using PER presents several problems, primarily due to rule changes and changes in the statistical data collected in different eras (although many other factors could be taken into consideration, even down to the increased sample size as the NBA grew by incorporating more teams). Some of the more important rule changes that should be considered are the following: some of the players on this list, such as Wilt Chamberlain and Bill Russell, played before the three-point shot, blocks, and steals were officially recorded.
Blocked shots and steals were first officially recorded in the NBA during the 1973–74 season. The three-point shot entered the league in the 1979–80 season. During the 1990s and 2000s numerous rule changes were incorporated: the "three point foul" and "clear path" rules were both introduced in the 1995–96 season with the effect of increasing the number of free throws; hand-checking (the amount of contact a defender may make with an opposing player) was banned in 1994 and the use of elbows was banned in 1997 (both rules had seen various degrees of limitation by earlier rule changes), although neither was fully implemented until 2004. The 2004 rule changes, which also included calling the defensive 3-second rule ("[...] a defensive player may not station himself in the key area longer than three seconds"—a longstanding rule which had been ignored by referees), had a major effect, opening up the game and allowing a more free-flowing offense; it encouraged aggressive inside attack-based plays (to draw fouls), and increased the number of fouls given when contact is made on players who drive to the basket.[13] Former ABA and NBA coach Larry Brown is quoted as saying: "The college game is much more physical (than the NBA) [...] I always tease Michael (Jordan), if he played today, he'd average 50."[14] All calculations begin with what is called unadjusted PER (uPER).
The formula is:

{\displaystyle uPER={\frac {1}{min}}\times \left(3P+\left[{\frac {2}{3}}\times AST\right]+\left[\left(2-factor\times {\frac {tmAST}{tmFG}}\right)\times FG\right]+\left[0.5\times FT\times \left(2-{\frac {1}{3}}\times {\frac {tmAST}{tmFG}}\right)\right]-\left[VOP\times TO\right]-\left[VOP\times DRBP\times \left(FGA-FG\right)\right]-\left[VOP\times 0.44\times \left(0.44+\left(0.56\times DRBP\right)\right)\times \left(FTA-FT\right)\right]+\left[VOP\times \left(1-DRBP\right)\times \left(TRB-ORB\right)\right]+\left[VOP\times DRBP\times ORB\right]+\left[VOP\times STL\right]+\left[VOP\times DRBP\times BLK\right]-\left[PF\times \left({\frac {lgFT}{lgPF}}-0.44\times {\frac {lgFTA}{lgPF}}\times VOP\right)\right]\right)}

When multiplied out and refactored, the equation above becomes:

{\displaystyle uPER={\frac {1}{min}}\times \left(3P-{\frac {PF\times lgFT}{lgPF}}+\left[{\frac {FT}{2}}\times \left(2-{\frac {tmAST}{3\times tmFG}}\right)\right]+\left[FG\times \left(2-{\frac {factor\times tmAST}{tmFG}}\right)\right]+{\frac {2\times AST}{3}}+VOP\times \left[DRBP\times \left(2\times ORB+BLK-0.2464\times \left[FTA-FT\right]-\left[FGA-FG\right]-TRB\right)+{\frac {0.44\times lgFTA\times PF}{lgPF}}-\left(TO+ORB\right)+STL+TRB-0.1936\left(FTA-FT\right)\right]\right)}

where

{\displaystyle \ factor={\frac {2}{3}}-\left[\left(0.5\times {\frac {lgAST}{lgFG}}\right)\div \left(2\times {\frac {lgFG}{lgFT}}\right)\right]}

{\displaystyle \ VOP={\frac {lgPTS}{lgFGA-lgORB+lgTO+0.44\times lgFTA}}}

{\displaystyle \ DRBP={\frac {lgTRB-lgORB}{lgTRB}}}

and the abbreviations are:

tm, the prefix, indicating of team rather than of player;
lg, the prefix, indicating of league rather than of player;
min for number of minutes played;
3P for number of three-point field goals made;
FG for number of field goals made;
FT for number of free throws made;
VOP for value of possession (but in reference to the league, in this instance);
RB for number of rebounds: ORB for offensive, DRB for defensive, TRB for (total) combined, RBP for percentage of offensive or defensive;
others being outlined in basketball statistics.

Once uPER is calculated, it must be adjusted for team pace and normalized to the league to become PER:

{\displaystyle \ PER=\left(uPER\times {\frac {lgPace}{tmPace}}\right)\times {\frac {15}{lguPER}}}

This final step takes away the advantage held by players whose teams play a fast-break style (and therefore have more possessions and more opportunities to do things on offense), and then sets the league average to 15.00. Also note that it is impossible to calculate PER (at least in the conventional manner described above) for NBA seasons prior to 1978, as the league did not keep track of turnovers, among other advanced statistics, before that year.

^ "Basketball Reference - PER". Basketball Reference. Retrieved 5 September 2013.
^ "Hardwood Paroxysm". Archived from the original on 10 September 2017. Retrieved 16 April 2017.
^ "Player Season Finder".
^ "NBA & ABA Career Leaders and Records for Player Efficiency Rating". Basketball-Reference.com. Retrieved 26 May 2021.
^ ESPN.com. "2012-13 Hollinger NBA Player Statistics - All Players". Retrieved 6 October 2013.
^ Basketball Reference. "LeBron James". Retrieved 6 October 2013.
^ Basketball Reference. "Michael Jordan". Retrieved 6 October 2013.
^ NBA.com. "Jordan proclaims he could beat LeBron in prime". Retrieved 6 October 2013.
^ Busfield, Steve. The Guardian. "Michael Jordan vs LeBron James: who is better?". Retrieved 6 October 2013.
^ Helin, Kurt. NBCSports.com. "LeBron says he wants to be greatest of all time, still 'far away from it'". Retrieved 6 October 2013.
^ Lariviere, David. Forbes. "LeBron James Will Eventually Top Michael Jordan As Basketball's Greatest Player". Retrieved 6 October 2013.
^ Golliver, Ben. Sports Illustrated. "Spoelstra: Comparing LeBron to Jordan is impossible because 'game is different now'". Archived 11 October 2013 at the Wayback Machine. Retrieved 6 October 2013.
^ NBA.com. NBA Rules History. Retrieved 6 October 2013.
^ Aldridge, David. NBA.com. Rules changes have affected defensive philosophies. Retrieved 6 October 2013.

CREZ Basketball Systems Inc., software to score your own basketball games and view PER player and lineup statistics
An in-depth description of how to calculate PER
Hollinger's articles at SI
Basketball-Reference.com, historical NBA statistical site (includes PER)
ESPN.com Insider (subscription service)
Referee - Uncyclopedia, the content-free encyclopedia

Some referees are notorious Nazis. Pictured is Adolf Hitler sending someone for an early shower.

"Nice shorts! ;)"
~ A descendant of Oscar Wilde (somehow!) on Referees

The referee (Homo bastardus) is a species of human, distinguishable from other humans by its black uniform and its Nazi tendencies. Referees commonly make their presence known through incessant whistle-blowing and the flashing of red and yellow cards. Referees are particularly noted for their unusual breeding patterns: the father and the mother are almost never married.

Referees vary in size, averaging between 150 and 180 centimetres (5 ft–6 ft) tall and weighing between 54 and 83 kg (120–183 lb). Referees possess many defence mechanisms, such as a whistle and red/yellow cards. Most referees are blind, so they never get a single decision right; studies have shown, however, that these referees go on to be the most successful. They are naturally very protective of footballers, blowing whistles and flashing cards whenever the slightest contact occurs between them. This has led some footballers to abuse this reflex, falling down when they enter a zone called the penalty area. Some referees have evolved a way of dealing with this, by flashing a yellow card at the falling person. Many referees have devised complex confusion tactics, such as the Offside rule, to deter predators.

The birth of a new referee is an unknown process. Scientists have attempted research, but all of them got lost in concentration camps. A referee's first known experience is to attend a training course, where they learn the ideals of the Nazi Party, are told to book any players who are not Aryan, and learn how to disguise Nazi salutes behind coloured cards.
Most referees then proceed to take charge of local Sunday league games such as Charlton Athletic vs. Portsmouth and Hull City vs. Accrington Stanley. A select bunch move on to become assistant referees in the County leagues, before they eventually become the man in the middle. An even smaller number move up to take charge of Premier League games. Then one is chosen to go to the World Cup and fuck up on the international stage. This referee will most likely give the same person three yellow cards and be sent home in disgrace. Not much is known about life after refereeing, because nobody gives enough of a shit to find out. Many scientists predict that after a referee's career has ended, they teleport back to 1933, where they join Adolf Hitler in creating a new set of referees.

Cristiano Ronaldo demonstrating his crying technique.

The referee has a small number of natural predators. The most common is Cristiano Ronaldo, who tries to seek sympathy from the referee by crying all the time, before he seals the deal by falling over in the aforementioned penalty area. Didier Drogba also uses the same technique to destroy referees' reputations. Other predators include Wayne Rooney, who deafens his prey with excessive shouting, and Sir Alex Ferguson. Many referees try to appease Sir Alex by adding on 10 minutes of injury time whenever Manchester United are losing.

'The Referee's a wanker'

By studying referees, scientists have managed to prove beyond doubt that masturbation does, in fact, cause blindness:

If: Referees = Wankers
And: Referees = Blind
Then: Wankers = Blind

Are you blind ref!?

CHEAT! CHEAT! CHEAT!

Referees are often accused of cheating, but almost never do. They abide by the following set of rules while officiating:

1. The referee is always right.
2. If the referee is wrong, see Rule 1.

This often causes players to argue with the referee, which in turn makes the referee deploy his defence mechanisms.
Usually, the whistle is used first, in an attempt to startle the approaching players. If this fails, the referee goes to his pocket and flashes a yellow card at one of the players. The cautioned player then starts crying and runs off to Mummy. Sometimes the referee uses the Offside rule as a confusion tactic before he makes a break for the centre circle. The Offside rule cannot be understood by any living person, nor by the most powerful supercomputer.
Mathematics | IIT JEE IEEE Entrance Exam | Under Graduate Entrance Exams Online Objective Test

Mathematics - Online Test
Under Graduate Entrance Exams
IIT JEE IEEE Entrance Exam
Mathematics TEST : 1

# Under Graduate Entrance Exams # IIT JEE IEEE Entrance Exam # Mathematics # CBSE 11th Mathematics Prepare / Learn

Q1. In a multiple choice question, there are 4 alternatives, of which one or more are correct. The number of ways in which a candidate can attempt this question is
Answer : Option B
Explanation / Solution: Since there are four alternatives of which one or more are correct, we consider four cases: the candidate chooses 1 correct answer, 2 correct answers, 3 correct answers or 4 correct answers. 1 correct answer can be chosen in 4C1 = 4 ways; 2 correct answers in 4C2 = 6 ways; 3 correct answers in 4C3 = 4 ways; 4 correct answers in 4C4 = 1 way. Hence the total number of ways = 4 + 6 + 4 + 1 = 15 ways.

# Under Graduate Entrance Exams # IIT JEE IEEE Entrance Exam # Mathematics # TN 12th Maths Prepare / Learn

Q2. The volume of a sphere is increasing at the rate of 3π cm3/sec. The rate of change of its radius when the radius is 1/2 cm is
D. 1/2 cm/s
Answer : Option A
Explanation / Solution:

Q3. If the line 2x – y + = 0 is a diameter of the circle then =
Explanation / Solution: The equation of the circle is . Applying the completing-the-square method and comparing the above equation with , we get the centre as (-3, 3) and the radius as . As the centre of the circle lies on the diameter, it will satisfy the equation of the diameter, so on putting (-3, 3) in the equation of the diameter we get

Q4. If |x + 2| ≤ 9, then x belongs to
A. (-∞,-7) B. [-11, 7] C. (-∞,-7) ∪ [11, ∞)

Q5. The domain of the function √(cos x − 1) is
B.
{2nπ : n ∈ I}
Answer : Option B
Explanation / Solution: This function exists only if cos x − 1 ≥ 0, i.e. cos x ≥ 1, which means cos x > 1 or cos x = 1. Since the maximum value of the cosine function is 1, cos x > 1 is not possible, so cos x = 1 = cos 2nπ, giving x = 2nπ (n ∈ I).

Q6. Differential equations are equations containing functions y = f(x), g(x) and
A. minima of y B. maxima of y C. derivatives of y D. tangent of y at zero
Answer : Option C
Explanation / Solution: Differential equations are equations containing functions y = f(x), g(x) and derivatives of y with respect to x.

Q7. If |adj(adj A)| = |A|^9, then the order of the square matrix A is

Q8. Direction cosines of a line are
A. The cotangents of the angles made by the line with the negative directions of the coordinate axes.
B. The sines of the angles made by the line with the positive directions of the coordinate axes.
C. The tangents of the angles made by the line with the negative directions of the coordinate axes.
D. The cosines of the angles made by the line with the positive directions of the coordinate axes.
Answer : Option D
Explanation / Solution: When a line intersects another line or an axis, two kinds of angles are formed; by convention we take the angles made with the positive directions of the three axes to define the direction of the line. We use the cosine rather than another trigonometric function, such as sine or tangent, because the direction cosines of a line are precisely the coefficients of i, j, k of a unit vector along that line.

Q9. In a triangle ABC, tan , then tan is equal to
Answer : Option C
Explanation / Solution:

Q10. A circular template has a radius of 10 cm. The measurement of the radius has an approximate error of 0.02 cm. Then the percentage error in calculating the area of this template is

Marking scheme: 1 mark for a correct answer, -0.5 for a wrong answer, 0 for a question left unanswered.
Quadratic unconstrained binary optimization - Wikipedia

Quadratic unconstrained binary optimization (QUBO), also known as unconstrained binary quadratic programming (UBQP), is a combinatorial optimization problem with a wide range of applications from finance and economics to machine learning.[1] QUBO is an NP-hard problem, and for many classical problems from theoretical computer science, like maximum cut, graph coloring and the partition problem, embeddings into QUBO have been formulated.[2][3] Embeddings for machine learning models include support-vector machines, clustering and probabilistic graphical models.[4] Moreover, due to its close connection to Ising models, QUBO constitutes a central problem class for adiabatic quantum computation, where it is solved through a physical process called quantum annealing.[5]

Let f_Q : \mathbb{B}^n \rightarrow \mathbb{R} be a quadratic polynomial over binary variables,

f_Q(x) = \sum_{i=1}^{n} \sum_{j=1}^{i} q_{ij} x_i x_j

where x_i \in \mathbb{B} for i \in [n], q_{ij} \in \mathbb{R} for 1 \leq j \leq i \leq n, [n] denotes the set of strictly positive integers less than or equal to n, and \mathbb{B} = \{0, 1\}. The QUBO problem consists of finding a binary vector x^* that is minimal with respect to f_Q among all binary vectors, namely

x^* = \arg\min_{x \in \mathbb{B}^n} f_Q(x)

Sometimes, QUBO is defined as a maximization instead of a minimization problem, which has no effect on the problem's complexity class, as maximizing f_Q is equivalent to minimizing f_{-Q} = -f_Q (see below).
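The definition above can be made concrete with a brute-force minimizer. This is a sketch of ours (not from the article): it enumerates all 2^n binary vectors using the lower-triangular coefficient convention q_{ij} with j ≤ i, so it is only usable for tiny instances.

```python
import itertools

def qubo_bruteforce(Q):
    """Minimize f_Q(x) = sum over j <= i of Q[i][j] * x[i] * x[j].

    Q is a lower-triangular coefficient matrix given as a list of rows
    (row i has i+1 entries). Returns (best_value, best_x).
    Exponential in n: for illustration of the definition only.
    """
    n = len(Q)
    best_x, best_val = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        val = sum(Q[i][j] * bits[i] * bits[j]
                  for i in range(n) for j in range(i + 1))
        if val < best_val:
            best_val, best_x = val, bits
    return best_val, best_x
```

For example, with all-negative diagonal coefficients and no interactions, `qubo_bruteforce([[-1], [0, -1]])` returns `(-2, (1, 1))`, matching the trivial all-ones optimum for all-negative coefficients.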
Another, more compact way to formulate f_Q is using matrix notation,

f_Q(x) = x^\top Q x

where Q \in \mathbb{R}^{n \times n} is the symmetric n \times n matrix containing the coefficients q_{ij}.

Multiplying the coefficients q_{ij} by a positive factor \alpha > 0 scales the output of f accordingly, leaving the optimum x^* unchanged:

f_{\alpha Q}(x) = \sum_{i \leq j} (\alpha q_{ij}) x_i x_j = \alpha \sum_{i \leq j} q_{ij} x_i x_j = \alpha f_Q(x)

Flipping the sign of all coefficients flips the sign of f's output, making x^* the binary vector that maximizes f_{-Q}:

f_{-Q}(x) = \sum_{i \leq j} (-q_{ij}) x_i x_j = -\sum_{i \leq j} q_{ij} x_i x_j = -f_Q(x)

If all coefficients are positive, the optimum is trivially x^* = (0, \dots, 0). Similarly, if all coefficients are negative, the optimum is x^* = (1, \dots, 1). If q_{ij} = 0 for all i \neq j, then the corresponding QUBO problem is solvable in \mathcal{O}(n), the optimal variable assignment x_i^* simply being 1 if q_{ii} < 0.

Connection to Ising models

QUBO is very closely related and computationally equivalent to the Ising model, whose Hamiltonian function is defined as

H(\sigma) = -\sum_{\langle i\,j \rangle} J_{ij} \sigma_i \sigma_j - \mu \sum_j h_j \sigma_j

with real-valued parameters h_j, J_{ij}, \mu for all i, j. The spin variables \sigma_j are binary with values from \{-1, +1\} instead of \mathbb{B}. Moreover, in the Ising model the variables are typically arranged in a lattice where only neighboring pairs of variables \langle i\,j \rangle can have non-zero coefficients.
Applying the identity \sigma \mapsto 2x - 1 yields an equivalent QUBO problem:[2]

\begin{aligned}f(x)&=\sum _{\langle i~j\rangle }-J_{ij}(2x_{i}-1)(2x_{j}-1)+\sum _{j}\mu h_{j}(2x_{j}-1)\\&=\sum _{\langle i~j\rangle }-4J_{ij}x_{i}x_{j}+2J_{ij}x_{i}+2J_{ij}x_{j}-J_{ij}+\sum _{j}2\mu h_{j}x_{j}-\mu h_{j}&&{\text{using }}x_{j}=x_{j}x_{j}\\&=\sum _{i=1}^{n}\sum _{j=1}^{i}q_{ij}x_{i}x_{j}+C\end{aligned}

where

\begin{aligned}q_{ij}&={\begin{cases}-4J_{ij}&{\text{if }}i\neq j\\\sum _{\langle k~i\rangle }2J_{ki}+\sum _{\langle i~\ell \rangle }2J_{i\ell }+2\mu h_{i}&{\text{if }}i=j\end{cases}}\\C&=-\sum _{\langle i~j\rangle }J_{ij}-\sum _{j}\mu h_{j}\end{aligned}

As the constant C does not change the position of the optimum x^*, it can be neglected during optimization and is only important for recovering the original Hamiltonian function value.

^ Kochenberger, Gary; Hao, Jin-Kao (2014). "The unconstrained binary quadratic programming problem: a survey" (PDF). Journal of Combinatorial Optimization. 28: 58–81. doi:10.1007/s10878-014-9734-0. S2CID 16808394.
^ a b Glover, Fred; Kochenberger, Gary (2019). "A Tutorial on Formulating and Using QUBO Models". arXiv:1811.11538 [cs.DS].
^ Lucas, Andrew (2014). "Ising formulations of many NP problems". Frontiers in Physics. 2: 5. arXiv:1302.5843. Bibcode:2014FrP.....2....5L. doi:10.3389/fphy.2014.00005.
^ Mücke, Sascha; Piatkowski, Nico; Morik, Katharina (2019). "Learning Bit by Bit: Extracting the Essence of Machine Learning" (PDF). LWDA. S2CID 202760166. Archived from the original (PDF) on 2020-02-27.
^ Tom Simonite (8 May 2013). "D-Wave's Quantum Computer Goes to the Races, Wins". MIT Technology Review. Retrieved 12 May 2013.
Endre Boros, Peter L Hammer & Gabriel Tavares (April 2007). "Local search heuristics for Quadratic Unconstrained Binary Optimization (QUBO)". Journal of Heuristics. Association for Computing Machinery. 13 (2): 99–132.
doi:10.1007/s10732-007-9009-3. S2CID 32887708. Retrieved 12 May 2013.
Di Wang & Robert Kleinberg (November 2009). "Analyzing quadratic unconstrained binary optimization problems via multicommodity flows". Discrete Applied Mathematics. Elsevier. 157 (18): 3746–3753. doi:10.1016/j.dam.2009.07.009. PMC 2808708. PMID 20161596.
Compute sequence components (positive, negative, and zero) of three-phase phasor signal - Simulink - MathWorks United Kingdom

The block computes the symmetrical (sequence) components of a three-phase phasor signal:

\left[\begin{array}{c}{u}_{1}\\ {u}_{2}\\ {u}_{0}\end{array}\right]=\frac{1}{3}\left[\begin{array}{ccc}1& a& {a}^{2}\\ 1& {a}^{2}& a\\ 1& 1& 1\end{array}\right]\left[\begin{array}{c}{u}_{a}\\ {u}_{b}\\ {u}_{c}\end{array}\right], \qquad a={e}^{\frac{j2\pi }{3}}
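The transformation above is easy to check numerically. Here is a minimal sketch in plain Python with complex arithmetic (our own helper, not the Simulink block):

```python
import cmath

def sequence_components(ua, ub, uc):
    """Symmetrical components (positive, negative, zero) of three phasors.

    Implements [u1; u2; u0] = (1/3) * [[1, a, a^2], [1, a^2, a], [1, 1, 1]]
    applied to [ua; ub; uc], with a = exp(j*2*pi/3).
    """
    a = cmath.exp(1j * 2 * cmath.pi / 3)
    u1 = (ua + a * ub + a**2 * uc) / 3   # positive-sequence component
    u2 = (ua + a**2 * ub + a * uc) / 3   # negative-sequence component
    u0 = (ua + ub + uc) / 3              # zero-sequence component
    return u1, u2, u0
```

As a sanity check, a balanced set u_a = 1, u_b = a^2, u_c = a yields u1 = 1 with u2 and u0 both zero.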
The La Quebrada Cliff Divers perform shows for the public by jumping into the sea off the cliffs at Acapulco, Mexico. The height h (in feet) of a diver at time t (in seconds) is given by

h = -16t^2 + 16t + 400

Use the vertex and y-intercept to make a sketch that represents the dive. What form of the quadratic function helps you determine the y-intercept efficiently? What form helps you determine the vertex easily? At what height did the diver start his jump? What is the maximum height he achieved?

The standard form gives the y-intercept directly: at t = 0, h = 400. The graphing (vertex) form indicates the vertex; to obtain it, complete the square:

h = -16(t^2 - t + ?) + 400 - ?
h = -16\left(t^2 - t + \frac{1}{4}\right) + 400 - \left(-16 \times \frac{1}{4}\right)
h = -16\left(t - \frac{1}{2}\right)^2 + 404

Factoring the polynomial in the first set of parentheses puts the equation into graphing form, with vertex (1/2, 404). The diver started at 400 feet and reached a maximum height of 404 feet, at t = 1/2 second.
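The vertex can also be checked numerically with the standard vertex formula t = -b/(2a); this short sketch (our own, in plain Python) confirms the values above:

```python
def dive_height(t):
    """Height (feet) of the diver at time t (seconds): h = -16t^2 + 16t + 400."""
    return -16 * t**2 + 16 * t + 400

# For h = a*t^2 + b*t + c, the vertex lies at t = -b / (2a).
a, b = -16, 16
t_vertex = -b / (2 * a)          # 0.5 seconds
h_start = dive_height(0)         # 400 feet (the y-intercept)
h_max = dive_height(t_vertex)    # 404 feet (the maximum height)
```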
§ Z algorithm

The Z algorithm, for a given string s, computes a function Z where Z[i] is the length of the longest common prefix of s and s[i:]. That is, S[0] = S[i], S[1] = S[i+1], S[2] = S[i+2], and so on up to S[Z[i]-1] = S[i + Z[i] - 1], with S[Z[i]] ≠ S[i + Z[i]].

If we can compute the Z function for a string, we can then check if pattern P is a substring of text T by constructing the string P#T$. Then, if we have an index i such that Z[i] = len(P), we know that at that index we have the string P as a substring. Note that the Z algorithm computes the Z function in linear time.

The key idea of the Z algorithm is that if we are at an index i and we have an earlier index l < i such that i < l + z[l], then s[0:z[l]] = s[l:l+z[l]], so we are "in the shade" of l. In this situation, we can reuse z[i-l] as a seed for z[i]. There are two cases: i + z[i-l] < l + z[l] and the converse. If i + z[i-l] < l + z[l], then the match at i-l ends strictly inside the shade, so we can safely set z[i] = z[i-l]. If not, we seed z[i] with the remaining shade, (l + z[l]) - i, since we know at least that much matches the beginning of the string, and then extend by direct comparison.

A cleaned-up version of the lecture's implementation:

vector<int> calcz(const std::string &s) {
    int n = (int)s.size();
    vector<int> z(n, 0);
    int l = 0; // start of the rightmost shade [l, l + z[l]) computed so far
    for (int i = 1; i < n; ++i) {
        // shade remaining past i: (l + z[l]) - i
        // guess from start: z[i-l], clipped to the shade (and to 0)
        z[i] = max(0, min(l + z[l] - i, z[i - l]));
        // compare with initial portion of string.
        while (i + z[i] < n && s[z[i]] == s[i + z[i]]) { z[i]++; }
        // we reach or exceed the current shade. Begin ruling.
        if (i + z[i] >= l + z[l]) { l = i; }
    }
    return z;
}

Reference: Algorithms on strings, trees, and sequences.
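The P#T$ substring-search trick described above can be sketched as follows (a Python version of ours; the separator just needs to be a character absent from both strings):

```python
def z_function(s):
    """z[i] = length of the longest common prefix of s and s[i:] (z[0] = 0 by convention)."""
    n = len(s)
    z = [0] * n
    l = r = 0  # [l, r) is the rightmost segment known to match a prefix of s
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])  # reuse the earlier answer, clipped to the shade
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1                    # extend by direct comparison
        if i + z[i] > r:
            l, r = i, i + z[i]           # this match becomes the new shade
    return z

def find_occurrences(pattern, text, sep="#"):
    """Indices in text where pattern occurs, via the Z function of pattern + sep + text."""
    z = z_function(pattern + sep + text)
    m = len(pattern)
    return [i - m - 1 for i, v in enumerate(z) if i > m and v >= m]
```

For example, `find_occurrences("aba", "ababa")` returns `[0, 2]`, the two (overlapping) occurrences.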
SQLite connection - MATLAB - MathWorks América Latina

Create SQLite Connection to Existing Database File
Create SQLite Connection Using New Database File
Create Read-Only SQLite Connection

The sqlite function creates an sqlite object. You can use this object to connect to an SQLite database file using the MATLAB® interface to SQLite. The MATLAB interface to SQLite enables you to work with SQLite database files without installing and administering a database or driver. For details, see Interact with Data in SQLite Database Using MATLAB Interface to SQLite.

conn = sqlite(dbfile)
conn = sqlite(dbfile,mode)

conn = sqlite(dbfile) connects to an existing SQLite database file.

conn = sqlite(dbfile,mode) connects to an existing database file or creates and connects to a new database file, depending on the mode type.

dbfile — SQLite database file
SQLite database file, specified as a character vector or string scalar. You can use the database file to store data and import it into and export it from MATLAB.

mode — SQLite database file mode
"connect" (default) | "readonly" | "create"
SQLite database file mode, specified as one of these values:
"connect" — Connect to an existing SQLite database file.
"readonly" — Create a read-only connection to an existing SQLite database file.
"create" — Create and connect to a new SQLite database file.
The file mode determines whether you connect to an existing SQLite database file or create a new one. For existing database files, the file mode determines whether the database connection is read-only and sets the IsReadOnly property. You can specify the file mode as a string scalar or character vector.

Database — SQLite database file name
SQLite database file name, specified as a character vector that contains the full path to the SQLite database file. The dbfile input argument sets this property.
Example: 'C:\tutorial.db'

IsOpen — Database connection indicator
Database connection indicator, specified as a logical 0 when the database connection is closed or invalid, or a logical 1 when the database connection is open. This property is hidden from the display.

IsReadOnly — Read-only database file indicator
Read-only database file indicator, specified as a logical 0 when the SQLite database file can be modified, or a logical 1 when the database file is read-only.

isopen — Determine if SQLite connection is open
close — Close SQLite connection
sqlread — Import data into MATLAB from SQLite database table
fetch — Import data into MATLAB workspace using SQLite connection
sqlwrite — Insert MATLAB data into SQLite database table
commit — Make changes to SQLite database file permanent
execute — Execute SQL statement using SQLite database connection
rollback — Undo changes to SQLite database file

Create an SQLite connection to the MATLAB® interface to SQLite using the existing database file tutorial.db. Specify the file name in the current folder.

sqlite with properties:
    Database: '/tmp/Bdoc22a_1891349_107548/tp256bc224/database-ex96650978/tutorial.db'

conn is an sqlite object with these properties:
Database — SQLite database file name.
IsOpen — SQLite connection is open.
IsReadOnly — SQLite connection is writable.

To import data from the database file, you can use the fetch function.

Create an SQLite connection to the MATLAB® interface to SQLite using a new database file named mysqlite.db. Specify the file name in the current folder.

dbfile = fullfile(pwd,"mysqlite.db");
conn = sqlite(dbfile,"create")

sqlite with properties:
    Database: '/tmp/Bdoc22a_1891349_107548/tp256bc224/database-ex61952421/mysqlite.db'

conn is an sqlite object with these properties:
Database — SQLite database file name.
IsOpen — SQLite connection is open.
IsReadOnly — SQLite connection is writable.

To insert data into the database file, use the sqlwrite function.

Create a read-only SQLite connection to the MATLAB® interface to SQLite using the existing database file tutorial.db.
Specify the file name in the current folder.

conn = sqlite(dbfile,"readonly")

IsReadOnly — SQLite connection is read-only.

As an alternative to the sqlite object, the connection object enables you to connect to various relational databases using ODBC and JDBC drivers that you install and administer. You can create the connection object by using the database function. To use the JDBC driver, close the SQLite connection and create a database connection using the URL string. For details, see these topics depending on your platform:
Exit Plane Velocity Profiles and Boundary Layer Similarity on a Forward-Facing Cylinder Issuing a Jet Into a Counterflow | FEDSM | ASME Digital Collection David M. Rooney, Thomas Balestrieri, Michael C. Lipani, Yakov R. Mikhaylov Vaccaro, JC, Rooney, DM, Balestrieri, T, Lipani, MC, & Mikhaylov, YR. "Exit Plane Velocity Profiles and Boundary Layer Similarity on a Forward-Facing Cylinder Issuing a Jet Into a Counterflow." Proceedings of the ASME 2016 Fluids Engineering Division Summer Meeting collocated with the ASME 2016 Heat Transfer Summer Conference and the ASME 2016 14th International Conference on Nanochannels, Microchannels, and Minichannels. Volume 1B, Symposia: Fluid Mechanics (Fundamental Issues and Perspectives; Industrial and Environmental Applications); Multiphase Flow and Systems (Multiscale Methods; Noninvasive Measurements; Numerical Methods; Heat Transfer; Performance); Transport Phenomena (Clean Energy; Mixing; Manufacturing and Materials Processing); Turbulent Flows — Issues and Perspectives; Algorithms and Applications for High Performance CFD Computation; Fluid Power; Fluid Dynamics of Wind Energy; Marine Hydrodynamics. Washington, DC, USA. July 10–14, 2016. V01BT14A002. ASME. https://doi.org/10.1115/FEDSM2016-7584 An experimental study was undertaken to examine two phenomena associated with a jet issuing from a forward-facing circular cylinder (with a length to outer radius ratio of 6.8) into a counterflow: (1) the effect of the counterflow on the internal velocity profile near the injector exit, and (2) the combined jet and counterflow influence on the external surface boundary layer profiles. Wind tunnel experiments, utilizing hot-wire anemometry, were conducted at very low jet-to-counterflow velocity ratios between 0 and 0.41 and Reynolds numbers based on cylinder outer diameter of 2.6×104 and 5.2×104. 
It was found that the flow at the internal location was negligibly affected by the counterflow, with the profiles exhibiting typical turbulent pipe flow behavior. However, close to the injector exit plane, the counterflow accelerated the exiting jet in an annular region adjacent to the cylinder inner wall, creating a high-velocity region which reversed direction upon exiting. The exit flow reversal around the lip of the cylinder was manifested downstream as an addition of momentum to the flow field. Boundary layer measurements were taken at seven streamwise locations along the upper surface of the cylinder. A dimensionless parameter constructed from both a traditional flat plate turbulent boundary layer scaling and a geometric curvature ratio was used to plot all data, which collapsed to a single curve at all locations in the absence of a jet; in the presence of the jet, the only differences in the profiles were close to the surface. Independently of jet-to-counterflow velocity ratio, the boundary layer on the cylinder was seen to grow in the streamwise direction at a rate proportional to x^(4/5), as occurs with a turbulent boundary layer on a flat plate, even though the boundary layer height to curvature ratio was relatively low (≈ 1), and hence would have been expected to approximate flat plate behavior.

Keywords: Boundary layers, Cylinders, Flow (Dynamics), Flat plates, Boundary layer turbulence, Ejectors, Circular cylinders, Momentum, Pipe flow, Reynolds number, Turbulence, Wind tunnels, Wire
§ DFS and topological sorting

The proper way to solve a maze is to keep breadcrumbs! Use recursion. Recursively explore the graph, backtracking as necessary.

§ DFS on a component:

parent = {s: None}
dfs-visit(adj, s):
    for v in adj[s]:
        if v not in parent:
            parent[v] = s
            dfs-visit(adj, v)

§ visit all vertices:

dfs(vs, adj):
    parent = {}
    for s in vs:
        if s not in parent:
            parent[s] = None
            dfs-visit(adj, s)

We call dfs-visit at most once per vertex v. Per vertex, we pay |adj(v)|; summed over all vertices, this traverses each edge once, so the total cost is O(|V| + |E|).

§ Shortest paths?

DFS does not take the shortest path to get to a node. If you want shortest paths (in an unweighted graph), use BFS.

§ Edge classification

Tree edges: visit a new vertex via that edge. Parent pointers track tree edges.
Forward edges: go from a node n to a descendant of node n (other than a tree edge).
Backward edges: go from a node n to an ancestor of node n.
Cross edges: all other edges, between two non-ancestor-related nodes.

How do we know forward, back, cross edges?

§ Computing edge classifications

Backward edges: mark nodes being processed. If we see an edge towards a node still being processed, it's a backward edge.
Forward edges / cross edges: use start/finish times.

§ Which of these can exist in an undirected graph?

Tree edges do exist. They better! That's how we visit new nodes.
Forward edges: can't happen, because we will always traverse "backwards". Consider the directed graph with edges A -> B, B -> C, and A -> C: starting DFS at A, the edges A -> B and B -> C are tree edges, and A -> C is a forward edge. If we make this graph undirected, we instead get tree edges A - B and B - C, and the edge between C and A is traversed from C, as a back-edge.
Back-edges: can exist in an undirected graph, as shown above; C - A is a back edge.
Cross-edges: once again, cross edges can only come up from "wrongly directed" edges. But we don't have directions in an undirected graph.

§ Cycle detection

G has a cycle iff G's DFS has a back-edge.

§ Proof: DFS has a back edge => G has a cycle

Suppose DFS finds a back edge X -> A, where A is an ancestor of X. By definition, A is connected to X using tree edges, and together with the back edge X -> A this gives us the cycle.
§ Proof: G has a cycle => DFS has a back edge

Say we have a cycle on vertices x, y, z, .... Let v[0] be the first vertex in the cycle visited by the DFS, and label the rest v[1], v[2], ..., v[k] in the order the cycle visits them. Then we claim the cycle edge v[k] -> v[0] will be a back edge. We know that while we are recursing on v[0], we will visit v[1] before we finish v[0] (the edge v[0] -> v[1] guarantees it). Similarly, each v[i+1] will be visited, and finished, within the recursion for v[i]. Chaining these, we finish v[k] before we finish v[0]. In terms of balanced parentheses, the bracketing is (v0 ... (vk ... vk) ... v0). So when we look at the edge v[k] -> v[0], we have not yet finished v[0], and v[0] is an ancestor of v[k]. Thus, we get a back edge.

§ Topological sort

Given a DAG, order vertices so that all edges point from lower order to higher order. The algorithm is to run DFS and output the reverse order of finishing times of vertices. Why does this work?

§ Proof that topological sort works

We want to show that for an edge (u, v), v is ordered after u. Remember that we sort based on the reverse of finishing order, so it suffices to show that v finishes before u.

§ Case 1: u starts before v

We will eventually visit v in the recursion for u, because u -> v is an edge. So we will have the bracketing {u ... (v ... v) ... u}, and we're good: we finish v before we finish u.

§ Case 2: v starts before u

We have the bracketing (v ... {u. If we were to finish u before finishing v, then v would be an ancestor of u, giving the bracketing (v ... {u ... u} ... v), and thus the edge (u, v) would be a back-edge. But this is impossible because the graph cannot have cycles! Thus, v still finishes before u, giving the bracketing (v ... v) ... {u ... u}.

MIT introduction to algorithms: Lecture 14, DFS and topological sorting
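The algorithm described above (DFS over every unvisited vertex, then output the reverse of the finishing order) can be sketched in Python as follows (our own sketch; the adjacency-dict representation is an assumption):

```python
def topological_sort(adj):
    """Topologically sort a DAG given as {vertex: [neighbors]}.

    Runs dfs-visit from every unvisited vertex and returns the
    vertices in reverse order of finishing time.
    """
    visited = set()
    order = []  # vertices are appended here when they *finish*

    def dfs_visit(u):
        visited.add(u)
        for v in adj.get(u, ()):
            if v not in visited:
                dfs_visit(v)
        order.append(u)  # u finishes only after all its descendants

    for s in adj:
        if s not in visited:
            dfs_visit(s)
    return order[::-1]  # reverse finishing order
```

For the diamond DAG {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}, any returned order places "a" first and "d" last, and every edge points from an earlier vertex to a later one.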
Gray (unit) - Wikipedia
SI derived unit of absorbed dose of ionizing radiation
The gray (symbol: Gy) is a derived unit of ionizing radiation dose in the International System of Units (SI). It is defined as the absorption of one joule of radiation energy per kilogram of matter.[1] It is used as a unit of the radiation quantity absorbed dose, which measures the energy deposited by ionizing radiation in a unit mass of irradiated matter. It is used for measuring the delivered dose of ionising radiation in applications such as radiotherapy, food irradiation and radiation sterilization, and for predicting likely acute effects, such as acute radiation syndrome, in radiological protection. As a measure of low levels of absorbed dose, it also forms the basis for the calculation of the radiation protection unit, the sievert, which is a measure of the health effect of low levels of ionizing radiation on the human body. The gray is also used in radiation metrology as a unit of the radiation quantity kerma, defined as the sum of the initial kinetic energies of all the charged particles liberated by uncharged ionizing radiation[a] in a sample of matter per unit mass. The gray is an important unit in ionising radiation measurement and was named after British physicist Louis Harold Gray, a pioneer in the measurement of X-ray and radium radiation and their effects on living tissue.[2] The gray was adopted as part of the International System of Units in 1975. The corresponding cgs unit to the gray is the rad (equivalent to 0.01 Gy), which remains common largely in the United States, though "strongly discouraged" in the style guide for the U.S. 
National Institute of Standards and Technology.[3]
External dose quantities used in radiation protection and dosimetry
The gray has a number of fields of application in measuring dose:
Radiobiology
The measurement of absorbed dose in tissue is of fundamental importance in radiobiology and radiation therapy, as it is the measure of the amount of energy the incident radiation deposits in the target tissue. The measurement of absorbed dose is a complex problem due to scattering and absorption, and many specialist dosimeters are available for these measurements; they can cover applications in 1-D, 2-D and 3-D.[4][5][6] In radiation therapy, the amount of radiation applied varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers). The average radiation dose from an abdominal X-ray is 0.7 millisieverts (0.0007 Sv), that from an abdominal CT scan is 8 mSv, that from a pelvic CT scan is 6 mSv, and that from a selective CT scan of the abdomen and the pelvis is 14 mSv.[7]
Radiation protection
Relationship of ICRU/ICRP computed protection dose quantities and units
The absorbed dose also plays an important role in radiation protection, as it is the starting point for calculating the stochastic health risk of low levels of radiation, which is defined as the probability of cancer induction and genetic damage.[8] The gray measures the total absorbed energy of radiation, but the probability of stochastic damage also depends on the type and energy of the radiation and the types of tissues involved. This probability is related to the equivalent dose in sieverts (Sv), which has the same dimensions as the gray. 
It is related to the gray by weighting factors described in the articles on equivalent dose and effective dose. The International Committee for Weights and Measures states: "In order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H."[9] In SI base units,
1 Gy = 1 J/kg = 1 m²/s²
The accompanying diagrams show how absorbed dose (in grays) is first obtained by computational techniques, and from this value the equivalent doses are derived. For X-rays and gamma rays the gray is numerically the same value when expressed in sieverts, but for alpha particles one gray is equivalent to 20 sieverts, because a radiation weighting factor of 20 is applied.
Radiation poisoning: The gray is conventionally used to express the severity of what are known as "tissue effects" from doses received in acute exposure to high levels of ionizing radiation. These are effects that are certain to happen, as opposed to the uncertain effects of low levels of radiation that have a probability of causing damage. A whole-body acute exposure to 5 grays or more of high-energy radiation usually leads to death within 14 days. LD1 is 2.5 Gy, LD50 is 5 Gy and LD99 is 8 Gy.[10] The LD50 dose represents 375 joules for a 75 kg adult.
Absorbed dose in matter
The gray is used to measure absorbed dose rates in non-tissue materials for processes such as radiation hardening, food irradiation and electron irradiation. Measuring and controlling the value of absorbed dose is vital to ensuring correct operation of these processes. 
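The unit relationships above (1 Gy = 1 J/kg, 1 rad = 0.01 Gy, equivalent dose = absorbed dose × weighting factor) are plain arithmetic, sketched here for concreteness; the function names are my own, and the numerical values are the ones quoted in the text:

```python
# Worked arithmetic for the gray's unit relationships.
def absorbed_dose_gray(energy_joules, mass_kg):
    """1 Gy = 1 J of absorbed ionizing-radiation energy per kg of matter."""
    return energy_joules / mass_kg

def rad_to_gray(rad):
    """The legacy cgs unit: 1 rad = 100 erg/g = 0.01 Gy."""
    return rad * 0.01

def equivalent_dose_sievert(dose_gray, radiation_weighting_factor):
    """Equivalent dose (Sv) = absorbed dose (Gy) x w_R
    (w_R = 1 for photons, 20 for alpha particles)."""
    return dose_gray * radiation_weighting_factor

# The LD50 figure quoted above: 375 J absorbed by a 75 kg adult is 5 Gy.
assert absorbed_dose_gray(375.0, 75.0) == 5.0
```

For instance, a 1 Gy absorbed dose of alpha radiation corresponds to an equivalent dose of `equivalent_dose_sievert(1.0, 20)` = 20 Sv, matching the factor stated in the text.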
Kerma ("kinetic energy released per unit mass") is used in radiation metrology as a measure of the liberated energy of ionisation due to irradiation, and is expressed in grays. Importantly, kerma can differ from absorbed dose, depending on the radiation energies involved, partly because ionization energy is not accounted for. Whilst the two are roughly equal at low energies, kerma is much higher than absorbed dose at higher energies, because some energy escapes from the absorbing volume in the form of bremsstrahlung (X-rays) or fast-moving electrons. Kerma, when applied to air, is equivalent to the legacy roentgen unit of radiation exposure, but there is a difference in the definition of these two units: the gray is defined independently of any target material, whereas the roentgen was defined specifically by the ionisation effect in dry air, which did not necessarily represent the effect on other media.
Development of the absorbed dose concept and the gray
Using early Crookes tube X-ray apparatus in 1896: one man is viewing his hand with a fluoroscope to optimise tube emissions, the other has his head close to the tube. No precautions are being taken.
Monument to the X-ray and Radium Martyrs of All Nations, erected 1936 at St. Georg hospital in Hamburg, commemorating 359 early radiology workers.
Wilhelm Röntgen first discovered X-rays on November 8, 1895, and their use spread very quickly for medical diagnostics, particularly for broken bones and embedded foreign objects, where they were a revolutionary improvement over previous techniques. Due to the wide use of X-rays and the growing realisation of the dangers of ionizing radiation, measurement standards became necessary for radiation intensity, and various countries developed their own, using differing definitions and methods. 
Eventually, in order to promote international standardisation, the first International Congress of Radiology (ICR) meeting in London in 1925, proposed a separate body to consider units of measure. This was called the International Commission on Radiation Units and Measurements, or ICRU,[b] and came into being at the Second ICR in Stockholm in 1928, under the chairmanship of Manne Siegbahn.[11][12][c] One of the earliest techniques of measuring the intensity of X-rays was to measure their ionising effect in air by means of an air-filled ion chamber. At the first ICRU meeting it was proposed that one unit of X-ray dose should be defined as the quantity of X-rays that would produce one esu of charge in one cubic centimetre of dry air at 0 °C and 1 standard atmosphere of pressure. This unit of radiation exposure was named the roentgen in honour of Wilhelm Röntgen, who had died five years previously. At the 1937 meeting of the ICRU, this definition was extended to apply to gamma radiation.[13] This approach, although a great step forward in standardisation, had the disadvantage of not being a direct measure of the absorption of radiation, and thereby the ionisation effect, in various types of matter including human tissue, and was a measurement only of the effect of the X-rays in a specific circumstance; the ionisation effect in dry air.[14] In 1940, Louis Harold Gray, who had been studying the effect of neutron damage on human tissue, together with William Valentine Mayneord and the radiobiologist John Read, published a paper in which a new unit of measure, dubbed the "gram roentgen" (symbol: gr) was proposed, and defined as "that amount of neutron radiation which produces an increment in energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one roentgen of radiation".[15] This unit was found to be equivalent to 88 ergs in air, and made the absorbed dose, as it subsequently became known, dependent on the interaction of the 
radiation with the irradiated material, not just an expression of radiation exposure or intensity, which the roentgen represented. In 1953 the ICRU recommended the rad, equal to 100 erg/g, as the new unit of measure of absorbed radiation. The rad was expressed in coherent cgs units.[13] In the late 1950s, the CGPM invited the ICRU to join other scientific bodies to work on the development of the International System of Units, or SI.[16] The CCU decided to define the SI unit of absorbed radiation as energy deposited by reabsorbed charged particles per unit mass of absorbent material, which is how the rad had been defined, but in MKS units it would be equivalent to the joule per kilogram. This was confirmed in 1975 by the 15th CGPM, and the unit was named the "gray" in honour of Louis Harold Gray, who had died in 1965. The gray was thus equal to 100 rad. Notably, the centigray (numerically equivalent to the rad) is still widely used to describe absolute absorbed doses in radiotherapy. The adoption of the gray by the 15th General Conference on Weights and Measures as the unit of measure of the absorption of ionizing radiation, specific energy absorption, and of kerma in 1975[17] was the culmination of over half a century of work, both in the understanding of the nature of ionizing radiation and in the creation of coherent radiation quantities and units.
Radiation-related quantities
Graphic showing relationships between radioactivity and detected ionizing radiation at a point.
^ I.e., indirectly ionizing radiation such as photons and neutrons ^ Originally known as the International X-ray Unit Committee ^ The host country nominated the chairman of the early ICRU meetings. ^ "The International System of Units (SI)" (PDF). Bureau International des Poids et Mesures (BIPM). Retrieved 2010-01-31. 
^ "Rays instead of scalpels". LH Gray Memorial Trust. 2002. Retrieved 2012-05-15. ^ "NIST Guide to SI Units – Units temporarily accepted for use with the SI". National Institute of Standards and Technology. 2 July 2009. ^ Seco J, Clasie B, Partridge M (2014). "Review on the characteristics of radiation detectors for dosimetry and imaging". Phys Med Biol. 59 (20): R303–47. Bibcode:2014PMB....59R.303S. doi:10.1088/0031-9155/59/20/R303. PMID 25229250. ^ Hill R, Healy B, Holloway L, Kuncic Z, Thwaites D, Baldock C (2014). "Advances in kilovoltage x-ray beam dosimetry". Phys Med Biol. 59 (6): R183–231. Bibcode:2014PMB....59R.183H. doi:10.1088/0031-9155/59/6/R183. PMID 24584183. ^ Baldock C, De Deene Y, Doran S, Ibbott G, Jirasek A, Lepage M, McAuley KB, Oldham M, Schreiner LJ (2010). "Polymer gel dosimetry". Phys Med Biol. 55 (5): R1–63. Bibcode:2010PMB....55R...1B. doi:10.1088/0031-9155/55/5/R01. PMC 3031873. PMID 20150687. ^ "X-Ray Risk". www.xrayrisk.com. ^ "The 2007 Recommendations of the International Commission on Radiological Protection". Ann ICRP. 37 (2–4). paragraph 64. 2007. doi:10.1016/j.icrp.2007.10.003. PMID 18082557. S2CID 73326646. ICRP publication 103. Archived from the original on 2012-11-16. ^ "Lethal dose". European Nuclear Society. ^ Siegbahn, Manne; et al. (October 1929). "Recommendations of the International X-ray Unit Committee". Radiology. 13 (4): 372–3. doi:10.1148/13.4.372. ^ "About ICRU - History". International Commission on Radiation Units & Measures. Retrieved 2012-05-20. ^ a b Guill, JH; Moteff, John (June 1960). "Dosimetry in Europe and the USSR". Third Pacific Area Meeting Papers — Materials in Nuclear Applications. Symposium on Radiation Effects and Dosimetry - Third Pacific Area Meeting American Society for Testing Materials, October 1959, San Francisco, 12–16 October 1959. American Society Technical Publication. Vol. 276. ASTM International. p. 64. LCCN 60014734. Retrieved 2012-05-15. ^ Lovell, S (1979). 
"4: Dosimetric quantities and units". An introduction to Radiation Dosimetry. Cambridge University Press. pp. 52–64. ISBN 0-521-22436-5. Retrieved 2012-05-15. ^ Gupta, S. V. (2009-11-19). "Louis Harold Gray". Units of Measurement: Past, Present and Future : International System of Units. Springer. p. 144. ISBN 978-3-642-00737-8. Retrieved 2012-05-14. ^ "CCU: Consultative Committee for Units". International Bureau of Weights and Measures (BIPM). Retrieved 2012-05-18. Boyd, M.A. (March 1–5, 2009). The Confusing World of Radiation Dosimetry—9444 (PDF). WM2009 Conference (Waste Management Symposium). Phoenix, AZ. Archived from the original (PDF) on 2016-12-21. Retrieved 2014-07-07. An account of chronological differences between USA and ICRP dosimetry systems.
§ Grokking Zariski There's a lot written about the Zariski topology on the internet, but most of it lacks explicit examples and pictures. This is my attempt to communicate what the Zariski topology looks like, from the perspectives that tickle my fancy (a wealth of concrete examples, topology-as-semi-decidability, and pictures). § The Zariski topology Recall that the Zariski topology is defined by specifying its closed sets: the closed sets are the common zero sets of families of polynomials. Formally, the topology (\mathbb R^n, \tau) has as closed sets those of the form, for a family of polynomials S \subseteq \mathbb R[X_1, X_2, \dots, X_n] : \{ x \in \mathbb R^n : \forall f \in S, f(x) = 0 \} Open sets (the complements of closed sets) are of the form: \{ x \in \mathbb R^n : \exists f \in S, f(x) \neq 0 \} \in \tau The empty set is generated as \{ x \in \mathbb R^n : 0 \neq 0 \} and the full set is generated as \{ x \in \mathbb R^n : 1 \neq 0 \} § Semi-decidability Recall that in this view of topology, for a space (X, \tau) , for every open set O \in \tau , we associate a Turing machine T_O which semi-decides inclusion. That is, if a point is in O then it halts with the output "IN-SET"; if the point is not in O , then T_O infinite loops. Formally: \begin{aligned} x &\in O \iff \text{$T_O$ halts on input $x$} \\ x &\not \in O \iff \text{$T_O$ does not halt on input $x$} \end{aligned} Alternatively, for a closed set C , we associate a Turing machine T_C which semi-decides exclusion. That is, if a point is not in C it halts with the output "NOT-IN-SET". If the point is in C , the Turing machine T_C infinite loops. 
Now in the case of polynomials, we can write a function that semidecides exclusion from closed sets: # return NOT-IN-SET if x0 is not a zero of the polynomial def is_not_in_zeros(poly, x0): precision = 0 # start with zero precision while True: if poly(x0[:precision]) != 0: return NOT_IN_SET precision += 1 # up the precision Since we can only evaluate a polynomial up to some finite precision, we start with zero precision and then gradually get more precise. If x0 is not a zero of poly , then at some finite precision we will have poly(x0[:precision]) \neq 0 and halt. If x0 is indeed a root, then we will never halt this process; we will keep getting poly(x0[:precision]) = 0 at every level of precision. § Spec(R) To set up a topology on the prime spectrum of a ring, we take as the topological space Spec(R) , the set of all prime ideals of R . The closed sets of the topology are \{ V(I) : I \text{ is an ideal of } R \} , where V: \text{Ideals of } R \rightarrow 2^{\text{Prime ideals of } R} sends each ideal to the set of prime ideals that contain it. Formally, V(I) = \{ p \in Spec(R) : I \subseteq p \} We can think of this differently, by seeing that we can rewrite the condition as V(I) = \{ P \in Spec(R) : I \xrightarrow{P} 0 \} : on quotienting by the prime ideal P , the ideal I is sent to 0 . Thus, the closed sets of Spec(R) are precisely the 'zeroes of ideals', in analogy with the 'zeroes of polynomials'. To make the analogy precise, note that in the polynomial case, we imposed a topology on \mathbb R by saying that the closed sets were V(p) = \{ x \in \mathbb R : p(x) = 0 \} for p \in \mathbb R[x] . Here, we are saying that the closed sets are V(I) = \{ x \in Spec(R) : I(x) = 0 \} for ideals I of R . So we are looking at an ideal as a function from prime ideals to reductions of the ideal: I(P) = I/P , the image of I in R/P . Consider Spec(\mathbb Z) from this perspective. Since \mathbb Z is a PID, we can think of numbers instead of ideals. The above picture asks us to think of a number as a function from a prime to a reduced number: n(p) = n \% p . 
We then have that the closed sets are those sets of primes which jointly zero out some number. That is: \begin{aligned} V(I) &= \{ x \in Spec(\mathbb Z) : I(x) = 0 \} \\ V((n)) &= \{ (p) \in Spec(\mathbb Z) : (n)/(p) = 0 \} \\ V((n)) &= \{ (p) \in Spec(\mathbb Z) : n \bmod p = 0 \} \end{aligned} So in our mind's eye, we need to imagine a space of prime ideals (points), which we test against all ideals (polynomials). Given a set of prime ideals (a tentative locus, say a circle), the set of prime ideals is closed if it occurs as the zero set of some collection of ideals (just as a locus is closed if it occurs as the zero set of some collection of polynomials). § Nilpotents of a scheme Consider f(x) = (x - 1) and g(x) = (x - 1)^2 as functions on \mathbb R . They are indistinguishable by Zariski, since their zero sets are the same ( \{ 1 \} ). If we now move to the scheme setting, we get two different schemes: R_f \equiv \mathbb R[X] / (x - 1) \simeq \mathbb R and R_g \equiv \mathbb R[X] / (x - 1)^2 . In R_g , we have a nonzero element (x-1) with (x-1)^2 = 0 , which is a nilpotent. This "picks up" on the repeated root. So the scheme is stronger than Zariski, as it can tell the difference between these two situations. We have a kind of "infinitesimal thickening" in the case of R_g . For more, check the math.se question: 'Geometric meaning of the nilradical' Another example is to consider (i) the pair of curves y = 0 (the x-axis) and y = \sqrt{x} (that is, y^2 = x ). The intersection is the zero set of the ideal (y, y^2 - x) = (y, 0 - x) = (y, x) . Then consider (ii) the pair y = 0 and y = x^2 . Here, we have "more intersection" along the x-axis than in the previous case, as the parabola is "aligned" with the x-axis. The intersection is the zero set of the ideal (y, y - x^2) = (y, 0 - x^2) = (y, x^2) . So (i) is governed by \mathbb R[X, Y] / (y, x) , while (ii) is governed by \mathbb R[X, Y] /(y, x^2) , in which x is a nilpotent ( x^2 = 0 ). This tells us that in (ii), there is an "infinitesimal thickening" of the intersection along the x -axis.
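Returning to the Spec(\mathbb Z) picture: the closed set V((n)) is concretely the set of primes dividing n, which is a one-liner to compute (a sketch for illustration; `primes_up_to` is just a naive helper I introduce here, and we only enumerate primes up to a bound):

```python
# The closed set V((n)) in Spec(Z): the primes p with n % p == 0,
# i.e. the prime ideals (p) that contain the ideal (n).
def primes_up_to(bound):
    return [p for p in range(2, bound + 1)
            if all(p % d != 0 for d in range(2, p))]

def V(n, bound=50):
    """Points of Spec(Z) (primes up to `bound`) lying in the closed set V((n))."""
    return [p for p in primes_up_to(bound) if n % p == 0]
```

So `V(12)` is `[2, 3]`, `V(30)` is `[2, 3, 5]`, and `V(1)` is empty, since the unit ideal (1) = \mathbb Z is contained in no prime: the "polynomial" 1 vanishes nowhere.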
Singular homology - Knowpia
In algebraic topology, singular homology refers to the study of a certain set of algebraic invariants of a topological space X, the so-called homology groups H_n(X). Intuitively, singular homology counts, for each dimension n, the n-dimensional holes of a space. Singular homology is a particular example of a homology theory, which has now grown to be a rather broad collection of theories. Of the various theories, it is perhaps one of the simpler ones to understand, being built on fairly concrete constructions (see also the related theory simplicial homology). In brief, singular homology is constructed by taking maps of the standard n-simplex to a topological space, and composing them into formal sums, called singular chains. The boundary operation – mapping each n-dimensional simplex to its (n−1)-dimensional boundary – induces the singular chain complex. The singular homology is then the homology of the chain complex. The resulting homology groups are the same for all homotopy equivalent spaces, which is the reason for their study. These constructions can be applied to all topological spaces, and so singular homology is expressible as a functor from the category of topological spaces to the category of graded abelian groups.
Singular simplices
A singular n-simplex in a topological space X is a continuous function (also called a map) \sigma from the standard n-simplex \Delta^n to X, written \sigma : \Delta^n \to X. This map need not be injective, and there can be non-equivalent singular simplices with the same image in X. The boundary of \sigma, denoted \partial_n \sigma, is defined to be the formal sum of the singular (n − 1)-simplices represented by the restriction of \sigma to the faces of the standard n-simplex, with an alternating sign to take orientation into account. 
(A formal sum is an element of the free abelian group on the simplices. The basis for the group is the infinite set of all possible singular simplices. The group operation is "addition" and the sum of simplex a with simplex b is usually simply designated a + b, but a + a = 2a and so on. Every simplex a has a negative −a.) Thus, if we designate \sigma by its vertices
[p_0, p_1, \ldots, p_n] = [\sigma(e_0), \sigma(e_1), \ldots, \sigma(e_n)]
corresponding to the vertices e_k of the standard n-simplex \Delta^n (which of course does not fully specify the singular simplex produced by \sigma), then
\partial_n \sigma = \partial_n [p_0, p_1, \ldots, p_n] = \sum_{k=0}^{n} (-1)^k [p_0, \ldots, p_{k-1}, p_{k+1}, \ldots, p_n] = \sum_{k=0}^{n} (-1)^k \, \sigma|_{e_0, \ldots, e_{k-1}, e_{k+1}, \ldots, e_n}
is a formal sum of the faces of the simplex image designated in a specific way.[1] (That is, a particular face has to be the restriction of \sigma to a face of \Delta^n which depends on the order that its vertices are listed.) Thus, for example, the boundary of \sigma = [p_0, p_1] (a curve going from p_0 to p_1) is the formal sum (or "formal difference") [p_1] − [p_0].
Singular chain complex
Consider first the set of all possible singular n-simplices \sigma_n(X) on a topological space X. This set may be used as the basis of a free abelian group, so that each singular n-simplex is a generator of the group. This set of generators is of course usually infinite, frequently uncountable, as there are many ways of mapping a simplex into a typical topological space. 
The free abelian group generated by this basis is commonly denoted C_n(X). Elements of C_n(X) are called singular n-chains; they are formal sums of singular simplices with integer coefficients. The boundary \partial is readily extended to act on singular n-chains. The extension, called the boundary operator, written \partial_n : C_n \to C_{n-1}, is a homomorphism of groups. The boundary operator, together with the C_n, forms a chain complex of abelian groups, called the singular complex. It is often denoted (C_\bullet(X), \partial_\bullet) or more simply C_\bullet(X). The kernel of the boundary operator is Z_n(X) = \ker(\partial_n), and is called the group of singular n-cycles. The image of the boundary operator is B_n(X) = \operatorname{im}(\partial_{n+1}), and is called the group of singular n-boundaries. Since \partial_n \circ \partial_{n+1} = 0, the n-th homology group of X is then defined as the factor group
H_n(X) = Z_n(X)/B_n(X).
Elements of H_n(X) are called homology classes.[2]
Homotopy invariance
If X and Y are two topological spaces with the same homotopy type (i.e. are homotopy equivalent), then H_n(X) \cong H_n(Y) for all n ≥ 0. This means homology groups are homotopy invariants, and therefore topological invariants. 
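The singular chain groups are enormous, but the quotient H_n = Z_n/B_n becomes finite linear algebra if we compute with a finite simplicial model instead (a sketch, not from the article): over the rationals, the Betti number b_n = dim ker \partial_n − rank \partial_{n+1}. Here is the hollow triangle — three vertices, three edges, no 2-cell — which is a model of the circle S¹:

```python
import numpy as np

# Boundary matrix d1 of the hollow triangle, mapping edges to vertices.
# The edge [v_i, v_j] has boundary v_j - v_i.
# Columns: edges [v0,v1], [v1,v2], [v0,v2]; rows: vertices v0, v1, v2.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])

rank_d1 = np.linalg.matrix_rank(d1)
b0 = 3 - rank_d1            # dim C_0 - rank d1: number of connected components
b1 = (3 - rank_d1) - 0      # dim ker d1 - rank d2 (no 2-simplices, so rank d2 = 0)
print(b0, b1)               # one component, one 1-dimensional hole
```

This recovers the homology of the circle: H_0 \cong \mathbb Z and H_1 \cong \mathbb Z (both of rank 1); filling in the triangle with a 2-cell would add a \partial_2 column killing the 1-cycle.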
In particular, if X is a connected contractible space, then all its homology groups are 0, except H_0(X) \cong \mathbb Z.
A sketch of the proof: a continuous map f : X \to Y induces a homomorphism f_\sharp : C_n(X) \to C_n(Y) by composing each singular simplex with f. Since \partial f_\sharp = f_\sharp \partial, the map f_\sharp is a chain map, and so descends to a homomorphism f_* : H_n(X) \to H_n(Y). If f and g are homotopic, one constructs a prism operator P : C_n(X) \to C_{n+1}(Y) satisfying \partial P(\sigma) = f_\sharp(\sigma) - g_\sharp(\sigma) - P(\partial \sigma). For a cycle \alpha (so \partial\alpha = 0), this gives f_\sharp(\alpha) - g_\sharp(\alpha) = \partial P(\alpha), i.e. they are homologous. This proves the claim.[3]
Homology groups of common spaces
The table below shows the k-th homology groups H_k(X) of n-dimensional real projective spaces RPⁿ, complex projective spaces CPⁿ, the two-point space S⁰, spheres Sⁿ (n ≥ 1), and a 3-torus T³, with integer coefficients:
RPⁿ[4]: \mathbb Z for k = 0 and for k = n odd; \mathbb Z/2\mathbb Z for k odd, 0 < k < n; 0 otherwise.
CPⁿ[5]: \mathbb Z for k = 0, 2, 4, ..., 2n; 0 otherwise.
S⁰[6]: \mathbb Z \oplus \mathbb Z for k = 0; 0 otherwise.
Sⁿ (n ≥ 1): \mathbb Z for k = 0, n; 0 otherwise.
T³[7]: \mathbb Z for k = 0, 3; \mathbb Z^3 for k = 1, 2; 0 otherwise.
Functoriality
Consider first that X \mapsto C_n(X) is a map from topological spaces to free abelian groups. This suggests that C_n(X) might be taken to be a functor, provided one can understand its action on the morphisms of Top. 
Now, the morphisms of Top are continuous functions, so if f : X \to Y is a continuous map of topological spaces, it can be extended to a homomorphism of groups f_* : C_n(X) \to C_n(Y) defined by
f_*\left(\sum_i a_i \sigma_i\right) = \sum_i a_i (f \circ \sigma_i)
where \sigma_i : \Delta^n \to X is a singular simplex, and \sum_i a_i \sigma_i is a singular n-chain, that is, an element of C_n(X). This shows that C_n is a functor C_n : Top \to Ab from the category of topological spaces to the category of abelian groups. The boundary operator commutes with continuous maps, so that \partial_n f_* = f_* \partial_n. This allows the entire chain complex to be treated as a functor. In particular, this shows that the map X \mapsto H_n(X) is a functor H_n : Top \to Ab from the category of topological spaces to the category of abelian groups. By the homotopy axiom, H_n is also a functor, called the homology functor, acting on hTop, the quotient homotopy category: H_n : hTop \to Ab. This distinguishes singular homology from other homology theories, wherein H_n is still a functor, but is not necessarily defined on all of Top. In some sense, singular homology is the "largest" homology theory, in that every homology theory on a subcategory of Top agrees with singular homology on that subcategory. On the other hand, the singular homology does not have the cleanest categorical properties; such a cleanup motivates the development of other homology theories such as cellular homology. More generally, there is a functor C_\bullet : Top \to Comp which maps topological spaces as X \mapsto (C_\bullet(X), \partial_\bullet) and continuous functions as f \mapsto f_*. 
Here, then, C_\bullet is understood to be the singular chain functor, which maps topological spaces to the category of chain complexes Comp (or Kom). The category of chain complexes has chain complexes as its objects, and chain maps as its morphisms. Taking homology of a chain complex is itself a functor H_n : Comp \to Ab, given by C_\bullet \mapsto H_n(C_\bullet) = Z_n(C_\bullet)/B_n(C_\bullet).
Coefficients in R
Given a ring R, one defines homology with coefficients in R, denoted H_n(X; R), by taking chains with coefficients in R; one recovers H_n(X; \mathbb Z) = H_n(X) when one takes the ring to be the ring of integers. The notation Hn(X; R) should not be confused with the nearly identical notation Hn(X, A), which denotes the relative homology (below). The universal coefficient theorem provides a mechanism to calculate the homology with R coefficients in terms of homology with usual integer coefficients, using the short exact sequence
0 \to H_n(X; \mathbb Z) \otimes R \to H_n(X; R) \to \mathrm{Tor}(H_{n-1}(X; \mathbb Z), R) \to 0,
where Tor is the Tor functor.[8] Of note, if R is torsion-free, then Tor(G, R) = 0 for any G, so the above short exact sequence reduces to an isomorphism between H_n(X; \mathbb Z) \otimes R and H_n(X; R).
Relative homology
For a subspace A \subset X, the relative homology Hn(X, A) is understood to be the homology of the quotient of the chain complexes, that is, H_n(X, A) = H_n(C_\bullet(X)/C_\bullet(A)), where the quotient fits into the short exact sequence 0 \to C_\bullet(A) \to C_\bullet(X) \to C_\bullet(X)/C_\bullet(A) \to 0.
Reduced homology
The reduced homology of a space X, denoted \tilde H_n(X), is a minor modification to the usual homology which simplifies expressions of some relationships and fulfils the intuition that all homology groups of a point should be zero. 
For the usual homology defined on a chain complex:
\dotsb \xrightarrow{\partial_{n+1}} C_n \xrightarrow{\partial_n} C_{n-1} \xrightarrow{\partial_{n-1}} \dotsb \xrightarrow{\partial_2} C_1 \xrightarrow{\partial_1} C_0 \xrightarrow{\partial_0} 0
To define the reduced homology, we augment the chain complex with an additional \mathbb Z between C_0 and zero:
\dotsb \xrightarrow{\partial_{n+1}} C_n \xrightarrow{\partial_n} C_{n-1} \xrightarrow{\partial_{n-1}} \dotsb \xrightarrow{\partial_2} C_1 \xrightarrow{\partial_1} C_0 \xrightarrow{\epsilon} \mathbb Z \to 0
where the augmentation map is \epsilon\left(\sum_i n_i \sigma_i\right) = \sum_i n_i. The reduced homology groups are now defined by \tilde H_n(X) = \ker(\partial_n)/\mathrm{im}(\partial_{n+1}) for positive n, and \tilde H_0(X) = \ker(\epsilon)/\mathrm{im}(\partial_1). For n > 0, H_n(X) = \tilde H_n(X), while for n = 0, H_0(X) = \tilde H_0(X) \oplus \mathbb Z.
Cohomology
By dualizing the homology chain complex (i.e. applying the functor Hom(−, R), R being any ring) we obtain a cochain complex with coboundary map \delta. The cohomology groups of X are defined as the homology groups of this complex; in a quip, "cohomology is the homology of the co [the dual complex]".
^ Hatcher, 105 ^ Theorem 2.10. Hatcher, 111 ^ Hatcher, 142-143
Create a simple system - Drake Tutorial
Before starting, make sure you understand the basic Drake concepts at this link.
Given the equation of motion (EoM) of a particle mass system, create the corresponding system, which moves under an external force. Simulate the system and visualize the results.
A non-linear system has an EoM of the form:
\dot{x} = f(t,x,u) \\ y = g(t,x,u)
In our example, the system has this EoM:
\ddot{x} = f(t)/m \\ y(t) = x(t) \\ where x(t) = [x, \dot{x}]'
The system gets the external force f(t) from an input port; we then compute the system state and write the resulting observation y(t) to an output port. f(t) comes from an input source, and the output of the system goes to the visualization so we can observe the result.
template <typename T>
class Particle1dPlant final : public systems::LeafSystem<T> {
 public:
  // Disables the built-in copy and move constructors.
  DRAKE_NO_COPY_NO_MOVE_NO_ASSIGN(Particle1dPlant);

  /// Constructor defines a prototype for its continuous state and output port.
  Particle1dPlant();

  /// Returns the OutputPort associated with this Particle1dPlant.
  /// This method is called when connecting the ports in the diagram builder.
  const systems::OutputPort<T>& get_output_port() const {
    return systems::System<T>::get_output_port(0);
  }

  /// Returns the current state of this Particle1dPlant as a BasicVector.
  /// The return value is mutable to allow the calling method to change the
  /// state (e.g., to set initial values). For example, this method is called
  /// when building a diagram so initial values can be set by the simulator.
  /// @param[in] context The Particle1dPlant sub-system context.
  static systems::BasicVector<T>& get_mutable_state(
      systems::Context<T>* context) {
    return get_mutable_state(&context->get_mutable_continuous_state());
  }

 private:
  // Casts the continuous state vector from a VectorBase to a BasicVector.
  static const systems::BasicVector<T>& get_state(
      const systems::ContinuousState<T>& cstate) {
    return dynamic_cast<const systems::BasicVector<T>&>(cstate.get_vector());
  }

  // This method is called in DoCalcTimeDerivatives() as a way to read the
  // state before the time derivatives are calculated.
  static const systems::BasicVector<T>& get_state(
      const systems::Context<T>& context) {
    return get_state(context.get_continuous_state());
  }

  // Casts the mutable continuous state vector from a VectorBase to a BasicVector.
  static systems::BasicVector<T>& get_mutable_state(
      systems::ContinuousState<T>* cstate) {
    return dynamic_cast<systems::BasicVector<T>&>(cstate->get_mutable_vector());
  }

  // This is the calculator method that assigns values to the state output port.
  void CopyStateOut(const systems::Context<T>& context,
                    systems::BasicVector<T>* output) const;

  // Method that calculates the state time derivatives.
  void DoCalcTimeDerivatives(const systems::Context<T>& context,
                             systems::ContinuousState<T>* derivatives) const override;
};

Build the diagram

// Parse the URDF and construct a RigidBodyTree from it.
auto tree = std::make_unique<RigidBodyTree<double>>();
parsers::urdf::AddModelInstanceFromUrdfFileToWorld(
    FindResourceOrThrow("drake/examples/particle1d/particle1d.urdf"),
    multibody::joints::kFixed, tree.get());

// Construct an empty diagram, then add a particle plant.
systems::DiagramBuilder<double> builder;
Particle1dPlant<double>* particle_plant =
    builder.AddSystem<Particle1dPlant<double>>();
particle_plant->set_name("RigidBox");

// To set constant parameters (such as mass) in the particle plant, pass
// information from the tree (which read constant parameters from the .urdf).
particle_plant->SetConstantParameters(*tree);

// Add a visualizer block to the diagram; it uses the tree and an LCM object.
lcm::DrakeLcm lcm;
systems::DrakeVisualizer* publisher =
    builder.AddSystem<systems::DrakeVisualizer>(*tree, &lcm);
publisher->set_name("publisher");

// Within the diagram, connect the particle plant's output port to the
// visualizer's input port.
builder.Connect(particle_plant->get_output_port(),
                publisher->get_input_port(0));

// Build the diagram described by the previous calls that connect input/output ports.
auto diagram = builder.Build();

// Construct a Simulator that advances this diagram through time.
systems::Simulator<double> simulator(*diagram);

// To set initial values for the simulation:
// * Get the Diagram's context.
// * Get the part of the Diagram's context associated with particle_plant.
// * Get the part of the particle_plant's context associated with state.
// * Fill the state with initial values.
systems::Context<double>& simulator_context = simulator.get_mutable_context();
systems::Context<double>& particle_plant_context =
    diagram->GetMutableSubsystemContext(*particle_plant, &simulator_context);
systems::BasicVector<double>& state =
    particle_plant->get_mutable_state(&particle_plant_context);
const double x_init = 0.0;
const double xDt_init = 0.0;
state.SetAtIndex(0, x_init);
state.SetAtIndex(1, xDt_init);

// Set an upper limit on simulation speed (mostly for visualization).
simulator.set_target_realtime_rate(1);

// Simulate for 10 seconds (default units are SI, so time is in seconds).
std::cout << "Starting simulation." << std::endl;
simulator.StepTo(10.0);
std::cout << "Simulation done." << std::endl;

Thanks to Manuel Ahumada for providing this example.
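Independently of Drake, the EoM \ddot{x} = f(t)/m can be sanity-checked in a few lines of plain Python. The simulate helper below is our own sketch (semi-implicit Euler integration), not part of the tutorial's code; with a constant unit force and m = 1, the analytic solution is x(t) = t²/2, so x(10) ≈ 50.

```python
# Minimal sketch (plain Python, not Drake) of the same particle dynamics:
# state x(t) = [x, xdot], dynamics xddot = f(t)/m, observation y(t) = x(t).
def simulate(force, m=1.0, x0=0.0, v0=0.0, t_final=10.0, dt=1e-3):
    x, v, t = x0, v0, 0.0
    while t < t_final:
        a = force(t) / m      # xddot = f(t)/m
        v += a * dt           # semi-implicit Euler: update velocity first,
        x += v * dt           # then position with the new velocity
        t += dt
    return x, v

# Constant unit force; analytically x(10) = 0.5 * 10^2 = 50, v(10) = 10.
x_end, v_end = simulate(lambda t: 1.0)
```

This mirrors what the Drake simulator does internally, only with a fixed step size and no diagram/context machinery.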
A Note on the Square Roots of a Class of Circulant Matrices (2013) Ying Zhang, Huisheng Zhang, Guoyan Chen We prove that any k-circulant matrix and any even-order skew k-circulant matrix are diagonalizable for any k ∈ ℂ. Then, we propose two algorithms for computing the square roots of the k-circulant matrix and the skew k-circulant matrix, respectively. In particular, we show that the square roots of the k-circulant matrix are still k-circulant matrices. Both the theoretical analysis and the numerical experiments show that our algorithms are faster than the standard Schur method. Ying Zhang, Huisheng Zhang, Guoyan Chen. "A Note on the Square Roots of a Class of Circulant Matrices." Journal of Applied Mathematics 2013 (2013), 1–6. https://doi.org/10.1155/2013/601243
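The key structural fact behind such algorithms, for the ordinary (k = 1) circulant case, is that circulant matrices are diagonalized by the discrete Fourier transform, so a square root can be computed spectrally and is again circulant. The sketch below is our own illustration of that classical fact, not the paper's algorithm; circulant and circulant_sqrt are illustrative helper names.

```python
import numpy as np

def circulant(c):
    # Circulant matrix with first column c: C[i, j] = c[(i - j) mod n].
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def circulant_sqrt(c):
    # The eigenvalues of a circulant with first column c are fft(c);
    # take principal square roots of the eigenvalues and transform back.
    lam = np.fft.fft(c)
    root_first_col = np.fft.ifft(np.sqrt(lam.astype(complex)))
    return circulant(root_first_col)   # the square root is again circulant

c = np.array([5.0, 1.0, 0.0, 1.0])   # symmetric circulant with positive spectrum
C = circulant(c)
S = circulant_sqrt(c)
# S @ S reproduces C, and S inherits the circulant structure.
```

Because the spectrum here is real and positive, the computed root is real up to round-off; for general spectra the principal root is complex.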
Bootstrap sampling - MATLAB bootstrp - MathWorks United Kingdom Estimate Density of Bootstrapped Statistic Bootstrapping Multiple Statistics Bootstrap Samples of Observations Bootstrapping Correlation Coefficient Standard Error Compare Bootstrap Samples with Different Observation Weights Bootstrapping Regression Model bootsam bootstat = bootstrp(nboot,bootfun,d) bootstat = bootstrp(nboot,bootfun,d1,...,dN) bootstat = bootstrp(___,Name,Value) [bootstat,bootsam] = bootstrp(___) bootstat = bootstrp(nboot,bootfun,d) draws nboot bootstrap data samples from d, computes statistics on each sample using the function bootfun, and returns the results in bootstat. The bootstrp function creates each bootstrap sample by sampling with replacement from the rows of d. Each row of the output argument bootstat contains the results of applying bootfun to one bootstrap sample. bootstat = bootstrp(nboot,bootfun,d1,...,dN) draws nboot bootstrap samples from the data in d1,...,dN. The nonscalar data arguments in d1,...,dN must have the same number of rows, n. The bootstrp function creates each bootstrap sample by sampling with replacement from the indices 1:n and selecting the corresponding rows of the nonscalar d1,...,dN. The function passes the sample of nonscalar data and the unchanged scalar data arguments in d1,...,dN to bootfun. bootstat = bootstrp(___,Name,Value) specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can add observation weights to your data or compute bootstrap iterations in parallel. [bootstat,bootsam] = bootstrp(___) also returns bootsam, an n-by-nboot matrix of bootstrap sample indices, where n is the number of rows in the original, nonscalar data. Each column in bootsam corresponds to one bootstrap sample and contains the row indices of the values drawn from the nonscalar data to create that sample.
To get the bootstrap sample indices without applying a function to the samples, set bootfun to empty ([]). Estimate the kernel density of bootstrapped means. Compute a sample of 100 bootstrapped means of random samples taken from the vector y. m = bootstrp(100,@mean,y); Plot an estimate of the density of the bootstrapped means. [fi,xi] = ksdensity(m); plot(xi,fi) Compute and plot the means and standard deviations of 100 bootstrap samples. Compute a sample of 100 bootstrapped means and standard deviations of random samples taken from the vector y. stats = bootstrp(100,@(x)[mean(x) std(x)],y); Plot the bootstrap estimate pairs. plot(stats(:,1),stats(:,2),'o') Take bootstrap samples of patient data, compute the mean measurements for each data sample, and visualize the results. Load the patients data set. Create the matrix patientData containing age, weight, and height measurements. Each row of patientData corresponds to one patient. patientData = [Age Weight Height]; Create 200 bootstrap data samples from the data in patientData. To create each sample, randomly select with replacement 100 rows (that is, size(patientData,1)) from the rows in patientData. For each sample, calculate the mean age, weight, and height measurements. Each row of bootstat contains the three mean measurements for one bootstrap sample. bootstat = bootstrp(200,@mean,patientData); Visualize the mean measurements for all 200 bootstrap data samples. Note that bootstrap samples with greater mean weights tend to have greater mean heights. scatter3(bootstat(:,1),bootstat(:,2),bootstat(:,3)) xlabel('Mean Age') ylabel('Mean Weight') zlabel('Mean Height') Compute a correlation coefficient standard error using bootstrap resampling of the sample data. Load the lawdata data set, which contains the LSAT score and law school GPA for 15 students. size(lsat) size(gpa) Create 1000 data samples by resampling the 15 data points, and compute the correlation between the two variables for each data sample. 
[bootstat,bootsam] = bootstrp(1000,@corr,lsat,gpa); Display the first 5 bootstrapped correlation coefficients. bootstat(1:5,:) Display the indices of the data selected for the first 5 bootstrap samples. bootsam(:,1:5) 13 3 11 8 12 Create a histogram that shows the variation of the correlation coefficient across all the bootstrap samples. histogram(bootstat) The sample minimum is positive, indicating that the relationship between LSAT score and GPA is not accidental. Finally, compute a bootstrap standard error for the estimated correlation coefficient. se = std(bootstat) Compare bootstrap samples with different observation weights. Create a custom function that computes statistics for each sample. Create 50 bootstrap samples from the numbers 1 through 6. To create each sample, bootstrp randomly chooses with replacement from the numbers 1 through 6, six times. This process is similar to rolling a die six times. For each sample, the custom function countfun (shown at the end of this example) counts the number of 1s in the sample. rng('default') % For reproducibility counts = bootstrp(50,@countfun,(1:6)'); Note: If you use the live script file for this example, the countfun function is already included at the end of the file. Otherwise, you need to create this function at the end of your .m file or add it as a file on the MATLAB® path. Create 50 bootstrap samples from the numbers 1 through 6, but assign different weights to the numbers. Each time bootstrp randomly chooses from the numbers 1 through 6, the probability of choosing a 1 is 0.5, the probability of choosing a 2 is 0.1, and so on. Again, countfun counts the number of 1s in each sample. weights = [0.5 0.1 0.1 0.1 0.1 0.1]'; weightedCounts = bootstrp(50,@countfun,(1:6)','Weights',weights); Compare the two sets of bootstrap samples by using histograms.
histogram(counts) histogram(weightedCounts) xlabel('Number of 1s in Sample') ylabel('Number of Samples') The two sets of bootstrap samples have different distributions; in particular, the samples in the second set tend to contain more 1s. For example, of the 50 samples in the first set, only two samples contain more than two 1s. By contrast, of the 50 samples in the second set (with observation weights), 12+14+4+2=32 samples contain more than two 1s. This code creates the function countfun. function numberofones = countfun(sample) numberofones = sum(sample == 1); end Estimate the standard errors for a coefficient vector in a linear regression by bootstrapping the residuals. Note: This example uses regress, which is useful when you simply need the coefficient estimates or residuals of a regression model and you need to repeat fitting a model multiple times, as in the case of bootstrapping. If you need to investigate a fitted regression model further, create a linear regression model object by using fitlm. Perform a linear regression, and compute the residuals. Estimate the standard errors by bootstrapping the residuals. se = std(bootstrp(1000,@(bootr)regress(yfit+bootr,x),resid)) Number of bootstrap samples to draw, specified as a positive integer scalar. To create each bootstrap sample, bootstrp randomly selects with replacement n out of the n rows of (nonscalar) data in d or d1,...,dN. For an example that uses a custom function, see Compare Bootstrap Samples with Different Observation Weights. If you use a single vector argument d, you can specify it as a row vector. bootstrp then samples from the elements of the vector. Example: bootstrp(4,@mean,(1:2)','Weights',[0.4 0.6]') specifies to draw four bootstrap samples from the values 1 and 2 and take the mean of each sample. For each draw, the probability of getting a 1 is 0.4, and the probability of getting a 2 is 0.6.
Observation weights, specified as the comma-separated pair consisting of 'Weights' and a nonnegative vector with at least one positive element. The number of elements in Weights must be equal to the number of rows n in the data d or d1,...,dN. To obtain one bootstrap sample, bootstrp randomly selects with replacement n out of n rows of data using the weights as multinomial sampling probabilities. Options for computing bootstrap iterations in parallel and setting random numbers during the bootstrap sampling, specified as the comma-separated pair consisting of 'Options' and a structure. Create the Options structure with statset. This table lists the option fields and their values. Streams Specify this value as a RandStream object or cell array of such objects. Use a single object except when the UseParallel value is true and the UseSubstreams value is false. In that case, use a cell array that has the same size as the parallel pool. If you do not specify Streams, then bootstrp uses the default stream or streams. bootstat — Bootstrap sample statistics Bootstrap sample statistics, returned as a column vector or matrix with nboot rows. The ith row of bootstat corresponds to the results of applying bootfun to the ith bootstrap sample. If bootfun returns a matrix or array, then the bootstrp function first converts this output to a row vector before storing it in bootstat. bootsam — Bootstrap sample indices Bootstrap sample indices, returned as an n-by-nboot numeric matrix, where n is the number of rows in the original, nonscalar data. Each column in bootsam corresponds to one bootstrap sample and contains the row indices of the values drawn from the nonscalar data to create that sample. For example, if each data input argument in d1,...,dN contains 16 values, and nboot = 4, then bootsam is a 16-by-4 matrix. 
The first column contains the indices of the 16 values drawn from d1,...,dN for the first bootstrap sample, the second column contains the indices for the second bootstrap sample, and so on. The bootstrap indices are the same for all input data sets d1,...,dN. To get the bootstrap sample indices bootsam without applying a function to the samples, set bootfun to empty ([]). histogram | bootci | ksdensity | parfor | random | randsample | RandStream | statget | statset
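For readers without MATLAB, the core behavior of bootstrp (sample rows with replacement, apply a statistic, return both the statistics and the sample indices) can be sketched in NumPy. This is our own simplified analogue, not MathWorks code; it supports neither weights nor parallel options.

```python
import numpy as np

def bootstrp(nboot, bootfun, d, rng=None):
    # Minimal analogue of MATLAB's bootstrp: sample rows of d with
    # replacement, apply bootfun to each sample, stack results row-wise.
    rng = np.random.default_rng(rng)
    d = np.asarray(d)
    n = d.shape[0]
    # bootsam: n-by-nboot index matrix, one column per bootstrap sample.
    bootsam = rng.integers(0, n, size=(n, nboot))
    bootstat = np.array([np.atleast_1d(bootfun(d[bootsam[:, b]]))
                         for b in range(nboot)])
    return bootstat, bootsam

# Bootstrap the mean of 15 data points, as in the lsat/gpa example above.
y = np.arange(1.0, 16.0)
bootstat, bootsam = bootstrp(1000, np.mean, y, rng=0)
se = bootstat.std(ddof=1)   # bootstrap standard error of the mean
```

As in the MATLAB version, each column of bootsam records which rows were drawn for one sample, and each row of bootstat holds the statistic for that sample.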
Algorithmic Graph Theory | EMS Press New Jersey Inst. of Technology, Newark, United States Universität GHS Paderborn, Germany The workshop \emph{Algorithmic Graph Theory}, organized by Artur Czumaj (New Jersey), Friedhelm Meyer auf der Heide (Paderborn), Klaus Jansen (Kiel) and Ingo Schiermeyer (Freiberg), was held February 12th--February 18th, 2006. This meeting was attended by 46 participants from a wide range of geographic regions, many of them young researchers. In the morning sessions, survey talks providing an overview of recent developments in Algorithmic Graph Theory were presented. \begin{itemize} \item Geometry and Graphs \begin{itemize} \item Random triangulations of planar point sets (Emo Welzl) \item Dynamic routing in graphs with applications to harbour logistics (Rolf M\"ohring) \item Page migration in dynamic networks (Friedhelm Meyer auf der Heide) \item Simple coresets for clustering problems (Christian Sohler) \end{itemize} \item Graph Algorithms \begin{itemize} \item Parallel matching algorithms (Stefan Hougardy) \item Scheduling malleable tasks with precedence constraints (Klaus Jansen) \item Coloring random graphs (Lefteris Kirousis) \item Faster approximation algorithms for packing and covering problems (Daniel Bienstock) \item Algebraic graph algorithms (Piotr Sankowski) \end{itemize} \item Game Theory and Graphs \begin{itemize} \item Graphs, Games and Algorithms (Paul Spirakis) \item Learning Wardrop equilibria (Berthold V\"ocking) \end{itemize} \item Graph Structures \begin{itemize} \item Phylogenetic trees and k-leaf powers (Andreas Brandst\"adt) \item Arbitrarily vertex decomposable graphs (Mariusz Wo{\'z}niak) \item Precoloring extension (Margit Voigt) \item On exact algorithms for treewidth (Hans Bodlaender) \end{itemize} \end{itemize} There were 10 shorter talks and two special sessions on "Graph Algorithms" (organized by Christian Sohler) and "Graph Colouring" (organized by Ingo Schiermeyer).
The contributions showed the progress made in the field in recent years. Furthermore, several open problems and conjectures were presented, some of them still far from being resolved. Beyond the program there was plenty of time for the participants to use the stimulating atmosphere of the Oberwolfach Institute for fruitful discussions. Artur Czumaj, Klaus Jansen, Friedhelm Meyer auf der Heide, Ingo Schiermeyer, Algorithmic Graph Theory. Oberwolfach Rep. 3 (2006), no. 1, pp. 379–460
Octave - Wikipedia In music, an octave (Latin: octavus: eighth) or perfect octave (sometimes called the diapason)[2] is the interval between one musical pitch and another with double its frequency. The octave relationship is a natural phenomenon that has been referred to as the "basic miracle of music," the use of which is "common in most musical systems."[3] The interval between the first and second harmonics of the harmonic series is an octave. A perfect octave between two Cs In Western music notation, notes separated by an octave (or multiple octaves) have the same letter name and are of the same pitch class. Explanation and definition For example, if one note has a frequency of 440 Hz, the note one octave above is at 880 Hz, and the note one octave below is at 220 Hz. The ratio of frequencies of two notes an octave apart is therefore 2:1. Further octaves of a note occur at 2^n times the frequency of that note (where n is an integer), such as 2, 4, 8, 16, etc., and the reciprocal of that series. For example, 55 Hz and 440 Hz are one and two octaves away from 110 Hz because they are 1⁄2 (or 2^{-1}) and 4 (or 2^2) times the frequency, respectively. The number of octaves between two frequencies is given by the formula: \text{Number of octaves} = \log_2\left(\frac{f_2}{f_1}\right) Most musical scales are written so that they begin and end on notes that are an octave apart. For example, the C major scale is typically written C D E F G A B C (shown below), the initial and final C's being an octave apart. Because of octave equivalence, notes in a chord that are one or more octaves apart are said to be doubled (even if there are more than two notes in different octaves) in the chord.
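The octave-counting formula above is a one-liner in any language with a base-2 logarithm; the helper name below is ours.

```python
import math

def octaves_between(f1, f2):
    # Number of octaves from frequency f1 to f2 (negative if f2 is lower).
    return math.log2(f2 / f1)

# Matches the article's example: 440 Hz is two octaves above 110 Hz,
# and 55 Hz is one octave below it.
assert octaves_between(110, 440) == 2.0
assert octaves_between(110, 55) == -1.0
assert octaves_between(440, 880) == 1.0
```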
The word is also used to describe melodies played in parallel in more than one octave. While octaves commonly refer to the perfect octave (P8), the interval of an octave in music theory encompasses chromatic alterations within the pitch class, meaning that G♮ to G♯ (13 semitones higher) is an augmented octave (A8), and G♮ to G♭ (11 semitones higher) is a diminished octave (d8). The use of such intervals is rare, as there is frequently a preferable enharmonically-equivalent notation available (minor ninth and major seventh respectively), but these categories of octaves must be acknowledged in any full understanding of the role and meaning of octaves more generally in music. Octave of a pitch Octaves are identified with various naming systems. Among the most common are the scientific, Helmholtz, organ pipe, and MIDI note systems. In scientific pitch notation, a specific octave is indicated by a numerical subscript after the note name. In this notation, middle C is C4, because of the note's position as the fourth C key on a standard 88-key piano keyboard, while the C an octave higher is C5.
Scientific: C−1, C0, C1, C2, C3, C4, C5, C6, C7, C8, C9
Helmholtz: C,,, C,, C, C c c' c'' c''' c'''' c''''' c''''''
Organ pipe: 64 Foot, 32 Foot, 16 Foot, 8 Foot, 4 Foot, 2 Foot, 1 Foot, 3 Line, 4 Line, 5 Line, 6 Line
Octave name: Dbl Contra, Sub Contra, Contra, Great, Small, 1 Line, 2 Line, 3 Line, 4 Line, 5 Line, 6 Line
Ottava alta and bassa Example of the same three notes expressed in three ways: (1) regularly, (2) in an 8va bracket, and (3) in a 15ma bracket Similar example with 8vb and 15mb The notation 8a or 8va is sometimes seen in sheet music, meaning "play this an octave higher than written" (all' ottava: "at the octave" or all' 8va). 8a or 8va stands for ottava, the Italian word for octave (or "eighth"); the octave above may be specified as ottava alta or ottava sopra).
Sometimes 8va is used to tell the musician to play a passage an octave lower (when placed under rather than over the staff), though the similar notation 8vb (ottava bassa or ottava sotto) is also used. Similarly, 15ma (quindicesima) means "play two octaves higher than written" and 15mb (quindicesima bassa) means "play two octaves lower than written." The abbreviations col 8, coll' 8, and c. 8va stand for coll'ottava, meaning "with the octave", i.e. to play the notes in the passage together with the notes in the notated octaves. Any of these directions can be cancelled with the word loco, but often a dashed line or bracket indicates the extent of the music affected.[4] Demonstration of octave equivalence. The melody to "Twinkle, Twinkle, Little Star" with parallel harmony. The melody is paralleled in three ways: (1) in octaves (consonant and equivalent); (2) in fifths (fairly consonant but not equivalent); and (3) in seconds (neither consonant nor equivalent). After the unison, the octave is the simplest interval in music. The human ear tends to hear both notes as being essentially "the same", due to closely related harmonics. Notes separated by an octave "ring" together, adding a pleasing sound to music. The interval is so natural to humans that when men and women are asked to sing in unison, they typically sing in octaves.[5] For this reason, notes an octave apart are given the same note name in the Western system of music notation—the name of a note an octave above A is also A.
This is called octave equivalence, the assumption that pitches one or more octaves apart are musically equivalent in many ways, leading to the convention "that scales are uniquely defined by specifying the intervals within an octave".[6] The conceptualization of pitch as having two dimensions, pitch height (absolute frequency) and pitch class (relative position within the octave), inherently includes octave circularity.[6] Thus all C♯s, or all 1s (if C = 0), in any octave are part of the same pitch class. Octave equivalence is a part of most advanced musical cultures, but is far from universal in "primitive" and early music.[7][8] The languages in which the oldest extant written documents on tuning are written, Sumerian and Akkadian, have no known word for "octave". However, it is believed that a set of cuneiform tablets that collectively describe the tuning of a nine-stringed instrument, believed to be a Babylonian lyre, describe tunings for seven of the strings, with indications to tune the remaining two strings an octave from two of the seven tuned strings.[9] Leon Crickmore recently proposed that "The octave may not have been thought of as a unit in its own right, but rather by analogy like the first day of a new seven-day week".[10] Monkeys experience octave equivalence, and its biological basis apparently is an octave mapping of neurons in the auditory thalamus of the mammalian brain.[11] Studies have also shown the perception of octave equivalence in rats,[12] human infants,[13] and musicians[14] but not starlings,[15] 4–9 year old children,[16] or nonmusicians.[14][6] Blind octave – Music composition and performance technique Decade – Unit for measuring ratios on a logarithmic scale Eight-foot pitch – Standard pitch designation Octave species – Classification of musical key or scale in ancient Greek music theory Pitch circularity – Fixed series of tones that appear to
ascend or descend endlessly in pitch Pythagorean interval – Musical interval Solfège – Music teaching method ^ a b c Duffin, Ross W. (2008). How Equal Temperament Ruined Harmony (and Why You Should Care) (First published as a Norton paperback ed.). New York: W. W. Norton. p. 163. ISBN 978-0-393-33420-3. Archived from the original on 5 December 2017. Retrieved 28 June 2017. ^ William Smith & Samuel Cheetham (1875). A Dictionary of Christian Antiquities. London: John Murray. ISBN 9780790582290. Archived from the original on 2016-04-30. ^ Cooper, Paul (1973). Perspectives in Music Theory: An Historical-Analytical Approach, p. 16. ISBN 0-396-06752-2. ^ Prout, Ebenezer & Fallows, David (2001). "All'ottava". In Sadie, Stanley & Tyrrell, John (eds.). The New Grove Dictionary of Music and Musicians (2nd ed.). London: Macmillan. ^ "Music". Vox Explained. Event occurs at 12:50. Retrieved 2018-11-01. When you ask men and women to sing in unison, what typically happens is they actually sing an octave apart. ^ a b c Burns, Edward M. (1999). "Intervals, Scales, and Tuning". In Diana Deutsch (ed.). The Psychology of Music (2nd ed.). San Diego: Academic Press. p. 252. ISBN 0-12-213564-4. ^ e.g., Nettl 1956; Sachs, Curt and Kunst, Jaap (1962). In The Wellsprings of Music, ed. Kunst, J. The Hague: Marinus Nijhoff. ^ e.g., Nettl 1956; Sachs, C. and Kunst, J. (1962). Cited in Burns 1999, p. 217. ^ Clint Goss (2012). "Flutes of Gilgamesh and Ancient Mesopotamia". Flutopedia. Archived from the original on 2012-06-28. Retrieved 2012-01-08. ^ Leon Crickmore (2008). "New Light on the Babylonian Tonal System". ICONEA 2008: Proceedings of the International Conference of Near Eastern Archaeomusicology, Held at the British Museum, December 4–6, 2008. 24: 11–22.
^ "The mechanism of octave circularity in the auditory brain Archived 2010-04-01 at the Wayback Machine", Neuroscience of Music. ^ Blackwell & Schlosberg 1943. ^ Demany & Armand 1984. ^ Cynx 1993. ^ Sergeant 1983. Allen, David. 1967. "Octave Discriminability of Musical and Non-Musical Subjects". Psychonomic Science 7:421–22. Blackwell, H. R., & H. Schlosberg. 1943. "Octave Generalization, Pitch Discrimination, and Loudness Thresholds in the White Rat". Journal of Experimental Psychology 33:407–419. Cynx, Jeffrey. 1996. "Neuroethological Studies on How Birds Discriminate Song". In Neuroethology of Cognitive and Perceptual Processes, edited by C. F. Moss and S. J. Shuttleworth, 63. Boulder: Westview Press. Demany, Laurent, and Françoise Armand. 1984. "The Perceptual Reality of Tone Chroma in Early Infancy". Journal of the Acoustical Society of America 76:57–66. Sergeant, Desmond. 1983. "The Octave: Percept or Concept?" Psychology of Music 11, no. 1:3–18. Anatomy of an Octave by Kyle Gann
§ Perspectives on Yoneda
We can try to gain intuition for Yoneda by considering a finite category where we view arrows as directed paths. The "interpretation" of a path is taken by going from edges to labels and then concatenating all edge labels. We "interpret" the label id_x as "" (the empty string), and we "interpret" every other arrow a as some unique string associated to it. Composition of arrows becomes concatenation of strings. This obeys all the axioms of a category. We are basically thinking of the category as a free monoid.
Let's begin our consideration of the covariant functor Hom(O, -): C → Set. Note that the objects in the image of this functor are the sets of arrows Hom(O, P). To every arrow a: P → Q we associate the set function a': Hom(O, P) → Hom(O, Q) given by post-composition: a'(op) = a . op.
Now, to apply Yoneda, we need another covariant functor G: C → Set. We need to show that the natural transformations η: Hom(O, -) → G are in bijection with the elements of the set G[O]. We do this by the following consideration. Recall that for a natural transformation, we have the commuting diagram (for an arrow p: x → y):
Hom(o, x) --p'--> Hom(o, y)
    |                  |
   ηx                 ηy
    v                  v
  G[x]  ---G[p]-->  G[y]
∀ x, y ∈ C, o2x ∈ Hom(o, x), p ∈ Hom(x, y):
G[p](ηx(o2x)) = ηy(p'(o2x))
which, on using the definition of p', becomes:
G[p](ηx(o2x)) = ηy(p . o2x)
Now pick the magic x = o and o2x = o2o = id_o. This gives, for all y ∈ C and p ∈ Hom(o, y):
G[p](ηo(id_o)) = ηy(p . id_o)
G[p](ηo(id_o)) = ηy(p)    [by the identity arrow]
[assume we fix ηo(id_o) ∈ G[o]]
ηy(p) = G[p](ηo(id_o))    [ηy is now forced: everything on the RHS is known]
Hence, we learn how to map every other arrow p under ηy. If we know how to map the arrows, we can map the objects in the hom-sets as images of the arrows, since we know what ηo(id_o) maps to. Concretely:
§ Images ηo(q) for q ∈ Hom(o, o) after ηo(id_o) is fixed:
We have the relation q . id_o = q. So we get that the arrow q': Hom(o, o) → Hom(o, o) takes id_o to q.
By the structure of the natural transformation, pick x = y = o, o2x = id_o, p = q. This gives:
G[q](ηo(id_o)) = ηo(q'(id_o))
G[q](ηo(id_o)) = ηo(q . id_o)
G[q](ηo(id_o)) = ηo(q)
ηo(q) = G[q](ηo(id_o))
Hence, we've deduced ηo(q), so we know what element q gets mapped to. The same process works with any arrow!
§ A shift in perspective: Yoneda as a partial monoid
Since we're considering the sets Hom(o, -), note that we can always pre-compose any element of Hom(o, p) with any element of Hom(o, o). Moreover, we have the equation a \circ id_o = a, since id_o is the identity. Note that id_o is the only identity arrow we possess across all the Hom(o, -): we can only access the identity arrow inside Hom(o, o). For all other Hom(o, p) with p \neq o, we have neither id_o nor id_p. So we have a sort of partial monoid, where we have a unique identity element id_o, and arrows that compose partially based on domain/codomain conditions.
From this perspective, we can read the commutative-diagram laws as a sort of "Cayley's theorem". We have as elements the elements of the sets Hom(o, -). For every arrow a: p \rightarrow q, we have the action Hom(o, p) \xrightarrow{a} Hom(o, q). From this perspective, it is easy to see that every monoid can be embedded into its action space (Cayley's theorem).
This mapping of Yoneda from Hom(o, -) to arbitrary sets is like a "forgetful" functor from a monoid into a semigroup. If our monoid is "well represented" by a semigroup, then once we know what the identity maps to, we can discover all of the other elements by using the relation f(ex) = f(e) f(x). The only "arbitrariness" introduced by forgetting the monoid structure is the loss of the unique identity.
NOTE: This is hand-wavy, since the data given by a natural transformation is somehow "different", in a way that I'm not sure how to make precise.
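The "ηy is forced" argument can be checked by brute force for a one-object category, i.e. a monoid. The sketch below (our own illustration; all names are ours) takes M = (Z_4, +) with Hom(o, o) = M and id_o = 0, a functor G given by an M-set X = Z_2, and enumerates every map η: M → X satisfying the naturality square. Yoneda predicts exactly one natural transformation per element of G[o] = X, each determined by η(id_o).

```python
from itertools import product

# One-object category = the monoid M = (Z_4, +); Hom(o, o) = M, id_o = 0.
M = range(4)
compose = lambda p, q: (p + q) % 4          # p' acts by post-composition

# A functor G : C -> Set is an M-set; here X = Z_2 with m acting as +m mod 2.
X = range(2)
act = lambda m, x: (x + m) % 2              # G[m] : X -> X

# Natural transformations eta : Hom(o, -) -> G are maps M -> X with
# eta(compose(p, q)) == act(p, eta(q)) for all p, q  (the naturality square).
natural = [eta for eta in product(X, repeat=4)
           if all(eta[compose(p, q)] == act(p, eta[q])
                  for p in M for q in M)]

# Yoneda: each eta is determined by eta[0] = eta(id_o), an element of G[o] = X,
# and every element of G[o] arises this way.
assert len(natural) == len(X)
assert sorted(eta[0] for eta in natural) == list(X)
```

Out of the 2⁴ = 16 candidate maps, only two survive the naturality check, one for each possible value of η(id_o), exactly as the derivation above predicts.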
Integrating sphere - Wikipedia An integrating sphere (also known as an Ulbricht sphere) is an optical component consisting of a hollow spherical cavity with its interior covered with a diffuse white reflective coating, with small holes for entrance and exit ports. Its relevant property is a uniform scattering or diffusing effect. Light rays incident on any point on the inner surface are, by multiple scattering reflections, distributed equally to all other points. The effects of the original direction of light are minimized. An integrating sphere may be thought of as a diffuser which preserves power but destroys spatial information. It is typically used with some light source and a detector for optical power measurement. A similar device is the focusing or Coblentz sphere, which differs in that it has a mirror-like (specular) inner surface rather than a diffuse inner surface. Large integrating sphere for measurement on light bulbs and small lamps In 1892, W. E. Sumpner published an expression for the throughput of a spherical enclosure with diffusely reflecting walls.[1] R. Ulbricht developed a practical realization of the integrating sphere, the topic of a publication in 1900.[2] It has become a standard instrument in photometry and radiometry and has the advantage over a goniophotometer that the total power produced by a source can be obtained in a single measurement. Other shapes, such as a cubical box, have also been theoretically analyzed.[3] Even small commercial integrating spheres cost many thousands of dollars; as a result, their use is often limited to industry and large academic institutions. However, 3D printing and homemade coatings have enabled the production of experimentally accurate DIY spheres at very low cost.[4] The theory of integrating spheres is based on these assumptions: Light hitting the sides of the sphere is scattered in a diffuse way, i.e.
Lambertian reflectance. Only light that has been diffused in the sphere hits the ports or detectors used for probing the light. Using these assumptions the sphere multiplier can be calculated. This number is the average number of times a photon is scattered in the sphere before it is absorbed in the coating or escapes through a port. This number increases with the reflectivity of the sphere coating and decreases with the ratio between the total area of ports and other absorbing objects and the sphere's inner area. To get high homogeneity, a sphere multiplier of 10–25 is recommended.[5] The theory further states that if the above criteria are fulfilled, then the irradiance on any area element of the sphere will be proportional to the total radiant flux input to the sphere. Absolute measurements of, for instance, luminous flux can then be done by measuring a known light source and determining the transfer function or calibration curve. Total exit irradiance. For a sphere with radius r, reflection coefficient ρ, and source flux Φ, the initial reflected irradiance is equal to: E_0 = \rho \frac{\Phi}{4\pi r^{2}} . Each subsequent reflection multiplies the irradiance by a further factor of ρ, so the contributions form a geometric series: E = \frac{\Phi}{4\pi r^{2}}\,\rho(1+\rho+\rho^{2}+\cdots) . Since ρ < 1, the geometric series converges and the total exit irradiance is:[6] E = \frac{\Phi}{4\pi r^{2}}\,\frac{\rho}{1-\rho} . Simplified principle of the use of an integrating sphere to measure the transmittance and reflectance of a test sample. Integrating spheres are used for a variety of optical, photometric or radiometric measurements. They are used to measure the total light radiated in all directions from a lamp.
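The closed-form series above is easy to evaluate numerically. A small sketch (the 1 W source, 10 cm radius and ρ = 0.95 are illustrative numbers, not from the article):

```python
import math

def total_exit_irradiance(flux_w, radius_m, rho):
    """Total exit irradiance E = (Phi / (4*pi*r^2)) * rho / (1 - rho),
    i.e. the summed geometric series of diffuse reflections (rho < 1)."""
    assert 0 <= rho < 1, "series only converges for rho < 1"
    initial = flux_w / (4 * math.pi * radius_m**2)  # first-bounce term / rho
    return initial * rho / (1 - rho)

# 1 W source in a 10 cm-radius sphere with a 95% reflective coating:
E = total_exit_irradiance(1.0, 0.10, 0.95)
print(f"{E:.1f} W/m^2")  # → 151.2 W/m^2
```

Note how sensitive the result is to ρ near 1: raising the coating reflectance from 0.95 to 0.98 more than doubles E, which is why coating quality dominates sphere performance.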
An integrating sphere can be used to create a light source with apparent intensity uniform over all positions within its circular aperture, and independent of direction except for the cosine function inherent to ideally diffuse radiating surfaces (Lambertian surfaces). An integrating sphere can be used to measure the diffuse reflectance of surfaces, providing an average over all angles of illumination and observation. A number of methods exist to measure the absolute reflectance of a test object mounted on an integrating sphere. In 1916, E. B. Rosa and A. H. Taylor published the first such method.[7] Subsequent work by A. H. Taylor,[8][9] Frank A. Benford,[10][11] C. H. Sharpe & W. F. Little,[12] Enoch Karrer,[13] and Leonard Hanssen & Simon Kaplan[14][15] expanded the number of unique methods which measure port-mounted test objects. Edwards et al.,[16] Korte & Schmidt,[17] and Van den Akker et al.[18] developed methods which measure center-mounted test objects. Light scattered by the interior of the integrating sphere is evenly distributed over all angles. The integrating sphere is used in optical measurements. The total power (flux) of a light source can be measured without inaccuracy caused by the directional characteristics of the source, or the measurement device. Reflection and absorption of samples can be studied. The sphere creates a reference radiation source that can be used to provide a photometric standard. Since all the light incident on the input port is collected, a detector connected to an integrating sphere can accurately measure the sum of all the ambient light incident on a small circular aperture. The total power of a laser beam can be measured, free from the effects of beam shape, incident direction, and incident position, as well as polarization. Commercial integrating sphere. 
This particular model from Electro Optical Industries employs four separate lamps that can be specified to achieve the required spectral output from ultraviolet through infrared. The optical properties of the lining of the sphere greatly affect its accuracy. Different coatings must be used at visible, infrared and ultraviolet wavelengths. High-powered illumination sources may heat or damage the coating, so an integrating sphere will be rated for a maximum level of incident power. Various coating materials are used. For visible-spectrum light, early experimenters used a deposit of magnesium oxide, and barium sulfate also has a usefully flat reflectance over the visible spectrum. Various proprietary PTFE compounds are also used for visible light measurements. Finely deposited gold is used for infrared measurements. An important requirement for the coating material is the absence of fluorescence. Fluorescent materials absorb short-wavelength light and re-emit light at longer wavelengths. Due to the many scatterings, this effect is much more pronounced in an integrating sphere than for materials irradiated normally. The theory of the integrating sphere assumes a uniform inside surface with diffuse reflectivity approaching 100%. Openings where light can exit or enter, used for detectors and sources, are normally called ports. The total area of all ports must be small, less than about 5% of the surface area of the sphere, for the theoretical assumptions to be valid. Unused ports should therefore have matching plugs, with the interior surface of the plug coated with the same material as the rest of the sphere. Integrating spheres vary in size from a few centimeters in diameter up to a few meters in diameter. Smaller spheres are typically used to diffuse incoming radiation, while larger spheres are used to measure integrating properties like the luminous flux of a lamp or luminaire, which is then placed inside the sphere.
If the entering light is incoherent (rather than a laser beam), then it typically fills the source-port, and the ratio of source-port area to detector-port area is relevant. Baffles are normally inserted in the sphere to block the direct path of light from a source-port to a detector-port, since this light will have non-uniform distribution.[19] Sculpture of an integrating sphere, located on the campus of the Technical University of Dresden. ^ Sumpner, W. E. (1892). "The diffusion of light". Proceedings of the Physical Society of London. 12 (1): 10–29. doi:10.1088/1478-7814/12/1/304. ^ Ulbricht, R. (1900). "Die Bestimmung der mittleren räumlichen Lichtintensität durch nur eine Messung" [The determination of the mean spatial light intensity by a single measurement]. Elektrotechnische Zeitschrift (in German). 21: 595–610. ^ Sumpner, W. E. (1910). "The direct measurement of the total light emitted from a lamp". The Illuminating Engineer. 3: 323. ^ Tomes, John J.; Finlayson, Chris E. "Low cost 3D-printing used in an undergraduate project: an integrating sphere for measurement of photoluminescence quantum yield". European Journal of Physics. 37 (5): 055501. doi:10.1088/0143-0807/37/5/055501. ISSN 0143-0807. Retrieved 2021-10-12. ^ "Integrating Sphere Design and Applications" (PDF). SphereOptics. SphereOptics LLC. Archived from the original (PDF) on 2009-08-15. ^ Schott, John R. (2007). Remote Sensing: The Image Chain Approach. Oxford University Press. ISBN 978-0-19-972439-0. Retrieved 17 June 2020. ^ Rosa, E. B.; Taylor, A. H. (1916). "The integrating photometric sphere, its construction and use". Transactions of the Illumination Engineering Society. 11: 453. ^ Taylor, A. H. (1920). "The Measurement of Diffuse Reflection Factors and a New Absolute Reflectometer". Journal of the Optical Society of America. 4 (1): 9–23. doi:10.1364/JOSA.4.000009. Retrieved 2021-10-12. ^ Taylor, A. H. (1935). "Errors in Reflectometry". Journal of the Optical Society of America. 25 (2): 51–56. doi:10.1364/JOSA.25.000051. Retrieved 2021-10-12. ^ Benford, Frank A. (1920).
"An absolute method for determining coefficients of diffuse reflection". General Electric Review. 23: 72–75. ^ Benford, Frank A. (1934). "A Reflectometer for All Types of Surfaces". Journal of the Optical Society of America. 24 (7): 165–174. doi:10.1364/JOSA.24.000165. Retrieved 2021-10-12. ^ Sharpe, C. H.; Little, W. F. (1920). "Measurements of Reflectance Factors". Transactions of the Illumination Engineering Society. 15: 802. ^ Karrer, Enoch (1921). "Use of the Ulbricht Sphere in measuring reflection and transmission factors". Scientific Papers of the Bureau of Standards. 17: 203–225. doi:10.6028/nbsscipaper.092. Retrieved 2021-10-12. ^ Hanssen, Leonard; Kaplan, Simon (1999-02-02). "Infrared diffuse reflectance instrumentation and standards at NIST". Analytica Chimica Acta. 380 (2–3): 289–302. doi:10.1016/S0003-2670(98)00669-2. Retrieved 2021-10-12. ^ Hanssen, Leonard (2001-07-01). "Integrating-sphere system and method for absolute measurement of transmittance, reflectance, and absorptance of specular samples". Applied Optics. 40 (19): 3196–3204. Bibcode:2001ApOpt..40.3196H. doi:10.1364/AO.40.003196. PMID 11958259. Retrieved 2021-10-12. ^ Edwards, D. K.; Gier, J. T.; Nelson, K. E.; Roddick, R. D. (1961). "Integrating Sphere for Imperfectly Diffuse Samples". Journal of the Optical Society of America. 51 (11): 1279–1288. doi:10.1364/JOSA.51.001279. Retrieved 2021-10-12. ^ Korte, H.; Schmidt, M. (1967). "Über Messungen des Leuchtdichtefaktors an beliebig reflektierenden Proben". Lichttechnik (in German). 19: 135A–137A. ^ Van den Akker, J. A.; Dearth, Leonard R.; Shillcox, Wayne M. (1966). "Evaluation of Absolute Reflectance for Standardization Purposes". Journal of the Optical Society of America. 56 (2): 250–252. doi:10.1364/JOSA.56.000250. Retrieved 2021-10-12. ^ Hanssen, Leonard M.; Prokhorov, Alexander V.; Khromchenko, Vladimir B. (2003-11-14). Specular baffle for improved infrared integrating sphere performance. 
Optical Science and Technology, SPIE's 48th Annual Meeting. Vol. 5192. San Diego, California, United States: SPIE. doi:10.1117/12.508299. Retrieved 2021-10-12. RP Photonics, Encyclopedia of Laser Physics and Technology, Integrating spheres. Pike Technologies, Integrating Spheres – Introduction and Theory, Pike Technologies Application Note. Newport, Flange Mount Integrating Spheres. Whitehead, Lorne A.; Mossman, Michele A. (2006). "Jack O'Lanterns and integrating spheres: Halloween Physics". American Journal of Physics. 74 (6): 537–541. Bibcode:2006AmJPh..74..537W. doi:10.1119/1.2190687. Ducharme, Alfred; Daniels, Arnold; Grann, Eric; Boreman, Glenn (1997). "Design of an Integrating Sphere as a Uniform Illumination Source". IEEE Transactions on Education. 40 (2): 131–134. Bibcode:1997ITEdu..40..131D. doi:10.1109/13.572326. Peter Hiscocks, Integrating Sphere for Luminance Calibration, Rev 6, May 2016. Ci Systems, Integrating sphere introduction, mechanical structure, calibration and sources. Electro-Optical Industries, Integrating Spheres. The Status of Integrating Sphere in China.
The power loss is P_{loss} = \Delta p_{LS} \cdot Q_{tot} , where \Delta p_{LS} is around 2 MPa (290 psi). If the pump flow is high, the extra loss can be considerable. The power loss also increases if the load pressures vary a lot. The cylinder areas, motor displacements and mechanical torque arms must be designed to match load pressure in order to bring down the power losses. Pump pressure always equals the maximum load pressure when several functions are run simultaneously, and the power input to the pump equals (maximum load pressure + \Delta p_{LS}) × the sum of the flows.
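The loss formula can be put to numbers quickly. A sketch with illustrative values (the 100 L/min flow is my assumption, not from the text; the 2 MPa margin is the figure quoted above):

```python
# Quick check of the power loss P_loss = dp_LS * Q_tot.
# Values are illustrative, not from a specific pump datasheet.

dp_ls = 2.0e6          # pressure margin, Pa (about 2 MPa / 290 psi)
q_tot = 100 / 60000    # total pump flow: 100 L/min converted to m^3/s

p_loss = dp_ls * q_tot  # watts dissipated across the margin
print(f"{p_loss:.0f} W")  # → 3333 W
```

Roughly 3.3 kW lost at full flow illustrates the point made above: at high pump flows, the seemingly small 2 MPa margin turns into a considerable power loss.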
Not All ROEs Are the Same Return on equity (ROE) is a ratio that provides investors with insight into how efficiently a company (or more specifically, its management team) is handling the money that shareholders have contributed to it. In other words, return on equity measures the profitability of a corporation in relation to stockholders’ equity. The higher the ROE, the more efficient a company's management is at generating income and growth from its equity financing. ROE is often used to compare a company to its competitors and the overall market. The formula is especially beneficial when comparing firms of the same industry, since it tends to give accurate indications of which companies are operating with greater financial efficiency, and for the evaluation of nearly any company with primarily tangible rather than intangible assets. Return on equity (ROE) is a financial ratio that shows how well a company is managing the capital that shareholders have invested in it. To calculate ROE, one would divide net income by shareholder equity. The higher the ROE, the more efficient a company's management is at generating income and growth from its equity financing. When utilizing ROE to compare companies, it is important to compare companies within the same industry, as with all financial ratios. Formula and Calculation of Return on Equity (ROE) The basic formula for calculating ROE is: ROE = \frac{\text{Net Income}}{\text{Shareholder Equity}} The net income is the bottom-line profit—before common-stock dividends are paid—reported on a firm’s income statement. Free cash flow (FCF) is another form of profitability and can be used instead of net income. Shareholder equity is assets minus liabilities on a firm’s balance sheet and is the accounting value that's left for shareholders should a company settle its liabilities with its reported assets. Note that ROE is not to be confused with the return on total assets (ROTA).
While it is also a profitability metric, ROTA is calculated by taking a company's earnings before interest and taxes (EBIT) and dividing it by the company's total assets. ROE can also be calculated at different periods to compare its change in value over time. By comparing the change in ROE's growth rate from year to year or quarter to quarter, for example, investors can track changes in management's performance. The ROE of the entire stock market as measured by the S&P 500 was 12% in the fourth quarter of 2020. A first, critical component of deciding how to invest involves comparing certain industrial sectors to the overall market. For example, a look at ROE figures categorized by industry might show the stocks of the railroad sector performing very well compared to the market as a whole, with an ROE value of 19.66%, while the general utilities and retail sales sectors had ROEs of 5.77% and 18.11%, respectively. This could indicate that railroad companies have been a steady growth industry and have provided excellent returns to investors. The next step involves looking at individual companies to compare their ROEs with the market as a whole and with companies within their industry. For instance, at the end of FY 2019, Procter & Gamble (PG) reported a net income of $4 billion and total shareholders' equity of $47.6 billion. Thus, PG's ROE as of FY 2019 was: $4 billion ÷ $47.6 billion = 8.4% P&G's ROE was below the average ROE for the consumer goods sector of 14.41% at that time. In other words, for every dollar of shareholders' equity, P&G generated 8.4 cents in profit. Measuring a company's ROE performance against that of its sector is only one comparison. For example, in the fourth quarter of 2020, Bank of America Corporation (BAC) had an ROE of 8.4%. According to the Federal Deposit Insurance Corporation (FDIC), the average ROE for the banking industry during the same period was 6.88%. In other words, Bank of America outperformed the industry. 
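The P&G arithmetic above can be reproduced directly:

```python
# ROE = net income / shareholder equity, using the P&G FY2019 figures
# quoted above ($4B net income, $47.6B total shareholders' equity).

def return_on_equity(net_income, shareholder_equity):
    return net_income / shareholder_equity

roe = return_on_equity(4.0e9, 47.6e9)
print(f"{roe:.1%}")  # → 8.4%
```

That is, every dollar of shareholders' equity generated 8.4 cents of profit, the figure compared against the 14.41% consumer-goods sector average above.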
In addition, the FDIC calculations deal with all banks, including commercial, consumer, and community banks. The ROE for commercial banks was 5.62% in the fourth quarter of 2020. Since Bank of America is, in part, a commercial lender, its ROE was above that of other commercial banks. In short, it's not only important to compare the ROE of a company to the industry average but also to similar companies within that industry. In evaluating companies, some investors use other measurements too, such as return on capital employed (ROCE) and return on operating capital (ROOC). Investors often use ROCE instead of the standard ROE when judging the longevity of a company. Generally speaking, both are more useful indicators for capital-intensive businesses, such as utilities or manufacturing. When Shareholder Equity Is Negative There can be circumstances when a company's equity is negative. This usually occurs when a company has incurred losses for a period of time and has had to borrow money to continue staying in business. In this case, liabilities will be greater than assets. If one were to calculate return on equity in this scenario when profits are positive, they would arrive at a negative ROE; however, this number would not be telling the entire story: a company operating at a loss with positive shareholder equity would also show a negative ROE, so a negative ROE alone cannot distinguish the two situations. In a situation when ROE is negative because of negative shareholder equity, the higher the negative ROE, the better. This is so because it would mean profits are that much higher, indicating possible long-term financial viability for the company. ROE will always tell a different story depending on the financials, such as if equity changes because of share buybacks or income is small or negative due to a one-time write-off. Understanding the components is critical. What Does Return on Equity Tell You?
ROE tells you how efficiently a company can generate profits. Generally the higher the ROE the better, but it is best to compare companies within the same industry or sector. What Is the Average ROE for U.S. Stocks? The S&P 500 had an average ROE in 2021 of 18.4%. Of course, different industry groups will have ROEs that are typically higher or lower than this average. How Do You Calculate ROE Using DuPont Analysis? ROE can be alternatively calculated using DuPont analysis. There are two common versions, one decomposing ROE into three factors and a second, extended version into more: ROE = Net Profit Margin × Asset Turnover × Equity Multiplier ROE = (Earnings before tax / Sales) × (Sales / Assets) × (Assets / Equity) × (1 − Tax Rate) Return on equity (ROE) is an important financial metric that investors can use to determine how efficient management is at utilizing equity financing provided by shareholders. It compares the net income to the equity of the firm. The higher the number, the better, but it is always important to measure apples to apples, meaning companies that operate in the same industry, as each industry has different characteristics that will alter their profits and use of financing. As with all investment analysis, ROE is just one metric highlighting only a portion of a firm's financials. It is critical to utilize a variety of financial metrics to get a full understanding of a company's financial health before investing. CSIMarket. "Management Effectiveness Information & Trends - Total Market - 2020." CSIMarket. "Management Effectiveness Information & Trends - Railroads Industry - 2020." CSIMarket. "Management Effectiveness Information & Trends - Utilities Sector - 2020." CSIMarket. "Management Effectiveness Information & Trends - Retail Sector - 2020." Procter & Gamble. "2019 Annual Report," Pages 36 and 38. CSIMarket. "Management Effectiveness Information & Trends - Consumer Non-cyclical - 2019." Bank of America.
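The DuPont decompositions above can be sanity-checked by multiplying the factors back together; with made-up figures, both versions collapse to plain net income over equity:

```python
# Sketch: the DuPont decompositions multiply back to the basic ROE.
# All figures below are made up for illustration.

sales, assets, equity = 100.0, 200.0, 50.0
ebt, tax_rate = 12.0, 0.25
net_income = ebt * (1 - tax_rate)  # 9.0

roe_direct = net_income / equity
# Three-factor version: margin x turnover x leverage.
roe_3factor = (net_income / sales) * (sales / assets) * (assets / equity)
# Extended version with the tax rate broken out.
roe_extended = (ebt / sales) * (sales / assets) * (assets / equity) * (1 - tax_rate)

assert abs(roe_direct - roe_3factor) < 1e-12
assert abs(roe_direct - roe_extended) < 1e-12
print(f"{roe_direct:.1%}")  # → 18.0%
```

The point of the decomposition is diagnostic: two firms with the same 18% ROE can get there via very different margin, turnover and leverage profiles.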
Investor Relations. Quarterly Earnings. Q4. "News Release". Page 1. Federal Deposit Insurance Corporation. "Quarterly Banking Profile - Fourth Quarter 2020," Page 5. CSIMarket. "Management Effectiveness Information & Trends - Commercial Banks - 2020." CSIMarket. "S&P 500."
Span (engineering) - Wikipedia Distance between supports of an arch, bridge, etc. Side view of a simply supported beam (top) bending under an evenly distributed load (bottom). The span is a significant factor in finding the strength and size of a beam, as it determines the maximum bending moment and deflection. The maximum bending moment M_{max} and deflection \delta_{max} in the pictured beam are found using:[1] M_{max} = \frac{qL^{2}}{8} \qquad \delta_{max} = \frac{5M_{max}L^{2}}{48EI} = \frac{5qL^{4}}{384EI} where q = uniformly distributed load, L = length of the beam between the two supports (span), E = modulus of elasticity, I = area moment of inertia. Note that the maximum bending moment and deflection occur midway between the two supports. From this it follows that if the span is doubled, the maximum moment (and with it the stress) will quadruple, and the deflection will increase by a factor of sixteen. For long-distance rope spans, used as power lines, antennas or for aerial tramways, see list of spans. ^ Gere, James M.; Goodno, Barry J. Mechanics of Materials (Eighth ed.). p. 1086. ISBN 978-1-111-57773-5.
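The scaling claims above (moment ×4, deflection ×16 when the span doubles) can be checked numerically. A sketch with illustrative numbers (the 4 m beam, E and I values are my assumptions, not from the article):

```python
# Maximum moment and deflection of a simply supported beam under a
# uniformly distributed load, per the formulas above.

def simply_supported_udl(q, L, E, I):
    m_max = q * L**2 / 8                   # max bending moment, at midspan
    d_max = 5 * q * L**4 / (384 * E * I)   # max deflection, at midspan
    return m_max, d_max

# Illustrative: a 4 m steel beam (E = 200 GPa, I = 8.0e-6 m^4), 5 kN/m load.
m, d = simply_supported_udl(q=5000.0, L=4.0, E=200e9, I=8.0e-6)
print(f"M_max = {m/1000:.0f} kN*m, deflection = {d*1000:.2f} mm")

# Doubling the span quadruples the moment and multiplies deflection by 16:
m2, d2 = simply_supported_udl(q=5000.0, L=8.0, E=200e9, I=8.0e-6)
assert abs(m2 / m - 4) < 1e-9 and abs(d2 / d - 16) < 1e-9
```

With these numbers the 4 m beam carries 10 kN·m at midspan and sags a little over 10 mm; the asserts confirm the quartic span dependence of deflection.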
FAQ - ICE Poker What is the ICE token contract? You can find the ICE token contract on Polygon here. Where can I trade ICE? You can trade ICE on QuickSwap through the ICE-USDC pool here. How do I play and earn? Step 1: Player Buys (or Receives Delegation for) ICE Wearable. A player purchases or receives delegation for an ICE Wearable. This enables the player to earn ICE from playing poker with Chips. Step 2: Player Plays And Earns ICE By Playing Poker. Players use Chips to complete daily challenges and compete for ICE multipliers for placing in the daily leaderboard. Step 3: Player Upgrades NFTs by burning ICE. Players upgrade their ICE Wearables by burning ICE. A higher ranked wearable yields a larger ICE bonus and a new, more exclusive look. Which token(s) can I use to play and earn? Initially, we will only support gameplay in Chips tokens to play and earn ICE. You receive a variable amount of Chips based on how many wearables you own or have delegated to you. Chips are off-chain tokens given to each NFT holder to use to complete daily challenges and battle in the daily leaderboard. How many Chips do I get? Each player starts with a specific amount of Chips based on the amount of wearables they have equipped or delegated to them. Your Chips balance is reset each day at midnight UTC. How do I earn ICE rewards? You can earn ICE by checking in (winning one hand) and completing your allocated daily challenges. You then may amplify these ICE rewards by performing well on the daily leaderboard through a positive performance multiplier. When do I get ICE rewards? You receive ICE rewards at the end of each day at midnight UTC. Your total ICE reward is based on your daily challenges amount times your performance multiplier based on your daily leaderboard percentile. These ICE rewards are claimable on the DG Account page. What is a performance multiplier? A performance multiplier is based on your daily performance with respect to all the other players.
Your performance multiplier is determined by your daily percentile ranking in net profits, which are calculated as: Score_{Player} = Chips_{Current} - Chips_{Starting} This multiplier adds to or subtracts from your total ICE payout at the end of each day. What happens if I send an ICE Wearable to another address? If you send an ICE Wearable to another address, then it must be reactivated for 500 DG.
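Putting the two formulas from the FAQ together — daily score from Chips, then payout as challenge rewards times the multiplier — a hypothetical sketch (the chip counts, reward amount and 1.5× multiplier are invented; the FAQ does not publish the actual multiplier table):

```python
# Hypothetical sketch of the daily ICE payout mechanics described above.
# Concrete numbers are made up for illustration.

def daily_score(chips_current, chips_starting):
    # Net profit used for the daily leaderboard percentile ranking.
    return chips_current - chips_starting

def ice_payout(challenge_reward, performance_multiplier):
    # Total daily ICE = daily challenge rewards * performance multiplier.
    return challenge_reward * performance_multiplier

score = daily_score(chips_current=12_000, chips_starting=10_000)
payout = ice_payout(challenge_reward=100, performance_multiplier=1.5)
print(score, payout)  # → 2000 150.0
```

A player who finishes the day up 2,000 Chips ranks by that score; with a 1.5× multiplier, 100 ICE of challenge rewards becomes 150 ICE.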
§ Subobject classifier measures how much we need to pay to access a fact. Truths are free: given A \rightarrow B , the subobject classifier assigns the full monoid to the image of A in B , so we pay none of the monoid. We go bankrupt trying to prove really false things: the subobject classifier assigns the empty set. For things that are truth-adjacent, we spend some of our monoid (by dividing the filter) to get to the element. So we "spend money" to "fix the lie" of the element in B not being truthful, but close enough to the truth.
Internal resistance Internal resistance model of a source of voltage. A battery may be modeled as a voltage source in series with a resistance. These types of models are known as equivalent circuit models; another common class is physicochemical models, which are physical in nature and involve concentrations and reaction rates. In practice, the internal resistance of a battery is dependent on its size, state of charge, chemical properties, age, temperature, and the discharge current. It has an electronic component due to the resistivity of the component materials and an ionic component due to electrochemical factors such as electrolyte conductivity, ion mobility, speed of electrochemical reaction and electrode surface area. Measurement of the internal resistance of a battery is a guide to its condition, but may not apply at other than the test conditions. Measurement with an alternating current, typically at a frequency of 1 kHz, may underestimate the resistance, as the frequency may be too high to take into account slower electrochemical processes. Internal resistance depends on temperature; for example, a fresh Energizer E91 AA alkaline primary battery drops from about 0.9 Ω at −40 °C, when the low temperature reduces ion mobility, to about 0.15 Ω at room temperature and about 0.1 Ω at 40 °C.[1] A large part of this drop is due to the increase in the magnitude of the electrolyte diffusion coefficient. If a source shows a no-load voltage V_{NL} and a voltage V_{FL} across a load resistance R_{L}, its internal resistance is: R_{\text{int}}=\left({\frac {V_{\text{NL}}}{V_{\text{FL}}}}-1\right)R_{\text{L}} This can also be expressed in terms of the overpotential η and the current I: R_{\text{int}}={\frac {\eta }{I}} ^ "Battery Internal Resistance" (PDF). Energizer Technical Bulletin. Energizer Battery. December 2005. ^ Testing batteries with ESR meter ^ Understanding RC LiPo Batteries ^ ESR Meter For 2 – 6 Cell Lipo Packs – instructions
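The load-test formula above is straightforward to apply. A sketch with illustrative readings (the 1.60 V / 1.50 V / 1.5 Ω values are my assumptions, not from the article):

```python
# Internal resistance from a simple load test:
# R_int = (V_NL / V_FL - 1) * R_L, per the formula above.

def internal_resistance(v_no_load, v_full_load, r_load):
    return (v_no_load / v_full_load - 1) * r_load

# A cell reads 1.60 V open-circuit and 1.50 V across a 1.5-ohm load:
r_int = internal_resistance(v_no_load=1.60, v_full_load=1.50, r_load=1.5)
print(f"{r_int:.2f} ohm")  # → 0.10 ohm
```

Note that, as the article warns, a single DC load test like this reflects only the test conditions; an AC measurement at 1 kHz on the same cell could give a lower figure.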
Belleville washer A Belleville washer, also known as a coned-disc spring,[1] conical spring washer,[2] disc spring, Belleville spring or cupped spring washer, is a conical shell which can be loaded along its axis either statically or dynamically. A Belleville washer is, then, a type of spring shaped like a washer. It is the frusto-conical shape that gives the washer its spring characteristic. The "Belleville" name comes from the inventor Julien Belleville, who in Dunkerque, France, in 1867 patented a spring design which already contained the principle of the disc spring.[1][3] The real inventor of Belleville washers is, however, unknown. Through the years, many different profiles for disc springs have been developed. Today the most used are the profiles with or without contact flats, while some other profiles, like disc springs with trapezoidal cross-section, have lost importance.
Over the years, the demand for ever greater precision in calibration processes, the availability of new materials, and new, more accurate calculation methods have led to the introduction of the disc spring into all areas of technology, and disc springs are now used in many industries: automotive, building, defense and so on. In these different fields, whether they are used as springs or to apply a flexible pre-load to a bolted joint or bearing, Belleville washers can be used as a single spring or as a stack. In a spring stack, disc springs can be stacked in the same or in alternating directions, and it is of course possible to stack packets of multiple springs facing the same direction. Disc springs have a number of advantageous properties compared to other types of springs:[4] Very large loads can be supported with a small installation space, Due to the nearly unlimited number of possible combinations of individual disc springs, the characteristic curve and the column length can be further varied within additional limits, High service life under dynamic load if the spring is properly dimensioned, Provided the permissible stress is not exceeded, no impermissible relaxation occurs, With suitable arrangement, a large damping (high hysteresis) effect may be achieved. Thanks to these advantageous properties, Belleville washers are today used in a large number of fields; some examples are listed in the following. In the defense industry, Belleville springs are used, for instance, in a number of landmines, e.g. the American M19, M15, M14, M1 and the Swedish Tret-Mi.59. The target (a person or vehicle) exerts pressure on the Belleville spring, causing it to exceed a trigger threshold and flip the adjacent firing pin downwards into a stab detonator, firing both it and the surrounding booster charge and main explosive filling.
Belleville washers have been used as return springs in artillery pieces, one example being the French Canet range of marine/coastal cannon from the late 1800s (75 mm, 120 mm, 152 mm). Some makers of bolt-action target rifles use Belleville washer stacks in the bolt instead of a more traditional spring to release the firing pin, as they reduce the time between trigger actuation and firing pin impact on the cartridge.[5] They may also be used as locking devices, but only in applications with low dynamic loads, such as down-tube shifters for bicycles. Another example where they aid locking is a joint that experiences a large amount of thermal expansion and contraction. They will supply the required pre-load, but the bolt may have an additional locking mechanism (like thread-locking fluid) that would fail without the Belleville. Belleville washers should never overhang the joint they are trying to secure, as all of their clamping force is on the outside perimeter edge of the washer. If a Belleville washer overhangs a joint, its clamping effectiveness is drastically reduced, and it is a poor application that will eventually fail under thermal expansion and contraction. Belleville washers, when used on the mounting bolts, are also useful as an indicator of swelling or shrinkage of wooden propellers on aircraft (typically experimental aircraft). By torquing their associated bolts to provide a specific gap between sets of washers placed with "high ends" facing each other, a change in relative moisture content in the propeller wood will result in a change of the gaps which is often great enough to be detected visually. As propeller balance depends on the weight of blades being equal, a radical difference in Belleville washer gaps may indicate a difference in moisture content—and thus weight—in the adjacent blades.
As they provide extremely detailed tuning ability, disc springs are used in the automotive (even in Formula One cars[6]) and aircraft industries as vibration-damping elements: the Cirrus SR2x series uses a Belleville washer setup to damp out nose gear oscillations (or "shimmy").[7] In the building industry, in Japan, stacks of disc springs have been used under buildings as vibration dampers for earthquakes.[8] Multiple Belleville washers may be stacked to modify the spring constant (or spring rate) or the amount of deflection. Stacking in the same direction will add the spring constant in parallel, creating a stiffer joint (with the same deflection). Stacking in an alternating direction is the same as adding common springs in series, resulting in a lower spring constant and greater deflection. Mixing and matching directions allows a specific spring constant and deflection capacity to be designed. Generally, if n disc springs are stacked in parallel (facing the same direction), the deflection of the whole stack under a given load is equal to that of one disc spring divided by n; to obtain the same deflection as a single disc spring, the applied load has to be n times that of a single disc spring. On the other hand, if n washers are stacked in series (facing in alternating directions), the deflection under a given load is equal to n times that of one washer, while the load to apply to the whole stack to obtain the same deflection as one disc spring is that of a single disc spring divided by n. In a parallel stack, hysteresis (load losses) will occur due to friction between the springs. The hysteresis losses can be advantageous in some systems because of the added damping and dissipation of vibration energy. This loss due to friction can be calculated using hysteresis methods. Ideally, no more than 4 springs should be placed in parallel.
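The series/parallel rules above amount to the familiar spring-rate algebra: n washers in parallel multiply the rate by n, n in series divide it by n. A sketch (the 1000 N/mm single-washer rate is illustrative):

```python
# Equivalent spring rate of a Belleville stack, per the rules above:
# n in parallel -> n * k (same deflection needs n times the load),
# n in series   -> k / n (n times the deflection at the same load).

def stack_rate(k_single, n_parallel=1, n_series=1):
    """Rate of n_series groups in series, each of n_parallel washers."""
    return k_single * n_parallel / n_series

k = 1000.0  # rate of one washer, N/mm (illustrative value)

assert stack_rate(k, n_parallel=3) == 3000.0  # stiffer joint
assert stack_rate(k, n_series=4) == 250.0     # softer, more travel
# Mixing: packets of 2 in parallel, repeated 3 times in series -> 2k/3:
print(stack_rate(k, n_parallel=2, n_series=3))
```

This is the "mix and match" point made above: by choosing the packet size and the number of packets, essentially any intermediate rate and travel can be dialed in.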
If a greater load is required, then the factor of safety must be increased in order to compensate for the loss of load due to friction. Friction loss is not as much of an issue in series stacks. In a series stack, the deflection is not exactly proportional to the number of springs. This is because of a bottoming-out effect when the springs are compressed to flat: the contact surface area increases once the spring is deflected beyond 95%, which decreases the moment arm, so the spring offers a greater resistance. Hysteresis can be used to calculate predicted deflections in a series stack. The number of springs used in a series stack is not as much of an issue as in parallel stacks, although, generally, the stack height should not be greater than three times the outside diameter of the disc spring. If a longer stack cannot be avoided, then it should be divided into two or possibly three partial stacks with suitable washers. These washers should be guided as exactly as possible. As noted above, Belleville washers are useful for adjustments because different thicknesses can be swapped in and out, and they can be configured to achieve essentially infinite tunability of spring rate while only filling up a small part of the technician's tool box. They are ideal in situations where a heavy spring force is required with minimal free length and compression before reaching solid height. The downside, though, is weight, and they are severely travel-limited compared to a conventional coil spring when free length is not an issue. A wave washer also acts as a spring, but wave washers of comparable size do not produce as much force as Belleville washers, nor can they be stacked in series.
Disc springs with contact flats and reduced thickness

For disc springs with a thickness of more than 6.0 mm, DIN 2093 specifies small contact surfaces at points I and III (that is, the point where the load is applied and the point where the spring rests against its seat) in addition to the rounded corners. These contact flats improve the definition of the point of load application and, particularly for spring stacks, reduce friction at the guide rod. The result is a considerable reduction in the lever arm length and a corresponding increase in the spring load. This is in turn compensated for by a reduction in the spring thickness. The reduced thickness is specified in accordance with the following conditions:[4]

The overall height remains unaltered.
The width of the contact flats (that is, the annulus width) is to be approximately 1/150 of the outside diameter.
The load applied to the reduced-thickness spring to obtain a deflection equal to 75% of the free height (of an unreduced spring) must be the same as for an unreduced spring.

As the overall height is not reduced, springs with reduced thickness inevitably have an increased flank angle and a greater cone height than springs of the same nominal dimension without reduced thickness.[4] The characteristic curve is therefore noticeably altered. Starting from 1936, when J. O. Almen and A. László published a simplified method of calculation,[9] increasingly accurate and complex methods have appeared, also extending the calculations to disc springs with contact flats and reduced thickness. So, although more accurate methods of calculation exist today,[10] the most widely used are the simple and convenient formulas of DIN 2092 since, for standard dimensions, they produce values which correspond well to measured results.
Considering a Belleville washer with outside diameter $D_e$, inside diameter $D_i$, height $l$ and thickness $t$, where $h_0 = l - t$ is the free height (that is, the difference between the height and the thickness), the following coefficients are obtained:

$$\delta = \frac{D_e}{D_i}$$

$$C_1 = \frac{\left(\frac{t'}{t}\right)^2}{\left(\frac{1}{4}\cdot\frac{l}{t} - \frac{t'}{t} + \frac{3}{4}\right)\cdot\left(\frac{5}{8}\cdot\frac{l}{t} - \frac{t'}{t} + \frac{3}{8}\right)}$$

$$C_2 = \frac{C_1}{\left(\frac{t'}{t}\right)^3}\cdot\left[\frac{5}{32}\cdot\left(\frac{l}{t}-1\right)^2 + 1\right]$$

$$K_4 = \sqrt{-\frac{C_1}{2} + \sqrt{\left(\frac{C_1}{2}\right)^2 + C_2}}$$

The equation to calculate the load to apply to a single disc spring in order to obtain a deflection $s$ is:[11]

$$F = \frac{4E}{1-\mu^2}\cdot\frac{t^4}{K_1\cdot D_e^2}\cdot K_4^2\cdot\frac{s}{t}\cdot\left[K_4^2\cdot\left(\frac{h_0}{t}-\frac{s}{t}\right)\cdot\left(\frac{h_0}{t}-\frac{s}{2t}\right)+1\right]$$

where $E$ is the Young's modulus, $\mu$ the Poisson's ratio, $t'$ the reduced thickness, and $K_1$ a dimensionless coefficient depending on the ratio $\delta$. Note that for disc springs with constant thickness, $t'$ is equal to $t$ and consequently $K_4$ is equal to 1. Regarding disc springs with contact flats and reduced thickness, a paper published in July 2013 demonstrated that the $K_4$ equation as defined in the standard norms is not correct, as it would result in every reduced thickness being considered valid, which is of course impossible.
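As a sketch, the single-disc load formula above can be implemented directly for the constant-thickness case (t' = t, so K4 = 1). The K1 expression used below is the standard DIN 2092-style coefficient, and the numeric inputs are illustrative values, not taken from a catalogue:

```python
import math

def din_load(E, mu, De, Di, t, h0, s):
    """Load F on a single constant-thickness disc spring (t' = t, K4 = 1)
    at deflection s, following the DIN 2092-style formula above.
    K1 is the standard dimensionless coefficient in delta = De/Di."""
    delta = De / Di
    K1 = (1.0 / math.pi) * ((delta - 1) / delta) ** 2 / (
        (delta + 1) / (delta - 1) - 2 / math.log(delta))
    return (4 * E / (1 - mu ** 2)) * (t ** 4 / (K1 * De ** 2)) \
        * (s / t) * ((h0 / t - s / t) * (h0 / t - s / (2 * t)) + 1)

# Illustrative steel disc spring (example values only):
# E = 206000 N/mm^2, mu = 0.3, De = 40 mm, Di = 20.4 mm, t = 2.25 mm, h0 = 0.9 mm
F = din_load(206000, 0.3, 40, 20.4, 2.25, 0.9, 0.45)  # load at half the free height
```

With this geometry (h0/t = 0.4) the characteristic is nearly linear; larger h0/t ratios give the degressive curves disc springs are known for.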
As stated in that paper, $K_4$ should be replaced by a new coefficient, $R_d$, which depends not only on the $\frac{t'}{t}$ ratio but also on the flank angles of the spring.[12]

The spring constant (or spring rate) is defined as $k = \frac{dF}{ds}$. If friction and bottoming-out effects are ignored, the spring rate of a stack of identical Belleville washers can be quickly approximated. Counting from one end of the stack, group by the number of adjacent washers in parallel. For example, in the stack of washers to the right, the grouping is 2-3-1-2, because there is a group of 2 washers in parallel, then a group of 3, then a single washer, then another group of 2. The total spring constant is:

$$K = \frac{k}{\sum_{i=1}^{g}\frac{1}{n_i}}$$

where $n_i$ is the number of washers in the $i$th group, $g$ is the number of groups, and $k$ is the spring constant of one washer. For the 2-3-1-2 stack this gives

$$K = \frac{k}{\frac{1}{2}+\frac{1}{3}+\frac{1}{1}+\frac{1}{2}} = \frac{3}{7}\cdot k$$

So a 2-3-1-2 stack (or, since addition is commutative, a 3-2-2-1 stack) gives a spring constant of 3/7 that of a single washer. These same 8 washers can be arranged in a 3-3-2 configuration (K = 6/7 · k), a 4-4 configuration (K = 2 · k), a 2-2-2-2 configuration (K = 1/2 · k), and various other configurations. The number of unique ways to stack n washers is defined by the integer partition function p(n) and increases rapidly with large n, allowing fine-tuning of the spring constant. However, each configuration will have a different length, requiring the use of shims in most cases.

DIN 2092 — Disc springs — Calculation
DIN 2093 — Disc springs — Quality specifications — Dimensions
DIN 6796 — Conical spring washers for bolted connections[2]
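The stack grouping formula above is easy to check with exact arithmetic; this is an illustrative sketch (the function name is not from any standard), using a single-washer rate of 1 so the result is the ratio K/k:

```python
from fractions import Fraction

def stack_spring_constant(groups, k=Fraction(1)):
    """Spring constant of a stack of identical disc springs, ignoring
    friction and bottoming-out: K = k / sum(1/n_i) over the parallel groups."""
    return k / sum(Fraction(1, n) for n in groups)

# The 2-3-1-2 stack from the text: K = 3/7 of a single washer's rate.
print(stack_spring_constant([2, 3, 1, 2]))  # 3/7
print(stack_spring_constant([3, 3, 2]))     # 6/7
print(stack_spring_constant([4, 4]))        # 2
```

Using `Fraction` keeps the results exact, which makes the commutativity of the grouping (2-3-1-2 versus 3-2-2-1) directly visible.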
1. Shigley, Joseph Edward; Mischke, Charles R.; Brown, Thomas H. (2004), Standard Handbook of Machine Design (3rd ed.), McGraw-Hill Professional, p. 640, ISBN 978-0-07-144164-3.
2. Smith, Carroll (1990), Carroll Smith's Nuts, Bolts, Fasteners, and Plumbing Handbook, MotorBooks/MBI Publishing Company, p. 116, ISBN 0-87938-406-9.
3. Bhandari, V. B. (2010), Design of Machine Elements (3rd ed.), Tata McGraw-Hill, p. 441, ISBN 978-0-07-068179-8.
4. Schnorr Handbook, Schnorr, 2016.
5. Actionclear Modern Rifles.
6. Infiniti Red Bull RB10 Renault.
7. Cirrus Airplane Maintenance Manual (PDF), Cirrus, 2014, pp. 32, 34.
8. Nakamura, Takashi; Suzuki, Tetsuo; Nobata, Arihide (1998), Study on earthquake response characteristics of base-isolated building using the friction dampers with coned disc springs (PDF), Proceedings of the 10th Earthquake Engineering Symposium, pp. 2901–2906.
9. Almen, J. O.; László, A. (1936), The uniform-section disk spring, ASME 58, pp. 305–314.
10. Curti, Graziano; Orlando, M. (1979), A new calculation of coned annular disc springs, Wire (28) 5, pp. 199–204.
11. DIN 2092: Disc springs — Calculation, DIN, 2006.
12. Ferrari, Giammarco (2013), A new calculation method for Belleville disc springs with contact flats and reduced thickness, IJMMME 3(2).
Section 5.22 (08ZW): Profinite spaces—The Stacks project

5.22 Profinite spaces

Definition 5.22.1. A topological space is profinite if it is homeomorphic to a limit of a diagram of finite discrete spaces. This is not the most convenient characterization of a profinite space.

Lemma 5.22.3. A limit of profinite spaces is profinite.

Proof. Let $i \mapsto X_ i$ be a diagram of profinite spaces over the index category $\mathcal{I}$. Let us use the characterization of profinite spaces in Lemma 5.22.2. In particular each $X_ i$ is Hausdorff, quasi-compact, and totally disconnected. By Lemma 5.14.1 the limit $X = \mathop{\mathrm{lim}}\nolimits X_ i$ exists. By Lemma 5.14.5 the limit $X$ is quasi-compact. Let $x, x' \in X$ be distinct points. Then there exists an $i$ such that $x$ and $x'$ have distinct images $x_ i$ and $x'_ i$ in $X_ i$ under the projection $X \to X_ i$. Then $x_ i$ and $x'_ i$ have disjoint open neighbourhoods in $X_ i$. Taking the inverse images of these opens we conclude that $X$ is Hausdorff. Similarly, $x_ i$ and $x'_ i$ are in distinct connected components of $X_ i$, whence necessarily $x$ and $x'$ must be in distinct connected components of $X$. Hence $X$ is totally disconnected. This finishes the proof. $\square$

Lemma 5.22.4. Let $X$ be a profinite space. Every open covering of $X$ has a refinement by a finite covering $X = \coprod U_ i$ with $U_ i$ open and closed.

Proof. Write $X = \mathop{\mathrm{lim}}\nolimits X_ i$ as a limit of an inverse system of finite discrete spaces over a directed set $I$ (Lemma 5.22.2). Denote $f_ i : X \to X_ i$ the projection. For every point $x = (x_ i) \in X$ a fundamental system of open neighbourhoods is the collection $f_ i^{-1}(\{ x_ i\} )$.
Thus, as $X$ is quasi-compact, we may assume we have an open covering \[ X = f_{i_1}^{-1}(\{ x_{i_1}\} ) \cup \ldots \cup f_{i_ n}^{-1}(\{ x_{i_ n}\} ) \] Choose $i \in I$ with $i \geq i_ j$ for $j = 1, \ldots , n$ (this is possible as $I$ is a directed set). Then we see that the covering \[ X = \coprod \nolimits _{t \in X_ i} f_ i^{-1}(\{ t\} ) \] refines the given covering and is of the desired form. $\square$ Lemma 5.22.5. Let $X$ be a topological space. If $X$ is quasi-compact and every connected component of $X$ is the intersection of the open and closed subsets containing it, then $\pi _0(X)$ is a profinite space. Proof. We will use Lemma 5.22.2 to prove this. Since $\pi _0(X)$ is the image of a quasi-compact space it is quasi-compact (Lemma 5.12.7). It is totally disconnected by construction (Lemma 5.7.9). Let $C, D \subset X$ be distinct connected components of $X$. Write $C = \bigcap U_\alpha $ as the intersection of the open and closed subsets of $X$ containing $C$. Any finite intersection of $U_\alpha $'s is another. Since $\bigcap U_\alpha \cap D = \emptyset $ we conclude that $U_\alpha \cap D = \emptyset $ for some $\alpha $ (use Lemmas 5.7.3, 5.12.3 and 5.12.6) Since $U_\alpha $ is open and closed, it is the union of the connected components it contains, i.e., $U_\alpha $ is the inverse image of some open and closed subset $V_\alpha \subset \pi _0(X)$. This proves that the points corresponding to $C$ and $D$ are contained in disjoint open subsets, i.e., $\pi _0(X)$ is Hausdorff. $\square$ Comment #5552 by Curious dilettante on October 23, 2020 at 05:12 What is the condition of being cofiltered in Lemma 0ET8 required for? I attempted to carry out the proof and it appears that I did not use this condition. I mean, two different points in the limit are separated by a map to a space of the diagram and then you can lift the desired properties of separation ( T_2 and totally disconnected) from that space to the limit, right? OK, yes, thanks. 
I added your suggested improvement with your proof. Hopefully correct. See this.

Comment #6209 by Amnon Yekutieli on May 05, 2021 at 10:39

The spaces in condition (2) are called Stone spaces in topology literature. I think (based on recent investigations, and on reading the Scholze–Clausen condensed math notes) that "limit" and "cofiltered limits" should be replaced by "inverse limit of finite discrete spaces". In other words, there is an inverse system (in the most naive sense of directed sets) of finite discrete spaces $(X_i)_{i \in I}$ with $X \cong \lim_{\leftarrow i} X_i$. Also, it seems that the set of partitions of a Stone space $X$ into finite disjoint unions of closed (and open) sets is actually codirected (the opposite of a directed set), by the relation of refinement. This codirected set accounts for all finite discrete quotients of $X$. Thus $X$ is the inverse limit of this inverse system. This is analogous to "open subgroups of finite index" in the case of profinite abelian groups. Furthermore, the category of Stone spaces (or profinite sets) seems to be equivalent to $\mathrm{Pro}(\mathrm{Set}_{fin})$, the category of pro-objects of the category of finite sets.

@#6209 In the Stacks project we use colimits and limits to distinguish between "projective limits" and "direct limits" and we do not use this terminology. See Section 4.14. Having said this, I think what you say in your second paragraph is the content of Lemma 5.22.2. What you say in your third paragraph follows easily from Lemmas 5.22.4 and 5.22.2; we just have a different order of the arguments. We don't have the characterization of the category of profinite spaces you mention in your fifth paragraph. I may add this the next time I go through all the comments. Thanks!

Comment #6302 by Zhouhang MAO on June 26, 2021 at 02:33

@#6209 Another point that Johan did not address: for any filtered ($\infty$-)category $\mathcal I$, there exists a cofinal functor $\mathcal J \to \mathcal I$ where $\mathcal J$ is a directed partially ordered set.
See https://kerodon.net/tag/02QA

@#6302. See also Lemma 4.21.5. Overlap between Kerodon and the Stacks project!

Comment #7082 by Amnon Yekutieli on February 26, 2022 at 16:07

In Thm 0.17 of the paper Rings of Bounded Continuous Functions there is a characterization of profinite spaces (aka Stone spaces) in terms of their rings of continuous real valued functions. This provides (see Cor 0.18 there) a conceptual proof that the Stone–Čech compactification of a discrete space is profinite, cf. Example 090C.
Programming/Kdb/Factorial - Thalesians Wiki

1 The factorial
2 Implementing the factorial in q
2.1 A dummy implementation
2.2 A recursive implementation using the conditional evaluation
2.3 A recursive implementation using the imperative if statement
2.4 A problem with the recursive implementations
2.5 An iterative implementation using the imperative do statement
2.6 An iterative implementation using the imperative while statement
2.7 An iterative implementation using the Over iterator
2.8 An implementation using prd
3 Assessing the performance of our implementations

The factorial

The factorial of a nonnegative integer $n$, denoted $n!$, is the product of all positive integers less than or equal to $n$:

$n! = n\cdot(n-1)\cdot(n-2)\cdot(n-3)\cdot\ldots\cdot 3\cdot 2\cdot 1.$

For example,

$5! = 5\cdot 4\cdot 3\cdot 2\cdot 1 = 120.$

The value of $0!$ is 1, according to the convention for an empty product. The factorial operation is encountered in many areas of mathematics, notably in combinatorics, algebra, and mathematical analysis. Its most basic use counts the number of distinct sequences—the permutations—of $n$ distinct objects: there are $n!$ of them.

Implementing the factorial in q

A dummy implementation

We'll implement the factorial as a q function. That function will take a single parameter, x, and return the result. We'll start with a dummy, incorrect implementation that always returns 1:

fact:{[x]1}

This defines a function, {...}, which takes a single argument, [x], and always (irrespective of the argument) returns 1. We assign, :, a name to that function, fact, so we can call it again and again, as required. This is a function of a single argument, in other words, a monadic or unary function. We could have a dyadic (binary), triadic (ternary), etc. function. Here is an example of a dyadic function:

add:{[x;y]x+y}

Incidentally, it is possible to have a niladic (nullary) function. Such functions are not necessarily constant and may have useful side effects.
The following function will print out the current time: tm:{[].z.p} When q encounters the variables named x, y, and z in the body of a function, it knows that these are the function's arguments. So we could skip [x;y] above: add:{x+y} We wouldn't be able to do this if we named the arguments differently: add:{[a;b]a+b} A recursive implementation using the conditional evaluation The most basic implementation of a factorial is the recursive one. In general, recursion is a method of solving a problem where the solution depends on the solutions to smaller instances of the same problem. Such problems can generally be solved by iteration, but this needs to identify and index the smaller instances at programming time. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science. —Niklaus Wirth, Algorithms + Data Structures = Programs, 1976. Most computer programming languages support recursion by allowing a function to call itself from within its own code. Q is no exception: fact:{$[x=0;1;x*fact x-1]} We have just used the conditional evaluation, the ternary overload of $: $[expr_cond; expr_true; expr_false] Here expr_cond is an expression that evaluates to a boolean atom. The result of expr_cond can be any type whose underlying value is an integer. The result of the conditional is the evaluation of expr_true when expr_cond is not zero and expr_false if it is zero. Notice that we don't need to have a boolean zero in expr_cond for expr_false to be evaluated; it could well be a long zero. Thus we can rewrite fact more tersely: fact:{$[x;x*fact x-1;1]} How would the function call itself if it weren't assigned the name fact? In q, there is a provision for that. 
The function can call itself using .z.s:

fact:{$[x;x*.z.s x-1;1]}

This works even if the function is anonymous, for example:

q){$[x;x*.z.s x-1;1]}[5]
120

Let's check the first few values:

q)fact[0]
1

Let us look at our recursive function again: a recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). In our case the base case is $0! = 1$. You can see it here:

fact:{$[x;...;1]}

Our recursive case is, for all $x > 0$, $n! = n(n-1)!$:

fact:{$[...;x*.z.s x-1;...]}

Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case".

A recursive implementation using the imperative if statement

Instead of the conditional evaluation we could have used the imperative if statement, which conditionally evaluates a sequence of expressions:

if[expr_cond;expr_1;...;expr_n]

The expr_cond is evaluated and, if it is nonzero, the expressions expr_1, ..., expr_n are evaluated in left-to-right order. As with other conditionals, the brackets do not create a lexical scope, so variables defined in the body exist in the same scope as the if. Unlike $[...], there is no "else" to go with if. Also unlike $[...], if[...] is not a function and does not return a value. Therefore we need to force the return using ::

fact1:{if[x>0;:x*.z.s x-1];1}

A problem with the recursive implementations

Recursive implementations tend to underperform their iterative peers. In addition, they may run out of stack space:

q)fact 2001
'stack
  [1799]  fact:{$[x;x*.z.s x-1;1]}

This doesn't matter that much specifically for the factorial, because the factorial of 26 is already enormous and overflows the long:

q)fact 25

For many other recursive functions 'stack is a serious issue.
An iterative implementation using the imperative do statement

The imperative do statement has the form

do[expr_count;expr_1;...;expr_n]

where expr_count must evaluate to a nonnegative integer. The expressions expr_1, ..., expr_n are evaluated expr_count times in left-to-right order. Note that do is a statement, not a function, and does not have an explicit result. Here is a (rather awkward) implementation of a factorial using do:

fact2:{do[-1+f:r:x;r*:f-:1];r}

and a somewhat more straightforward one:

fact3:{r:1;do[x;r*:x;x-:1];r}

An iterative implementation using the imperative while statement

The imperative while statement is an iterator of the form

while[expr_cond;expr_1;...;expr_n]

where expr_cond is evaluated and the expressions expr_1, ..., expr_n are evaluated repeatedly in left-to-right order as long as expr_cond is nonzero. The while statement is not a function, does not have an explicit result, and does not introduce a lexical scope. Here is an iterative implementation of a factorial using while:

fact4:{r:1;while[x;r*:x;x-:1];r}

The following implementation is seemingly superior, as it is terser, but it fails at x = 0:

fact4a:{f:r:x;while[f-:1;r*:f];r}

An iterative implementation using the Over iterator

Iterators, which used to be called adverbs, modify the meaning of verbs. For example, the verb Join (,) ordinarily concatenates lists. The result of

("foo";"bar";"baz"),("qux";"quux";"quuz")

is

("foo";"bar";"baz";"qux";"quux";"quuz")

However, if we modify this verb with Each ('), we get a pairwise application of Join:

("foo";"bar";"baz"),'("qux";"quux";"quuz")

gives

("fooqux";"barquux";"bazquuz")

Each is what is known as a map iterator.
There are other map iterators, and there are accumulator iterators, such as Over (/):

(+/)1+til 5

computes

(((1+2)+3)+4)+5

If we replace the verb + with the verb * and likewise modify it with the iterator (adverb) /, we obtain the foundation of our next implementation of the factorial:

(*/)1+til 5

computes

(((1*2)*3)*4)*5

Hence here is our next implementation of the factorial—using the Over iterator:

fact5:{(*/)1+til x}

Iterators are examples of higher-order functions of functional programming. In functional programming, a higher-order function is a function that does at least one of the following: takes one or more functions as arguments, or returns a function as its result. This is what Jeffry A. Borror says about iterators in Q for Mortals:

If you are new to functional programming, you may think, "Big deal, I write for loops in my sleep." Granted. But the advantage of the higher-order function approach is that there is no chance of being off by one in the loop counter or accidentally running off the end of a data structure. More importantly, you can focus on what you want done without the irrelevant scaffolding of how to set up control structures. This is called declarative programming.

An implementation using prd

Our latest implementation is more idiomatic than the previous ones. However, there is still the question of code reuse—the use of existing software, or software knowledge, to build new software following the reusability principles. There is something that we are not reusing, but could reuse, to our advantage. prd x returns the product of the numeric list x. Moreover, unlike, for example, ij, which is implemented in k,

q)ij
k){.Q.ft[{x[j],'(.y)i j:&(#y)>i:(!y)?(!+!y)#x}[;y]]x}

prd is implemented in low-level C:

q)prd

We confirm this—prd is in .Q.res:

q).Q.res
`abs`acos`asin`atan`avg`bin`binr`cor`cos`cov`delete`dev`div`do`enlist`exec`exit`exp`getenv`hopen`if`in`insert`last`like`log`max`min`prd`select`setenv`sin`sqrt`ss`sum`tan`update`var`wavg`while`within`wsum`xexp

Here is the resulting implementation:

fact6:{prd 1+til x}

Assessing the performance of our implementations

Let us list all of our implementations of the factorial:

fact:{$[x;x*.z.s x-1;1]};
fact1:{if[x>0;:x*.z.s x-1];1};
fact2:{do[-1+f:r:x;r*:f-:1];r};
fact3:{r:1;do[x;r*:x;x-:1];r};
fact4:{r:1;while[x;r*:x;x-:1];r};
fact5:{(*/)1+til x};
fact6:{prd 1+til x};

Let's time them:

q)\t:100000 fact first 1?26
q)\t:100000 fact1 first 1?26

Unsurprisingly, fact6 emerges as the leader, followed closely by fact5. We can see that there is a performance reason for preferring iterators over loops, in addition to the previously stated observation that iterators are more idiomatic than loops. However, the fastest results are obtained by using an optimized function, prd, written in low-level C. Let us try breaking our implementation fact6:

q)fact6 -1
'domain
  [1]  fact6:{prd 1+til x}

q)fact6"foo"
'type
  [2]  (.q.til)

We can recover from these errors using protected evaluation, for example:

q).[fact6;"foo";[0N!"recovering";0N]]

Both error messages are perfectly good by q standards. Suppose that we want to produce a more informative error message instead of 'domain. That is possible too:

fact7:{$[x<0;'"Argument must be nonnegative";prd 1+til x]}

q)fact7 5
120
q)fact7 -1
'Argument must be nonnegative
  [0]  fact7 -1

Retrieved from "https://wiki.thalesians.com/index.php?title=Programming/Kdb/Factorial&oldid=682"
Arbeitsgemeinschaft: Julia Sets of Positive Measure | EMS Press P:\C\to \C can be considered as a dynamical system. We are interested in the sequences (z_n) defined by induction: z_0\in \C\quad\text{and}\quad z_{n+1}=P(z_n). The filled-in Julia set K_P z_0\in \C for which the sequence (z_n) is bounded. This set is compact. The Julia set J_P K_P . In particular, it has empty interior. There is a small collection of polynomials, for instance P(z) = z^d\quad ,\quad P(z) = z^2-2, for which the Julia set can be fairly easily understood, but most exhibit ``fractal'' geometry and ``chaotic'' behavior, the analysis of which requires serious tools from complex analysis, dynamical systems, topology, combinatorics, \ldots This subject has a fairly long history, with contributions by Koenigs, Schr\"{o}der, B\"{o}ttcher in the late 19th century, and the great memoirs of Fatou and Julia around 1920. There followed a dormant period, with notable contributions by Cremer (1936) and Siegel (1942), and a rebirth in the 1960's (Brolin, Guckenheimer, Jakobson). Since the early 1980's, partly under the impetus of computer graphics, the subject has grown vigorously, with major contributions by Douady, Hubbard, Sullivan, Thurston, and more recently Lyubich, McMullen, Milnor, Shishikura, Yoccoz \ldots Fatou found sufficient conditions for the boundary of the basin of an attracting fixed point to be a Cantor set with Lebesgue measure equal to 0 . He could not tell whether or not the measure could be positive. For some time and until the 1990's, the conjecture, reinforced by the analogy with Ahlfors's conjecture on the area of limit sets of Kleinian groups, was that no Julia set of a polynomial could have positive area. 
Results in this direction were obtained by Douady and Hubbard in the case of hyperbolic or subhyperbolic maps, by Branner, Hubbard and McMullen in the case of non-renormalizable cubic polynomials with an escaping critical point, by Lyubich and Shishikura in the case of finitely renormalizable quadratic polynomials without indifferent cycles, by Petersen in the case of quadratic polynomials having a Siegel disk with bounded type rotation number. In the 1990's, Douady began to catch a glimpse of a method for Julia sets of positive area: in the family of degree 2 polynomials with an indifferent Cremer fixed point. Recently, we brought Douady's method to completion. The Arbeitsgemeinschaft {\em Julia sets of positive measure} focused on the proof of existence of quadratic polynomials having a Julia set of positive area. It was held March 30th--April 5th, 2008. It was attended by 36 participants. Xavier Buff, Arnaud Cheritat, Arbeitsgemeinschaft: Julia Sets of Positive Measure. Oberwolfach Rep. 5 (2008), no. 2, pp. 869–906
Resource Element Groups (REGs) - MATLAB & Simulink - MathWorks India

Resource Element Group Indexing
Size and Location of REGs
Antenna Port Configurations
REG Arrangement with a Normal Cyclic Prefix
One or Two Antenna Port Configuration
Four Antenna Port Configuration
REG Arrangement with an Extended Cyclic Prefix

Resource-element groups (REGs) are used to define the mapping of control channels to resource elements (REs). REGs are blocks of consecutive REs within the same OFDM symbol. The REGs within a subframe are located in the first four OFDM symbols and are identical in size and number for each corresponding subframe on every antenna port. REGs are represented by an index pair (k', l'), where k' is the subcarrier index of the RE within the REG with the lowest subcarrier index k, and l' is the OFDM symbol index l of the REG. This index pair is illustrated in the following figure. The number of REs within a REG is such that a REG contains four REs which are not occupied by a cell-specific reference signal on any antenna port in use. All REs within a resource block in one of the first four OFDM symbols are allocated to a REG. Therefore the number of REs within each REG and the number of REGs within an OFDM symbol are affected by the number of cell-specific reference signals present on all antenna ports. The number and location of cell-specific reference signals depend on the number of antenna ports and the type of cyclic prefix used. Each antenna port has a unique cell-specific reference signal associated with it. As the REG arrangement is affected by cell-specific reference signals, the arrangement differs between a one or two antenna port configuration and a four antenna port configuration. The REG arrangement for each resource block within a subframe and for every antenna port is identical. The REG arrangement for each antenna port configuration is described below for a normal cyclic prefix.
When antenna port 0, or ports 0 and 1, are used, it is assumed the cell-specific reference signal is present on both antenna ports 0 and 1. This leads to a REG arrangement for each resource block as shown in the following figure. Cell-specific reference signals are present within the first OFDM symbol. As four REs not containing cell-specific reference signals are required in a REG, the twelve REs in the first symbol are divided into two REGs, each containing six REs (two containing cell-specific reference signals and four empty). In the second and third OFDM symbols no cell-specific reference signal is present, therefore the twelve REs in each symbol are divided between three REGs, each containing four REs. The REG arrangement in each resource block for a four antenna port configuration is shown in the following figure. The REG allocation within the first OFDM symbol is the same as for a one or two antenna port configuration. Four cell-specific reference signals are present in the second OFDM symbol, therefore only eight REs are available for the mapping of control data; the twelve REs are divided into two REGs, each containing six REs. The third and fourth OFDM symbols contain no reference signals, so three REGs are available in each. An extended cyclic prefix subframe contains twelve OFDM symbols as opposed to fourteen for a normal cyclic prefix. As the number of cell-specific reference signals in a normal or extended cyclic prefix subframe is the same, the limited number of OFDM symbols in an extended cyclic prefix subframe requires the OFDM symbol spacing of the cell-specific reference signals to be reduced compared to a normal cyclic prefix. This reduction in spacing causes cell-specific reference signals to be present within the fourth OFDM symbol of an extended cyclic prefix subframe, whereas in a normal cyclic prefix subframe the fourth OFDM symbol contains no cell-specific reference signals.
Therefore, when an extended cyclic prefix is used, two REGs, each containing six REs, are present in the fourth OFDM symbol. The number of cell-specific reference symbols within the first three OFDM symbols is identical for a normal or extended cyclic prefix, therefore the REG configurations are identical. The REG arrangement for a one or two antenna port configuration when using an extended cyclic prefix is shown in the following figure. The REG arrangement for a four antenna port configuration when using an extended cyclic prefix is shown in the following figure.

ltePDCCH | ltePHICH | ltePCFICH
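As a rough summary of the arrangements above, the number of REGs available per resource block in each control-region OFDM symbol (normal cyclic prefix) can be tabulated. The function below is an illustrative sketch derived from the text, not part of the LTE Toolbox API:

```python
def regs_per_symbol(num_ports: int, symbol: int) -> int:
    """Number of REGs per resource block in control-region OFDM symbol
    `symbol` (0-based), normal cyclic prefix. Symbols containing
    cell-specific reference signals hold two 6-RE REGs; symbols without
    them hold three 4-RE REGs."""
    if num_ports in (1, 2):
        # Symbol 0 carries reference signals; symbols 1 and 2 do not.
        table = {0: 2, 1: 3, 2: 3}
    elif num_ports == 4:
        # Symbols 0 and 1 carry reference signals; symbols 2 and 3 do not.
        table = {0: 2, 1: 2, 2: 3, 3: 3}
    else:
        raise ValueError("LTE defines 1, 2, or 4 cell-specific antenna ports")
    return table[symbol]
```

For an extended cyclic prefix the first three symbols are the same, with the fourth symbol of a one or two port configuration also holding two 6-RE REGs, as described above.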
General Flexible Beam Slender extrusion with elastic properties for deformation The General Flexible Beam block models a slender beam of constant, general cross-section that can have small and linear deformations. These deformations include extension, bending, and torsion. The block calculates the beam cross-sectional properties, such as the axial, flexural, and torsional rigidities, based on the geometry and material properties that you specify. The geometry of the flexible beam is an extrusion of its cross-section. The beam cross-section, defined in the xy-plane, is extruded along the z-axis. You can use this block to create flexible beams with simply or multiply connected cross-sections. For example, you can create the beam shown in the figure by entering these values for the Cross-section in the block's dialog box: {[-0.25,-0.50;0.25,-0.50;0.25,0.50;-0.25,0.50],[-0.15,-0.40;0.15,-0.40;0.15,-0.05;-0.15,-0.05],[-0.15,0.05;0.15,0.05;0.15,0.40;-0.15,0.40]}. Cross-section — Cross-section coordinates specified on the XY plane [0.5 0.5; -0.5 0.5; -0.5 -0.5; 0.5 -0.5] m (default) | N-by-2 matrix | M-by-1 or 1-by-M cell array of N-by-2 matrices Coordinates used to specify the boundaries of a beam cross-section. Specify the beam cross-section using one of the following methods: Use an N-by-2 matrix of xy coordinates to specify a simply connected section. Each row gives the [x,y] coordinates of a point on the cross-section boundary. The points connect in the order given to form a closed polyline. To ensure that the polyline is closed, a line segment is always inserted between the last and first points. Use an M-by-1 or 1-by-M cell array of N-by-2 matrices of xy coordinates to specify a multiply connected section. The first entry in the cell represents the outer boundary and subsequent entries specify the hole boundaries.
To properly define the cross-section of beams, any two boundaries should not intersect, overlap, or touch. Additionally, each individual boundary should have: No repeated vertices. No self-intersections. At least three non-collinear points. Length — The beam's length. The beam is modeled by extruding the specified cross-section along the z-axis of the local reference frame. The extrusion is symmetric about the xy-plane, with half of the beam being extruded in the negative direction of the z-axis and half in the positive direction. The block computes the centroidal area moments of inertia \left[I_x, I_y\right] = \left[\int_A (y - y_c)^2 \, dA, \int_A (x - x_c)^2 \, dA\right], the product of inertia I_{xy} = \int_A (x - x_c)(y - y_c) \, dA, and the polar moment of inertia I_P = I_x + I_y, where (x_c, y_c) is the centroid of the cross-section. Damping is proportional (Rayleigh) damping, \left[C\right] = \alpha \left[M\right] + \beta \left[K\right]. Flexible Angle Beam | Flexible Channel Beam | Flexible Cylindrical Beam | Flexible I Beam | Flexible Rectangular Beam | Flexible T Beam | Extruded Solid | Reduced Order Flexible Solid | Rigid Transform
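For a polygonal cross-section, the area integrals above have a closed form via Green's theorem. A Python sketch (not the Simscape implementation; the function name is illustrative), checked against the default 1 m x 1 m square section, for which I_x = I_y = 1/12 and I_xy = 0:

```python
def section_properties(vertices):
    """Area, centroid, and centroidal moments I_x, I_y, I_xy of a
    simple polygon given as a list of (x, y) vertices (either winding)."""
    A = Cx = Cy = Ix = Iy = Ixy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        c = x0 * y1 - x1 * y0                 # shoelace cross term
        A += c / 2.0
        Cx += (x0 + x1) * c / 6.0
        Cy += (y0 + y1) * c / 6.0
        Ix += (y0 * y0 + y0 * y1 + y1 * y1) * c / 12.0   # about the origin
        Iy += (x0 * x0 + x0 * x1 + x1 * x1) * c / 12.0
        Ixy += (x0 * y1 + 2 * x0 * y0 + 2 * x1 * y1 + x1 * y0) * c / 24.0
    Cx /= A
    Cy /= A
    # Parallel-axis shift from the origin to the centroid.
    return A, (Cx, Cy), Ix - A * Cy * Cy, Iy - A * Cx * Cx, Ixy - A * Cx * Cy
```

For a multiply connected section, subtract the properties computed for each hole boundary from those of the outer boundary.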
C*-Algebren | EMS Press The aim of the workshop {\em C^*-algebras} was to bring together researchers from basically all areas related to operator algebra theory. This gives a unique opportunity to obtain a broader view of the subject and to create new interactions between researchers with different backgrounds. The organizers, Claire Anantharaman-Delaroche, Siegfried Echterhoff, Uffe Haagerup, and Dan Voiculescu took special care to invite a good number of young researchers, some of them already being leading experts in their fields. As a result, several contributions in this report are from researchers who, at the time of the workshop, were less than 30 years old. There were 29 lectures presented at this workshop with topics from Ergodic Theory, \ell^2-(co-)homology, classification of C^*-algebras, Operator Theory, von Neumann algebras, KK-theory and the Baum-Connes conjecture, quantum spaces and quantum groups, mathematical physics, non-commutative probability theory, and the theory of operator spaces. To name some special highlights we can mention the reports on recent developments in the study of ``boundary actions'' of quantum groups due to S.~Vaes and R.~Vergnioux, the new results on classification theory of amenable C^*-algebras in terms of studying algebras which are stable under tensoring with the Jiang-Su algebra \mathcal Z (see the lectures of A.~Toms and M.~R{\o}rdam), or the report by S.~Popa on recent progress in the study of strong rigidity for II_1 factors associated to equivalence relations. But this is only a very small selection of the interesting lectures on new results presented at this workshop. It is a pleasure for the organizers of the conference to use this opportunity to thank all participants of the workshop for their contributions---either in lectures held at the workshop or in stimulating discussions following the lectures.
We also thank the Mathematisches Forschungsinstitut Oberwolfach for providing a great environment and strong support for organizing this conference. Special thanks go to the very competent and helpful staff of the institute and to the chef de cuisine. Siegfried Echterhoff, Dan-Virgil Voiculescu, Claire Anantharaman-Delaroche, Uffe Haagerup, C*-Algebren. Oberwolfach Rep. 2 (2005), no. 3, pp. 2305–2374
§ Matching problems (TODO) Given a graph G = (V, E), a matching is a collection of edges of G where every node has degree at most 1. § Perfect matching A matching is perfect if it has size |V|/2, i.e. no vertex is left isolated. That is, everyone is matched with someone. § Weighted (Perfect) Matching Some matchings may be more preferable than others, captured by giving weights. Usually, lower weights are more desirable. We may want to find the minimum weight matching. The weight of a matching M is the sum of the weights on the edges of M. In this context, we usually ask for a perfect matching. Otherwise, one can trivially match no one to get a min-weight matching of weight 0. So the definition of a min-weight matching for the graph G is a perfect matching with minimum weight. We don't see these in 6.042. Will have to read flows/hungarian to study this. § Preference matching Given a matching, (x, y) form a rogue couple if they both prefer each other over their matched mates. I.e., they both wish to defect from their 'matched mates'. A matching is stable if there aren't any rogue couples. The goal is to find a perfect stable matching. That is, get everyone married up, and make it stable! The point is, not everyone has to become happy! It's just that we don't allow rogue couples who can mutually get a benefit. § Bad situation for preference matching If boys can love boys as well as girls, then we can get preference orderings where no stable marriage is possible. The idea is to create a love triangle. Alex prefers Bobby over Robin. Bobby prefers Robin over Alex. Robin prefers Alex over Bobby. And then there is Mergatoid, who is the third choice for everyone. Mergatoid's preferences don't matter. § Theorem: there does not exist a stable matching for this graph Proof: assume there does exist a stable matching, call it M. Mergatoid must be matched with someone. WLOG (by symmetry), assume Mergatoid is matched to Alex.
If Mergatoid is matched to Alex, then we must have Robin matched to Bobby. Alex and Bobby are not rogue, because Bobby likes Robin more than Alex. Alex and Robin are a rogue couple, because (1) Robin prefers Alex over Bobby, and (2) Alex prefers Robin over Mergatoid. Hence, we found a rogue couple, so M was not stable. § Stable Marriage Problem: success in some cases! N boys and N girls [need the same number of each]. Each boy has his own ranked preference list of all the girls. Each girl has her own ranked preference list of all the boys. The lists are complete and there are no ties. We have to find a perfect matching with no rogue couples. § Mating algorithm / Mating ritual The ritual takes place over several days. In the morning, each girl comes out to her balcony. Each boy goes to his favourite girl who hasn't been crossed off his list and serenades her. In the afternoon, if a girl has suitors, she tells her favourite suitor "maybe I'll marry you, come back tomorrow". Girls don't make it too easy. To all the lower priority boys, she says "no way I'm marrying you". In the night, all the boys who heard a no cross that girl off their list. If a boy heard a maybe, he will serenade her again the next day. If we encounter a day where every girl has at most one suitor, the algorithm terminates. So we don't have two or more boys under one balcony. § Things to prove Show that the algorithm terminates. Show that everyone gets married. Show that there are no rogue couples. We may want to show it runs quickly. Fairness? is this good for girls, or for boys? § Termination, terminates quickly: N^2 + 1 days Proof by contradiction: suppose the mating algorithm does not terminate in N^2 + 1 days. Claim: if we don't terminate on a day, that's because some girl had two or more boys under her balcony, and thus at least one boy crosses a girl off his list. We measure progress by these cross-outs. In N^2 days, all boys would have crossed out all girls, so the algorithm must terminate by day N^2 + 1.
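The ritual above is the boy-proposing Gale-Shapley algorithm. A minimal Python sketch; the names and data layout are mine, not from the notes:

```python
def gale_shapley(boy_prefs, girl_prefs):
    """Stable matching via the mating ritual: boys serenade, girls filter.

    boy_prefs / girl_prefs map each person to their ranked preference list.
    Returns a dict boy -> girl.
    """
    # rank[g][b] = position of boy b in girl g's list (lower = more preferred)
    rank = {g: {b: i for i, b in enumerate(prefs)}
            for g, prefs in girl_prefs.items()}
    next_idx = {b: 0 for b in boy_prefs}   # next girl on each boy's list
    maybe = {}                             # girl -> her current "maybe" suitor
    free = list(boy_prefs)
    while free:
        b = free.pop()
        g = boy_prefs[b][next_idx[b]]
        next_idx[b] += 1                   # b crosses g off if she says no
        if g not in maybe:
            maybe[g] = b                   # "maybe, come back tomorrow"
        elif rank[g][b] < rank[g][maybe[g]]:
            free.append(maybe[g])          # dump the old suitor
            maybe[g] = b
        else:
            free.append(b)                 # "no way I'm marrying you"
    return {b: g for g, b in maybe.items()}
```

With complete, equal-length lists this terminates, matches everyone, and produces the boy-optimal stable matching, exactly as proved in the sections that follow.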
§ Invariant P: if a girl G ever rejected a boy B, then she has a suitor whom she prefers to B. To prove that this is indeed an invariant, induct on time. At the beginning, no girl has rejected any boy, so it's vacuously true. Assume P holds at the end of day d. At the end of day d+1, there are two cases. Case 1: G rejects B on day d+1; then she must have a better suitor that day, hence P is true. Case 2: G rejected B on some day before d+1; then G must have had a better suitor B' at the end of day d by the induction hypothesis, and on day d+1 she either still has B' or someone even better, B'', came along. § Everyone is married Proof by contradiction. Assume not everyone was married, i.e. some boy B is not married. (If there is no boy who is not married then everyone is married.) If he was not married at the end, then he must have been rejected by everyone. If he were not rejected by everyone, then he would be under someone's balcony trying to serenade them. That he is unmatched means that all the girls have rejected him. By the invariant, this means that every girl has somebody she prefers to B, which is not possible, because that would mean that every girl was married while some boy was not. That's not possible as there are an equal number of boys and girls. Sid note: I don't buy this! we need to show that in the course of the algorithm, it's impossible for a girl to end up empty handed. I'd prove this by noticing that at each round, a girl acts like some kind of "gated compare and swap" where only the highest value is allowed to be put into the mutex, and all of the others are rejected. Thus, if there is a girl who has multiple writes, she will only allow one of the writes to happen, and permanently disallow the other writes. Thus, the other writes have to move to other girls.
If bob had serenaded her and was not rejected, then they would have been married! (1) If gail rejected bob, then gail has married someone she likes better than bob, since she rejected him. Thus, gail and bob can't be a rogue couple because she likes her spouse more than bob. (2) bob never serenaded gail. This means that he married someone whom he prefers more than gail, because he never reached gail on his list. § Fairness The girls get to pick the best ones who come to them. The boys get to go out and try their first choice though. A girl may wait for her Mr. Right who will never come along, and thus satisfice. Which is better, proposers or acceptors? Sociological question! Here it turns out that boys have all the power. Let S be the set of all stable matchings. We know that S is not empty because the algorithm is able to produce at least one stable matching. For each person P, we define the realm of possibility for P as the set Q of people that they can be matched to in a stable matching. So Q_P \equiv \{ q : (P, q) \in M, M \in S \}. That is, there's a stable matching where you're married to them. A person's optimal mate is their most favourite in the realm of possibility. Their pessimal mate is their least favourite in the realm of possibility. § Theorem: No two boys can have the same optimal mate. Assume two boys b^\star and b do have the same optimal mate g. WLOG let g prefer b^\star over b. Now, there exists some stable matching where g is matched with b, since g is b's optimal mate and hence in the realm of possibility of b. However, this matching is unstable because (b^\star, g) is a rogue couple: g prefers b^\star over b, and b^\star prefers g over his own mate, since g is his optimal (most favourite feasible) mate! § Theorem: No two girls can have the same optimal mate Redo the previous proof switching girl and boy. It's not a proof about the algorithm, but about the structure of stable matchings themselves. § Theorem: The algorithm matches every boy with his optimal mate Proof by contradiction. Assume that Nicole is optimal for Keith, but Keith winds up not marrying Nicole.
This means he must have crossed off Nicole on some day (a bad day). Note that he must have gotten to Nicole, because no girl he prefers over Nicole would have led to a stable marriage, and so none of them can be the algorithm's output. Thus, all girls he prefers above Nicole must reject him at some step of the algorithm until he reaches Nicole. We assume that in this instance of the algorithm, he does not get Nicole, thus Nicole too must have rejected him. Let us assume that Keith gets rejected by Nicole on the earliest bad day. When Nicole rejects Keith, this means that Nicole had a suitor she likes better than Keith. Call him Tom, so Tom >_{Nicole} Keith. Furthermore, since this is the earliest bad day, Tom has not yet crossed off his optimal girl, and thus Nicole must be the "best girl" for Tom --- either out of his league, or his optimal feasible mate. Thus, Nicole >=_{Tom} (Tom's optimal feasible mate). But this means that in a stable matching containing (Nicole, Keith), we would have (Nicole, Tom) be a rogue couple! This contradicts the fact that Nicole is optimal for Keith. Proof from Optimal Stable Matching Video, MIT 6.042J Mathematics for Computer Science, Spring 2015. § We match every girl with her pessimal mate § Theorem: matchings form a lattice Let M = ((b_1, g_1), (b_2, g_2), \dots, (b_n, g_n)) and M' = ((b_1, g'_1), (b_2, g'_2), \dots, (b_n, g'_n)) be two stable matchings. Then M \lor M' \equiv ((b_1, \max_{b_1}(g_1, g_1')), \dots, (b_n, \max_{b_n}(g_n, g_n'))) is a stable matching. § Step 1: This is a matching First we show that it is indeed a matching: the marriages are all monogamous. Assume not: say g_1 = \max_{b_1}(g_1, g_1') = \max_{b_2}(g_2, g_2') = g_2', so b_1 and b_2 would both be matched to g_1. Since (b_2, g_2') is the match in M' and g_1 = g_2', the pair (b_2, g_1) belongs to M'; also g_1 >_{b_1} g_1' from the assumption. Since the matching M' is stable, we need to ensure that (g_1, b_1) is not a rogue couple; b_1 prefers g_1 over his M'-mate g_1'. Thus, we must have b_2 >_{g_1} b_1 for (b_2, g_1) to be stable.
However, M is stable and (b_1, g_1) \in M, with b_2 >_{g_1} b_1. For (b_1, g_1) to be stable, we must ensure that (b_2, g_1) is not a rogue couple; since g_1 prefers b_2 over b_1, we must have g_2 >_{b_2} g_1, i.e. b_2 prefers his own M-mate. But this contradicts the equation \max_{b_2}(g_2, g_2') = g_2' = g_1. § Sid musings The girls are monotonic filters, in that they only allow themselves to match higher. They propagate (in the Kmett/propagators sense of the word) information to all lower requests that they will not match. The boys are in some kind of atomic write framework with conflict resolution, where a girl allows a boy to "write" into her 'consider this boy' state if the boy is accepted by her filter. MIT OCW Math for comp sci: lecture 7 --- matching SPOJ problem Knuth: Stable matching and its relation to other combinatorial problems Math for Comp Sci: Optimal stable matching
Konvexgeometrie | EMS Press Paul R. Goodey The meeting {\em Konvexgeometrie}, organised by K.\ M.\ Ball, P.\ Goodey and P.\ M.\ Gruber, was held from December 17 to December 23, 2006. The meeting was attended by some 40 participants working in all areas of convex geometry. The program involved 10 plenary lectures of one hour's duration and about 15 shorter lectures. Some highlights of the program were as follows. Grigoris Paouris explained his proof that if K is an isotropic convex body of volume 1 in {\mathbf R}^n and X is uniformly distributed on K, then for some absolute constant C, \mbox{P}(|X| > C \sigma t) \leq e^{-\sqrt{n} t} for all t > 1, where \sigma^2 is the second moment of |X|. The estimate is optimal apart from the value of C. Olivier Gu\'{e}don then explained joint work with Fleury and Paouris, showing how the method of Paouris yields the central limit theorem for convex bodies, conjectured in 1996 by Ball and recently proved by Klartag. Assaf Naor described his new results with Manor Mendel which now give a complete picture of the non-linear Dvoretzky Theorem. 20 years ago, Bourgain, Figiel and Milman proved that any n-point metric space has a subset of size about \log n which can be embedded in Hilbert space with a constant distortion. In 2003, Bartal, Linial, Mendel and Naor discovered a remarkable threshold phenomenon: if we allow distortion larger than a factor 2, there are subsets of size a power of n which are embeddable in Hilbert space. In the recent work the authors determine exactly the correct dependence of the power on the distortion: for each \epsilon > 0 there are subsets of size n^{1-\epsilon} which are embeddable with distortion O(1/\epsilon). Mark Rudelson described his recent estimates for the smallest singular values of almost square random (Gaussian) matrices.
Considerably sharpening earlier work of Litvak, Pajor, himself, Tomczak and Vershynin and (in a slightly different direction) Candes and Tao, he established strong bounds for the probability that a random N \times n matrix maps a point of the unit sphere in {\mathbf R}^n to a point of small \ell_1^N norm. This is equivalent to understanding the maximum radius of an almost full-dimensional random section of the cross-polytope. The passage from such estimates to singular numbers uses standard techniques. Ralph Howard spoke about his recent work (joint with Paul Goodey) on bodies of constant brightness. This follows Howard's recent solution of the problem (dating back to 1926) of whether there exist bodies in 3-space which are of constant width and constant brightness but which are not Euclidean balls. Richard Gardner gave an account of several new algorithms for the reconstruction of convex bodies from their x-rays in a small number of directions. The new algorithms are robust in that they can accommodate noisy data and are sufficiently simple that convergence proofs are rendered quite straightforward. Matthias Reitzner gave a survey of the well-developed theory of random polytopes focussing on deviation estimates for the numbers of faces of given dimension. This covers work of himself, B\'{a}r\'{a}ny, Vu and others. Ryabogin and Zvavitch described their joint work with Nazarov solving a conjecture of Weil about the characterisation of zonoids. Semyon Alesker gave an impromptu evening presentation of the recent work of Greg Kuperberg who has given a very short and self-contained proof of the Bourgain-Milman theorem on the product of the volumes of a convex body and its polar. Mahler conjectured that this volume product is minimised by a simplex: the Bourgain-Milman theorem proves this up to a factor of (\mbox{constant})^n in n dimensions, which is what is needed for most applications. Keith M. Ball, Paul R. Goodey, Peter M. Gruber, Konvexgeometrie. Oberwolfach Rep.
3 (2006), no. 4, pp. 3321–3396
Slip on an active wedge thrust from geodetic observations of the 8 October 2005 Kashmir earthquake | Geology | GeoScienceWorld Rebecca Bendick; University of Montana, Department of Geosciences, Missoula, Montana 59812-1296, USA Roger Bilham; University of Colorado, Department of Geological Sciences, 399 UCB, Boulder, Colorado 80309-0399, USA M. Asif Khan; University of Peshawar, National Centre of Excellence in Geology, Peshawar, NWFP 20005, Pakistan S. Faisal Khan Rebecca Bendick, Roger Bilham, M. Asif Khan, S. Faisal Khan; Slip on an active wedge thrust from geodetic observations of the 8 October 2005 Kashmir earthquake. Geology 2007; 35 (3): 267–270. doi: https://doi.org/10.1130/G23158A.1 By combining global positioning system observations of surface displacements and the locations of aftershocks, we infer that the 8 October 2005 Kashmir earthquake occurred on multiple fault planes. Mean slip of ∼5.1 m occurred on a rupture between Bagh and Balakot with strike 331° and dip 29°. Additional slip occurred at depth on a NNE-dipping fault plane extending WNW from Balakot, and on an intersecting nearly flat dislocation at ∼5 km depth, forming an active wedge thrust. Both the simple fault plane and the blind wedge accommodate convergence between Peshawar and Leh, Ladakh, accumulating at 7 ± 2 mm/yr, suggesting a 680 ± 150 yr recurrence interval for Kashmir 2005-like events.
Talk:Label - PyMOLWiki New Page Overview This is the content for the new labels page. label allows one to configure the appearance of text labels for PyMOL objects. It labels one or more atom properties over a selection using the python evaluator with a separate name space for each atom. The symbols defined in the name space are: All strings in the expression must be explicitly quoted. This operation typically takes several seconds per thousand atoms altered. To clear labels, simply omit the expression or set it to the empty string. Label is great for labeling atoms, residues and objects. For a scene label, see Pseudoatom. label (selection),expression There are 10 different scalable fonts. set label_font_id, number where number is 5 through 14. UTF8 Fonts New fonts in PyMol. Notice the alpha and beta characters. Newer versions support UTF8 fonts; set label_font_id from above to 15 or 16. The good news about the UTF8 fonts is that they support the alpha and beta characters. (See image.) Find the code for your character at Unicode Charts. The Angstrom character, Å, is u"\u00c5" and ± is u"\u00b1". Label the selection. For simple strings, just type the string in double quotes, -- "like this" -- and append .encode('utf-8') to the end of it -- "like this".encode('utf-8'). A working example is shown here, label i. 30, "4.1" + u"\u00c5\u00b2 \u00b1 0.65 \u00c5\u00b2 ".encode('utf-8') The font size can be adjusted with set label_size, number where number is the point size (or -number for Angstroms). Set a label's color with set label_color, color where color is a valid PyMol color.
If the coloring of the labels is not exactly the same as you'd expect (say black turns out grey, or red turns out pink), then try the following settings: unset ray_label_specular To set what the label reads (see above): label selection, expression label resi 10, b To position labels, ctrl-middle-click-and-drag to position the label in space. (On Windows systems this appears to be shift-left-click-and-drag, presumably because those mice lack a true middle button.) ctrl-shift-left-click-and-drag alters a label's z-plane. (Windows only? This may use the middle button, rather than shift-left, under *NIX / 3-button mice systems.) The following image was created with and finally, some labels were moved around in edit_mode. These examples show how to label a selection with atom properties such as the ID or the residue name and number: label SELECTION, " %s" % ID label SELECTION, " %s:%s %s" % (resi, resn, name)
Asset/Liability Management Definition Asset/liability management is the process of managing the use of assets and cash flows to reduce the firm’s risk of loss from not paying a liability on time. Well-managed assets and liabilities increase business profits. The asset/liability management process is typically applied to bank loan portfolios and pension plans. It also involves the economic value of equity. Understanding Asset/Liability Management The concept of asset/liability management focuses on the timing of cash flows because company managers must plan for the payment of liabilities. The process must ensure that assets are available to pay debts as they come due and that assets or earnings can be converted into cash. The asset/liability management process applies to different categories of assets on the balance sheet. [Important: A company can face a mismatch between assets and liabilities because of illiquidity or changes in interest rates; asset/liability management reduces the likelihood of a mismatch.] Factoring in Defined Benefit Pension Plans A defined benefit pension plan provides a fixed, pre-established pension benefit for employees upon retirement, and the employer carries the risk that assets invested in the pension plan may not be sufficient to pay all benefits. Companies must forecast the dollar amount of assets available to pay benefits required by a defined benefit plan. Assume, for example, that a group of employees must receive a total of $1.5 million in pension payments starting in 10 years. The company must estimate a rate of return on the dollars invested in the pension plan and determine how much the firm must contribute each year before the first payments begin in 10 years. Asset/liability management is also used in banking. A bank must pay interest on deposits and also charge a rate of interest on loans. 
To manage these two variables, bankers track the net interest margin or the difference between the interest paid on deposits and interest earned on loans. Assume, for example, that a bank earns an average rate of 6% on three-year loans and pays a 4% rate on three-year certificates of deposit. The interest rate margin the bank generates is 6% - 4% = 2%. Since banks are subject to interest rate risk, or the risk that interest rates increase, clients demand higher interest rates on their deposits to keep assets at the bank. An important ratio used in managing assets and liabilities is the asset coverage ratio which computes the value of assets available to pay a firm’s debts. The ratio is calculated as follows: \begin{aligned} &\text{Asset Coverage Ratio} = \frac{ ( \text{BVTA} - \text{IA} ) - ( \text{CL} - \text{STDO}) }{ \text{Total Debt Outstanding} } \\ &\textbf{where:} \\ &\text{BVTA} = \text{book value of total assets} \\ &\text{IA} = \text{intangible assets} \\ &\text{CL} = \text{current liabilities} \\ &\text{STDO} = \text{short term debt obligations} \\ \end{aligned} Tangible assets, such as equipment and machinery, are stated at their book value, which is the cost of the asset less accumulated depreciation. Intangible assets, such as patents, are subtracted from the formula because these assets are more difficult to value and sell. Debts payable in less than 12 months are considered short-term debt, and those liabilities are also subtracted from the formula. The coverage ratio computes the assets available to pay debt obligations, although the liquidation value of some assets, such as real estate, may be difficult to calculate. There is no rule of thumb as to what constitutes a good or poor ratio since calculations vary by industry.
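The formula above is simple to compute. A small Python sketch with illustrative figures of my own (not from the article):

```python
def asset_coverage_ratio(bvta, ia, cl, stdo, total_debt):
    """Asset coverage ratio: ((BVTA - IA) - (CL - STDO)) / total debt,
    where BVTA = book value of total assets, IA = intangible assets,
    CL = current liabilities, STDO = short-term debt obligations."""
    return ((bvta - ia) - (cl - stdo)) / total_debt

# Illustrative figures (in $ millions): 300 of assets at book value,
# 50 intangible, 80 current liabilities of which 30 is short-term debt,
# against 100 of total debt outstanding.
ratio = asset_coverage_ratio(300, 50, 80, 30, 100)
```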
Asset/liability management reduces the risk that a company may not meet its obligations in the future. The success of bank loan portfolios and pension plans depends on asset/liability management processes. Banks track the difference between the interest paid on deposits and interest earned on loans to ensure that they can pay interest on deposits and to determine what rate of interest to charge on loans. [Fast Fact: Asset/liability management is a long-term strategy to manage risks. For example, a homeowner must ensure that they have enough money to pay their mortgage each month by managing their income and expenses for the duration of the loan.]
Specker sequence - Wikipedia A Specker sequence. The nth digit of xk is 4 if n ≤ k and the nth Turing machine in a computable Gödel numbering halts on input n after k steps; otherwise it is 3. In computability theory, a Specker sequence is a computable, monotonically increasing, bounded sequence of rational numbers whose supremum is not a computable real number. The first example of such a sequence was constructed by Ernst Specker (1949). The existence of Specker sequences has consequences for computable analysis. The fact that such sequences exist means that the collection of all computable real numbers does not satisfy the least upper bound principle of real analysis, even when considering only computable sequences. A common way to resolve this difficulty is to consider only sequences that are accompanied by a modulus of convergence; no Specker sequence has a computable modulus of convergence. More generally, a Specker sequence is called a recursive counterexample to the least upper bound principle, i.e. a construction that shows that this theorem is false when restricted to computable reals. The least upper bound principle has also been analyzed in the program of reverse mathematics, where the exact strength of this principle has been determined. In the terminology of that program, the least upper bound principle is equivalent to ACA0 over RCA0. In fact, the proof of the forward implication, i.e. that the least upper bound principle implies ACA0, is readily obtained from the textbook proof (see Simpson, 1999) of the non-computability of the supremum in the least upper bound principle. The following construction is described by Kushner (1984). Let A be any recursively enumerable set of natural numbers that is not decidable, and let (ai) be a computable enumeration of A without repetition. 
Define a sequence (qn) of rational numbers with the rule q_n = \sum_{i=0}^{n} 2^{-a_i-1}. It is clear that each qn is nonnegative and rational, and that the sequence qn is strictly increasing. Moreover, because ai has no repetition, it is possible to estimate each qn against the series \sum_{i=0}^{\infty} 2^{-i-1} = 1. Thus the sequence (qn) is bounded above by 1. Classically, this means that qn has a supremum x. It is shown that x is not a computable real number. The proof uses a particular fact about computable real numbers. If x were computable then there would be a computable function r(n) such that |qj - qi| < 1/n for all i,j > r(n). To compute r, compare the binary expansion of x with the binary expansion of qi for larger and larger values of i. The definition of qi causes a single binary digit to go from 0 to 1 each time i increases by 1. Thus there will be some n where a large enough initial segment of x has already been determined by qn that no additional binary digits in that segment could ever be turned on, which leads to an estimate on the distance between qi and qj for i,j > n. If any such function r were computable, it would lead to a decision procedure for A, as follows. Given an input k, compute r(2k+1). If k were to appear in the sequence (ai), this would cause the sequence (qi) to increase by 2^{-k-1}, but this cannot happen once all the elements of (qi) are within 2^{-k-1} of each other. Thus, if k is going to be enumerated into ai, it must be enumerated for a value of i less than r(2k+1). This leaves a finite number of possible places where k could be enumerated. To complete the decision procedure, check these in an effective manner and then return 0 or 1 depending on whether k is found. B.A. Kushner (1984), Lectures on constructive mathematical analysis, American Mathematical Society, Translations of Mathematical Monographs v. 60. Jakob G.
Simonsen (2005), "Specker sequences revisited", Mathematical Logic Quarterly, v. 51, pp. 532–540. doi:10.1002/malq.200410048 S. Simpson (1999), Subsystems of second-order arithmetic, Springer. E. Specker (1949), "Nicht konstruktiv beweisbare Sätze der Analysis", Journal of Symbolic Logic, v. 14, pp. 145–158.
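As an illustration of the construction (of the sequence itself, not of non-computability, which requires a genuinely undecidable set A), the partial sums q_n can be computed exactly for any concrete repetition-free enumeration. The short enumeration below is a toy stand-in for (a_i):

```python
from fractions import Fraction

def specker_partial_sums(enumeration):
    """Return q_0, ..., q_n where q_n = sum_{i<=n} 2^(-a_i - 1).

    `enumeration` is a repetition-free list standing in for a computable
    enumeration (a_i) of an r.e. set A; only with a genuinely undecidable A
    would sup q_n be a non-computable real.
    """
    sums, q = [], Fraction(0)
    for a in enumeration:
        q += Fraction(1, 2 ** (a + 1))  # each step turns on one new binary digit
        sums.append(q)
    return sums

# Toy enumeration (any repetition-free list of naturals works).
qs = specker_partial_sums([3, 0, 5, 1])
# The sequence is strictly increasing and bounded above by 1, as in the text.
assert all(q1 < q2 for q1, q2 in zip(qs, qs[1:]))
assert all(q < 1 for q in qs)
print(qs)  # [Fraction(1, 16), Fraction(9, 16), Fraction(37, 64), Fraction(53, 64)]
```

Using exact `Fraction` arithmetic mirrors the fact that each q_n is a rational number given explicitly by a finite sum.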
If a stone is thrown vertically upward from the surface of the moon with a velocity of 10 m/s, its height (in meters) after t seconds is h(t) = 10t - 0.83t^2. (a) What is the velocity of the stone after 2 seconds? (b) What is the velocity of the stone after it has risen 25 m?
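A quick numerical check of both parts: v(t) = h'(t) = 10 - 1.66t, and for part (b) we solve h(t) = 25 and take the earlier root (the stone on its way up). This is a worked-solution sketch, not part of the original problem statement:

```python
import math

# h(t) = 10 t - 0.83 t^2  (meters), so v(t) = h'(t) = 10 - 1.66 t
def v(t):
    return 10 - 1.66 * t

# (a) velocity after 2 seconds
v_a = v(2.0)                      # 10 - 3.32 = 6.68 m/s

# (b) time on the way up at which h(t) = 25:
# solve 0.83 t^2 - 10 t + 25 = 0 and take the smaller root
disc = 10**2 - 4 * 0.83 * 25      # discriminant = 17
t_up = (10 - math.sqrt(disc)) / (2 * 0.83)
v_b = v(t_up)                     # algebraically equal to sqrt(17) ~= 4.12 m/s

print(round(v_a, 2), round(v_b, 2))  # 6.68 4.12
```

Note the cancellation in part (b): v = 10 - 1.66 t_up = sqrt(17) exactly, so the answer does not depend on the rounding of the root.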
Lenticular lens - Wikipedia A series of cylindrical lenses molded in a plastic substrate. A lenticular lens is an array of lenses, designed so that when viewed from slightly different angles, different parts of the image underneath are shown.[1][2] The most common example is the lenses used in lenticular printing, where the technology is used to give an illusion of depth, or to make images that appear to change or move as the image is viewed from different angles. Lenticular printing Main article: Lenticular printing Lenticular printing is a multi-step process consisting of creating a lenticular image from at least two existing images, and combining it with a lenticular lens. This process can be used to create various frames of animation (for a motion effect), to offset the various layers at different increments (for a 3D effect), or simply to show a set of alternate images which may appear to transform into each other. Corrective lenses Lenticular lenses are sometimes used as corrective lenses for improving vision. A bifocal lens could be considered a simple example. Lenticular eyeglass lenses have been employed to correct extreme hyperopia (farsightedness), a condition often created by cataract surgery when lens implants are not possible. To limit the great thickness and weight that such high-power lenses would otherwise require, all the power of the lens is concentrated in a small area in the center. In appearance, such a lens is often described as resembling a fried egg: a hemisphere atop a flat surface. The flat surface or "carrier lens" has little or no power and is there merely to fill up the rest of the eyeglass frame and to hold or "carry" the lenticular portion of the lens.
This portion is typically 40 mm (1.6 in) in diameter but may be smaller, as little as 20 mm (0.79 in), in sufficiently high powers. These lenses are generally used for plus (hyperopic) corrections of about 12 diopters or higher. A similar sort of eyeglass lens is the myodisc, sometimes termed a minus lenticular lens, used for very high negative (myopic) corrections. More aesthetic aspheric lens designs are sometimes fitted.[3] A film made of cylindrical lenses molded in a plastic substrate, as shown in the picture above, can be applied to the inside of standard glasses to correct for diplopia. The film is typically applied on the side of the eye that retains good muscle control of direction. Diplopia (also known as double vision) is typically caused by a sixth cranial nerve palsy that prevents full control of the muscles that control the direction the eye is pointed in. These films are specified by the number of degrees of correction needed; the higher the degree, the greater the directive correction. Lenticular screens Screens with a molded lenticular surface are frequently used with projection television systems. In this case, the purpose of the lenses is to focus more of the light into a horizontal beam and allow less of the light to escape above and below the plane of the viewer. In this way, the apparent brightness of the image is increased. Ordinary front-projection screens can also be described as lenticular. In this case, rather than transparent lenses, the shapes formed are tiny curved reflectors. Lenticular screens are most often used for ambient-light-rejecting projector screens for ultra-short-throw projectors. The lenticular structure of the surface reflects the light from the projector to the viewer without reflecting the light from sources above the screen.
3D television As of 2010, a number of manufacturers were developing auto-stereoscopic high definition 3D televisions, using lenticular lens systems to avoid the need for special spectacles. One of these, Chinese manufacturer TCL, was selling a 42-inch (110 cm) LCD model—the TD-42F—in China for around US$20,000.[4] In 2021 only specialist manufacturers are making these kinds of display.[5] Lenticular color motion picture processes Lenticular lenses were used in early color motion picture processes of the 1920s such as the Keller-Dorian system and Kodacolor. This enabled color pictures with the use of merely monochrome film stock.[6] Angle of view of a lenticular print The angle of view of a lenticular print is the range of angles within which the observer can see the entire image. This is determined by the maximum angle at which a ray can leave the image through the correct lenticule. Angle within the lens The diagram at right shows in green the most extreme ray within the lenticular lens that will be refracted correctly by the lens. This ray leaves one edge of an image strip (at the lower right) and exits through the opposite edge of the corresponding lenticule. Here R is the angle between the extreme ray and the normal at the point where it exits the lens, p is the pitch, or width of each lenticular cell, r is the radius of curvature of the lenticule, e is the thickness of the lenticular lens, h is the thickness of the substrate below the curved surface of the lens, and n is the lens's index of refraction.
These quantities are related by R = A - arctan(p/h), where A = arcsin(p/(2r)), h = e - f is the distance from the back of the grating to the edge of the lenticule, and f = r - √(r² - (p/2)²). Angle outside the lens The angle outside the lens is given by refraction of the ray determined above. The full angle of observation is O = 2(A - I), where I is the angle between the extreme ray and the normal outside the lens. From Snell's law, I = arcsin(n sin(R) / n_a), where n_a ≈ 1.003 is the index of refraction of air. Consider a lenticular print that has lenses with 336.65 µm pitch, 190.5 µm radius of curvature, 457 µm thickness, and an index of refraction of 1.557. The full angle of observation O would be 64.6°. Rear focal plane of a lenticular network The focal length of the lens is calculated from the lensmaker's equation, which in this case simplifies to F = r/(n - 1), where F is the focal length of the lens. The back focal plane is located at a distance BFD from the back of the lens: BFD = F - e/n. A negative BFD indicates that the focal plane lies inside the lens. In most cases, lenticular lenses are designed to have the rear focal plane coincide with the back plane of the lens. The condition for this coincidence is BFD = 0, i.e. e = nr/(n - 1). This equation imposes a relation between the lens thickness e and its radius of curvature r. The lenticular lens in the example above has focal length 342 µm and back focal distance 48 µm, indicating that the focal plane of the lens falls 48 micrometers behind the image printed on the back of the lens. See also: Fresnel lens, a different 'flat' lens technology ^ "Lenticular, how it works". Lenstar.org.
Archived from the original on 3 May 2016. Retrieved 25 May 2017. ^ DIY Printed Holographic Display (Lenticular Optics Explained), archived from the original on 21 December 2021, retrieved 8 May 2021. ^ Jalie, Mo (2003). Ophthalmic Lenses and Dispensing. Elsevier Health Sciences. p. 178. ISBN 0-7506-5526-7. ^ "Give Me 3D TV, Without The Glasses". Archived from the original on 13 February 2010. Retrieved 6 May 2010. ^ Looking Glass Factory. "The World's Leading Holographic Display". Retrieved 8 May 2021. ^ "Lenticular films on Timeline of Historical Film Colors". Archived from the original on 9 July 2014. Retrieved 29 June 2014. Bartholdi, Paul (1997). "Quelques notions d'optique" [Some notions of optics] (in French). Observatoire de Genève. Retrieved 19 December 2007. Soulier, Bernard (2002). "Principe de fonctionnement de l'optique lenticulaire" [How lenticular optics work] (in French). Séquence 3d. Retrieved 22 December 2007. Okoshi, Takanori. Three-Dimensional Imaging Techniques. Atara Press (2011), ISBN 978-0-9822251-4-1. Lecture slides covering lenticular lenses (PowerPoint) by John Canny. Choosing the right lenticular sheet for inkjet printers. http://www.microlens.com/pdfs/history_of_lenticular.pdf
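The worked example above (336.65 µm pitch, 190.5 µm radius of curvature, 457 µm thickness, index 1.557) can be checked numerically by chaining the formulas for R, A, h, f, the refracted angle I, and the focal quantities F and BFD:

```python
import math

# Worked example from the article: pitch, radius of curvature, thickness (um),
# lens index of refraction, and index of refraction of air.
p, r, e, n, n_a = 336.65, 190.5, 457.0, 1.557, 1.003

f = r - math.sqrt(r**2 - (p / 2)**2)   # sagitta of the lenticule
h = e - f                              # back of grating to edge of lenticule
A = math.asin(p / (2 * r))             # A = arcsin(p / 2r)
R = A - math.atan(p / h)               # extreme ray angle inside the lens
I = math.asin(n * math.sin(R) / n_a)   # refracted angle outside (Snell's law)
O = 2 * (A - I)                        # full angle of observation

F = r / (n - 1)                        # focal length, simplified lensmaker's equation
BFD = F - e / n                        # back focal distance

print(round(math.degrees(O), 1))       # 64.6 (degrees), matching the article
print(round(F), round(BFD))            # 342 48 (um), matching the article
```

The computed values reproduce the article's 64.6° angle of observation, 342 µm focal length, and 48 µm back focal distance.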
Mini-Workshop: Surface Modeling and Syzygies | EMS Press Austrian Academy of Sciences, Linz, Austria A central problem in geometric modeling is to find the implicit equation for a curve or surface defined by a rational map. For surfaces, the two most common situations are the images of parameterizations \P^1 \times \P^1 \dashrightarrow \P^3 and \P^2 \dashrightarrow \P^3 . The implicitization problem involves interesting commutative algebra. Implicitization is a problem in elimination theory, which allows one to use standard tools such as Gr\"obner bases or resultants. The surprise is that more sophisticated tools from commutative algebra are also being used, and syzygies play a leading role. We now describe some aspects of this, referring to \cite{c01} for a detailed survey and to \cite{bj} for a more general algebraic point of view. In \cite{sc_1}, Sederberg and Chen introduced the method of moving curves and surfaces. For a curve parametrization (a,b,c) , a {\it moving line} that follows the parametrization is an element of the syzygy module on the generators of the ideal I = \langle a,b,c\rangle . The syzygy module of I is free of rank two, and a Hilbert function computation shows that if a,b,c have degree n without common factors, then there is an n -dimensional vector space of moving lines of degree n-1 . Write each moving line as A_i(s,t)x+B_i(s,t)y+C_i(s,t)z , where x,y,z are placeholders, representing the fact that A_i\cdot a + B_i \cdot b +C_i \cdot c = 0 . By collecting coefficients, we can write A_i(s,t)x+B_i(s,t)y+C_i(s,t)z = \sum_{j=0}^{n-1} L_{ij}(x,y,z)s^jt^{n-1-j}. A main theorem of \cite{csc_1} is that the determinant of the n\times n matrix of the L_{ij} is a power of the implicit equation for the image.
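Since implicitization is a problem in elimination theory, a small instance can be carried out directly with resultants. The cuspidal cubic parametrization (a, b, c) = (t^2, t^3, 1), written affinely, is an illustrative choice of ours, not an example from the workshop:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

# Affine curve parametrization x = a(t)/c(t), y = b(t)/c(t) with
# (a, b, c) = (t^2, t^3, 1): the cuspidal cubic.
a, b, c = t**2, t**3, sp.Integer(1)

# Eliminate t: the resultant of c*x - a and c*y - b with respect to t
# vanishes exactly on the image of the parametrization.
implicit = sp.resultant(c*x - a, c*y - b, t)
print(sp.factor(implicit))  # y^2 - x^3 up to sign

# Sanity check: the implicit equation vanishes at every parametrized point.
assert implicit.subs({x: t**2, y: t**3}).expand() == 0
```

For higher degrees the resultant acquires extraneous factors in the presence of base points, which is exactly where the syzygy-based methods discussed at the workshop come into their own.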
For a surface parametrization given by (a,b,c,d) , a moving plane that follows the parametrization is an element of the syzygy module on the generators of the ideal I = \langle a,b,c,d\rangle , and a moving quadric that follows the parametrization is an element of the syzygy module on the generators of I^2 . The moving surface method of \cite{cgz} requires knowing that a syzygy of the form (c_1 a + c_2 b + c_3 c + c_4 d) a + (c_5 b + c_6 c + c_7 d) b + (c_8 c + c_9 d) c = 0 comes from the Koszul complex when a,b,c,d have no common zeros. In the case of \P^2 (i.e., when a,b,c,d are homogeneous polynomials), this is proved by observing that a,b,c form a regular sequence, so that every syzygy comes from the Koszul complex. For \P^1 \times \P^1 (i.e., when a,b,c,d are bihomogeneous polynomials), the Koszul complex is not exact in all bidegrees, but by vanishing theorems for cohomology, it can be seen that the sequence is exact in the bidegree of interest. The main result of \cite{cgz} involves parameterizations without base points. When base points are allowed, \cite{bcd} uses results of \cite{cs} to show that for \P^2 , the moving surface method of \cite{cgz} applies when the base points are local complete intersections. This is also true for \P^1\times\P^1 , by \cite{ahw}. The proofs in \cite{bcd} use results about the regularity of I and I^2 ; in a similar way, the proofs in \cite{ahw} use results about bigraded regularity. In \cite{ccl} a special case of the Serre conjecture is used to conclude that syzygy modules are always free for affine surface parameterizations. Interestingly, the abstract general setting for this application of syzygies to the implicitization problem is given by the approximation complexes defined by W. Vasconcelos and coauthors \cite{hsv,v}. The application of this method to compute the implicit equation of a parameterized hypersurface has been developed recently in \cite{bj, bc, ch}.
It has been implemented in the case of finitely many nice base points by Bus\'e \cite{b}. The implicit equation is obtained from a double complex which provides a resolution of the blow-up algebra of the ideal of base points, in case this ideal is of linear type, i.e., in case the associated Rees algebra and symmetric algebra coincide. The article \cite{bchj} studies optimal degree estimates and the extraneous factors that appear when the base points are almost local complete intersections, as well as a link with some particular resultant computations. As noted in \cite{bj}, the set of all moving hypersurfaces that follow a given hypersurface parametrization forms an ideal, namely the ideal of relations defining the Rees algebra associated with the parametrization. The structure of this ideal is investigated in some special cases in \cite{coxrees} using local cohomology, local duality and the \emph{Sylvester forms} introduced by Jouanolou \cite{j}. The paper \cite{coxrees} uses results from the commutative algebra literature (the paper \cite{morey} of Morey and Ulrich) and from the geometric modeling literature (the paper \cite{sgd} of Sederberg, Goldman and Du). Understanding the general case for curves is an interesting open problem in both commutative algebra and geometric modeling. The surface case is completely open. The parametrization is also not an intrinsic property of the parametrized variety, in contrast to the implicit equation. In some cases, it seems worthwhile to replace the given parametrization by a simpler one before one attempts to implicitize. In the curve case and in the surface case, we can simplify the parametrization (if possible) without implicitizing, as shown in \cite{sch3}. The smallest possible parametric degree has been studied in \cite{sch1}.
Another line of research, developed in \cite{ek, sty}, aims to determine a priori the Newton polytope of the resulting equation, translating the implicitization problem into a linear algebra interpolation problem. These articles are based on the theory of sparse resultants and the use of tropical geometry. Numerical issues in the implementation of the theoretical results are also relevant \cite{sch2}. David A. Cox, Henry K. Schenck, Josef Schicho, Alicia Dickenstein, Mini-Workshop: Surface Modeling and Syzygies. Oberwolfach Rep. 4 (2007), no. 4, pp. 3181–3208
This was the most recent in a long series of annual conferences in Oberwolfach covering all areas of algebraic and geometric topology, and the last such conference before going over to the new two-year cycle of meetings. According to the records kept in the library of the institute, the first topology meeting was held in 1963, and meetings have been held every year since then except for 1968. None of the participants in the first meeting is still active in research. Of the people present at this meeting, it was Rainer Vogt who has been attending this series for the longest period: since 1969. Every year for the last twelve years, a ``keynote speaker'' has been chosen to give some focus to the topology meeting. Thus while we do have talks which cover all areas of algebraic and geometric topology, we try to focus on one particular area of current interest. This year, the keynote speaker was Yair Minsky, who talked about the classification of non-compact hyperbolic 3-manifolds N with finitely generated fundamental group. The two main conjectures (now proved) in this area are Marden's Tameness Conjecture and Thurston's Ending Lamination Conjecture. The Tameness Conjecture is about the topology of N , and asserts that any end of N is {\em topologically tame}, i.e., is homeomorphic to S \times \mathbb{R} for some closed surface S . The Ending Lamination Conjecture is about the geometry of N , and concerns the data needed to determine N up to isometry. The most interesting case is that of a geometrically infinite (and topologically tame) end \varepsilon of N . Thurston showed how to associate to such an \varepsilon a geodesic lamination \lambda on S , the {\em ending lamination} of \varepsilon , and conjectured that \varepsilon is determined up to isometry by \lambda .
The Ending Lamination Conjecture was recently proved by Minsky, partly in joint work with Brock and Canary, and making use of the earlier result of Masur and Minsky that the curve complex of a surface is hyperbolic in the sense of Gromov. At the meeting, Minsky gave the keynote series of three lectures on the background to and proof of the Ending Lamination Conjecture. Within the last year, the Tameness Conjecture has also been proved, by Agol (an independent proof has also been announced by Calegari and Gabai). In his talk at the meeting Agol sketched some of the ideas of the proof and outlined several applications to other problems in 3-dimensional topology. The Tameness and Ending Lamination Conjectures together give a complete parametrization of the set AH(M) of (non-compact) hyperbolic 3-manifolds homotopy equivalent to a given compact 3-manifold M with non-empty boundary. However, if AH(M) is given the natural {\em algebraic topology}, i.e., that coming from its containment in the \operatorname{PSL}_2(\mathbb{C}) character variety of \pi_1(M) , then the classifying data is not continuous. As a consequence, the topological structure of AH(M) , in other words the deformation theory of these hyperbolic structures, is not completely clear. This was the topic of Canary's talk. He described how, although (in the case that \partial M is incompressible) the components of the interior of AH(M) are open topological cells, their closures can intersect in unexpected and wild ways. Probably the main problem left in this whole area is to better understand the topology of this deformation space AH(M) . For example, what is the Hausdorff dimension of its boundary? In contrast, if M is a closed hyperbolizable 3-manifold, then AH(M) is simply a pair of points, by Mostow rigidity. However, even in this case the relation between the geometry of M (i.e., its hyperbolic metric) and its topology is not well understood.
Souto's talk addressed an interesting question in this context, namely the relation between the Heegaard genus of M and the lengths of geodesics in M : if S is a genus g Heegaard surface in M , then, although it is easy to see that there is no lower bound on the lengths of closed geodesics in M that depends only on g , Souto showed that there exists \varepsilon_g > 0 such that the set of primitive closed geodesics in M of length \le \varepsilon_g is unknotted in the sense that it can be isotoped to lie on parallel copies of S . The other talks were chosen to cover as many different areas of topology as possible, and hence it is difficult to find an overall theme to describe them. On the more geometric side, Nathalie Wahl described her work on diffeomorphism groups of 3-manifolds obtained by attaching certain types of handles to S^3 , and their connection to groups of self equivalences of certain graphs. Among the applications of this work are new proofs of homological stability of \textup{Aut}(F_n) and \textup{Out}(F_n) (where F_n is a free group), the vanishing of H_*(\textup{Aut}(F_n);\mathbb{Z}^n) in a range, and the construction of an infinite loop map from \mathbb{Z}\times{}B\Gamma_\infty^+ (the limit of mapping class groups of surfaces) to \mathbb{Z}\times{}B\textup{Aut}_\infty^+ (the limit of the \textup{Aut}(F_n) ). Ian Hambleton described his recent proof that for any pair of finite periodic groups G and G' , the product G\times{}G' acts freely and smoothly on S^n\times{}S^n for some n --- even in the cases (already well known) when G and G' themselves do not act freely on any spheres. Stefan Bauer described his recent work on invariants of 4-manifolds, including a refinement of the Seiberg-Witten invariants due to him and Furuta. Hyam Rubinstein talked about an interesting generalization of the class of small Seifert fibered 3-manifolds, in which the three solid tori whose union is the manifold are replaced by handlebodies of genus 2.
In a more algebraic direction, Jesper Grodal described some of the latest developments in the field of 2-compact groups --- spaces which are complete at the prime 2, and whose loop space has finite mod 2 cohomology (i.e., looks like the 2-completion of a finite complex). The goal is to classify all simply connected 2-compact groups (this has already been done at odd primes), and understand how close they are to being 2-completions of classifying spaces of compact connected Lie groups. In his talk, Grodal focused on the problem of defining root systems for 2-compact groups, and some of the problems which arise at the prime 2 and did not arise for odd primes. Among the other algebraic talks, Kathryn Hess described new algorithms for describing the Hopf algebra structure on the homology of the loop space of a space X , in terms of an appropriate model for chains on X . In the field of geometric group theory, Martin Bridson talked about subgroups of direct products of hyperbolic groups, and described his counterexample with Grunewald to a conjecture of Grothendieck, where they construct a homomorphism of finitely presented, residually finite groups which induces an equivalence of representation categories but is not an isomorphism.
Mean anomaly - Simple English Wikipedia, the free encyclopedia In the study of orbital dynamics, the mean anomaly of an orbiting body is the angle the body would have traveled about the center of the orbit's auxiliary circle. Unlike other measures of anomaly, the mean anomaly grows linearly with time. Because of this linear growth, the mean anomaly makes calculating the time of flight between two points on the orbit very easy: the mean anomalies for the two points are calculated and their difference is found. The ratio of this difference to the full angle 2π encompassing one orbit is then simply equal to the ratio of the time of flight to the orbital period of one whole orbit, i.e. (M_2 - M_1)/(2π) = t/T.
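The time-of-flight relation above is a one-liner in code. The helper below (an illustrative sketch; the function name is ours) wraps the anomaly difference into [0, 2π) so it also handles pairs of points that straddle the reference direction:

```python
import math

def time_of_flight(M1, M2, T):
    """Time to travel between two orbit points with mean anomalies M1, M2
    (radians), on an orbit of period T: t = (M2 - M1) / (2*pi) * T."""
    dM = (M2 - M1) % (2 * math.pi)  # wrap the anomaly difference into [0, 2*pi)
    return dM / (2 * math.pi) * T

# A quarter of a 90-minute orbit: the mean anomaly advances by pi/2.
print(time_of_flight(0.0, math.pi / 2, 90.0))  # 22.5 minutes
```

Because the mean anomaly grows linearly, no orbital geometry (eccentricity, true anomaly) enters this calculation at all.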
The equation 2x^2 + xy + 2y^2 = 50 represents an ellipse which lies obliquely in the plane; that is, an ellipse whose axes are not parallel to the coordinate axes. Find the slopes of the tangent lines to this ellipse at its y-intercepts. (Note: The slopes should be the same!)
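Implicit differentiation gives dy/dx = -(4x + y)/(x + 4y), and at the y-intercepts (0, ±5) this evaluates to -1/4 in both cases. The following worked-solution sketch verifies this with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = 2*x**2 + x*y + 2*y**2 - 50

# Implicit differentiation: dy/dx = -F_x / F_y
dydx = -sp.diff(F, x) / sp.diff(F, y)

# y-intercepts: set x = 0, so 2y^2 = 50 and y = +/-5.
intercepts = sp.solve(F.subs(x, 0), y)            # [-5, 5]
slopes = [dydx.subs({x: 0, y: yi}) for yi in intercepts]
print(slopes)  # [-1/4, -1/4]
```

Both slopes equal -1/4, confirming the note in the problem: the tangent lines at opposite y-intercepts of a central conic are parallel.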
Smarter EMRs: Raising the Bar for Hypertensive Patient Care The field of medicine and patient health was not immune to the benefits and innovations that came with the digital age. A key innovation in medicine has been the development and improvement of Electronic Medical Records (EMRs for short), which have massively improved the productivity and overall performance of healthcare providers and their organizations. It is time now, as we are slowly witnessing, for the Electronic Medical Record to undergo another transformation with the age of Artificial Intelligence. This time they are getting smarter! To set some groundwork, let's define what an Electronic Medical Record is and does. Not to be confused with an "Electronic Health Record", an EMR is simply the digital version of what you would find in a patient chart/file. A medical record contains a patient's diagnoses, allergies, treatment histories, hospitalization records, immunizations, and other medical history. These medical records are usually specific to individual health facilities (hospitals and clinics), and are rarely accessible outside of that facility. Tangent: The fragmentation of patient medical records leads to a "broken" and "incomplete" patient history, as a patient can have multiple medical records with different health facilities and no information is shared between them. A later post will show how Elsa is built from the ground up to tackle this challenge by shifting the dynamic and making patients the owners and custodians of their own data. Understanding Hypertension - High Blood Pressure Hypertension, defined as elevated blood pressure, is a serious condition where the pressure inside your arteries is high. Blood pressure is measured with two numbers: Systolic Pressure: This is the pressure in your arteries when your heart "beats" or "contracts" - this is the active part of your heart beat.
Diastolic Pressure: This is the pressure in your arteries when your heart rests between contractions and beats - this is the inactive/relaxed part of your heart beat. Readings are conventionally written as a pair, Systolic Pressure / Diastolic Pressure. The two numbers can be self-measured at home and are often correlated. When measured, they give healthcare providers a glimpse into your overall cardiovascular health. Hypertension can be diagnosed if your systolic pressure is above 140 mmHg and/or your diastolic pressure is greater than 90 mmHg when measured on two different days. It is important to know that many things can affect your blood pressure at the time of measurement, including caffeine consumption, climbing a set of stairs, or simply being nervous about the results of the readings. To learn more about the condition, please see Ref. Understanding Artificial Intelligence and (its often-confused-with child) Machine Learning Artificial Intelligence is changing how we interact with the world and is completely shifting how work and services are being delivered. From the simple nice-to-haves like face unlock/facial recognition on your iPhone Ref, to the life and death applications of surgery Ref, it is clear to us at Elsa Health that Artificial Intelligence is here to stay. Simply put, Artificial Intelligence is non-natural/man-made intelligence drawing on knowledge from multiple disciplines like mathematics, computer science, information theory, and more. Machine Learning is one way to build artificially intelligent systems where the AI developer shows the computer/"machine" some examples and guides it to "learn" from those examples in the hopes that after it has learnt, it can be used autonomously or to augment human performance. Generally speaking, the more examples (i.e., data), the better the "machine" can "learn", meaning it will be able to perform better in the real world as it will have seen more examples of a varying nature.
It is worth noting that in the grand scheme of things, and with respect to the potential of these technologies, both AI and ML are very young innovations that push the envelope daily. This also means there is some disagreement in the community and ecosystem over many things, including the very definitions of the terms. To learn more about Artificial Intelligence and Machine Learning, please see Ref. Leveraging the Data Troves of EMRs It should come as no surprise that Electronic Medical Record systems can hold massive amounts of data that, more often than not, is only used for historical record keeping for the hospital and for reporting purposes. The usual challenges of undigitized data, more common in LMICs (Low and Middle Income Countries), are almost completely solved by EMR systems (provided adequate technical literacy and adherence to best practices). Assuming the best case scenario, where proper care has been taken when recording patient visits and histories, the data stored in hospitals and other health facilities can be extremely beneficial to patient care and in decision support for caretakers and doctors. Machine learning systems are showing promise in both the early identification of Hypertension and in supporting the management of patients with Hypertension. This promise can be realized when applied to medical record systems, allowing them to be more supportive of care providers and patients. ML/AI in Identification and Prediction of Hypertension Early identification, or better yet, ahead-of-time onset prediction of Hypertension can be crucial in management and intervention planning for both patients with Hypertension and those at high risk of developing Hypertension over the coming months. Researchers have validated the use of Machine & Statistical learning from EMR databases using popular AI techniques like Gradient Boosting Ref, Artificial Neural Networks Ref, Logistic Regression Ref, and more.
All these techniques have one thing in common: they use existing data to develop a mathematical/statistical representation of how the patient information relates to their hypertensive status. The learning of this representation from data is called "training" the model. Once the model is trained, and assuming its performance is acceptable Ref, the algorithms can then be used to mine and learn from EMR records and either: Flag patients who might have hypertension, but do not know their status, or Flag patients at risk of developing hypertension over the next few months/years Ref. Two studies that research the use of such algorithms (Gradient Boosting using XGBoost Ref) show great performance: Highly precise risk prediction model for new-onset hypertension using artificial intelligence techniques, by Hiroshi Kanegae et al. Prediction of Incident Hypertension Within the Next Year: Prospective Study Using Statewide Electronic Health Records and Machine Learning, by Chengyin Ye et al. ML/AI in the Management of Hypertension The combination of powerful Machine Learning algorithms and Big Data Ref means management of Hypertension can be better planned and even more personalized. Learning algorithms can analyse thousands of other patients' management plans to predict the most effective treatment plans for a specific patient. Furthermore, these algorithms can monitor patient progress and perform continuous risk assessment while providing recommendations for any alterations to the treatment plans. Meta learners Ref have been successful in achieving exactly this, as shown in this study by Liu C et al.: Clinical Value of Predicting Individual Treatment Effects for Intensive Blood Pressure Therapy. How we, Elsa Health, "smarten-up" EMR systems The digital health ecosystem is full of "systems" and "platforms" that help health facilities manage their patients' data.
This ecosystem is also extremely fragmented, with many developers and maintainers of digital tools building only their own systems and not prioritizing interoperability of health data. This results in a messy, siloed, inconsistent, and incoherent medical data landscape where the patients ultimately pay the price. Because of this, Elsa is prioritizing the development of easily composable, applied health algorithms that any EMR service can use to add decision support and automatic information extraction to their databases.
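The "train on EMR features, then flag at-risk patients" workflow described above can be sketched in a few lines. Everything here is synthetic and illustrative: the feature names, the labeling rule, and the data are invented for the sketch and are not taken from the cited studies or from any real records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, illustrative EMR-style features (NOT real patient data):
# age in years, BMI, and a prior systolic reading.
rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 80, n)
bmi = rng.uniform(18, 40, n)
systolic = rng.uniform(100, 160, n)

X = np.column_stack([age, bmi, systolic])
# Toy labeling rule standing in for a real hypertension outcome label.
y = (0.03 * age + 0.05 * bmi + 0.02 * systolic > 5.5).astype(int)

# "Training" learns a statistical representation of how features relate
# to hypertensive status, as described in the text.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Per-patient risk scores that an EMR could surface to flag patients.
risk = model.predict_proba(X)[:, 1]
print(f"training accuracy: {model.score(X, y):.2f}")
```

A production system would of course add a held-out evaluation set, calibration, and clinical validation; this sketch only shows the shape of the pipeline.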
Mini-Workshop: Amalgams for Graphs and Geometries | EMS Press Sergey V. Shpectorov This meeting was well attended, with 17 participants and broad geographic representation from 3 continents. There were 16 talks during the workshop, including an invited talk by Anda Degeratu, a participant in the competing String Theory workshop. The method of group amalgams is a highly effective way of classifying mathematical objects possessing high degrees of symmetry. The idea of the method is to separate the study of the local structure of the acting group from the question of its global isomorphism type. The method of group amalgams has been successfully applied to problems in graph theory and diagram geometry. It has also featured prominently in group theory. For example, the fact that the Monster sporadic simple group is the universal completion of the amalgam associated with the tilde geometry formed the foundation of the solution of the famous Y -group conjecture that the Y_{555} presentation defines the Bimonster (the direct product of two copies of the Monster sporadic simple group, extended by a group of order 2). J. H. Conway coined for this theorem the name 'NICE', where \textbf{N} is for \textbf{N}orton, \textbf{I} for \textbf{I}vanov, \textbf{C} for \textbf{C}onway and \textbf{E} for anyone \textbf{E}lse. The proof of the NICE theorem based on the method of group amalgams is presented in the two-volume monograph of the organisers published by Cambridge University Press.

Recently, dramatic progress was made in the study of flag-transitive diagram geometries, where the importance of the notion of \textit{constrained} completions of amalgams was realised. Within this framework many geometries of sporadic groups were characterised as the constrained completions of suitable amalgams. This approach also gives a general criterion for the possible shapes of diagrams of flag-transitive geometries.
This enables the area of diagram geometries to leave its "botanical" stage of example collection and enter the stage of theory building.

Among other applications we would like to mention recent applications of the amalgam method to the cohomology of finite groups. These ideas were described in the notes of M. Aschbacher on the calculation of the Schur multipliers of some finite simple groups.

During the workshop we discussed in detail the proofs of a number of results obtained along the lines of the amalgam method, as well as directions of future research. We believe that the abstracts of the talks given at the workshop will give younger mathematicians access to these extremely important, yet technically complex, tools of mathematical research. Alexander A. Ivanov, Sergey V. Shpectorov, Mini-Workshop: Amalgams for Graphs and Geometries. Oberwolfach Rep. 1 (2004), no. 2, pp. 1311–1342
Opportunistic encryption - Citizendium

Opportunistic encryption, often abbreviated OE, is the attempt to arrange network communication systems so that any two nodes can encrypt their communication, without any connection-specific setup by the system administrators. Once two machines are set up for OE, they can set up secure connections automatically. Other encryption systems aim at providing encryption wherever necessary, but opportunistic encryption tries to encrypt wherever possible. The reasoning behind it is that a secure encrypted connection is almost always preferable to an insecure connection, so encryption should be the default, used whenever possible. Some encryption systems come into play only when the user asks for encryption, for example applying PGP to an email message (instead of sending it in the clear), logging in to a remote system with SSH (instead of unencrypted telnet), or requesting an encrypted web connection by using https (instead of unencrypted http). Some infrastructure is required — you must know the recipient's key for PGP, have the password to log in with SSH, and check the server's certificate for https. For other systems, administrators must configure each connection which is to be encrypted. For example, in building a VPN between two offices, the administrators on the two ends must co-operate to set up the connection. If you want your laptop to connect either to a wireless access point or to your office VPN, then you need to get some information from the system administrator and configure your machine to match; at the very least you need a password, and there may be other things to set up. In these cases, you are acting as the second administrator, configuring your end of the connection. Alternately, you might give the laptop to your IT staff and let them set it up, but in any case someone has to set up both ends of each connection.
Opportunistic encryption aims to avoid all that. Once a machine is set up for OE, it automatically checks whether the other end of any connection is capable of OE. If so, the two machines automatically set up an encrypted connection. This works without any user requests and without any need for administrators to configure connections. It even works when the two administrators have had no contact with each other. Of course, there is still some administrative work involved; the machines must be set up for OE and related policies set. An important policy decision is what to do if OE fails — communicate in the clear or refuse the connection. One benefit is a reduction in administrative workload. If the administrators must set up every connection, worst-case effort for a network of N machines scales as N². Of course, some networks are simpler; if all you need is N machines connecting to a single server or wireless access point, then you need only set up N+1 devices. However, for N machines with everyone able to talk to everyone, there are N(N−1)/2 connections; if you must configure each of them and N is large, this becomes highly problematic. There are several ways to avoid this disaster on large networks. A centralised authentication system such as Kerberos can manage authentication and keying for many machines, a public key infrastructure may help (though it also brings its own complications), and a few strategically placed encryption devices — whether hardware encryption at link level or IPsec gateways at network level — can provide an encryption service to many clients. These techniques can often reduce the workload to something manageable. However, none of them scales very well to a large heterogeneous network such as the Internet. OE, however, cuts the Gordian knot. For OE, the effort scales linearly; the work to set up N machines so that any of them can communicate securely with any other is just N.
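The scaling argument above can be checked with a throwaway calculation:

```python
def pairwise_connections(n):
    # pairwise VPN setup: one configured connection per pair of machines
    return n * (n - 1) // 2

def oe_setup_cost(n):
    # opportunistic encryption: each machine is configured exactly once
    return n

# the gap grows quadratically: 45 vs 10, 4950 vs 100, 499500 vs 1000
for n in (10, 100, 1000):
    print(n, pairwise_connections(n), oe_setup_cost(n))
```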
Once OE is set up, any two OE-capable machines can secure their connections. This could, at least in theory, scale to the whole Internet. This was a large part of the political motivation for FreeS/WAN, the project that invented OE; their goal was to encrypt a large portion of the Internet and block various government monitoring programs. If OE were sufficiently widespread, then secure connections could be the default, more-or-less everything would be encrypted, and monitoring the net would become nearly impossible. This is what the cypherpunks on the FreeS/WAN project wanted to achieve. The concept of opportunistic encryption can be applied at any level of the protocol stack. The most widespread application is for SMTP mail transfers, described in the next section. The most general effects are obtained by applying OE at the IP level; this is covered in the OE for IP section. There are also systems which apply the OE principle to TCP, covered in the similar projects section. Opportunistic encryption of mail The most widely deployed OE system encrypts server-to-server SMTP mail transfers. The original implementation was ssmail or Secure Sendmail [1], which built encryption into the mail server code. The current standard[2] instead relies on TLS. In both systems, some extra things are added in the SMTP setup dialog; these let either server query whether the other can handle the encryption. If both can, the link is encrypted. This does not provide all of the benefits of end-to-end mail encryption systems such as PGP; in particular it provides no protection against an enemy with privileged access to one of the mail servers involved, or against someone monitoring the connection between the user and the mail server. However, it does prevent attacks at routers between the mail servers. 
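The capability query in the SMTP setup dialog amounts to checking whether the peer advertises the TLS extension in its EHLO reply. A minimal sketch of that check (the hostname and reply text are made up for illustration; real mail servers do this inside their SMTP/TLS libraries):

```python
def supports_starttls(ehlo_reply: bytes) -> bool:
    """Opportunistic check: does the peer's EHLO reply advertise STARTTLS?"""
    return any(line.strip().upper().endswith(b"STARTTLS")
               for line in ehlo_reply.splitlines())

# illustrative EHLO replies (example.org is a placeholder hostname)
print(supports_starttls(b"250-mail.example.org\r\n250-STARTTLS\r\n250 HELP"))  # True
print(supports_starttls(b"250-mail.example.org\r\n250 HELP"))                  # False
```

If the check succeeds, the client issues STARTTLS and the link is encrypted; if not, delivery proceeds in the clear — exactly the opportunistic policy described above.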
It provides partial protection against wholesale mail monitoring, forcing a government that wants to do large-scale monitoring either to subvert mail servers or to get the server owners to co-operate. There are also TLS-based systems for encrypting the link between user and mail server. [3] [4] These are not opportunistic; the user must request encryption. However, they combine nicely with Secure SMTP to give an almost end-to-end solution; the combination blocks all eavesdropping "on the wire". Note however that — unlike a genuine end-to-end method such as PGP — it does not block eavesdropping by anyone with privileged access to a mail server. There has been some recent work on an opportunistic end-to-end encryption system for email called STEED for "Secure Transmission of Encrypted Electronic Data"[1]. Opportunistic encryption for IP The term "opportunistic encryption" comes from the FreeS/WAN project, who built OE into a Linux implementation of IPsec and wrote an RFC[5] documenting the design. Like any encryption scheme, an OE system must rely on some form of source authentication; it does no good at all to encrypt messages so that only the recipient can read them unless the recipient is who you think it is. Different OE designs rely on different authentication mechanisms. FreeS/WAN used DNS to manage authentication data.[6] In particular, they put the authentication keys in the DNS reverse maps so that they could be looked up when all the IPsec software knows is the IP address it needs to communicate with. The DNS reverse maps also had data which supported a single OE gateway doing IPsec on behalf of a range of client addresses; the partner could discover the gateway address with DNS lookup on any client address. The technique is designed to avoid any requirement for digital certificates or a complex public key infrastructure; if you have the authority for the reverse map of an address range, then you can set up OE for that range. 
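The reverse-map lookup described above starts from a standard DNS name construction: the peer's IP address, octet-reversed, under in-addr.arpa. A small sketch of just that step (the key record format itself, per RFC 4025, is not modeled here):

```python
def reverse_map_name(ip: str) -> str:
    # FreeS/WAN-style OE published authentication keys in the reverse map,
    # so a responder knowing only the peer's IPv4 address can look them up:
    # 192.0.2.7 -> 7.2.0.192.in-addr.arpa
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

print(reverse_map_name("192.0.2.7"))
```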
DNS already provides a hierarchical system for delegating control over address ranges; FreeS/WAN OE simply used that, rather than introduce complications. There are no certificates involved and no attempt is made to handle the difficult problem of binding names to cryptographic credentials. An authentication key (a plain hexadecimal string, not embedded in a certificate) and a gateway address are bound to a range of client addresses; that is all. Given those, the other end can set up an IPsec tunnel to the gateway and route all traffic for the client address range via that tunnel. Used alone, this is secure against passive eavesdroppers who only try to listen in; encrypting the connection stops them. Add DNS security to protect the authentication data and it is also secure against active attackers who try to trick systems into communicating with them instead of legitimate partners, to alter messages in transit, or to send bogus messages. DNS security protects both the keys in the reverse maps and the mapping from domain names to IP addresses in the forward maps, so (assuming both IPsec and DNS security are safe), OE with DNS security is secure against man-in-the-middle attacks and other attacks based on spoofing DNS information or packet IP addresses. FreeS/WAN-style OE without secure DNS is not secure against active attacks; you need authentication to block those attacks, and authentication data obtained from insecure DNS is not trustworthy. That said, the attacker needs a fair effort to subvert the system, even without secure DNS. First he must subvert two DNS servers, or trick the two target IPsec gateways into using the wrong DNS servers. This lets him provide bogus authentication data and set up a man-in-the-middle attack. Then he must conduct that attack, intercepting and replacing packets in the Internet Key Exchange (IKE) protocol. This gets him the encryption and authentication keys for the Encapsulated Security Payload (ESP) protocol. 
At that point, all is lost; the enemy can both read the encrypted traffic and forge messages that the recipient systems will accept as genuine. However, it may not be permanently lost. The keys for ESP are changed regularly and OE always uses the perfect forward secrecy option in IPsec, so every time those keys change, the attacker must conduct another successful man-in-the-middle attack on IKE to get the new keys. In short, even without DNS security, FreeS/WAN-style OE is secure against all passive attackers (anyone just eavesdropping), and an active attack against it needs significant skill and resources. The Planete project is building OE for IPv6. They claim "Unlike existing schemes (e.g. FreeS/WAN), our proposal does not rely on any global Third Trusted Party (such as DNSSEC or a PKI). Hence, we claim it is more secure, easier to deploy and more robust." They have a novel authentication technique based on "IPv6 Anycast, Authorization certificates and Crypto-Based Identifiers (CBID)". OE done at the IP layer of the protocol stack protects everything above that layer, does so without any assistance from higher-layer protocols, and is generally entirely transparent to the users. OE at the IP level offers one way to encrypt more-or-less the entire net, but it is not the only way. There are other projects with similar aims. Better-than-Nothing Security, or BTNS,[7] is IPsec done without authentication; one of the RFC authors was a former FreeS/WAN team member. This gives basically the same security level as FreeS/WAN-style OE done without DNS security; it is secure against passive attacks, but not against active attacks. However, since BTNS does not use authentication at all, active attacks against it are simpler than against FreeS/WAN-style OE. There are also systems which apply opportunistic techniques to TCP connections: Google's Obfuscated TCP and the later tcpcrypt.
These too are secure against passive attacks but vulnerable to active attacks, in particular to man-in-the-middle attacks. The EFF project HTTPS Everywhere aims at encrypting most web traffic by making https the default, always trying that first and only falling back to http if that fails. This is essentially opportunistic; it makes the browser use https encryption whenever the server supports it. HTTPS Everywhere resists passive attacks and moreover is secure against active attacks provided that the SSL protocol underlying https is. SSL is designed to be secure against such attacks, but it depends on certificates and therefore on certificate authorities, and there is room for considerable doubt about some certificate authorities. If an authority were subverted, then a man-in-the-middle attack using bogus certificates would be possible.

References

[1] Damian Bentley, Greg Rose, Tara Whalen (1999), ssmail: Opportunistic Encryption in sendmail
[2] P. Hoffman (February 2002), SMTP Service Extension for Secure SMTP over Transport Layer Security, RFC 3207
[3] C. Newman (June 1999), Using TLS with IMAP, POP3 and ACAP, RFC 2595
[4] K. Zeilenga, ed. (August 2006), The PLAIN Simple Authentication and Security Layer (SASL) Mechanism, RFC 4616
[5] M. Richardson & D. H. Redelmeier (December 2005), Opportunistic Encryption using the Internet Key Exchange (IKE), RFC 4322
[6] M. Richardson (February 2005), A Method for Storing IPsec Keying Material in DNS, RFC 4025
[7] N. Williams, M. Richardson, ed. (November 2008), Better-Than-Nothing Security: An Unauthenticated Mode of IPsec, RFC 5386
§ "Cheap" proof of the Euler characteristic formula If we punch a hole in a sphere, we remove an open disk, which carries Euler characteristic 1 (think of it as one face), so V - E + F goes down by 1. If we punch two holes, V - E + F goes down by two. But now we can glue the two boundary circles together. The gluing identifies their vertices and edges in pairs, so it leaves V - E + F unchanged, and it gives us a handle. So each handle (unit of genus) reduces the Euler characteristic by two!
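The bookkeeping behind the argument, starting from the sphere:

```latex
\begin{aligned}
\chi(S^2) &= V - E + F = 2 \\
\text{punch one hole (remove an open face):}\quad \Delta\chi &= -1 \\
\text{glue two boundary circles } (\Delta V = -1,\ \Delta E = -1):\quad \Delta\chi &= 0 \\
\text{one handle} = \text{2 holes} + \text{1 gluing:}\quad \Delta\chi &= -2 \\
\chi(\Sigma_g) &= 2 - 2g
\end{aligned}
```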
The Influence of Blade Lean on Straight and Annular Turbine Cascade Flow Field | J. Turbomach. | ASME Digital Collection

Gabriele D’Ippolito, I-20133 Milano, Italy; e-mail: gabriele.dippolito@polimi.it

D’Ippolito, G., Dossena, V., and Mora, A. (September 21, 2010). "The Influence of Blade Lean on Straight and Annular Turbine Cascade Flow Field." ASME. J. Turbomach. January 2011; 133(1): 011013. https://doi.org/10.1115/1.4000536

The work proposes a detailed description of the flow field throughout leaned turbine nozzles and reports a sensitivity analysis with respect to the lean angle. A phenomenological approach focuses attention on the distribution of pressure contours both inside and outside the passage. The study involves both straight and annular cascades mounting a typical intermediate-reaction-degree section designed for steam turbines. Blades are built by stacking the same 2D profile along different linear axes, characterized by different angles with respect to the normal or radial direction: α=0 deg for the prismatic blade, and α=10 deg, 15 deg, and 20 deg for the leaned ones. Experimental and numerical tests were performed at the nominal inlet flow angle in order to avoid any effect related to blade sweep. Experimental tests were carried out at the design outlet Mach number of 0.65; measurements were performed at the Laboratorio di Fluidodinamica delle Macchine of Politecnico di Milano. Only linear cascades with prismatic and 20 deg leaned blades were experimentally tested, providing data both downstream and inside the blade passage by means of pressure probe traversing, endwall pressure taps, and oil flow visualization. Experimental results were also used to validate the numerical setup, which provided a detailed computational picture of the flow field throughout the channel.
The influence of the shape of the pressure contours on secondary vorticity activity downstream of the passage is highlighted and discussed, focusing attention on secondary structures and loss distribution in this region. The resulting description of the flow field, based on the representation of pressure contours, supports the sensitivity analysis with respect to the blade lean angle, identifying the mechanism that leads the secondary vorticity to grow in regions where secondary losses and blade loading decrease.

Keywords: blades, flow visualisation, nozzles, sensitivity analysis, steam turbines, vortices
§ Articulation points I find DFS fascinating, and honestly insane for how much structural information about the graph it manages to retain. A vertex v is an articulation point of a graph G if removing v disconnects the induced subgraph. § Tactic 1 - Inductively We first solve the super easy case of the root; if we can then treat the other cases like the root case, we're good. Here we are given a graph G , and we consider a DFS tree T_G of the graph G . § Thinking about the root When is the root an articulation point? If the root has multiple children, then it is an articulation point: removing the root disconnects the children from each other. This is because in an undirected graph a DFS produces only back edges and no cross edges, and a back edge can only go from a node to its ancestor. If we remove the root, the back edges cannot save us, for there is no ancestor higher than the root to act as an alternate path. § Non-root vertex When is a non-root vertex v an articulation point? When there is some child w of v such that the subtree of w cannot escape the subtree of v . That is, all back edges from w 's subtree do not go above v . If we were now to remove v , the subtree of w would be disconnected from the rest of the graph. Alternate phrasing: all cycles through the subtree of w stay within the subtree of v . This means the back edges cannot go above v . If instead w could build a cycle going above v to some proper ancestor p , then v would not be an articulation point, because the cycle v \mapsto w \mapsto \dots \mapsto p \mapsto v gives us an alternative path to reach w that avoids v . One way to imagine this may be to treat v as the new root, with the stuff above v placed to the left of w . That way, if we could still get to w from that "other section", the connection would look like a cross edge between the "new root" ( v ) and the "other section". If we prevent the existence of these "fake cross edges", we're golden, and v is then an articulation point.
§ Tactic 2 - Structurally / Characterization Next we follow a "mathematical" development, where we build theorems to characterize k-connectedness and use them to guide our algorithm design. § Menger's theorem Let G be a connected undirected graph, and let u, v be two non-adjacent vertices. The minimum number of vertices whose removal from G disconnects u from v is equal to the maximum number of vertex-disjoint paths from u to v . § Whitney's theorem (corollary) G is k -connected iff at least k vertices must be removed to disconnect the graph. § Biconnected components Menger's theorem tells us that a graph is not biconnected iff we can find a vertex whose removal disconnects the graph. Such a vertex is an articulation vertex. A biconnected component is a maximal subset of edges such that the induced subgraph is biconnected. Vertices can belong to many components; indeed, articulation vertices are exactly those that belong to more than one component. § Lemma: Characterization of biconnected components Two edges belong to the same biconnected component iff there is a cycle containing both of them. [This lemma is silent about biconnected components consisting of a single edge.] We show that a cycle is always contained in a single biconnected component: if a cycle contained edges from more than one biconnected component, then we could "fuse" those components together into a single, larger biconnected component, contradicting maximality. § Lemma: Each edge belongs to exactly one biconnected component § Tactic 3 - 'Intuitively' We look at pictures and try to figure out how to do this. § DFS for articulation vertices - undirected The connectivity of a graph is the smallest number of vertices that must be deleted to disconnect the graph. If the graph has an articulation vertex, the connectivity is 1. More robust graphs that don't have a single point of failure/articulation vertex are said to be biconnected . To test for an articulation vertex by brute force, delete each vertex in turn, and check whether the graph disconnects into components.
This brute-force test runs in O(V(V+E)) time. Joke: an articulate vertex is one that speaks very well, and is thus important to the functioning of the graph. If it is killed, it will disconnect society, as there is no one to fulfil its ability to cross barriers with its eloquent speech. § Articulation vertices on the DFS tree - undirected If we think only of the DFS tree of an undirected graph for a moment, ignoring all other edges, then all internal (non-leaf) vertices become articulation vertices, because they disconnect the graph into two parts: the part below them (for concreteness, think of a child leaf), and the root component. Blowing up a leaf has no effect, since it does not connect two components; a leaf only connects itself to the main tree. The root of the tree is special: if it has only one child, then it acts like a leaf, since the root connects only itself to the single component below. On the other hand, if the root has multiple children, then it acts like an internal node, holding these different components together, making the root an articulation vertex. § Articulation vertices on the DFS graph - undirected The DFS of a general undirected graph also contains back edges . These act as security cables that link a vertex back to its ancestor. The security cable from x to y ensures that none of the nodes on the path [x..y] can be articulation vertices. So, to find articulation vertices, we need to see how far back the security cables go.

int anc[MAXV];        /* earliest reachable ancestor of v */
int dfs_outdeg[MAXV]; /* DFS tree outdegree of v */

void process_vertex_early(int v)
{
    anc[v] = v;
}

void process_edge(int x, int y)
{
    if (dfsedge[x][y].type == TREE)
        dfs_outdeg[x]++;

    if (dfsedge[x][y].type == BACK && (parent[x] != y)) {
        if (entry_time[y] < entry_time[anc[x]])
            anc[x] = y;  /* back edge reaches an earlier ancestor */
    }
}

In a DFS tree, a vertex v (other than the root) is an articulation vertex iff v is not a leaf and some subtree of v has no back edge incident to a proper ancestor of v. Udi Manber: Introduction to Algorithms: A Creative Approach.
Steven Skiena: The Algorithm Design Manual. Codeforces: problems to solve A2OJ articulation point problems INOI advanced graph algorithms
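The DFS criterion discussed above can be collected into one compact routine. Here is a standard low-link formulation in Python (a sketch in my own identifiers, not Manber's or Skiena's code):

```python
def articulation_points(adj):
    """adj: dict vertex -> iterable of neighbours (undirected graph)."""
    entry, low, parent = {}, {}, {}
    cut, clock = set(), [0]

    def dfs(v):
        entry[v] = low[v] = clock[0]
        clock[0] += 1
        children = 0
        for w in adj[v]:
            if w not in entry:                 # tree edge
                parent[w] = v
                children += 1
                dfs(w)
                low[v] = min(low[v], low[w])
                # non-root v is a cut vertex iff some child's subtree
                # has no back edge ("security cable") reaching above v
                if parent[v] is not None and low[w] >= entry[v]:
                    cut.add(v)
            elif w != parent[v]:               # back edge
                low[v] = min(low[v], entry[w])
        # the root is a cut vertex iff it has two or more DFS children
        if parent[v] is None and children >= 2:
            cut.add(v)

    for v in adj:
        if v not in entry:
            parent[v] = None
            dfs(v)
    return cut
```

For example, in the path 1-2-3 only vertex 2 is a cut vertex, while a triangle has none: every vertex lies on a cycle, so a back edge always provides the alternate path.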
Fourier-transform spectroscopy - Wikipedia

Fourier-transform spectroscopy is a measurement technique whereby spectra are collected based on measurements of the coherence of a radiative source, using time-domain or space-domain measurements of the radiation, electromagnetic or not. It can be applied to a variety of types of spectroscopy including optical spectroscopy, infrared spectroscopy (FTIR, FT-NIRS), nuclear magnetic resonance (NMR) and magnetic resonance spectroscopic imaging (MRSI),[1] mass spectrometry and electron spin resonance spectroscopy. There are several methods for measuring the temporal coherence of the light (see: field autocorrelation), including the continuous-wave and the pulsed Fourier-transform spectrometer or Fourier-transform spectrograph. The term "Fourier-transform spectroscopy" reflects the fact that in all these techniques a Fourier transform is required to turn the raw data into the actual spectrum; in many of the cases in optics involving interferometers, the technique is based on the Wiener–Khinchin theorem.

Conceptual introduction

Measuring an emission spectrum

[Figure: An example of a spectrum: the spectrum of light emitted by the blue flame of a butane torch. The horizontal axis is the wavelength of light, and the vertical axis represents how much light is emitted by the torch at that wavelength.]
One of the most basic tasks in spectroscopy is to characterize the spectrum of a light source: how much light is emitted at each different wavelength. The most straightforward way to measure a spectrum is to pass the light through a monochromator, an instrument that blocks all of the light except the light at a certain wavelength (the un-blocked wavelength is set by a knob on the monochromator). Then the intensity of this remaining (single-wavelength) light is measured. The measured intensity directly indicates how much light is emitted at that wavelength. By varying the monochromator's wavelength setting, the full spectrum can be measured. This simple scheme in fact describes how some spectrometers work. Fourier-transform spectroscopy is a less intuitive way to get the same information. Rather than allowing only one wavelength at a time to pass through to the detector, this technique lets through a beam containing many different wavelengths of light at once, and measures the total beam intensity. Next, the beam is modified to contain a different combination of wavelengths, giving a second data point. This process is repeated many times. Afterwards, a computer takes all this data and works backwards to infer how much light there is at each wavelength. To be more specific, between the light source and the detector, there is a certain configuration of mirrors that allows some wavelengths to pass through but blocks others (due to wave interference). The beam is modified for each new data point by moving one of the mirrors; this changes the set of wavelengths that can pass through. As mentioned, computer processing is required to turn the raw data (light intensity for each mirror position) into the desired result (light intensity for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform (hence the name, "Fourier-transform spectroscopy"). The raw data is sometimes called an "interferogram". 
Because of the existing computer equipment requirements, and the ability of light to analyze very small amounts of substance, it is often beneficial to automate many aspects of the sample preparation. The sample can be better preserved and the results are much easier to replicate. Both of these benefits are important, for instance, in testing situations that may later involve legal action, such as those involving drug specimens.[2]

Measuring an absorption spectrum

[Figure: An "interferogram" from a Fourier-transform spectrometer. This is the "raw data" which can be Fourier-transformed into an actual spectrum. The peak at the center is the ZPD position ("zero path difference"): here, all the light passes through the interferometer because its two arms have equal length.]

The method of Fourier-transform spectroscopy can also be used for absorption spectroscopy. The primary example is "FTIR Spectroscopy", a common technique in chemistry. In general, the goal of absorption spectroscopy is to measure how well a sample absorbs or transmits light at each different wavelength. Although absorption spectroscopy and emission spectroscopy are different in principle, they are closely related in practice; any technique for emission spectroscopy can also be used for absorption spectroscopy. First, the emission spectrum of a broadband lamp is measured (this is called the "background spectrum"). Second, the emission spectrum of the same lamp shining through the sample is measured (this is called the "sample spectrum"). The sample will absorb some of the light, causing the spectra to be different. The ratio of the "sample spectrum" to the "background spectrum" is directly related to the sample's absorption spectrum. Accordingly, the technique of "Fourier-transform spectroscopy" can be used both for measuring emission spectra (for example, the emission spectrum of a star), and absorption spectra (for example, the absorption spectrum of a liquid).
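The "works backwards" step described in the conceptual introduction is a discrete Fourier transform. A toy numerical sketch with a synthetic two-line spectrum (arbitrary units, not real instrument data):

```python
import numpy as np

N = 1024
nu = np.arange(N)                         # wavenumber axis (arbitrary units)
spectrum = np.zeros(N)
spectrum[50], spectrum[120] = 1.0, 0.5    # two synthetic emission lines

# simulate the interferogram: total detector intensity at each mirror position p,
# each wavenumber contributing I(nu) * (1 + cos(2*pi*nu*p))
p = np.arange(N) / N
interferogram = (spectrum[None, :]
                 * (1 + np.cos(2 * np.pi * nu[None, :] * p[:, None]))).sum(axis=1)

# "work backwards": the Fourier (cosine) transform recovers the line positions
coeffs = np.fft.rfft(interferogram - interferogram.mean()).real
lines = sorted(int(k) for k in np.argsort(np.abs(coeffs))[-2:])
print(lines)  # the two strongest coefficients sit at the line positions
```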
Continuous-wave Michelson or Fourier-transform spectrograph

The Fourier-transform spectrometer is just a Michelson interferometer, but one of the two fully reflecting mirrors is movable, allowing a variable delay (in the travel time of the light) to be included in one of the beams. The Michelson spectrograph is similar to the instrument used in the Michelson–Morley experiment. Light from the source is split into two beams by a half-silvered mirror; one is reflected off a fixed mirror and one off the movable mirror, which introduces a time delay. The beams interfere, allowing the temporal coherence of the light to be measured at each different time delay setting, effectively converting the time domain into a spatial coordinate. By making measurements of the signal at many discrete positions of the movable mirror, the spectrum can be reconstructed using a Fourier transform of the temporal coherence of the light. Michelson spectrographs are capable of very high spectral resolution observations of very bright sources. The Michelson or Fourier-transform spectrograph was popular for infra-red applications at a time when infra-red astronomy only had single-pixel detectors. Imaging Michelson spectrometers are a possibility, but in general have been supplanted by imaging Fabry–Pérot instruments, which are easier to construct.

Extracting the spectrum

The intensity as a function of the path length difference p (also denoted as retardation) in the interferometer and the wavenumber \tilde{\nu} = 1/\lambda is

I(p, \tilde{\nu}) = I(\tilde{\nu}) \left[ 1 + \cos(2\pi \tilde{\nu} p) \right],

where I(\tilde{\nu}) is the spectrum to be determined. Note that it is not necessary for I(\tilde{\nu}) to be modulated by the sample before the interferometer.
In fact, most FTIR spectrometers place the sample after the interferometer in the optical path. The total intensity at the detector is

$$I(p) = \int_0^\infty I(p, \tilde{\nu})\, d\tilde{\nu} = \int_0^\infty I(\tilde{\nu})\,[1 + \cos(2\pi\tilde{\nu} p)]\, d\tilde{\nu} \quad \text{for all desired values of } p.$$

This is just a Fourier cosine transform. The inverse gives us the desired spectrum in terms of the measured quantity $I(p)$:

$$I(\tilde{\nu}) = 4\int_0^\infty \left[I(p) - \tfrac{1}{2} I(p{=}0)\right] \cos(2\pi\tilde{\nu} p)\, dp.$$

Pulsed Fourier-transform spectrometer

A pulsed Fourier-transform spectrometer does not employ transmittance techniques. In the most general description of pulsed FT spectrometry, a sample is exposed to an energizing event which causes a periodic response. The frequency of the periodic response, as governed by the field conditions in the spectrometer, is indicative of the measured properties of the analyte.

Examples of pulsed Fourier-transform spectrometry

In magnetic spectroscopy (EPR, NMR), a microwave pulse (EPR) or a radio-frequency pulse (NMR) in a strong ambient magnetic field is used as the energizing event. This turns the magnetic particles at an angle to the ambient field, resulting in gyration. The gyrating spins then induce a periodic current in a detector coil. Each spin exhibits a characteristic frequency of gyration (relative to the field strength) which reveals information about the analyte. In Fourier-transform mass spectrometry, the energizing event is the injection of the charged sample into the strong electromagnetic field of a cyclotron. These particles travel in circles, inducing a current in a fixed coil at one point on their circle. Each traveling particle exhibits a characteristic cyclotron frequency-to-field ratio revealing the masses in the sample.
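This transform pair is easy to check numerically. The following sketch builds an interferogram from a known synthetic spectrum and then recovers the spectrum with the inverse cosine transform; the grid spacings, line positions, and line widths are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

# Synthetic spectrum I(nu): two Gaussian emission lines (wavenumbers in cm^-1).
nu = np.linspace(400.0, 1200.0, 800)
dnu = nu[1] - nu[0]
spectrum = np.exp(-((nu - 600.0) / 15.0) ** 2) + 0.5 * np.exp(-((nu - 900.0) / 20.0) ** 2)

# Retardation grid p in cm; the maximum path difference sets the resolution.
p = np.linspace(0.0, 0.5, 5000)
dp = p[1] - p[0]

# Forward model: I(p) = integral of I(nu) [1 + cos(2 pi nu p)] d(nu), as a discrete sum.
cosmat = np.cos(2.0 * np.pi * np.outer(p, nu))        # shape (len(p), len(nu))
interferogram = ((1.0 + cosmat) * spectrum).sum(axis=1) * dnu

# Inversion: I(nu) = 4 * integral of [I(p) - I(p=0)/2] cos(2 pi nu p) dp.
ac_part = interferogram - 0.5 * interferogram[0]
recovered = 4.0 * (ac_part[:, None] * cosmat).sum(axis=0) * dp

# The interferogram peaks at zero path difference (the ZPD position), and the
# recovered spectrum peaks at the stronger line near 600 cm^-1.
peak_position = nu[np.argmax(recovered)]
```

The recovery is approximate because the integrals are truncated and discretized, but with a sufficiently fine retardation grid the reconstructed line positions and relative intensities match the input spectrum closely.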
Free induction decay

Pulsed FT spectrometry has the advantage of requiring a single, time-dependent measurement which can easily deconvolute a set of similar but distinct signals. The resulting composite signal is called a free induction decay, because typically the signal will decay due to inhomogeneities in sample frequency, or simply unrecoverable loss of signal due to entropic loss of the property being measured.

Nanoscale spectroscopy with pulsed sources

Pulsed sources allow for the utilization of Fourier-transform spectroscopy principles in scanning near-field optical microscopy techniques. Particularly in nano-FTIR, where the scattering from a sharp probe tip is used to perform spectroscopy of samples with nanoscale spatial resolution, high-power illumination from pulsed infrared lasers makes up for the relatively small scattering efficiency (often < 1%) of the probe.[4]

Stationary forms of Fourier-transform spectrometers

In addition to the scanning forms of Fourier-transform spectrometers, there are a number of stationary or self-scanned forms.[5] While the analysis of the interferometric output is similar to that of the typical scanning interferometer, significant differences apply, as shown in the published analyses. Some stationary forms retain the Fellgett multiplex advantage, and their use in the spectral region where detector-noise limits apply is similar to the scanning forms of the FTS. In the photon-noise-limited region, the application of stationary interferometers is dictated by specific consideration of the spectral region and the application.

Fellgett advantage

Main article: Fellgett's advantage

One of the most important advantages of Fourier-transform spectroscopy was shown by P. B. Fellgett, an early advocate of the method.
The Fellgett advantage, also known as the multiplex principle, states that when obtaining a spectrum whose measurement noise is dominated by detector noise (which is independent of the power of radiation incident on the detector), a multiplex spectrometer such as a Fourier-transform spectrometer will produce a relative improvement in signal-to-noise ratio, compared to an equivalent scanning monochromator, of the order of the square root of m, where m is the number of sample points comprising the spectrum. However, if the detector is shot-noise dominated, the noise is proportional to the square root of the power; thus, for a broad boxcar spectrum (continuous broadband source), the noise is also proportional to the square root of m, precisely offsetting the Fellgett advantage. For line emission sources the situation is even worse and there is a distinct "multiplex disadvantage", as the shot noise from a strong emission component will overwhelm the fainter components of the spectrum. Shot noise is the main reason Fourier-transform spectrometry was never popular for ultraviolet (UV) and visible spectra.

References:
1. Antoine Abragam (1968). Principles of Nuclear Magnetic Resonance. Cambridge University Press: Cambridge, UK.
2. Semiautomated depositor for infrared microspectrometry, http://www.opticsinfobase.org/viewmedia.cfm?uri=as-57-9-1078&seq=0
3. Peter Atkins, Julio De Paula (2006). Physical Chemistry, 8th ed. Oxford University Press: Oxford, UK.
4. Hegenbarth, R.; Steinmann, A.; Mastel, S.; Amarie, S.; Huber, A. J.; Hillenbrand, R.; Sarkisov, S. Y.; Giessen, H. (2014). "High-power femtosecond mid-IR sources for s-SNOM applications". Journal of Optics 16 (9): 094003. Bibcode:2014JOpt...16i4003H. doi:10.1088/2040-8978/16/9/094003.
5. William H. Smith, U.S. Patent 4,976,542, "Digital Array Scanned Interferometer", issued Dec.
11, 1990.

External links:
- Description of how a Fourier transform spectrometer works
- The Michelson or Fourier transform spectrograph
- Internet Journal of Vibrational Spectroscopy – How FTIR works
- Fourier Transform Spectroscopy Topical Meeting and Tabletop Exhibit
For investors, the balance sheet is an important financial statement that should be interpreted when considering an investment in a company. The balance sheet is a reflection of the assets owned and the liabilities owed by a company at a certain point in time. The strength of a company's balance sheet can be evaluated by three broad categories of investment-quality measurements: working capital (short-term liquidity), asset performance, and capitalization structure. Capitalization structure is the amount of debt versus equity that a company has on its balance sheet.

The strength of a company's balance sheet can be evaluated by three investment-quality measurements. The cash conversion cycle shows how efficiently a company manages its accounts receivable and inventory. The fixed asset turnover ratio measures how much revenue is generated from the use of a company's fixed assets. The return on assets ratio shows how well a company is using its assets to generate profit or net income.

The cash conversion cycle is a key indicator of the adequacy of a company's working capital position. Working capital is the difference between a company's current assets (such as cash) and its current liabilities (such as payables owed to suppliers for raw materials). Current assets and liabilities are short-term in nature, meaning they're usually on the books for less than one year. The cash conversion cycle is an indicator of a company's ability to efficiently manage two of its most important assets: accounts receivable and inventory. Accounts receivable is the total money owed to a company by its customers for booked sales.

Components of the Cash Conversion Cycle (CCC)

Days sales outstanding is the average number of days it takes a company to collect payment from its customers after a sale is made.
The cash conversion cycle uses days sales outstanding to help determine whether the company is efficient at collecting from its clients. The cash conversion cycle calculation also measures how long it takes a company to pay its bills. Days payables outstanding represents the average number of days it takes a company to pay its suppliers and vendors. The third component of the CCC reflects how long inventory sits idle. Days inventory outstanding is the average number of days that inventory has been in stock before it is sold. Calculated in days, the CCC reflects the time required to collect on sales and the time it takes to turn over inventory. The cash conversion cycle calculation helps to determine how well a company is collecting and paying its short-term cash transactions. If a company is slow to collect on its receivables, for example, a cash shortfall could result and the company could have difficulty paying its bills and payables. The shorter the cycle, the better. Cash is king, and smart managers know that fast-moving working capital is more profitable than unproductive working capital that is tied up in assets.

Formula and Calculation of the Cash Conversion Cycle

\begin{aligned} &\text{CCC} = \text{DIO} + \text{DSO} - \text{DPO}\\ &\textbf{where:}\\ &\text{DIO} = \text{Days inventory outstanding} \\ &\text{DSO} = \text{Days sales outstanding} \\ &\text{DPO} = \text{Days payables outstanding} \\ \end{aligned}

Obtain a company's days inventory outstanding and add the figure to the days sales outstanding. Take the result and subtract the company's days payables outstanding to arrive at the cash conversion cycle. There is no single optimal metric for the CCC, which is also referred to as a company's operating cycle. As a rule, a company's CCC will be influenced heavily by the type of product or service it provides and by industry characteristics.
Investors looking for investment quality in this area of a company's balance sheet must track the CCC over an extended period of time (for example, 5 to 10 years) and compare its performance to that of competitors. Consistency and decreases in the operating cycle are positive signals. Conversely, erratic collection times and an increase in on-hand inventory are typically negative investment-quality indicators.

The fixed asset turnover ratio measures how much revenue is generated from the use of a company's fixed assets. Since assets can cost a significant amount of money, investors want to know how much revenue is being earned from those assets and whether they're being used efficiently. Fixed assets, such as property, plant, and equipment (PP&E), are the physical assets that a company owns and are typically the largest component of total assets. Although the term fixed assets typically refers to a company's PP&E, the assets are also referred to as non-current assets, meaning they're long-term assets. The amount of fixed assets a company owns is dependent, to a large degree, on its line of business. Some businesses are more capital intensive than others. Large capital equipment producers, such as farm equipment manufacturers, require a large amount of fixed-asset investment. Service companies and computer software producers need a relatively small amount of fixed assets. Mainstream manufacturers typically have 25% to 40% of their assets in PP&E. Accordingly, fixed asset turnover ratios will vary among different industries.

Formula and Calculation of the Fixed Asset Turnover Ratio

\begin{aligned} &\text{Fixed Asset Turnover} = \frac{ \text{Net Sales} }{ \text{Average Fixed Assets} }\\ \end{aligned}

Obtain net sales from the company's income statement.
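These steps can be sketched as follows. The figures are hypothetical, and average fixed assets is taken here as the mean of the beginning- and ending-period PP&E balances, a common convention:

```python
def fixed_asset_turnover(net_sales: float, ppe_begin: float, ppe_end: float) -> float:
    """Net sales divided by average fixed assets (net PP&E) for the period."""
    average_fixed_assets = (ppe_begin + ppe_end) / 2.0
    return net_sales / average_fixed_assets

# Hypothetical manufacturer: $1.2M in net sales against ~$0.5M of fixed assets.
ratio = fixed_asset_turnover(net_sales=1_200_000, ppe_begin=480_000, ppe_end=520_000)
print(round(ratio, 2))  # 2.4 -- each dollar of PP&E generated $2.40 of sales
```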
The fixed asset turnover ratio can tell investors how effectively a company's management is using its assets. The ratio is a measure of the productivity of a company's fixed assets with respect to generating revenue. The higher the number of times PP&E turns over, the more revenue or net sales a company is generating with those assets. It's important for investors to compare fixed asset turnover rates over several periods since companies will likely upgrade and add new equipment over time. Ideally, investors should look for improving turnover rates over multiple periods. Also, it's best to compare the turnover ratios with those of similar companies within the same industry.

Return on assets (ROA) is considered a profitability ratio, meaning it shows how much net income or profit is being earned from a company's total assets. However, ROA can also serve as a metric for determining the asset performance of a company. As noted earlier, fixed assets require a significant amount of capital to buy and maintain. As a result, the ROA helps investors determine how well the company is using that capital investment to generate earnings. If a company's management team has invested poorly with its asset purchases, it'll show up in the ROA metric. Also, if a company has not updated its assets, such as equipment upgrades, it'll result in a lower ROA when compared to similar companies that have upgraded their equipment or fixed assets. As a result, it's important to compare the ROA of companies in the same industry or with similar product offerings, such as automakers. Comparing the ROA of a capital-intensive company such as an auto manufacturer to a marketing firm that has few fixed assets would provide little insight as to which company would be a better investment.
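The comparison just described can be illustrated with hypothetical figures. Here ROA is computed as net income over average total assets and expressed as a percentage, the standard textbook definition:

```python
def return_on_assets(net_income: float, assets_begin: float, assets_end: float) -> float:
    """ROA as a percentage: net income divided by average total assets."""
    average_assets = (assets_begin + assets_end) / 2.0
    return net_income * 100.0 / average_assets

# Two hypothetical automakers of similar size; the second has let its
# equipment age, so the same asset base produces less income.
roa_a = return_on_assets(150_000, 1_400_000, 1_600_000)   # 10.0 (%)
roa_b = return_on_assets(90_000, 1_400_000, 1_600_000)    # 6.0 (%)
print(roa_a > roa_b)  # True -- company A is using its asset base more productively
```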
Formula and Calculation of the Return on Assets Ratio

\begin{aligned} &\text{ROA} = \frac{ \text{Net Income} }{ \text{Average Total Assets} }\\ \end{aligned}

Locate net income on the company's income statement. In many ROA formulas, total assets or the ending-period total assets figure is used in the denominator. However, if you want to use average total assets, add total assets from the beginning of the period to the ending-period value of total assets and divide the result by two to calculate the average total assets. Divide net income by the total assets or average total assets to obtain the ROA. Please note that the above formula will yield a decimal, such as 0.10. Multiply the result by 100 to move the decimal and convert it to a percentage: 0.10 × 100 = 10% ROA. The reason the ROA ratio is expressed as a percentage return is to allow a comparison, in percentage terms, of how much profit is generated from total assets. If a company has a 10% ROA, it generates 10 cents of profit or net income for every one dollar of assets. A high percentage return implies well-managed assets, and here again the ROA ratio is best employed as a comparative analysis of a company's own historical performance.

Numerous non-physical assets are considered intangible assets, which are broadly categorized into three different types:

Intellectual property (patents, copyrights, trademarks, brand names, etc.)
Deferred charges (capitalized expenses)
Purchased goodwill (the cost of an investment in excess of book value)

Unfortunately, there is little uniformity in balance sheet presentations for intangible assets or in the terminology used in the account captions. Often, intangibles are buried in other assets and only disclosed in a note in the financials. The dollars involved in intellectual property and deferred charges are typically not material and, in most cases, do not warrant much analytical scrutiny.
However, investors are encouraged to take a careful look at the amount of purchased goodwill on a company's balance sheet, an intangible asset that arises when an existing business is acquired. Some investment professionals are uncomfortable with a large amount of purchased goodwill. The return to the acquiring company will be realized only if, in the future, it is able to turn the acquisition into positive earnings. Conservative analysts will deduct the amount of purchased goodwill from shareholders' equity to arrive at a company's tangible net worth. In the absence of any precise analytical measurement to make a judgment on the impact of this deduction, investors use common sense. If the deduction of purchased goodwill has a material negative impact on a company's equity position, it should be a matter of concern. For example, a moderately leveraged balance sheet might be unappealing if its debt liabilities are seriously in excess of its tangible equity position. Companies acquire other companies, so purchased goodwill is a fact of life in financial accounting. However, investors need to look carefully at a relatively large amount of purchased goodwill on a balance sheet. The impact of this account on the investment quality of a balance sheet needs to be judged in terms of its comparative size to shareholders' equity and the company's success rate with acquisitions. This truly is a judgment call, but one that needs to be considered thoughtfully. Assets represent items of value that a company owns, has in its possession or is due. Of the various types of items a company owns, receivables, inventory, PP&E, and intangibles are typically the four largest accounts on the asset side of a balance sheet. Therefore, a strong balance sheet is built on the efficient management of these major asset types, and a strong portfolio is built on knowing how to read and analyze financial statements.
Geometric and Topological Combinatorics | EMS Press The 2007 Oberwolfach meeting ``Geometric and Topological Combinatorics'' was organized by Anders Bj\"orner (KTH and Mittag-Leffler Institute, Stockholm), Gil Kalai (Hebrew University, Jerusalem), and G\"unter M. Ziegler (TU Berlin). It consisted of six one-hour lectures by Isabella Novik, Herbert Edelsbrunner, Carsten Schultz, Igor Pak, Alexander Barvinok and Roy Meshulam, as well as twenty-seven half-hour talks, a problem session (led by Gil Kalai, also documented below), two software demonstrations (on \texttt{polymake} by Michael Joswig, and on \texttt{LattE} by Jesus De Loera), and many more informal sessions, group discussions, and a great variety of small-group and pairwise discussions. It was a lively week! \smallskip As will become obvious from the sequence of extended abstracts presented below, the conference treated a broad spectrum of topics from Topological Combinatorics (such as poset topology, graph coloring, etc.), Discrete Geometry (polytopes, triangulations, arrangements, Coxeter groups, and polyhedral surfaces), as well as Geometric Topology (triangulated manifolds, persistent homology of geometric data, etc.). The manifold connections between these themes, with refinements of well-established bridges as well as completely new links between seemingly distant themes, problems, methods, and theories, show that the area is very much alive. This is demonstrated even more by substantial progress on older problems, which at this conference included Isabella Novik's opening lecture about centrally symmetric polytopes (joint work with Nati Linial and with Alexander Barvinok), and, still on the first day, Ed Swartz's report about the f -vectors of triangulated manifolds. \smallskip Of course there is no way to present the richness and depth of the work and presentations of the conference's program in a one-page report or a short collection of abstracts.
All the following can offer is an overview of the official program of the conference. It does not cover all the additional smaller presentations, group discussions and blackboard meetings (for example, Tom Braden was coerced into an additional evening presentation on the topology of hypertoric varieties ``by popular demand''), nor the lively interactions that occurred during the week --- for example, a surprising connection was made at the problem session between permutation patterns that appeared in Jonas Sj\"ostrand's lecture and a conjecture posed by Alex Postnikov; in subsequent work, the problem was solved by Axel Hultman, Svante Linusson, John Shareshian, and Jonas Sj\"ostrand (paper in preparation). We are extremely grateful to the Oberwolfach institute, to its director and to all of its staff for providing a perfect setting for an inspiring, intensive week of ``Geometric and Topological Combinatorics.'' \bigskip \bigskip \begin{flushright} Anders Bj\"orner, Gil Kalai, G\"unter M. Ziegler\\ Stockholm / Jerusalem / Berlin, March 2007 \end{flushright} Günter M. Ziegler, Anders Björner, Gil Kalai, Geometric and Topological Combinatorics. Oberwolfach Rep. 4 (2007), no. 1, pp. 195–272
Kazanskyite, Ba□TiNbNa3Ti(Si2O7)2O2(OH)2(H2O)4, a Group-III Ti-disilicate mineral from the Khibiny alkaline massif, Kola Peninsula, Russia: description and crystal structure | Mineralogical Magazine | GeoScienceWorld

F. Cámara, E. Sokolova, F. C. Hawthorne; Kazanskyite, Ba□TiNbNa3Ti(Si2O7)2O2(OH)2(H2O)4, a Group-III Ti-disilicate mineral from the Khibiny alkaline massif, Kola Peninsula, Russia: description and crystal structure. Mineralogical Magazine 2012; 76 (3): 473–492. doi: https://doi.org/10.1180/minmag.2012.076.3.03 (Corresponding author: F. Cámara, e-mail: fernando.camaraartigas@unito.it)

Kazanskyite, Ba□TiNbNa3Ti(Si2O7)2O2(OH)2(H2O)4, is a Group-III TS-block mineral from the Kirovskii mine, Mount Kukisvumchorr, Khibiny alkaline massif, Kola Peninsula, Russia. The mineral occurs as flexible and commonly bent flakes 2–15 μm thick and up to 330 μm across. It is colourless to pale tan, with a white streak and a vitreous lustre. The mineral formed in a pegmatite as a result of hydrothermal activity. Associated minerals are natrolite, barytolamprophyllite, nechelyustovite, hydroxylapatite, belovite-(La), belovite-(Ce), gaidonnayite, nenadkevichite, epididymite, apophyllite-(KF) and sphalerite. Kazanskyite has perfect cleavage on {001}, splintery fracture and a Mohs hardness of 3. Its calculated density is 2.930 g cm−3. Kazanskyite is biaxial positive with α 1.695, β 1.703, γ 1.733 (λ 590 nm), 2Vmeas = 64.8(7)°, 2Vcalc = 55.4°, with no discernible dispersion. It is not pleochroic. Kazanskyite is triclinic, space group P1̄, a 5.4260(9), b 7.135(1), c 25.514(4) Å, α 90.172(4), β 90.916(4), γ 89.964(3)°, V 977.61(3) Å3. The strongest lines in the X-ray powder-diffraction pattern [d(Å)(I)(hkl)] are: 2.813(100)(124̄,12̄2̄), 2.149(82)(222̄,22̄0,207,220,22̄2), 3.938(70)(11̄3,112), 4.288(44)(111̄,11̄0,110,11̄1), 2.128(44)(223̄,22̄1̄,13̄4,221,22̄3), 3.127(39)(11̄6,115), 3.690(36)(11̄4), 2.895(33)(12̄3,121) and 2.955(32)(12̄0,120,12̄2).
Chemical analysis by electron microprobe gave Nb2O5 9.70, TiO2 19.41, SiO2 28.21, Al2O3 0.13, FeO 0.28, MnO 4.65, BaO 12.50, SrO 3.41, CaO 0.89, K2O 1.12, Na2O 9.15, H2O 9.87, F 1.29, O = F – 0.54, sum 100.07 wt.%; H2O was determined from structure refinement. The empirical formula is (Na2.55Mn0.31Ca0.11Fe2+0.03)Σ3(Ba0.70Sr0.28K0.21Ca0.03)Σ1.22(Ti2.09Nb0.63Mn0.26Al0.02)Σ3Si4.05O21.42H9.45F0.59, calculated on 22 (O + F) a.p.f.u., Z = 2. The structural formula of the form A2PM2HM4O(Si2O7)2X4OXMPXAP(H2O)n is (Ba0.56Sr0.22K0.15Ca0.03□0.04)Σ1(□0.74Ba0.14Sr0.06K0.06)Σ1(Ti0.98Al0.02)Σ1(Nb0.63Ti0.37)Σ1(Na2.55Mn0.31Ca0.11Fe2+0.03)Σ3(Ti0.74Mn0.26)Σ1(Si2O7)2O2(OH1.41F0.59)Σ2(H2O)(□0.74H2O0.26)Σ1(H2O)2.74. Simplified and ideal formulae are as follows: Ba(□,Ba)Ti(Nb,Ti)(Na,Mn)3(Ti,Mn)(Si2O7)2O2(OH,F)2(H2O)4 and Ba□TiNbNa3Ti(Si2O7)2O2(OH)2(H2O)4. The Raman spectrum of the mineral contains the following bands: 3462 cm−1 (broad) and 3545 and 3628 cm−1 (sharp). The crystal structure was solved by direct methods and refined to an R1 index of 8.09%. The crystal structure of kazanskyite is a combination of a TS (titanium silicate) block and an I (intermediate) block. The TS block consists of HOH sheets (H is heteropolyhedral and O is octahedral). The TS block exhibits linkage and stereochemistry typical for Group-III (Ti = 3 a.p.f.u.) Ti-disilicate minerals. The TS block has two different H sheets where (Si2O7) groups link to [5]-coordinated Ti and [6]-coordinated Nb polyhedra, respectively. There are two peripheral sites, AP(1,2), occupied mainly by Ba (with lesser Sr and K) at 96% and 26%, respectively. There are two I blocks: the I1 block is a layer of Ba atoms; the I2 block consists of H2O groups and AP(2) atoms. The TS and I blocks are topologically identical to those in the nechelyustovite structure. The mineral is named in honour of Professor Vadim Ivanovich Kazansky ( ), a prominent Russian ore geologist and an expert in Precambrian metallogeny.
The 2018 Palu Tsunami: Coeval Landslide and Coseismic Sources | Seismological Research Letters | GeoScienceWorld

Amy L. Williamson (Department of Earth Sciences, University of Oregon, Eugene, Oregon, U.S.A.; corresponding author: awillia5@uoregon.edu), Diego Melgar (Scripps Institution of Oceanography, University of California San Diego, La Jolla, California, U.S.A.), Xiaohua Xu, Christopher Milliner; The 2018 Palu Tsunami: Coeval Landslide and Coseismic Sources. Seismological Research Letters 2020; 91 (6): 3148–3160. doi: https://doi.org/10.1785/0220200009

On 28 September 2018, Indonesia was struck by an Mw 7.5 strike-slip earthquake. An unexpected tsunami followed, inundating nearby coastlines and leading to extensive damage. Given the traditionally non-tsunamigenic mechanism, it is important to ascertain whether the source of the tsunami was indeed coseismic deformation or something else, such as shaking-induced landsliding. Here we determine that the leading cause of the tsunami was a complex combination of both. We constrain the coseismic slip from the earthquake using static offsets from geodetic observations and validate the resultant "coseismic-only" tsunami against observations from tide gauge and survey data. This model alone, although fitting some localized run-up measurements, overall fails to reproduce both the timing and scale of the tsunami. We also model coastal collapses identified through rapidly acquired satellite imagery and video footage, and explore the possibility of submarine landsliding using tsunami raytracing. The tsunami model results from the landslide sources, in conjunction with the coseismic-generated tsunami, show a greatly improved fit to both tide gauge and field survey data. Our results highlight a case of a damaging tsunami whose source was a complex mix of coseismic deformation and landsliding.
Tsunamis of this nature are difficult to provide warning for and are underrepresented in regional tsunami hazard analysis.
The golden ratio is the ratio between two quantities such that the ratio of their sum to the larger quantity is the same as the ratio of the larger quantity to the smaller. Denoting the golden ratio by φ, this condition reads

φ = (φ + 1)/φ,

whose positive solution is

φ = (1 + √5)/2 = 1.618...,

where √5 is the number satisfying √5 × √5 = 5. A consequence of the defining equation is that subtracting 1 from φ and taking the reciprocal of φ give the same value:

φ − 1 = 1.6180339887... − 1 = 0.6180339887...
1/φ = 1/1.6180339887... = 0.6180339887...

Golden rectangle

A golden rectangle is a rectangle whose side lengths a and b satisfy a/b = φ. Removing a square of side b from it leaves a rectangle with sides b and a − b for which b/(a − b) = φ, i.e. another golden rectangle.

Golden ratio in nature
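The identities above are easy to verify numerically; a short check:

```python
import math

# Closed form from solving phi = (phi + 1) / phi, i.e. phi**2 = phi + 1.
phi = (1 + math.sqrt(5)) / 2

assert abs(phi - (phi + 1) / phi) < 1e-12   # defining equation
assert abs((phi - 1) - 1 / phi) < 1e-12     # phi - 1 = 1/phi = 0.6180339887...

# Golden rectangle: if a/b = phi, the rectangle left after removing a square,
# with sides b and a - b, is again golden: b/(a - b) = phi.
a, b = phi, 1.0
assert abs(b / (a - b) - phi) < 1e-12

print(round(phi, 10))  # 1.6180339887
```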
Okun's law - Wikipedia

Graph of US quarterly data (not annualized) from 1948 through 2016, estimating a form of the difference version of Okun's law: %Change GDP = 3.2 − 1.8 × (Change in Unemployment Rate), with R² = 0.463. Differences from other results are partly due to the use of quarterly data.

In economics, Okun's law is an empirically observed relationship between unemployment and losses in a country's production. It is named after Arthur Melvin Okun, who first proposed the relationship in 1962.[1] The "gap version" states that for every 1% increase in the unemployment rate, a country's GDP will be roughly an additional 2% lower than its potential GDP. The "difference version"[2] describes the relationship between quarterly changes in unemployment and quarterly changes in real GDP. The stability and usefulness of the law has been disputed.[3]

Imperfect relationship

Okun's law may more accurately be called "Okun's rule of thumb" because it is an approximation based on empirical observation rather than a result derived from theory. Okun's law is approximate because factors other than employment, such as productivity, affect output. In Okun's original statement of his law, a 2% increase in output corresponds to a 1% decline in the rate of cyclical unemployment; a 0.5% increase in labor force participation; a 0.5% increase in hours worked per employee; and a 1% increase in output per hours worked (labor productivity).[4] Okun's law states that a one-point increase in the cyclical unemployment rate is associated with two percentage points of negative growth in real GDP. The relationship varies depending on the country and time period under consideration.
One implication of Okun's law is that an increase in labor productivity or an increase in the size of the labor force can mean that real net output grows without net unemployment rates falling (the phenomenon of "jobless growth"). Okun's law is sometimes confused with the Lucas wedge.

Mathematical statements

The gap version of Okun's law may be written (Abel & Bernanke 2005) as:

(Ȳ − Y)/Ȳ = c(u − ū),

where:
Y is actual output,
Ȳ is potential GDP,
u is the actual unemployment rate,
ū is the natural rate of unemployment, and
c is the factor relating changes in unemployment to changes in output.

The gap version of Okun's law, as shown above, is difficult to use in practice because Ȳ and ū can only be estimated, not measured. A more commonly used form of Okun's law, known as the difference or growth-rate form, relates changes in output to changes in unemployment:

ΔY/Y = k − c Δu,

where:
Y and c are as defined above,
ΔY is the change in actual output from one year to the next,
Δu is the change in actual unemployment from one year to the next, and
k is the average annual growth rate of full-employment output.

For example, with k = 0.03 and c = 2, this becomes

ΔY/Y = 0.03 − 2 Δu.

The graph at the top of this article illustrates the growth-rate form of Okun's law, measured quarterly rather than annually.
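Both forms can be sketched numerically. The values k = 0.03 and c = 2 are the illustrative ones from the equation above; the unemployment figures are hypothetical:

```python
def okun_gap(u: float, u_bar: float, c: float = 2.0) -> float:
    """Gap version: fractional shortfall of output below potential,
    (Ybar - Y)/Ybar = c * (u - u_bar)."""
    return c * (u - u_bar)

def okun_growth(delta_u: float, k: float = 0.03, c: float = 2.0) -> float:
    """Difference (growth-rate) version: predicted output growth dY/Y = k - c*du."""
    return k - c * delta_u

# Unemployment 1 point above its natural rate => output ~2% below potential.
print(round(okun_gap(u=0.06, u_bar=0.05), 4))   # 0.02
# Unemployment rises by 1 point over the year => growth of 3% - 2% = 1%.
print(round(okun_growth(delta_u=0.01), 4))      # 0.01
# Stable unemployment => output grows at the full-employment rate k = 3%.
print(okun_growth(delta_u=0.0))                 # 0.03
```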
Derivation of the growth rate form

We start with the gap version of Okun's law:

(Ȳ − Y)/Ȳ = 1 − Y/Ȳ = c(u − ū),

which rearranges to

Y/Ȳ − 1 = c(ū − u).

Taking the change from one year to the next gives

Δ(Y/Ȳ) = (Y + ΔY)/(Ȳ + ΔȲ) − Y/Ȳ = c(Δū − Δu).

Putting the left-hand side over a common denominator,

(Ȳ ΔY − Y ΔȲ)/(Ȳ(Ȳ + ΔȲ)) = c(Δū − Δu).

Multiplying the left-hand side by (Ȳ + ΔȲ)/Y, which is approximately equal to 1, we obtain

(Ȳ ΔY − Y ΔȲ)/(Ȳ Y) = ΔY/Y − ΔȲ/Ȳ ≈ c(Δū − Δu),

so that

ΔY/Y ≈ ΔȲ/Ȳ + c(Δū − Δu).

We assume that Δū, the change in the natural rate of unemployment, is approximately equal to 0. We also assume that ΔȲ/Ȳ, the growth rate of full-employment output, is approximately equal to its average value, k. So we finally obtain

ΔY/Y ≈ k − c Δu.

Through comparisons between actual data and theoretical forecasting, Okun's law proves to be a useful tool in predicting trends between unemployment and real GDP. However, its numerical predictions are generally inaccurate when compared with real-world figures, owing to variance in Okun's coefficient.
Many, including the Reserve Bank of Australia, conclude that the relationship described by Okun's law holds to an acceptable degree.[7] Some findings have also concluded that Okun's law tends to be more accurate for short-run predictions than for long-run predictions; forecasters attribute this to unforeseen market conditions that may affect Okun's coefficient. As such, Okun's law is generally accepted by forecasters as a tool for short-run trend analysis between unemployment and real GDP, rather than for long-run analysis or precise numerical calculations. Using empirical data from the recessions of the 1970s, 1990s, and 2000s, the San Francisco Federal Reserve Bank determined that Okun's law was a useful theory. All recessions showed two common main trends: a counterclockwise loop for both real-time and revised data. The recoveries of the 1990s and 2000s had smaller and tighter loops.[8]

^ Okun, Arthur M. "Potential GNP: Its Measurement and Significance," American Statistical Association, Proceedings of the Business and Economics Statistics Section, 1962. Reprinted with slight changes in Arthur M. Okun, The Political Economy of Prosperity (Washington, D.C.: Brookings Institution, 1970).
^ Knotek, p. 75.
^ Okun, 1962.
^ Prachowny, Martin F. J. (1993). "Okun's Law: Theoretical Foundations and Revised Estimates". The Review of Economics and Statistics, 75 (2): 331–336. doi:10.2307/2109440. JSTOR 2109440.
^ Abel, Andrew; Bernanke, Ben (2005). Macroeconomics (5th ed.). Boston: Pearson/Addison Wesley. ISBN 0-321-16212-9.
^ Lancaster, David; Tulip, Peter (2014–2015). "Okun's Law and Potential Output" (PDF).
^ Federal Reserve Bank of San Francisco, "Interpreting Deviations from Okun's Law".
Ball, Laurence; Tovar Jalles, João; Loungani, Prakash. "Do Forecasters Believe in Okun's Law?
An Assessment of Unemployment and Output Forecasts" (PDF). imf.org. International Monetary Fund. Retrieved 2016-07-11.
Abel, Andrew B. & Bernanke, Ben S. (2005). Macroeconomics (5th ed.). Pearson Addison Wesley. ISBN 0-321-16212-9.
Baily, Martin Neil & Okun, Arthur M. (1965). The Battle Against Unemployment and Inflation: Problems of the Modern Economy. New York: W. W. Norton & Co. ISBN 0-393-95055-7 (1983; 3rd revised edition).
Okun, Arthur M. (1962). "Potential GNP, its measurement and significance". Cowles Foundation, Yale University.
Plosser, Charles I. & Schwert, G. William (1979). "Potential GNP: Its measurement and significance: A dissenting opinion". Carnegie-Rochester Conference Series on Public Policy.
Knotek, Edward S. "How Useful Is Okun's Law?" Economic Review, Federal Reserve Bank of Kansas City, Fourth Quarter 2007, pp. 73–103.
Prachowny, Martin F. J. (1993). "Okun's Law: Theoretical Foundations and Revised Estimates". The Review of Economics and Statistics, 75 (2), pp. 331–336.
Gordon, Robert J. (2004). Productivity, Growth, Inflation and Unemployment. Cambridge University Press.
McBride, Bill (11 October 2010). "Real GDP Growth and the Unemployment Rate". Calculated Risk. Retrieved 5 December 2010.
Goto, Eiji; Burgi, Constantin. "Sectoral Okun's Law and Cross-Country Cyclical Differences" (PDF). Retrieved 2020-09-09.
Ball, Laurence; Leigh, Daniel; Loungani, Prakash. "Okun's Law: Fit at 50?" (PDF). Retrieved 2020-09-09.
Crisis Modifiers - Dragalia Lost Wiki

Crisis Modifiers

See: Damage Formula

Crisis is a mechanic that changes the damage modifiers of some actions according to the amount of HP missing on the adventurer. All actions with this mechanic use the same formula but have different modifiers, called Crisis Modifiers. Adventurers that use this mechanic and favor low HP are colloquially referred to as Enmity adventurers; those that favor high HP are colloquially referred to as Stamina adventurers.

This is the general formula to calculate the bonus damage modifier at a given HP value for a skill with a Crisis Modifier bonus:

Crisis Bonus Multiplier = (HP Lost %)^2 × (Crisis Modifier − 1) + 1

A Crisis Modifier above 1 favors low HP: the skill becomes stronger as HP decreases (Enmity). A Crisis Modifier below 1 favors high HP: the skill becomes weaker as HP decreases (Stamina).

Note that multiple Crisis Modifiers on a single action do NOT stack - only the highest value will be applied.

Example A: If Bellina equips a wyrmprint with Flash of Desperation (x1.6 crisis mod) and uses her dragondrive skill Renegade Dragonfall (x1.5 crisis mod), the skill will use the x1.6 crisis mod.

Example B: If Natalie equips a wyrmprint with Flash of Desperation (x1.6 crisis mod) and uses her skill Defiant Dance (x2 crisis mod), the skill will use its default x2 crisis mod.

Skills with Crisis Modifiers[edit]

Arrow ShowerLv. 3: Deals 3 shots of 693% wind damage to enemies in a line. Poisoned foes take 1039.5% wind damage per hit instead. Damage will be decreased as the user's HP decreases, down to 3 hits of 346.5% (519.75% on poisoned foes) at 1 HP.
[5001 SP] Bursting FuryLv. 3: Deals 1 hit of 1107.6% light damage to enemies directly ahead, and inflicts stun for 6-7 seconds with 110% chance. As HP decreases, damage increases, up to 1 hit of 2492.1% light damage. [2800 SP] Coarse JudgmentLv. 3: Deals 1 hit of 387% and 1 hit of 903% shadow damage to the target and nearby enemies, and inflicts poison for 15 seconds - dealing 58.2% shadow damage every 2.9 seconds - with 120% base chance . The lower the user's HP, the more damage this skill deals, up to 1 hit of 657.9% and 1 hit of 1535.1% shadow damage and poison dealing 130.95% shadow damage every 2.9 seconds instead. [2603 SP] Defiant DanceLv. 3: Deals 3 hits of 531% shadow damage to enemies directly ahead, and increases the user's energy level by 1 stage. Damage will be increased as the user's HP decreases, up to 3 hits of 1062% shadow damage. [3247 SP] 2.25 (1.5 at Lv4) Mighty ProtectorLv. 3: Deals 5 hits of 400% light damage to enemies directly ahead, grants the user a defense amp (Team Amp MAX Lv. = 2), and fills the user's dragondrive gauge by 300 UTP. If this skill is used during dragondrive, a more powerful variant called Howling Protector will be used instead. Howling Protector deals 5 hits of 520% light damage, grants the user a defense amp (Team Amp MAX Lv. = 2), and fills the user's dragondrive gauge by 300 UTP. During dragondrive, damage will be decreased as the user's HP decreases, down to 5 hits of 52% at 1 HP. [2860 SP] No Holds BarredLv. 2: Consumes the entire dragondrive gauge and restores HP to the user based on the amount consumed with 30% Recovery Potency up to an additional 140% of max HP. If this skill is used during dragondrive, a variant called Steadfast Victory will be used instead. Steadfast Victory consumes 1500 UTP of the user's dragondrive gauge, deals 1 hit of 3250% light damage to enemies directly ahead, then fills the skill gauge for this skill by 50%. Damage will be decreased as the user's HP decreases, down to 1 hit of 325%. 
[10725 SP] Renegade DescentLv. 3: Deals 1 hit of 840% shadow damage to the target and nearby enemies. Damage will be increased as the user's HP decreases, up to 1 hit of 1260% shadow damage. If this skill is used during dragondrive, a variant called Renegade Dragonfall will be used instead. Using this skill will consume 25% of the user's dragondrive gauge, but fills 100% of this skill's skill gauge after use. [2719 SP] SolLv. 3: Deals 2 hits of 439% light damage to surrounding enemies and restores the user's HP by 5% of the damage inflicted. This recovery caps at 10% of the user's HP per hit. Paralyzed foes take 2 hits of 658.5% light damage instead. The lower the user's HP, the more damage this skill deals. This increase caps at 2 hits of 878% light damage. For paralyzed foes this cap is instead 2 hits of 1317% light damage. [4983 SP] Violet UproarLv. 3: Summons "Twilight Wreath" then deals 1 hit of 934% and 4 hits of 155% shadow damage to the target and nearby enemies. Damage is increased as the user's HP decreases up to 1 hit of 1868% and 4 hits of 388% shadow damage. Additional bonus neutral-element damage will be dealt equal to 1100% of the damage taken while "Twilight Wreath" is active. [4935 SP] Wyrmbound RuinerLv. 3: Deals 4 hits of 242% shadow damage to enemies directly ahead. If this skill is used during dragondrive, a variant called Wyrmbound Eternity will be used instead. Using this skill will consume 25% of the user's dragondrive gauge, but fills 50% of this skill's skill gauge after use. [2537 SP] Abilities with Crisis Modifiers[edit] Flash of Desperation I - Thunderswift Lord's Blade (Axe's Boon) Flash of Desperation II - Flash of Desperation III - Misc Crisis Modifier effects[edit] Bellina's standard attacks and force strike during Dragondrive have a crisis modifier of x2. Phares's standard attacks during Dragondrive have a crisis modifier of x2. Farren's standard attacks and force strike during Dragondrive have a crisis modifier of x0.1. 
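The crisis formula and the highest-modifier rule described above can be sketched in a few lines of Python (a fan illustration, not game code; function names are ours):

```python
def crisis_multiplier(hp_lost_frac, crisis_mod):
    """Crisis Bonus Multiplier = (HP Lost %)^2 * (Crisis Modifier - 1) + 1,
    with hp_lost_frac given as a fraction in [0, 1]."""
    return hp_lost_frac ** 2 * (crisis_mod - 1) + 1

def effective_crisis_mod(*mods):
    """Multiple crisis modifiers do not stack; only the highest applies."""
    return max(mods)

# Enmity: Defiant Dance (x2) at 1 HP (~100% HP lost) doubles its damage
full_enmity = crisis_multiplier(1.0, 2)
# Stamina: a below-1 modifier at full HP (0% lost) leaves damage unchanged
full_stamina = crisis_multiplier(0.0, 0.5)
# Example A: Flash of Desperation (x1.6) beats Renegade Dragonfall (x1.5)
chosen = effective_crisis_mod(1.6, 1.5)
```

Because HP lost is squared, the bonus ramps up slowly at first and accelerates near 1 HP, which is why Enmity builds only pay off at very low HP.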
Multiplier Ideal Sheaves in Algebraic and Complex Geometry | EMS Press The conference was organized by Joseph Kohn (Princeton), Georg Schumacher (Marburg), and Yum-Tong Siu (Harvard), and was attended by 44 participants. Its aim was to bring together a group from both complex analysis and algebraic geometry, reflecting recent developments; the title of the conference stands for phenomena and methods closely related to both of these areas. The original approach involving the theory of partial differential equations and subelliptic estimates was addressed in several contributions, including estimates for the \overline\partial -Neumann problem, subelliptic PDEs and sub-Riemannian geometry, and subelliptic estimates from an algebraic-geometric point of view. Further main areas were applications to the abundance conjecture, pseudoeffective bundles, and the use of the twisted Nakano identity to investigate the cohomology of multiplier ideals. Critical points of sections of holomorphic vector bundles were discussed from a probabilistic viewpoint. Concerning the hyperbolicity of complex manifolds, the multiplicity with respect to subsets of the current associated with an entire analytic curve was studied, as were holomorphic curves in semi-abelian varieties. Another main topic was recent results on invariants arising from multiplier ideal sheaves, critical exponents of analytic functions related to jumping coefficients, applications to local analytic geometry, and results concerning the Fujita conjecture. Finally, recent progress on transcendental Morse inequalities was presented. Joseph J. Kohn, Georg Schumacher, Yum-Tong Siu, Multiplier Ideal Sheaves in Algebraic and Complex Geometry. Oberwolfach Rep. 1 (2004), no. 2, pp. 1108–1166
Spherical basis vectors in 3-by-3 matrix form - MATLAB azelaxes - MathWorks Nordic

The spherical basis vectors \hat{e}_R, \hat{e}_{az}, \hat{e}_{el} at azimuth az and elevation el are given in Cartesian components by

\hat{e}_R = \cos(el)\cos(az)\,\hat{i} + \cos(el)\sin(az)\,\hat{j} + \sin(el)\,\hat{k}
\hat{e}_{az} = -\sin(az)\,\hat{i} + \cos(az)\,\hat{j}
\hat{e}_{el} = -\sin(el)\cos(az)\,\hat{i} - \sin(el)\sin(az)\,\hat{j} + \cos(el)\,\hat{k}

Equivalently, in terms of the rotation matrices R_z and R_y,

\hat{e}_R = R_z(az)\,R_y(-el)\begin{bmatrix}1\\0\\0\end{bmatrix}, \quad
\hat{e}_{az} = R_z(az)\,R_y(-el)\begin{bmatrix}0\\1\\0\end{bmatrix}, \quad
\hat{e}_{el} = R_z(az)\,R_y(-el)\begin{bmatrix}0\\0\\1\end{bmatrix}
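As a cross-check of the formulas above, here is a small NumPy sketch (our own illustration, not MathWorks code) that builds the 3-by-3 matrix with \hat{e}_R, \hat{e}_{az}, \hat{e}_{el} as its columns and verifies that the basis is orthonormal:

```python
import numpy as np

def azelaxes(az_deg, el_deg):
    """Return the spherical basis vectors at azimuth az and elevation el
    (degrees) as the columns of a 3-by-3 matrix, per the formulas above."""
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    e_R  = [np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)]
    e_az = [-np.sin(az), np.cos(az), 0.0]
    e_el = [-np.sin(el) * np.cos(az), -np.sin(el) * np.sin(az), np.cos(el)]
    return np.column_stack([e_R, e_az, e_el])

A = azelaxes(45, 20)
# the basis is orthonormal, so A'A = I (A is a rotation matrix)
assert np.allclose(A.T @ A, np.eye(3))
```

At az = el = 0 the spherical basis coincides with the Cartesian basis (\hat{i}, \hat{j}, \hat{k}), so the function returns the identity matrix there.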
Find the sum that yields an interest of 170 after two years at the rate of 12.5% p.a. - Maths - Comparing Quantities - 13126623 | Meritnation.com

Find the sum that yields an interest of Rs 170 after two years at the rate of 12.5% p.a., interest being compounded annually.

CI = Rs 170, Time = 2 years, Rate = 12.5%
We know CI = A − P, so
170 = P(1 + 12.5/100)^2 − P
170 = P[(1 + 1/8)^2 − 1]
170 = P[(9/8)^2 − 1]
170 = P[81/64 − 1]
170 = P × 17/64
P = 170 × 64/17 = 640
Thus the required sum is Rs 640.

Another answer treats Rs 170 as the final amount rather than the interest:
A = Rs 170 (future value), r = 12.5% annual rate, m = 1 compounding period per year, n = 2 periods, i = r/m = 12.5% per period
P = A/(1 + i)^n = 170/(1.125)^2 = 170/1.265625 ≈ Rs 134.32
Note that this answers a different question - the sum that grows to an amount of Rs 170 after 2 years, not the sum that earns an interest of Rs 170 (which is Rs 640, as above).
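The accepted answer is easy to verify numerically; a quick check in Python (variable names are ours):

```python
P = 640                        # principal (Rs)
A = P * (1 + 12.5 / 100) ** 2  # amount after 2 years, compounded annually
CI = A - P                     # compound interest
print(A, CI)                   # 810.0 170.0
```

Since 1.125 = 9/8 exactly, the amount comes out to 640 × 81/64 = Rs 810, and the interest is Rs 170 as required.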
Effects of the Re-Entrant Bowl Geometry on a DI Turbocharged Diesel Engine Performance and Emissions—A CFD Approach | J. Eng. Gas Turbines Power | ASME Digital Collection

S. Pasupathy Venkateswaran, Chennai, Tamilnadu 600 025, India; e-mail: venkateswaran.ps@gmail.com
G. Nagarajan; e-mail: nagarajan1963@annauniv.edu

Venkateswaran, S. P., and Nagarajan, G. (August 30, 2010). "Effects of the Re-Entrant Bowl Geometry on a DI Turbocharged Diesel Engine Performance and Emissions—A CFD Approach." ASME. J. Eng. Gas Turbines Power. December 2010; 132(12): 122803. https://doi.org/10.1115/1.4001294

The purpose of this study is to investigate the influence of re-entrant bowl geometry on both engine performance and combustion efficiency in a direct injection (DI), turbocharged diesel engine for heavy-duty applications. The piston bowl design is one of the most important factors affecting the air–fuel mixing and the subsequent combustion and pollutant formation processes in a DI diesel engine. The bowl geometry and dimensions, such as the pip region, bowl lip area, and toroidal radius, are all known to affect the in-cylinder mixing and combustion processes. Based on the idea of enhancing diffusion combustion at the later stage of the combustion period, three different bowl geometries, namely bowl 1 (baseline), bowl 2, and bowl 3, were selected and investigated. All other relevant parameters, namely compression ratio, maximum bowl diameter, squish clearance, and injection rate, were kept constant. The commercial CFD code STAR-CD was used to model the in-cylinder flows and combustion process, and experimental results for the baseline bowl were used to validate the numerical model. The simulation results show that bowl 3 enhances turbulence and hence yields the best air–fuel mixing among the three bowls.
As a result, the indicated specific fuel consumption and soot emission are reduced, although the NOx emission increases owing to better mixing and a faster combustion process. Globally, since the reduction in soot (−46% relative to baseline) is larger than the increase in NOx (+15% relative to baseline), it can be concluded that bowl 3 offers the best trade-off between performance and emissions.

Keywords: combustion, computational fluid dynamics, diesel engines, diffusion, fuel systems, shapes (structures), turbulence, STAR-CD, CFD, re-entrant bowl
Heat Kernels, Stochastic Processes and Functional Inequalities | EMS Press The workshop \emph{Heat kernels, stochastic processes and functional inequalities} was organized by Thierry Coulhon (Cergy), Bruno Franchi (Bologna), Takashi Kumagai (Kyoto) and Karl-Theodor Sturm (Bonn). It was held from November 27th to December 3rd. The meeting was attended by 56 participants from Australia, Austria, Canada, Finland, France, Germany, Italy, Japan, Poland, Switzerland, the United Kingdom, and the USA. This workshop was sponsored by the European Union, which allowed the invitation of 18 young people, who contributed positively to the atmosphere of the meeting. The conference brought together mathematicians from several fields, essentially analysis, probability and geometry. One of the main unifying topics was certainly the study of heat kernels in various contexts: fractals, manifolds, domains of Euclidean space, percolation clusters, infinite-dimensional spaces, metric measure spaces. Some related aspects of geometric analysis were also considered, such as L^p -cohomology and mass transportation. There was a stimulating exchange between probabilistic and analytic points of view, together with a geometric emphasis in most of the problems. We had 5 one-hour survey lectures and 21 thirty-five-minute talks. A lot of time was devoted to discussions and exchange of ideas. Among the highlights were relations between mass transportation, generalized Ricci bounds and contraction properties, connections between heat kernel estimates and percolation clusters, non-linear aspects of diffusions, a functional-analytic approach to parabolic regularity, and geometric and functional-analytic aspects of infinite-dimensional analysis. This diversity of topics and mix of participants stimulated many extensive and fruitful discussions. It also helped initiate new collaborations, in particular for the younger researchers.
Thierry Coulhon, Bruno Franchi, Takashi Kumagai, Karl-Theodor Sturm, Heat Kernels, Stochastic Processes and Functional Inequalities. Oberwolfach Rep. 2 (2005), no. 4, pp. 3061–3120
Multiply matrix elements along rows, columns, or entire input - Simulink - MathWorks Nordic

Multiply matrix elements along rows, columns, or entire input

The Matrix Product block multiplies the elements of an M-by-N input matrix u along its rows, its columns, or over all its elements.

When the Multiply over parameter is set to Rows, the block multiplies across the elements of each row and outputs the resulting M-by-1 matrix. The block treats a length-N unoriented vector input as a 1-by-N matrix. For a 3-by-3 input,

y_i = \prod_{j=1}^{3} u_{ij}, \quad i = 1, 2, 3,

so the output is the column vector (y_1, y_2, y_3)^T.

When the Multiply over parameter is set to Columns, the block multiplies down the elements of each column and outputs the resulting 1-by-N matrix. The block treats a length-M unoriented vector input as an M-by-1 matrix. For a 3-by-3 input,

y_j = \prod_{i=1}^{3} u_{ij}, \quad j = 1, 2, 3,

so the output is the row vector (y_1, y_2, y_3).

When the Multiply over parameter is set to Entire input, the block multiplies all the elements of the input together and outputs the resulting scalar:

y = \prod_{i=1}^{3} \prod_{j=1}^{3} u_{ij}.

The following diagram shows the data types used within the Matrix Product block for fixed-point signals.
The output of the multiplier is in the product output data type when at least one of the inputs to the multiplier is real. When both inputs to the multiplier are complex, the result of the multiplication is in the accumulator data type. For details on the complex multiplication performed, see Multiplication Data Types. You can set the accumulator, product output, intermediate product, and output data types in the block dialog as discussed in Parameters below.

Multiply over - Indicate whether to multiply together the elements of each row, each column, or the entire input.

Intermediate product - Specify the intermediate product data type. As shown in Fixed-Point Data Types, the output of the multiplier is cast to the intermediate product data type before the next element of the input is multiplied into it. You can set it to a rule that inherits a data type, for example, Inherit: Same as product output.

Output minimum - Specify the minimum value that the block should output. The default value is [] (unspecified). Simulink® uses this value to perform parameter range checking and automatic scaling of fixed-point data types.

See Also: Array-Vector Multiply | Matrix Sum (Simulink)
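The three Multiply over modes correspond directly to product reductions along an axis; a quick NumPy sketch (NumPy, not Simulink, so the row result comes back as a 1-D array rather than an M-by-1 matrix):

```python
import numpy as np

u = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

rows   = np.prod(u, axis=1)  # multiply across each row   -> [  6 120 504]
cols   = np.prod(u, axis=0)  # multiply down each column  -> [ 28  80 162]
entire = np.prod(u)          # multiply all nine elements -> 362880
```

Here 1·2·3 = 6 for the first row, 1·4·7 = 28 for the first column, and the entire-input product is 9! = 362880.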
group(deprecated)/cosrep - Maple Help

cosrep - express a group element as the product of an element in a subgroup and a right coset representative for that subgroup

Calling Sequence: cosrep(elem, sub)

elem - permutation or a word in the group generators
sub - permutation group or a subgrel

Important: The group package has been deprecated. Use the superseding command GroupTheory[Factor] instead.

If sub is a subgrel, then elem should be a word in the group generators. A two-element list is returned: the first element is the subgroup element expressed as a word in the subgroup generators; the second is the right coset representative. The coset representative will be an element of the set returned by cosets(sub).

If sub is a permgroup, then elem should be a permutation in disjoint cycle notation. A two-element list is returned: the first element is a permutation contained in sub; the second is a right coset representative permutation for sub in the symmetric group of the same degree. The coset representative will be an element of the set returned by cosets(Sn, sub), where Sn is the symmetric group of the same degree as sub.

The command with(group, cosrep) allows the use of the abbreviated form of this command.
with(group):
g := grelgroup({a, b, c}, {[a, b, c, a, 1/b], [b, c, a, b, 1/c], [c, a, b, c, 1/a]}):
cosrep([c], subgrel({y = [a, b, c]}, g));
    [[y, y, y, y, y], [a]]
pg := permgroup(7, {[[1, 2, 3]], [[3, 4, 5, 6, 7]]}):
cosrep([[3, 4, 5, 6]], pg);
    [[[3, 4, 5, 7, 6]], [[6, 7]]]

See Also: group[cosets]
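For readers without Maple, the same factorization can be sketched in pure Python for small permutation groups (a brute-force illustration of the idea, not Maple's algorithm; all names are ours). Permutations are stored as tuples of images of 0..n−1, and the coset representative is chosen as the lexicographically smallest member of the coset, rather than from the set cosets would return:

```python
def mul(p, q):
    """Compose permutations: apply p first, then q."""
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def closure(gens, n):
    """All elements of the subgroup generated by gens inside S_n."""
    elems = {tuple(range(n))}
    frontier = set(elems)
    while frontier:
        new = {mul(g, s) for g in frontier for s in gens} - elems
        elems |= new
        frontier = new
    return elems

def cosrep(elem, sub_gens, n):
    """Factor elem = h * r with h in H = <sub_gens> and r the canonical
    representative (smallest element) of the right coset H*elem."""
    H = closure(sub_gens, n)
    r = min(mul(h, elem) for h in H)
    h = mul(elem, inv(r))   # h = elem * r^(-1), guaranteed to lie in H
    return h, r

# H = <(0 1)> inside S_3; factor the 3-cycle sending 0->1, 1->2, 2->0
h, r = cosrep((1, 2, 0), [(1, 0, 2)], 3)
assert mul(h, r) == (1, 2, 0)
```

This enumerates the whole subgroup, so it is only practical for tiny groups; Maple's implementation (and its successor GroupTheory[Factor]) uses far more efficient coset machinery.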
Nikolaev Institute of Inorganic Chemistry SB RAS, Novosibirsk, Russia. A novel geometric approach is proposed for the development of the wave-particle notions. This approach is based on a comparison of two geometries with different sizes of an infinitesimal point. It is assumed that the smaller the object mass, the larger the size of the infinitesimal point in comparison with the point of the geometry of the macro world. Within this approach, the smaller the object mass, the larger the uncertainty of its position from the viewpoint of macro objects (macro geometry). This approach provides a natural explanation of Heisenberg's indeterminacy principle. Formally, this approach appears as an unusual operation with an infinitesimal value (point). However, it should be noted that unusual operations (though with infinitely large values) are already known in physics: the unattainability of the absolute zero of temperature and the unattainability of the maximal velocity of movement. Interconnection of the two geometries with different sizes of infinitesimal values is possible with the help of the direct and inverse Weierstrass transformation. At present, diffraction effects are described using wave notions about light and the Fourier transform. The diffraction of light is usually registered at a distance of not less than 1 - 3 metres between the screens, in one of which there is a slit or several slits. This distance is about 10^6 times the wavelength of the radiation. In the present work, an approach is proposed that allows one to describe the light fluxes at short distances between the screens with the help of Fourier and Weierstrass transforms. Wave-Particle Duality, Geometry Postulates, Integral Transformations, Interference Stabnikov, P. (2019) A New Geometric Approach to Explain the Features of the Micro World. Natural Science, 11, 246-251. doi: 10.4236/ns.2019.117024.
According to Heisenberg's indeterminacy principle, the geometry of the micro world does not differ from our usual macro geometry, but an uncertainty is introduced in the simultaneous determination of such corpuscular characteristics as coordinate (x) and momentum (p), or time and energy, etc. Any such pair of characteristics is connected by the relation ΔxΔp ≥ h, where h is the Planck constant. This relation shows that the smaller the error of the determination of one of the values (x or p), the larger the error for the other. This approach is generally accepted [1]. Two approaches were proposed almost a hundred years ago to describe the movement and properties of objects with such features: matrix mechanics by Heisenberg, and wave mechanics by Schrödinger. Both of these approaches lead to the same results [2]. In Heisenberg's matrix mechanics, the aim is to exclude all non-observable quantities (velocities and trajectories of particles, for example electrons), keeping only observables (discrete transitions in atoms); within this approach, however, it is impossible to establish the physical sense of the calculation itself. By contrast, solving Schrödinger's wave equation one obtains the wave function, whose square is the probability for a particle to occur at a definite point of space and time. In other words, the wave function allows visualizing atomic processes as wave phenomena. The wave function and its square may be plotted, which makes it accessible to human perception. This is what drove the extensive development of Schrödinger's wave mechanics. However, another interpretation is possible, in which the space of microparticles differs from usual geometry by an increased value of the infinitesimal (point). The description of the movement of micro objects is then usual, but indistinctness arises as a result of the transfer of information about the movement into the geometry with a smaller infinitesimal value (to the macro level).
With this approach, the difference between the objects in the two geometries is reduced to a different sharpness of figures or images. This is how image blurring arises during the transformation from micro geometry into macro geometry. This allows us to explain Heisenberg's indeterminacy principle in a natural way. The most important feature is that this geometric approach is more fundamental (it is based on comprehensible geometric statements) than the wave-particle duality. However, this approach assumes a paradoxical change of the idea of the infinitesimal. In other words, it is proposed to assign the attribute of the infinitesimal geometrically to a finite quantity (a blurred point). This approach allows us to describe the interference of light at close distances from the screen. 2. Unattainability of the Infinite and Finite Only infinitely large or infinitely remote quantities possessed unattainability in classical physics, because it is impossible to reach infinitely large values. In early representations, body temperature could have any value from −∞ to +∞. However, Lord Kelvin proposed to transfer the minimal temperature to a finite value (−273.15˚C) and accept this value to be equal to zero. In this case, many thermodynamic expressions are written in a simpler form. In fact, this approach is the transfer of an infinitely remote value (−∞ for temperature) to the zero of Kelvin's scale. This approach also automatically transfers the unattainability of the infinitely far point to the zero of Kelvin's scale, and this value becomes unattainable. An additional effect of this transfer is a decrease in the heat capacity of bodies almost to zero as temperature approaches the zero of Kelvin's scale; otherwise the unattainability of Kelvin's zero would be impossible. According to this approach, a body may be cooled to a temperature approaching the zero of Kelvin's scale, but this finite value cannot be achieved. This is the manifestation of the unattainability attribute.
Another finite unattainable value, for any body with non-zero mass, is the velocity of light. Ancient philosophers (with rare exceptions) thought that the velocity of light was infinite. That the velocity of light is a finite value was first established by O. Roemer in 1676 on the basis of observations of a satellite of Jupiter: the Earth, rotating around the Sun, may approach Jupiter or move away from it, and a consequence is a difference in the times of the eclipses of Jupiter’s satellite. This effect allowed the velocity of light to be determined for the first time. Later the velocity of light was measured many times with increasing precision, with the help of special devices allowing more accurate measurements of distance and time. In 1905, Einstein developed the Special Theory of Relativity (SR) to reconcile the laws of classical mechanics and electrodynamics. According to this theory, the velocity of light measured in any inertial reference system is the same, independent of the motion of the system and of the source. According to SR, the velocity of light is the maximal velocity, endowed with the unattainability attribute. The consequences of the unattainability of the maximal velocity are the relativistic law of velocity composition, time dilation, and the contraction of the linear size of a moving body with respect to the chosen inertial system; otherwise the unattainability of the finite velocity of light would be impossible. According to SR, the velocity of any body may come very close to the velocity of light in vacuum, but this limit cannot be exceeded. 3. Unattainability of the Infinitely Small These two examples illustrate how simply nature can be described with the help of such an unusual device as moving infinitely large values to finite ones. From the viewpoint of philosophy, not only infinitely large values but also infinitesimals possess unattainability. For instance, the mathematical zero is the reciprocal of an infinitely large value.
The attribute of the unattainability of the infinitely small was suggested in the aporias of the ancient Greek philosopher Zeno (for example, the one about Achilles and the tortoise). So the idea of rendering unattainability to a small value is quite reasonable. It was demonstrated in [3,4] that this approach allows a theoretical substantiation of quantum effects during the transformation of data between two geometries differing from each other in the values of their infinitesimals. The mathematical interconnection between the displays of two geometries with different infinitesimals may be built with the help of the direct and inverse integral Weierstrass transformations [5]:

F(t) = \frac{1}{\sqrt{4\pi}} \int_{-\infty}^{+\infty} e^{-\frac{(x-t)^2}{4}} f(x)\,\mathrm{d}x

f(x) = \frac{1}{i\sqrt{4\pi}} \int_{s-i\infty}^{s+i\infty} e^{\frac{(t-x)^2}{4}} F(t)\,\mathrm{d}t

At present, numerical modeling with the Weierstrass transformation is readily carried out on a computer. Solutions of some problems of light diffraction with the help of the integral Weierstrass transformation were described in [3,4]. 4. Description of Light Diffraction by Integral Transforms Diffraction is explained in the literature [6-9]. Let us use Figure 40 from [6] and Figure 6.2 from [7]. There, the diffraction patterns are registered at a distance of 2 - 3 m from the first screen with a slit. The diffraction result is easily explained with the help of the wave representation of light. However, nothing is said about diffraction at distances shorter than 1 m. We will try to calculate diffraction effects (light fluxes) for the distances within this range. If the distance between the screens is short, the general picture of the light flux should depict the shapes and dimensions of the slits (the corpuscular approach).
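The direct Weierstrass transform defined above is simply a convolution with a fixed Gaussian kernel, so it is easy to approximate on a uniform grid. A minimal NumPy sketch (the grid, the slit test function, and the sample points are illustrative choices, not taken from the paper):

```python
import numpy as np

def weierstrass_transform(f_vals, x, t):
    """Direct Weierstrass transform F(t) = (4*pi)^(-1/2) *
    integral of exp(-(x - t)^2 / 4) * f(x) dx, approximated by a
    Riemann sum on the uniform grid x."""
    dx = x[1] - x[0]
    kernel = np.exp(-(x[None, :] - t[:, None]) ** 2 / 4.0) / np.sqrt(4.0 * np.pi)
    return np.sum(kernel * f_vals[None, :], axis=1) * dx

# Example: a rectangular "slit" of half-width 1 is smoothed into a
# bell-shaped profile; a constant input is reproduced unchanged,
# because the kernel integrates to 1.
x = np.linspace(-20, 20, 2001)
f = np.where(np.abs(x) < 1.0, 1.0, 0.0)
F = weierstrass_transform(f, x, np.array([0.0, 5.0]))
```

At t = 0 the transform of the slit equals erf(1/2) ≈ 0.52, and far from the slit it decays towards zero, which is exactly the blurring of a sharp figure described in the text.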
For a longer distance between the screens (1 m and more), diffraction effects should be manifested (the wave approach). To simulate the corpuscular approach, we will use the Weierstrass transformation with the exponent in the kernel of the integral transformation taken as \frac{-(x-t)^{2} L}{4\sigma}, where L is the distance between the screens (m) and σ is the relative size of an infinitesimal point. In the calculations it was taken that \frac{x}{\sigma} = 4\ \text{m}^{-1}. The application of the Weierstrass transformation to the model with two slits is shown in Figure 1 (plot a). This is the corpuscular contribution to the light flux. It follows from Figure 1 that the total intensity is smoothed out and scattered as the distance from the first screen increases (m means metres). To describe diffraction, we use the Fourier transform (Figure 2). The joint picture of the light flux on the registering screen was determined as a linear sum of the corpuscular and wave scattering, taking into account the total energy of the incoming light. It was found that the corpuscular contribution decreases with an increase in the distance between the screens, while the wave contribution increases (Figures 3-5). It follows from Figures 3-5 that, using the integral Fourier and Weierstrass transformations, it is possible to describe the light flux at shorter distances between the screens than those described in textbooks. This approach is based on the idea that the motion of micro objects may be described by another geometry, differing from the classical one by an increased size of the infinitely small point. Figure 1. Corpuscular contribution at screen distances: a—0 m; b—0.01 m; c—0.05 m; d—0.1 m; e—0.2 m; f—0.5 m; g—0.8 m. Figure 2. A symmetrical original with two slits and the normalized squared cosine of its Fourier transform; this is the wave contribution to the light flux. Figure 3. a—0 m; b—0.01 m; c—0.05 m; d—0.1 m; e—0.2 m; f—0.5 m; g—0.8 m; h—1 m. Figure 5. a—0 m; b—0.01 m; c—0.05 m; d—0.1 m; e—0.2 m; f—0.5 m.
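The scheme described above can be sketched numerically. The code below is a toy reconstruction, not the authors' actual computation: the corpuscular part is a Gaussian (Weierstrass-type) blur of the two-slit aperture whose width grows with the screen distance L, the wave part is the normalized squared Fourier transform of the aperture, and the mixing weight between the two as a function of L is an assumption of this sketch (the paper does not state it explicitly):

```python
import numpy as np

def gaussian_blur(f, x, width):
    """Weierstrass-type smoothing of f with kernel exp(-(x-t)^2/(4*width^2)),
    normalized row-by-row so the total flux is preserved."""
    k = np.exp(-(x[None, :] - x[:, None]) ** 2 / (4.0 * width ** 2))
    k /= k.sum(axis=1, keepdims=True)
    return k @ f

def two_slit_intensity(x, L, slit_w=0.5, sep=3.0, sigma=0.25):
    """Toy corpuscular + wave model of the flux at screen distance L (m).

    The blur-width law sigma*sqrt(L) and the wave fraction min(L, 1)
    are assumptions of this sketch, not values from the paper."""
    mix = min(L, 1.0)                       # wave share grows with distance
    aperture = ((np.abs(x - sep / 2) < slit_w / 2) |
                (np.abs(x + sep / 2) < slit_w / 2)).astype(float)
    # Corpuscular part: the aperture image, blurred more at larger L.
    corp = gaussian_blur(aperture, x, width=sigma * np.sqrt(L) + 1e-9)
    # Wave part: normalized squared Fourier transform of the aperture.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2
    wave = spectrum / spectrum.max()
    return (1.0 - mix) * corp / corp.max() + mix * wave

x = np.linspace(-10, 10, 1001)
profile_near = two_slit_intensity(x, L=0.0)   # reproduces the slits
profile_far = two_slit_intensity(x, L=1.0)    # pure interference fringes
```

At L = 0 the profile reproduces the slit shapes (corpuscular limit), while at L = 1 m only the fringe pattern survives, mirroring the trend reported for Figures 3-5.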
A new approach is proposed in the present work to explain the specific features of the microworld. It is based on the idea of different metrics for the infinitesimal in two geometries, and it allows a simple explanation of the Heisenberg uncertainty principle. An interconnection between two geometries with different infinitely small values may be established through the integral Weierstrass transform. It is demonstrated that the description of light interference requires both the corpuscular (Weierstrass transform) and the wave (Fourier transform) descriptions of light fluxes. This approach allows us to describe light fluxes at any distance from a screen with slits. It agrees with the approach proposed by D. Bohm [10], who separated the wave function into corpuscular and wave components; paradoxically, this also allowed him to describe light fluxes at any distance from the screen with slits. [1] Feynman, R., Leighton, R. and Sands, M. (1963) The Feynman Lectures on Physics. Vol. 3. [2] De Broglie, L. (1965) La Physique Nouvelle et les Quanta. Atomizdat, Moscow, 231 p. (In Russian) [3] Stabnikov, P.A. (2018) Frames in Which Matter Develops. Palmarium Academic Publishing, Saarbrücken. (In Russian) [4] Stabnikov, P.A. and Babailov, S.P. (2019) Types of Interactions and Material Islands of Stability: From the Micro World to the Universal Scale. IIC SB RAN, Novosibirsk, 6,5 p. (In Russian) [5] Brychkov, Yu.A. and Prudnikov, A.P. (1977) Integral Transforms of Generalized Functions (Integral’nye preobrazovaniya obobshchennykh funktsiy). Nauka, Moscow, 288 p. (In Russian) [6] Pohl, R.W. (1963) Optik und Atomphysik. Springer-Verlag, Berlin, 525 p. [7] Ditchburn, R.W. (1965) Light. Glasgow, London, 631 p. (In Russian) [8] Landsberg, G.S. (1976) Optics. Moscow, 759 p. (In Russian) [9] Novotny, L. and Hecht, B. (2006) Principles of Nano-Optics. Cambridge University Press, Cambridge, 482 p. [10] Bohm, D. (1952) A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables. Parts I and II.
Physical Review, 85, 166-193.
§ Topology is really about computation --- part 2 Here, we're going to describe what I've picked up about sheaves in the past couple of weeks. I'm trying to understand the relationship between sheaves, topoi, geometry, and logic. I currently see how topoi allow us to model logic, and how sheaves allow us to model geometry, but I see nothing about the relationship! I'm hoping that writing this down will let me gain some perspective on it. § What is a sheaf? Let's consider two sets P \subseteq A . Now, given a function f: A \rightarrow X , we can restrict it to f_P: P \rightarrow X . So, we get to invert the direction : (P \subseteq A) \implies \left( (f: A \rightarrow X) \mapsto (f_P: P \rightarrow X) \right) We should now try to discover some sort of structure to this "reversal" business. Perhaps we will discover a contravariant functor! (Spoiler: we will.)
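The reversal can be made concrete in a few lines of Python. `restrict` below is a hypothetical helper written for this post, not from any library: an inclusion P ⊆ A induces a map going the *other* way, from functions on A to functions on P, and restricting in two stages agrees with restricting in one stage, which is exactly the composition law of a contravariant functor (a presheaf):

```python
def restrict(P):
    """res_P : (f : A -> X)  |->  (f|_P : P -> X), with the restriction
    represented concretely as the dict {p: f(p) for p in P}."""
    def res(f):
        return {p: f(p) for p in P}
    return res

# Inclusions {1} <= {1, 2, 3} induce restriction maps the other way round:
square = lambda n: n * n
on_A = restrict({1, 2, 3})(square)   # {1: 1, 2: 4, 3: 9}
on_P = restrict({1})(square)         # {1: 1}

# Restricting the big answer further agrees with restricting directly:
assert {p: on_A[p] for p in {1}} == on_P
```

Note how the subset inclusion points one way while the induced map on functions points the opposite way — the arrow reversal that the contravariant functor will formalize.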
1Department of Geology, Yadanabon University, Amarapura Township, Mandalay, Mandalay Region, Myanmar. 2Department of Geological Engineering, Faculty of Engineering, Gadjah Mada University, Yogyakarta, Indonesia. 3Department of Earth Resources Engineering, Faculty of Engineering, Kyushu University, Motooka, Nishi-Ku, Fukuoka, Japan. This study examines the behavior of trace and rare-earth elements (REE) in different hydrothermal alteration facies (silicic, advanced argillic and argillic) of the Cijulang high-sulfidation epithermal gold deposit, West Java, Indonesia. The results indicate that remarkable differences in the behavior of trace elements and REE are observed among the studied alteration facies. All REE in the silicic facies are strongly depleted. In the advanced argillic facies, heavy rare-earth elements (HREE) are strongly depleted whereas light rare-earth elements (LREE) are quite enriched. REE concentrations in the argillic facies show little or no variation with respect to their fresh rock counterparts. The strong depletion of REE in the silicic facies is likely favored by the highly acidic nature of the hydrothermal fluids, the abundance of complexing ions such as Cl⁻ and F⁻ in the hydrothermal solutions, and the absence of secondary minerals that can fix the REE in their crystal structures. The apparent immobility of LREE in the advanced argillic facies is possibly due to the presence of alunite. The immobility of REE in the argillic facies suggests a higher pH of the fluids, lower water/rock ratios and the presence of phyllosilicate minerals. Tun, M., Warmada, I., Idrus, A., Harijoko, A., Yonezu, K. and Watanabe, K. (2019) Geochemical Behavior of Trace- and Rare-Earth Elements in the Hydrothermal Alteration Facies of the Cijulang Area, West Java, Indonesia. Open Journal of Geology, 9, 278-294. doi: 10.4236/ojg.2019.95019.
Feasibility Study of MEMs Technique for Characterizing Magnetic Susceptibility of Subcellular Organelles | J. Med. Devices | ASME Digital Collection Emily Paukert, John Korkko, Bruce Hammer. Paukert, E., Mantell, S., Korkko, J., Hammer, B., and Williams, P. (June 15, 2011). "Feasibility Study of MEMs Technique for Characterizing Magnetic Susceptibility of Subcellular Organelles." ASME. J. Med. Devices. June 2011; 5(2): 027538. https://doi.org/10.1115/1.3591391 biological techniques, biomagnetism, biomedical measurement, bioMEMS, cellular biophysics, magnetic susceptibility Magnetic susceptibility, Microelectromechanical systems, Biomagnetism, Biomedical measurement, Biomedical microelectromechanical systems, Biophysics All materials experience a force when placed in a region of magnetic field and field gradient. The magnitude of this force depends on the magnetic susceptibility of the material, which varies over a wide range depending on the type of material. Our goal is to develop a technique for evaluating the magnetic susceptibility of cells and subcellular organelles so that scientists can develop new methods to modify or modulate internal cellular forces. Research studies have shown that forces in the piconewton range can affect cellular behavior. Internal forces of this magnitude can occur in cells exposed to high intensity magnetic fields if the difference in magnetic susceptibility of subcellular organelles is as low as 10%. Because the magnetic susceptibility χ is expected to be on the order of 9×10⁻⁶, the proposed measurement technique must be extremely sensitive. In this paper, a pilot study is described in which the feasibility of a magnetophoresis technique is explored.
Tests implementing magnetophoresis for polystyrene test particles (|χ| = 8.21×10⁻⁶) of 100 μm diameter explored the sensitivity and accuracy effects of varying fluid flow speeds (0.63 mm/s, 1.09 mm/s, and 1.44 mm/s), particle radius to channel depth ratios (r/a) of 0.043 and 0.199, and a magnetic field and gradient product (B·dB/dz) of 38.91 T²/m. The percent uncertainties of the experimental magnetic susceptibilities for the three flow speed and r/a ratio combinations studied are 12.3%, 18.3%, and 22.4% (in order of flow speed). The trial runs indicate that a combination of a larger r/a ratio and a slower flow speed is ideal to optimize consistency in flow velocities and calculated magnetic susceptibilities while minimizing uncertainty. Requirements for MEMs device design are also presented.
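The magnetophoresis measurement rests on a force balance that the abstract does not spell out. The standard textbook version equates the magnetic body force on a small sphere with Stokes drag, giving a terminal velocity proportional to the susceptibility contrast with the carrier fluid. A rough Python sketch under that assumption (the contrast value and the fluid viscosity below are illustrative, not taken from the paper):

```python
import math

# Standard magnetophoresis force balance (textbook form; the paper's own
# analysis is not reproduced here): the magnetic body force on a sphere,
# (delta_chi * V / mu0) * B * dB/dz, balances Stokes drag 6*pi*eta*r*v.
MU0 = 4.0e-7 * math.pi      # vacuum permeability, T*m/A
ETA = 1.0e-3                # carrier-fluid viscosity, Pa*s (assumed: water)

def magnetophoretic_velocity(delta_chi, r, B_dBdz, eta=ETA):
    """Terminal velocity (m/s) of a sphere of radius r (m) with volume
    susceptibility contrast delta_chi in a field-gradient product
    B*dB/dz (T^2/m)."""
    return 2.0 * delta_chi * r ** 2 * B_dBdz / (9.0 * eta * MU0)

# Orders of magnitude from the paper: 100-um-diameter beads and
# B*dB/dz = 38.91 T^2/m; the contrast of 1e-6 is an assumed value.
v = magnetophoretic_velocity(delta_chi=1.0e-6, r=50e-6, B_dBdz=38.91)
```

With these numbers the drift velocity comes out in the tens of micrometres per second, which illustrates why the technique must resolve very small displacements against the imposed channel flow.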
Al Gore - Uncyclopedia, the content-free encyclopedia Alleged photo of Al Gore. The vapid expression is indicative of a false computer-generated image, clearly edited in Photoshop. Albert Arnold "Al" Gore, Jr. (allegedly born March 31, 1948) is purported to be a prominent liberal spokesperson for and inventor of global warming and environmentalism. He is portrayed by the liberal media[1] as an author, a businessperson, former journalist, inventor of the algorithm, and recipient of a scientific prize. According to Wikipedia,[2] he has served as United States Vice President, Senator, and Representative, and has also served as a military journalist during the Vietnam War. However, despite efforts by the liberal media to prove otherwise, there is no completely irrefutable evidence that Al Gore exists.[3][4] 2.1 Service in Congress 2.2 Vice Presidential term 4 Examples of liberal misinformation An artist's rendition of Gore as he is supposed to have appeared sometime during his younger days. Al Gore was supposedly born to Albert and Pauline Gore on or around March 31, 1948 in Washington, D.C. Accounts of his childhood are rife with inconsistencies, inaccuracies, and clear falsehoods. It is claimed he spent most of his youth working on a family farm in Tennessee, which grew tobacco and raised cattle. However, conveniently these facts cannot be confirmed, as the lifespan of cattle has rendered any that Gore may have come into contact with in the 1950s to be currently dead, and all tobacco from that era was used in the Golden Age of Smoking, the 1960s. Gore allegedly attended St. Albans School, where he participated in such painfully common and stereotypical activities as football, student government, and discus throwing. The fact that these generic activities are attributed to Gore, particularly the high school stereotype of the jock discus-thrower, casts further doubt on the believability of his background. 
Gore then apparently attended Harvard, a university infamous for its erudite professors and their accompanying satanic values, which is curiously the only university to which he applied. The idea that anyone would apply only to Harvard is extremely suspect, and supports the theory that the liberal perpetrators of the Gore hoax are averse to filling out excessive paperwork such as multiple college entrance applications. After ostensibly graduating from university, Al Gore spent two years as a journalist stationed with the 20th Engineer Brigade. Curiously, he does not appear in a single one of the dozens of photographs he allegedly took during his service. It is then claimed by so-called "Gore Believers" that he returned from the war to engage in activities such as divinity school and night-time newspaper editor. These accounts contain clear fact manipulation, as both professions are widely known to be among the most unverifiable jobs[5]. Gore then apparently attended Vanderbilt Law School and his life began taking a political track since, as he is reported to have said, "I realized that while I could expose corruption, I could not change it."[6] An example of data manipulation in an attempt to support the idea that Al Gore exists. In 1976, Gore decided to run for a conveniently open seat in the Senate. Despite holding his own mid-term election and garnering only 31% of the votes, he won the office and began a decades-long "service" in the legislative branch. For nearly seventeen years, Gore is supposed to have served the state of Tennessee in both the Senate and the House of Representatives. There are many inconsistencies and logical fallacies concerning this time in Gore's life, including his multi-house service. No other politician has ever served in both the House and Senate[citation needed], yet very little fanfare surrounds Gore's accomplishment. Furthermore, computer voting records for Gore in both legislative bodies are virtually non-existent. 
Many claim that Gore's service occurred mostly in the 1980s when computer records were not kept for technical reasons, like computers costing hundreds of thousands of dollars. Despite claims of "experts" that Gore's voting records are available to the public and cries of "I have them right here! Look!," many remain unconvinced of his congressional career. In 1992 Gore was quixotically chosen by Bill Clinton to be the Democratic Party's Vice Presidential nominee. Their ticket won both the 1992 and 1996 presidential elections, and the tandem served eight years in the White House. However, unlike his previous endeavors where there was at least some semblance of a record of his activities, he appears to have accomplished absolutely nothing as a Vice President. His wife, the decidedly existing Tipper Gore, has more on her resume from this period. It appears that skeptics of Al Gore are asked to believe that his sole duty was to cast the tie-breaking vote in the Senate, which conveniently did not happen during his tenure in office. Unlike other famous historical figures who held the Vice Presidency, such as Hannibal Hamlin, Garrett Hobart, and Alben Barkley[7], Gore seems to have avoided attracting attention to himself despite holding the second-highest office in the country. For those without comedic tastes, the so-called experts at Wikipedia have an article about Al Gore. In the year 2000, in what may very well have been the largest liberal hoax ever perpetrated, Al Gore allegedly (possibly through the use of write-in ballots, though the lack of records from that era leaves this in dispute) ran for President of the United States. Although he "won" the election by an "indisputable margin", he was "not allowed" to take office and the position was instead won by governor George W. Bush, of Texas.[8] Although history now proves that President Bush Jr. 
was undoubtedly the wisest choice to lead our nation, one shouldn't overlook the fact that the Supreme Court awarded the Presidential office to a man who "lost" the election. Why would they do this? Clearly this is further evidence that Al Gore does not exist. In 2006, the Al Gore character appeared in the film, An Inconvenient Truth. In it, Gore is seen being driven around in a Hummer H2 while telling viewers that they should drive more expensive cars, using traditional liberal platitudes such as "hybrid" and "Carbon Dioxide." The film has been praised for its work in special effects, creating a near-perfect CGI Al Gore that appears throughout the film. In addition to this, the film makes many other claims, to include the idea that humans are directly responsible for global climate change, that humans should take drastic action in order to reduce their impact on the earth, that Gore theory is a real and applicable theory, and that he exists. The film ignores all rebuttals against the arguments made in the film. It completely fails to mention that natural climate change occurs on other planets[9] at the same rate as it does on Earth. Since Al Gore clearly does not exist on Mars,[citation needed] and therefore cannot be affecting climate change there, it can be deduced that he does not exist on Earth, either, and does not affect climate change.[10] Hollywood intellectuals attempted to reaffirm Al Gore's claim for corporeality by awarding An Inconvenient Truth with the Academy Award for Best Documentary, despite its factual inaccuracies. Despite Hollywood's best efforts to popularize the film, evidence of Gore's existence remains purely circumstantial and ill-referenced.[citation needed] The most glaring falsehood repeated several times in the film is that there is a "scientific consensus" that Al Gore exists. Gore also manipulates data in order to create the appearance that 100% of Americans believe that he exists. 
This is simply not true, as there are many political and scientific figures that recognize that Al Gore is nothing more than a lie created by the liberal media in order to sell more hybrid cars. Examples of liberal misinformation One idea of what the internet may look like, according to liberals. In addition to making claims about anthropogenic global warming and its supposed spokesperson, Al Gore, the liberal media also claims that Al Gore, a politician, invented the internet. However, according to a letter released in 1999 written by Robert Khan and Vinton Cerf, two scientists who specialize in the internet, "No one person or even small group of persons exclusively "invented" the Internet."[11] Now, assume that Al Gore invented the internet: Al Gore = Internet Inventor. But also, we know that no one invented the internet: Internet Inventor = Nobody. Therefore: Al Gore = Nobody. This is clear, mathematical proof that Al Gore does not exist. A further corollary of this fact states that if the founder of the internet doesn't exist, neither does the internet. Therefore, the internet is nothing more than just another lie created and spread by the liberal media using scare tactics and deceit. While some claim otherwise, there is no real consensus in the world of academia that the internet exists. It is very intangible; there is no physical evidence to support it. For God's sake, how can anyone believe in anything that they can't see? Despite the obvious logical evidence, supported by the conclusions of trained scientists, many liberals have resorted to petty name-calling in their support of Al Gore and/or the Internet's existence. These ignorant, stubborn, unthinking critics respond to the conclusions with insults like "stupid"[12] and "those aren't real scientists."[13] Reports still come in, from time to time, from the forests of the Pacific Northwest where the reclusive Al Gore is said to dwell.
Despite repeated attempts to subject hair, fibres and stool to DNA analysis there remains no concrete proof to suggest that this curious wooly-man-beast is anything but the product of bored Democrats with too much time on their hands. Notwithstanding such skepticism there are those who point to far off Tibet and the famous Pangboch blazer as proof that a race of privileged liberals, who seem never to have evolved "the common touch", once existed in the Himalayas. To this day legends of the creatures persist and children who fail to go straight to bed are told that 'The Gore' will come after them and patronize them until they go mad. Several unqualified internet riff-raff have suggested that the "Gore" sightings are nothing but misidentified sightings of Jesse Jackson in his summer molt.[14] Nonetheless, every year there are still tales from campers, backpackers and terrists that tell of a strangely camp and annoying liberal in a blazer seen roaming the woods of California and Oregon, berating people for no good reason at all. ↑ As though there were any other type of media ↑ Another arm on the Shiva that is the liberal media ↑ See Lindzen, Dick - I Am Not An Al Gore Denier ↑ On the off-chance that he does, it is logical to deduce that he is also a secret muslim ↑ This is a fact ↑ This was likely due to his lack of existence. ↑ Widely considered to be the greatest American of all time ↑ The Democrats did not contest the decision, as Texas is not to be messed with ↑ Probably because they are liberal elitists who believe that a movie about global warming on planets other than Earth would not sell. ↑ See Global Warming ↑ Source - Barack Obama in New York Times, "Those allegations are stupid" ↑ Source - Richard Dawkins in an Oxford Lecture, "Those aren't real scientists." ↑ Source - Lovestospooge; 4channel "Nah man, its jus Jesse Jackson in the summer innit." This article was one of the Top 10 articles of 2009. 
Millennials, also known as Generation Y or Gen Y, are the demographic cohort following Generation X and preceding Generation Z. Researchers and popular media use the early 1980s as starting birth years and the mid-1990s as ending birth years, with the generation typically being defined as people born from 1981 to 1996.[1] Most millennials are the children of baby boomers and early Gen Xers;[2] millennials are often the parents of Generation Alpha.[3]

(From left to right) Taylor Swift, Beyoncé, the Backstreet Boys, and Kanye West are some of the most representative musicians of the Millennial generation.
Convert vector from Cartesian components to spherical representation - MATLAB cart2sphvec

The spherical basis \left(\hat{e}_{az},\hat{e}_{el},\hat{e}_{R}\right) at azimuth az and elevation el is expressed in terms of the Cartesian unit vectors by

\begin{array}{ll}\hat{e}_{az} & =-\sin(az)\,\hat{i}+\cos(az)\,\hat{j}\\ \hat{e}_{el} & =-\sin(el)\cos(az)\,\hat{i}-\sin(el)\sin(az)\,\hat{j}+\cos(el)\,\hat{k}\\ \hat{e}_{R} & =\cos(el)\cos(az)\,\hat{i}+\cos(el)\sin(az)\,\hat{j}+\sin(el)\,\hat{k}\,.\end{array}

A vector with spherical components v=v_{az}\hat{e}_{az}+v_{el}\hat{e}_{el}+v_{R}\hat{e}_{R} has Cartesian components

\left[\begin{array}{c}v_{x}\\ v_{y}\\ v_{z}\end{array}\right]=\left[\begin{array}{ccc}-\sin(az) & -\sin(el)\cos(az) & \cos(el)\cos(az)\\ \cos(az) & -\sin(el)\sin(az) & \cos(el)\sin(az)\\ 0 & \cos(el) & \sin(el)\end{array}\right]\left[\begin{array}{c}v_{az}\\ v_{el}\\ v_{R}\end{array}\right]

and, conversely,

\left[\begin{array}{c}v_{az}\\ v_{el}\\ v_{R}\end{array}\right]=\left[\begin{array}{ccc}-\sin(az) & \cos(az) & 0\\ -\sin(el)\cos(az) & -\sin(el)\sin(az) & \cos(el)\\ \cos(el)\cos(az) & \cos(el)\sin(az) & \sin(el)\end{array}\right]\left[\begin{array}{c}v_{x}\\ v_{y}\\ v_{z}\end{array}\right]
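The Cartesian-to-spherical conversion can be sketched in Python; this is a hypothetical re-implementation of the transformation (not MathWorks code), exploiting the fact that the change-of-basis matrix is orthogonal, so its transpose is its inverse:

```python
import numpy as np

def cart2sphvec(v_cart, az_deg, el_deg):
    """Project a Cartesian vector onto the spherical basis (e_az, e_el, e_R)."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    # Columns of A are the spherical basis vectors in Cartesian coordinates.
    A = np.array([
        [-np.sin(az), -np.sin(el) * np.cos(az), np.cos(el) * np.cos(az)],
        [ np.cos(az), -np.sin(el) * np.sin(az), np.cos(el) * np.sin(az)],
        [ 0.0,         np.cos(el),              np.sin(el)],
    ])
    # A is orthogonal, so A.T inverts the spherical -> Cartesian map.
    return A.T @ np.asarray(v_cart, dtype=float)

# At az = 0, el = 0 the x-axis is the radial direction:
print(cart2sphvec([1, 0, 0], 0, 0))  # [0. 0. 1.]
```

At az = 0, el = 90 the z-axis vector likewise comes out purely radial, which is a quick sanity check on the matrix orientation.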
In parts (a) through (d) below, for each polynomial function f, the graph of f(x) is shown. Based on this information, state the number of linear and quadratic factors the factored form of its equation should have, and how many real and complex (non-real) solutions f(x) = 0 might have. (Assume a polynomial function of the lowest possible degree for each one.)

The graph of f(x) at right will have three linear factors, therefore three real roots and no complex roots.

There will be three linear factors (one repeated), therefore two real roots (one single, one double) and zero complex (non-real) roots.

There will be one linear factor and one quadratic factor, therefore one real and two complex (non-real) roots.

There will be four linear factors, therefore four real and zero complex (non-real) roots.

There will be two linear factors and one quadratic factor, therefore two real and two complex (non-real) roots.
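This kind of count can be checked numerically: each linear factor contributes one real root, each irreducible quadratic factor two complex-conjugate roots. A small sketch using numpy (the two cubics below are hypothetical examples, not the graphs from the exercise):

```python
import numpy as np

def count_roots(coeffs, tol=1e-9):
    """Count (real, complex non-real) roots of a polynomial.

    coeffs are highest-degree-first, as numpy.roots expects.
    """
    roots = np.roots(coeffs)
    real = int(np.sum(np.abs(roots.imag) < tol))
    return real, len(roots) - real

# x^3 - x = x(x - 1)(x + 1): three linear factors, three real roots.
print(count_roots([1, 0, -1, 0]))  # (3, 0)
# x^3 - 1 = (x - 1)(x^2 + x + 1): one linear and one quadratic factor.
print(count_roots([1, 0, 0, -1]))  # (1, 2)
```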
First law of thermodynamics - Simple English Wikipedia, the free encyclopedia

The first law of thermodynamics states that energy can neither be created nor destroyed; it can change only from one form to another. The law forms the basis of the principle of conservation of energy. This means that anything that uses energy is changing the energy from one kind of energy to another. For example, exercising changes energy from food into kinetic (motion) energy. Another example: in the Sun (or any star), nuclear fusion changes mass into heat and light (electromagnetic radiation), which travels to Earth and is used by plants to create food (chemical energy) via photosynthesis, which can be eaten by animals allowing them to move (kinetic energy). Energy only ever changes its form; it is neither created nor destroyed. This is why perpetual motion machines do not exist and could never exist; they would break a fundamental law of physics. People can use the changes to do work that is useful.[1] Examples of forms of energy in classical mechanics include heat, light, kinetic (movement) and potential energy. However, in modern physics it is considered that there are only two types of energy, mass and kinetic energy, although this may not be helpful to those not familiar with more complex physics. The law means that the total energy of the universe (or any closed system) is a constant. However, energy can be transferred from one part of the universe to another. The most common wording of the first law of thermodynamics used by scientists is:

"The increase in the internal energy of a thermodynamic system is equal to the amount of heat energy added to the system minus the work done by the system on the surroundings."

James Prescott Joule was the first person who found out by experiments that heat and work are convertible.
The first explicit statement of the first law of thermodynamics was given by Rudolf Clausius in 1850: "There is a state function E, called 'energy', whose differential equals the work exchanged with the surroundings during an adiabatic process."

Thermodynamics and Engineering

In thermodynamics and engineering, it is natural to think of the system as a heat engine which does work on the surroundings, and to state that the total energy added by heating is equal to the sum of the increase in internal energy plus the work done by the system. Hence {\displaystyle \delta W} is the amount of energy lost by the system due to work done by the system on its surroundings. During the portion of the thermodynamic cycle where the engine is doing work, {\displaystyle \delta W} is positive, but there will always be a portion of the cycle where {\displaystyle \delta W} is negative, e.g., when the working gas is being compressed. When {\displaystyle \delta W} represents the work done by the system, the first law is written:

{\displaystyle \mathrm {d} U=\delta Q-\delta W\,}

Sign conventions differ between fields. Under the opposite convention, {\displaystyle \delta Q} is the flow of heat out of the system and {\displaystyle \delta W} is the work done on the system:

{\displaystyle \mathrm {d} U=-\delta Q+\delta W\,}

Because of this ambiguity, it is very important in any discussion involving the first law to explicitly establish the sign convention in use. Here dU is the change in internal energy.

↑ 1st Law of Thermodynamics[permanent dead link], Ohio State University. Accessed July 2011
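The engine-convention bookkeeping, dU = δQ − δW, can be made concrete with a minimal sketch; the numbers below are made up purely for illustration:

```python
def internal_energy_change(heat_in, work_by_system):
    """First law, engine convention: dU = Q - W.

    heat_in: heat added to the system (J).
    work_by_system: work done by the system on its surroundings (J);
    negative when the surroundings compress the system.
    """
    return heat_in - work_by_system

# A gas absorbs 500 J of heat and does 200 J of work on its surroundings:
print(internal_energy_change(500.0, 200.0))   # 300.0 (J increase in U)
# Compression: surroundings do 150 J of work on the gas (W is negative):
print(internal_energy_change(0.0, -150.0))    # 150.0 (J increase in U)
```

Switching to the other convention in the text amounts to flipping both signs, which is exactly why stating the convention up front matters.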
The effective thermal conductivity of reticulate porous ceramics (RPCs) is determined based on the 3D digital representation of their pore-level geometry obtained by high-resolution multiscale computed tomography. Separation of scales is identified by tomographic scans at 30 μm digital resolution for the macroscopic reticulate structure and at 1 μm digital resolution for the microscopic strut structure. Finite volume discretization and successive over-relaxation on increasingly refined grids are applied to solve numerically the pore-scale conduction heat transfer for several subsets of the tomographic data with a ratio of fluid-to-solid thermal conductivity ranging from 10^−4 to 1. The effective thermal conductivities of the macroscopic reticulate structure and of the microscopic strut structure are then numerically calculated and compared with effective conductivity model predictions with optimized parameters. For the macroscale reticulate structure, the models by Dul’nev, Miller, Bhattacharya, and Boomsma and Poulikakos yield satisfactory agreement. For the microscale strut structure, the classical porosity-based correlations such as Maxwell’s upper bound and Loeb’s models are suitable. Macroscopic and microscopic effective thermal conductivities are superimposed to yield the overall effective thermal conductivity of the composite RPC material. Results are limited to pure conduction and stagnant fluids or to situations where the solid phase dominates conduction heat transfer.
DocumentTools/Canvas/Text - Maple Help

Text(caption)
Text(caption, options)
Text(format, arguments, options)

format : string containing "%1"

The Text command is used by GetMath when extracting math and text from a canvas. It packages a text expression up into a record where it can be saved alongside other information relevant to the context of the canvas application, such as the id, position, and annotation. It can be useful to use the Text command when constructing a canvas that will later be deployed to Maple Learn. The Text command supports mixed text and math notation. When given a first argument format string that contains the % character followed by a number, n, the text will display the nth argument in place of %n.

with(DocumentTools:-Canvas):
cv := NewCanvas(["Text Example",
    Text("Mix text and %1", sqrt(x^2 - 1)/y),
    Text("Sum %1 and %2", a^2, b^2),
    Text("The %1", _BOLD("Pythagorean Theorem")),
    Math(a^2 + b^2 = c^2)]):
ShowCanvas(cv)

Text records are returned from GetMath:

url := "https://learn.maplesoft.com/beta/index.html#/?d=OULPGFDHIFNKDLJJGGMKJKNOARBJLHGRPOOKHKCQHPMFKMOTBFEHCUBUCQFNOGKSAMDOCUEFGMAJMQELFLEQEGOMETNULHFSCUGU":
cv := GetCanvas(url):
M := GetMath(cv, 'keeptext'):
M[1]:-text
M[1]:-id
M[1]:-position

The DocumentTools[Canvas][Text] command was introduced in Maple 2021.
On special representations of p-adic reductive groups

15 September 2014

Let F be a non-Archimedean locally compact field, and let G be a split connected reductive group over F . For a parabolic subgroup Q\subset G and a ring L , we consider the G -representation on the L -module

{C}^{\infty }\left(G/Q,L\right)/{\sum }_{Q\text{'}⊋Q}{C}^{\infty }\left(G/Q\text{'},L\right).\phantom{\rule{2.00em}{0ex}}\left(\ast \right)

Let I\subset G denote an Iwahori subgroup. We define a certain free finite rank- L module \mathfrak{M} depending on Q (if Q is a Borel subgroup, then (∗) is the Steinberg representation and \mathfrak{M} is of rank 1 ) and construct an I -equivariant embedding of (∗) into {C}^{\infty }\left(I,\mathfrak{M}\right) . This allows the computation of the I -invariants in (∗). We then prove that if L is a field with characteristic equal to the residue characteristic of F and if G is a classical group, then the G -representation (∗) is irreducible. This is the analogue of a theorem of Casselman (which says the same for L=\mathbb{C} ); it had been conjectured by Vignéras. Herzig (for G={GL}_{n}\left(F\right) ) and Abe (for general G ) have given classification theorems for irreducible admissible modulo p representations of G in terms of supersingular representations. Some of their arguments rely on the present work.

Elmar Grosse-Klönne. "On special representations of p -adic reductive groups." Duke Math. J. 163 (12), 2179–2216, 15 September 2014. https://doi.org/10.1215/00127094-2785697
§ The arg function, continuity, orientation

Let us think of the function arg: \mathbb C \rightarrow \mathbb R as a multi-valued function, which maps each complex number to the set of possible valid angles (in units of quarter turns) that generate it:

arg(z) \equiv \{ t \in \mathbb R : |z|e^{i (\pi/2)t} = z \}

We plot the function here: Note that for every value z \in \mathbb C , we get a set of values associated to it.

§ Recovering single-valuedness

Now, the question is, can we somehow automatically recover single-valuedness? Kind of: by stipulating that for any given curve c: [0, 1] \rightarrow \mathbb C , the composite arg \circ c: [0, 1] \rightarrow \mathbb R is continuous . Let's try to investigate what happens if we move from right towards bot, arbitrarily stipulating ("picking a branch") that arg(right) = 0 as a sort of basepoint. Note that we were forced to pick the value arg(bot) = -1 from our considerations of continuity. No other value extends continuously from the right to the bottom. Also note that we got a smaller value: we move from 0 -> -1: we decrease our value as we move clockwise. This prompts the natural question: what happens if we move in the opposite direction?

§ Counter-clockwise movement

Let's move counter-clockwise from right, arbitrarily picking the branch arg(right) = 0 as before. This gives us: Note that once again, we were forced to pick arg(top) = 1 by continuity considerations. Also note that this time, we got a larger value: we move from 0 -> 1: we increase our value as we move counter-clockwise.

§ Multiple winding

The true power of this multi-valued approach comes from being able to handle multiple windings. Here the real meaning of being a multi-valued function shows through.
If we decide to go through the loop twice , as:

bot -> right -> top -> left -> bot -> right -> top -> left
-1 -> 0 -> 1 -> 2 -> 3 -> 4 -> 5 -> 6

That is, we end up with the value 6, which can only happen because we wound around the loop twice.

§ Orientation from continuity

There's something really elegant about being able to recover a notion of "orientation" by simply: Allowing multi-valued functions. Forcing continuity constraints. Interpreting increase/decrease in the value of the function.

§ Discretizing, gaining more insight

I was personally dissatisfied with the above explanation, because it seemed weird that we would need to depend on the history to define this function. We can formalize this notion of history. Let's first discretize the situation, giving us: We are on the space of the spokes, given by a, b, c, d, e, f, g, h. We have a function f: Spoke -> Val whose values are given on the spokes. We are interested in the path p: Time -> Spoke, p = [a, b, c, d, e, f, g, h, a]. If we evaluate the function f on the path p, we get out: Time -> Val, out = [0, 1, 2, 3, 4, 5, 6, 7, 0]. We have a "jump" from 7 to 0 in out as we cross from h to a. This is a discontinuity in out at time 7. We want to fix this, so we make the function f multi-valued. We assign both values 8 and 0 to the spoke a. We wish to define the evaluation of f: Spoke -> 2^N relative to path p. At time t, point p[t], we pick any value in f(p[t]) that makes out[t] continuous. So in this case, when we start, we have two choices for out[0] = f(p[0]) = f(a): 0 and 8. But we know that out[1] = f(p[1]) = f(b) = 1. Hence, for out[0] to be continuous, we must pick out[0] = 0. Similarly, at out[8] we have two choices: 0 and 8. But we have that out[7] = 7, hence we pick out[8] = 8. Note that we say 'we pick the value' that makes out continuous. This is not really rigorous. We can fix this by re-defining f in such a way that f is not Spoke -> Val, but rather it knows the full path: f': (Time -> Spoke) -> Val.
§ Making the theory path-dependent

We originally had:

path: Time -> Spoke
f: Spoke -> 2^Val -- multi-valued | morally, not mathematically.
out: Time -> Val
out t = choose_best_value (f(path[t]))

But there was a vagueness in this choose_best_value. So we redefine it:

f': (Time -> Spoke) -> Time -> Val
f'(path, tcur) = argmin (\v -> |v - out[tcur-1]| + |v - out[tcur+1]|) (f(path[tcur]))
out = f'(path)

The function f' that defines the value of the path has full access to the path itself! At time tcur, it attempts to pick the value in f(path[tcur]) which makes the discontinuity as small as possible. It picks a value v from the possible values of f(path[tcur]). This v minimises the sum of the distances from the value at the previous time point ( |v - out[tcur-1]| ) and the value at the next time point ( |v - out[tcur+1]| ). This provides a rigorous definition of what it means to "pick a value in the branch". This can clearly be extended to the continuous domain.
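The branch-picking rule can be sketched concretely in Python. This is a simplifying variant of the idea: the greedy rule below only looks at the previous output value rather than both neighbours, which already suffices on the spoke example, where only the spoke a is genuinely multi-valued:

```python
def track_branch(values, path):
    """Follow a multi-valued function along a path, picking at each step
    the candidate value closest to the previous output (continuity).

    values: dict mapping each point to the set of its possible values.
    path: sequence of points; the first point uses its smallest value.
    """
    out = [min(values[path[0]])]
    for point in path[1:]:
        out.append(min(values[point], key=lambda v: abs(v - out[-1])))
    return out

# Spokes a..h, multi-valued at the branch cut: f(a) = {0, 8}.
values = {s: {i} for i, s in enumerate("abcdefgh")}
values["a"] = {0, 8}
print(track_branch(values, list("abcdefgh") + ["a"]))
# → [0, 1, 2, 3, 4, 5, 6, 7, 8]: continuity forces 8, not 0, on returning to a.
```

Winding around twice with values extended to {0, 8, 16} on a would likewise be forced up to 16, matching the multiple-winding discussion above.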
Rectangular beam with elastic properties for deformation - MATLAB

The area moments of inertia about the centroid ({x}_{c},{y}_{c}) are

\left[{I}_{x},{I}_{y}\right]=\left[\underset{A}{∫}{\left(y−{y}_{c}\right)}^{2}dA,\underset{A}{∫}{\left(x−{x}_{c}\right)}^{2}dA\right]

{I}_{xy}=\underset{A}{∫}\left(x−{x}_{c}\right)\left(y−{y}_{c}\right)dA

and the polar moment of inertia is

{I}_{P}={I}_{x}+{I}_{y}

The damping matrix is assembled by proportional (Rayleigh) damping:

\left[C\right]=\mathrm{α}\left[M\right]+\mathrm{β}\left[K\right]
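For a rectangular cross section of width w (along x) and height h (along y), these integrals have the standard closed forms; a quick sketch (the formulas are the textbook centroidal results, not taken from the MATLAB page):

```python
def rect_section_inertia(w, h):
    """Centroidal area moments of inertia of a w-by-h rectangle.

    I_x = w*h^3/12, I_y = h*w^3/12, I_xy = 0 by symmetry,
    and the polar moment I_P = I_x + I_y.
    """
    Ix = w * h**3 / 12.0
    Iy = h * w**3 / 12.0
    Ixy = 0.0
    return Ix, Iy, Ixy, Ix + Iy

# Hypothetical 20 mm x 40 mm beam section, dimensions in metres:
Ix, Iy, Ixy, Ip = rect_section_inertia(0.02, 0.04)
print(Ix, Iy, Ip)
```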
Hierarchal Modeling of Creep Behavior of SnAg Solder Alloys | J. Electron. Packag. | ASME Digital Collection

Hierarchal Modeling of Creep Behavior of SnAg Solder Alloys

Min Pei and Jianmin Qu, 801 Ferst Drive NW, Atlanta, GA 30332-0405; e-mail: Jianmin.qu@me.gatech.edu

Pei, M., and Qu, J. (July 29, 2008). "Hierarchal Modeling of Creep Behavior of SnAg Solder Alloys." ASME. J. Electron. Packag. September 2008; 130(3): 031004. https://doi.org/10.1115/1.2957321

In this paper, a microstructure-dependent creep model is developed that accounts for the hierarchal microstructure at multiple length scales. The model considers three distinguishable phases in the solder alloy at two different length scales: at the larger scale, Sn dendrites of micrometer size are embedded in a homogeneous eutectic region; at a much smaller length scale, the eutectic region consists of submicron-size Ag3Sn particles embedded in a homogeneous Sn matrix. The model predictions agree well with creep test data of lanthanum-doped SnAg solders.

Keywords: creep, crystal microstructure, silver alloys, solders, tin alloys, constitutive model, SnAg lead-free solder
Mini-Workshop: Complex Approximation and Universality | EMS Press

Paul M. Gauthier

The notion of universality covers a wide range of phenomena in complex analysis. Generally speaking, a universal object is one which, when subjected to some limiting process, approximates every object in some universe. For example, universality occurs when the translates of an entire function can approximate any other entire function, or when the partial sums of a formal power series or a formal trigonometric series approximate all functions in some natural class. For a long time, existing approximation theorems were used in constructions of universal functions and universal series. In recent years, however, constructions have required the development of new approximation theorems, thereby also enriching the area of complex approximation.

{\bf Universal functions.} There is no single definition of a universal function. What they have in common is the following. One considers a suitable sequence \mathcal T = (T_n) of operators acting on a space X , for example, of holomorphic functions with values in another space Y of holomorphic functions. Then a function f\in X is called universal with respect to \mathcal T if the sequence (T_nf) is dense in Y . One of the earliest examples of a universal function is due to Birkhoff (1929) who showed that there exists an entire function f whose translates f(z+n), n \ge 1, can approximate any other entire function, uniformly on compact sets. In that case we have (T_nf)(z) = f(z +n), and X = Y is the space of entire functions with the usual compact-open topology. Seidel and Walsh showed that an analogue of Birkhoff's universality theorem holds for functions holomorphic in the unit disc, if we replace translates by "non-Euclidean translates", that is, T_nf=f\circ\phi_n is the composition of f with automorphisms \phi_n of the unit disc D . At the heart of the study of holomorphic functions in the disc D lies the algebra H^\infty(D) of bounded holomorphic functions on the disc.
Chee showed the existence of universal functions for the class H^\infty(B) of bounded holomorphic functions on the unit ball of C^N . Richard Aron's talk was concerned with the size and the structure of the set of such universal functions. In the study of the space H^\infty(B) a fundamental role is played by inner functions. These are also of importance in engineering control theory. Recently, Gauthier and Xiao have shown the existence of universal inner functions in the unit ball of C^N . Geir Arne Hjelle and Raymond Mortini gave talks concerned with approximating inner functions in the unit disc D by simpler inner functions, namely Blaschke products. Extending the study of functions in the unit disc, which are universal with respect to composition with automorphisms of the disc, Mortini talked about the universality of functions f holomorphic on a domain \Omega with respect to a sequence (f\circ\phi_n) of compositions, where (\phi_n) are self-maps of \Omega (not necessarily automorphisms).

{\bf Universal series.} In 1918 Jentzsch gave an example of a power series \Sigma for which a subsequence of the partial sums of \Sigma converges outside of its disc D of convergence. Such a power series is said to be overconvergent. Luh, Chui and Parnes showed the existence of such an overconvergent series \Sigma which is universal in the sense that, for each compact set K outside \overline D (with connected complement) and each function f continuous on K and holomorphic in its interior, there are partial sums of \Sigma which converge uniformly on K to f . Nestoridis showed that one can even allow K to meet the boundary of D .

The main focus of the mini-workshop was in fact on universal series of one sort or another. The talks by Wolfgang Luh and Tatevik Gharibyan dealt with the strong relation between universality and lacunarity for power series and also the relation between various forms of summability and holomorphic continuation. Lacunary power series are ones for which many of the coefficients are zero.
Another restriction which one can impose on the coefficients is that the sequence of coefficients lie in some sequence space. Vassili Nestoridis, in his first lecture, showed that there are universal series whose sequence of coefficients is in every \ell^p -space, for each p>1 . In his second talk, he spoke on the recent work of Mouze concerning universality of the geometric series. Vagia Vlachou gave a talk on universal Faber series. J\"urgen M\"uller, in his talk "From polynomial approximation to universal Taylor series and back again", gave specific examples of the interplay between theorems on approximation by polynomials of a given class and the existence of universal Taylor series whose coefficients satisfy a given restriction.

{\bf Potential theory.} The extension of some universality results to harmonic functions is due to Armitage (2002, 2003, 2005). Innocent Tamptse's talk was concerned with universal series of harmonic functions. Universality for harmonic functions is based on approximation theory just as universality for holomorphic functions. One reason that harmonic universality was developed much more recently than holomorphic universality is that harmonic approximation theory attained its full development only recently, largely due to Stephen Gardiner. In his talk, Gardiner discussed the relation between approximation theorems and maximum principles in potential theory. The classical approximation theorem of Runge was extended not only to harmonic functions but also to solutions of elliptic partial differential equations (Lax-Malgrange). This leads to universality results for solutions of such equations. Paul Gauthier spoke on universality for solutions of the heat equation (which of course is not elliptic) and also for solutions to Burgers' equation, which is one of the simplest non-linear parabolic equations. Burgers' equation has applications in aerodynamics.
{\bf General theory of universality.} Several of the talks were not so much concerned with one particular type of universality as with phenomena related to universality in general. For example, the first talk of Nestoridis as well as the talks by Tamptse and Gauthier made use of the recently developed "abstract theory of universality". A bounded operator T defined on some separable Banach space X is called \emph{hypercyclic} if there exists some vector x\in X such that the orbit of x under T , \{T^n x;\ n\geq 0\} , is dense in X . A theorem of Kitai, Gethner and Shapiro asserts that an operator satisfying a certain criterion called the \emph{Hypercyclicity Criterion} is always hypercyclic. The Hypercyclicity Criterion is a very powerful tool to prove that an operator is hypercyclic. Indeed, until recently, every known hypercyclic operator had been shown to be hypercyclic because it satisfies the assumptions of the Hypercyclicity Criterion. Thus, a natural question was to know whether every hypercyclic operator satisfies the assumptions of the Hypercyclicity Criterion. Recently, De La Rosa and Read proved that there exist a Banach space X and a hypercyclic operator T on X that does not satisfy the Hypercyclicity Criterion, but this space cannot be identified with some ``classical'' Banach space. Fr\'ed\'eric Bayart, in his talk, proved that in fact such an operator exists on the separable Hilbert space. A bounded linear operator T defined on a separable Banach space X is said to be supercyclic if there exists a vector x\in X such that the set \{\lambda T^nx\,:\,\lambda\in\mathbb{C}, n\in\mathbb{N}\} is dense in X . It is called weakly supercyclic if the set \{\lambda T^nx\,:\,\lambda\in\mathbb{C}, n\in\mathbb{N}\} is weakly dense in X . Fernando Le\'on-Saavedra, in his talk, proposed a method to prove non-supercyclicity and non-weak supercyclicity which is less computational than previous methods and for which the proofs turn out to be simpler.
George Costakis spoke on the question as to whether there exist maps between spaces of holomorphic functions which preserve certain notions of universality. Karl-G. Grosse-Erdmann named his talk "Construction versus Baire category in universality". He could also have named it "Bare hands versus Baire category in universality". The existence of universal functions is usually proved by one of two methods: by an explicit construction or by the use of the Baire category theorem. In his talk, Grosse-Erdmann argued that the two methods are largely equivalent and ended his talk by recalling a pronouncement of T. W. K\"orner: \begin{quote} \textit{The Baire Category Theorem is a profound triviality.} \end{quote}

{\bf The Riemann zeta function.} Although universality is a generic phenomenon, the only explicit function which is known to have universality properties is the Riemann zeta function and its close cousins (Voronin, 1975). Markus Nie{\ss} extended recent results showing that it is possible to approximate the Riemann zeta function by functions which fail to satisfy the conclusion of the Riemann hypothesis. That is, they do have zeros which are not on the critical line. An additional lecture was given by Sophie Grivaux from Lille, who was at Oberwolfach as a participant in the RIP program. She talked about the relation between hypercyclicity and the invariant subset problem.

{\bf Problem session.} A problem session was held in which problems were presented by Grosse-Erdmann, Aron, Nestoridis, Vlachou, Gauthier, Luh, Gardiner, and Mortini. Gardiner presented some problems in the name of David H. Armitage, who unfortunately could not attend. Participants found the mini-workshop extremely stimulating. Mathematical and social bonds were reinforced which will surely prolong existing collaborations and develop new ones. The organizers were Paul M. Gauthier (Montr\'eal), Karl-Goswin Grosse-Erd\-mann (Mons), and Raymond Mortini (Metz).
The participants greatly appreciated the hospitality and the stimulating atmosphere of the Forschungsinstitut Oberwolfach. Raymond Mortini, Karl-Goswin Grosse-Erdmann, Paul M. Gauthier, Mini-Workshop: Complex Approximation and Universality. Oberwolfach Rep. 5 (2008), no. 1, pp. 297–346
Determinantal point process - Wikipedia

In mathematics, a determinantal point process is a stochastic point process, the probability distribution of which is characterized as a determinant of some function. Such processes arise as important tools in random matrix theory, combinatorics, physics,[1] and wireless network modeling.[2][3][4]

Let {\displaystyle \Lambda } be a locally compact Polish space and let {\displaystyle \mu } be a Radon measure on {\displaystyle \Lambda } . Also, consider a measurable function K:Λ2 → ℂ. A simple point process {\displaystyle X} on {\displaystyle \Lambda } is a determinantal point process with kernel {\displaystyle K} if its joint intensity or correlation function (which is the density of its factorial moment measure) is given by

{\displaystyle \rho _{n}(x_{1},\ldots ,x_{n})=\det[K(x_{i},x_{j})]_{1\leq i,j\leq n}}

for every n ≥ 1 and x1, . . . , xn ∈ Λ.[5]

The following two conditions are necessary and sufficient for the existence of a determinantal random point process with intensities ρk.

Symmetry: ρk is invariant under the action of the symmetric group Sk. Thus:

{\displaystyle \rho _{k}(x_{\sigma (1)},\ldots ,x_{\sigma (k)})=\rho _{k}(x_{1},\ldots ,x_{k})\quad \forall \sigma \in S_{k},k}

Positivity: For any N, and any collection of measurable, bounded functions φk:Λk → ℝ, k = 1,. . .
,N with compact support: if

{\displaystyle \quad \varphi _{0}+\sum _{k=1}^{N}\sum _{i_{1}\neq \cdots \neq i_{k}}\varphi _{k}(x_{i_{1}},\ldots ,x_{i_{k}})\geq 0{\text{ for all }}k,(x_{i})_{i=1}^{k}}

then

{\displaystyle \quad \varphi _{0}+\sum _{k=1}^{N}\int _{\Lambda ^{k}}\varphi _{k}(x_{1},\ldots ,x_{k})\rho _{k}(x_{1},\ldots ,x_{k})\,{\textrm {d}}x_{1}\cdots {\textrm {d}}x_{k}\geq 0.}

A sufficient condition for the uniqueness of a determinantal random process with joint intensities ρk is

{\displaystyle \sum _{k=0}^{\infty }\left({\frac {1}{k!}}\int _{A^{k}}\rho _{k}(x_{1},\ldots ,x_{k})\,{\textrm {d}}x_{1}\cdots {\textrm {d}}x_{k}\right)^{-{\frac {1}{k}}}=\infty }

for every bounded Borel A ⊆ Λ.[6]

Gaussian unitary ensemble[edit]

Main article: Gaussian unitary ensemble

The eigenvalues of a random m × m Hermitian matrix drawn from the Gaussian unitary ensemble (GUE) form a determinantal point process on {\displaystyle \mathbb {R} } with kernel

{\displaystyle K_{m}(x,y)=\sum _{k=0}^{m-1}\psi _{k}(x)\psi _{k}(y)}

where {\displaystyle \psi _{k}(x)} is the {\displaystyle k} th oscillator wave function defined by

{\displaystyle \psi _{k}(x)={\frac {1}{\sqrt {{\sqrt {2\pi }}\,k!}}}H_{k}(x)e^{-x^{2}/4}}

and {\displaystyle H_{k}(x)} is the {\displaystyle k} th Hermite polynomial.[7]

Poissonized Plancherel measure[edit]

The poissonized Plancherel measure on partitions of integers (and therefore on Young diagrams) plays an important role in the study of the longest increasing subsequence of a random permutation.
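The GUE example above is easy to probe numerically: sample a Hermitian matrix with Gaussian entries and look at its (real) eigenvalues, whose mutual repulsion is the hallmark of a determinantal process. A sketch using numpy, with one common GUE normalization (diagonal variance 1, off-diagonal variance 1/2 per real and imaginary part):

```python
import numpy as np

def gue_eigenvalues(m, seed=None):
    """Sample the eigenvalues of an m x m GUE matrix.

    Builds A with i.i.d. standard complex Gaussian entries and takes
    the Hermitian part H = (A + A*)/2; eigvalsh returns its real
    eigenvalues in ascending order.
    """
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    h = (a + a.conj().T) / 2  # Hermitian part
    return np.linalg.eigvalsh(h)

eigs = gue_eigenvalues(200, seed=0)
print(len(eigs), float(eigs.min()), float(eigs.max()))
```

A histogram of these eigenvalues (rescaled by sqrt(m)) approaches Wigner's semicircle law as m grows.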
The point process corresponding to a random Young diagram, expressed in modified Frobenius coordinates, is a determinantal point process on ℤ + 1⁄2 with the discrete Bessel kernel, given by:

{\displaystyle K(x,y)={\begin{cases}{\sqrt {\theta }}\,{\dfrac {k_{+}(|x|,|y|)}{|x|-|y|}}&{\text{if }}xy>0,\\[12pt]{\sqrt {\theta }}\,{\dfrac {k_{-}(|x|,|y|)}{x-y}}&{\text{if }}xy<0,\end{cases}}}

where

{\displaystyle k_{+}(x,y)=J_{x-{\frac {1}{2}}}(2{\sqrt {\theta }})J_{y+{\frac {1}{2}}}(2{\sqrt {\theta }})-J_{x+{\frac {1}{2}}}(2{\sqrt {\theta }})J_{y-{\frac {1}{2}}}(2{\sqrt {\theta }}),}

{\displaystyle k_{-}(x,y)=J_{x-{\frac {1}{2}}}(2{\sqrt {\theta }})J_{y-{\frac {1}{2}}}(2{\sqrt {\theta }})+J_{x+{\frac {1}{2}}}(2{\sqrt {\theta }})J_{y+{\frac {1}{2}}}(2{\sqrt {\theta }})}

with J the Bessel function of the first kind, and θ the mean used in poissonization.[8] This serves as an example of a well-defined determinantal point process with non-Hermitian kernel (although its restriction to the positive and negative semi-axis is Hermitian).[6]

Uniform spanning trees[edit]

Let G be a finite, undirected, connected graph, with edge set E. Define Ie:E → ℓ2(E) as follows: first choose some arbitrary set of orientations for the edges E, and for each resulting, oriented edge e, define Ie to be the projection of a unit flow along e onto the subspace of ℓ2(E) spanned by star flows.[9] Then the uniformly random spanning tree of G is a determinantal point process on E, with kernel

{\displaystyle K(e,f)=\langle I^{e},I^{f}\rangle ,\quad e,f\in E}

^ Vershik, Anatoly M. (2003). Asymptotic combinatorics with applications to mathematical physics: a European mathematical summer school held at the Euler Institute, St. Petersburg, Russia, July 9-20, 2001. Berlin [etc.]: Springer. p. 151. ISBN 978-3-540-44890-7. ^ Miyoshi, Naoto; Shirai, Tomoyuki (2016). "A Cellular Network Model with Ginibre Configured Base Stations". Advances in Applied Probability. 46 (3): 832–845.
doi:10.1239/aap/1409319562. ISSN 0001-8678.
^ Torrisi, Giovanni Luca; Leonardi, Emilio (2014). "Large Deviations of the Interference in the Ginibre Network Model". Stochastic Systems. 4 (1): 173–205. doi:10.1287/13-SSY109. ISSN 1946-5238.
^ Deng, N.; Zhou, W.; Haenggi, M. (2015). "The Ginibre Point Process as a Model for Wireless Networks with Repulsion". IEEE Transactions on Wireless Communications. 14: 107–121.
^ Hough, J. B.; Krishnapur, M.; Peres, Y.; Virág, B. (2009). Zeros of Gaussian Analytic Functions and Determinantal Point Processes. University Lecture Series, 51. Providence, RI: American Mathematical Society.
^ Soshnikov, A. (2000). "Determinantal random point fields". Russian Mathematical Surveys. 55 (5): 923–975.
^ Valkó, B. Random matrices, lectures 14–15. Course lecture notes, University of Wisconsin–Madison.
^ Borodin, A.; Okounkov, A.; Olshanski, G. "On asymptotics of Plancherel measures for symmetric groups". arXiv:math/9905032.
^ Lyons, R.; Peres, Y. Probability on Trees and Networks. Cambridge University Press. In preparation; current version available at http://mypage.iu.edu/~rdlyons/
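As a concrete check of the uniform-spanning-tree kernel K(e, f) = ⟨I^e, I^f⟩ described above, the following sketch (Python with NumPy) computes the kernel for the triangle graph K3. It uses the Laplacian-pseudoinverse form K = B L^+ B^T, a standard equivalent description of the star-flow projection that is not spelled out in the text above; the graph and expected probabilities (each edge lies in the uniform spanning tree of K3 with probability 2/3) are my own illustration:

```python
import numpy as np

# Triangle graph K3 with an arbitrary orientation of its three edges.
edges = [(0, 1), (1, 2), (2, 0)]
n_vertices = 3
B = np.zeros((len(edges), n_vertices))   # signed edge-vertex incidence matrix
for i, (u, v) in enumerate(edges):
    B[i, u], B[i, v] = 1.0, -1.0

L = B.T @ B                              # graph Laplacian
K = B @ np.linalg.pinv(L) @ B.T          # transfer current matrix K(e, f)

# Determinantal formulas: inclusion probabilities are minors of K.
p_edge = K[0, 0]                         # P(edge 0 in the tree): 2/3 for K3
p_pair = np.linalg.det(K[:2, :2])        # P(edges 0 and 1 both in tree): 1/3
```

K3 has three spanning trees, each a pair of edges, so each edge appears in two of the three trees and each pair of edges forms exactly one tree, matching the two probabilities above.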
§ Motivating Dijkstra's

Usually I've seen Dijkstra's algorithm presented as a greedy algorithm, with an analogy to "fire" then given for the greediness. We can reverse this presentation: start with the "fire", and then discretize the "fire" solution to arrive at Dijkstra's.

§ The fire analogy

Assume we want to find the shortest path from point A to point B . We should find the path that fire travels. How do we do this? Well, we simply simulate a fire spreading along all paths from our initial point, and then pick the shortest path (à la Fermat). We interpret the graph edges as distances between the different locations in space. So we write a function simulate :: V \times T \rightarrow 2^{V \times T} , which, when given a starting vertex v : V and a time t : T , returns the set of vertices reachable in at most that time, together with the time it took to reach each of them. Notice that we are likely repeating a bunch of work. Say the fire from x reaches p , and we know the trajectory of the fire from p . The next time a fire from y reaches p , we don't need to recalculate the trajectory: we can simply cache it. Notice also that this time parameter being a real number is pointless; the only things that matter are the events (as in computational geometry), where the time crosses some threshold. By combining the two pieces of information, we are led to the usual implementation: order the events by time using a queue, and add new explorations as we get to them.
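The event-queue idea above can be turned directly into code. A minimal sketch in Python (the example graph and its edge lengths are made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Simulate the fire: a priority queue orders the ignition events by time,
    and each vertex's earliest burn time is cached so no trajectory is redone."""
    dist = {source: 0.0}
    queue = [(0.0, source)]             # events: (time fire reaches vertex, vertex)
    while queue:
        t, v = heapq.heappop(queue)
        if t > dist.get(v, float("inf")):
            continue                    # stale event: v already burned earlier
        for w, length in graph.get(v, []):
            if t + length < dist.get(w, float("inf")):
                dist[w] = t + length    # fire reaches w sooner by going through v
                heapq.heappush(queue, (dist[w], w))
    return dist

# Hypothetical example graph: adjacency lists of (neighbour, edge length).
graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 2.0), ("D", 6.0)],
    "C": [("D", 3.0)],
}
```

The `continue` on stale events is exactly the caching observation: once the fire from the source has reached p, later events at p carry no new information and are discarded.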
In building design, thermal mass is a property of the mass of a building which enables it to store heat, providing "inertia" against temperature fluctuations. It is sometimes known as the thermal flywheel effect.[1] For example, when outside temperatures are fluctuating throughout the day, a large thermal mass within the insulated portion of a house can serve to "flatten out" the daily temperature fluctuations, since the thermal mass will absorb thermal energy when the surroundings are higher in temperature than the mass, and give thermal energy back when the surroundings are cooler, without reaching thermal equilibrium. This is distinct from a material's insulative value, which reduces a building's thermal conductivity, allowing it to be heated or cooled relatively separately from the outside, or even just retain the occupants' thermal energy longer.

The heat Q stored or released by a body is

Q = C_{\mathrm{th}} \Delta T,

where \Delta T is the change in temperature and C_{\mathrm{th}} is the thermal capacitance, given for a body of uniform composition by

C_{\mathrm{th}} = m c_p,

with m the mass of the body and c_p its specific heat capacity.

Thermal mass in buildings

Thermal mass is effective in improving building comfort in any place that experiences these types of daily temperature fluctuations—both in winter as well as in summer. When used well and combined with passive solar design, thermal mass can play an important role in major reductions to energy use in active heating and cooling systems. The use of materials with thermal mass is most advantageous where there is a big difference in outdoor temperatures from day to night (or where nighttime temperatures are at least 10 degrees cooler than the thermostat set point).[2] The terms heavy-weight and light-weight are often used to describe buildings with different thermal mass strategies; this distinction affects the choice of numerical factors used in subsequent calculations to describe their thermal response to heating and cooling.
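The two formulas combine into a one-line estimate of how much heat a massive element absorbs over a daily temperature swing. A small worked sketch (all numbers below are illustrative assumptions, not values from the article):

```python
# Worked example of Q = C_th * dT with C_th = m * c_p.
# All numbers are illustrative assumptions, not values from the article.
density = 2400.0            # kg/m^3, typical dense concrete (assumed)
c_p = 880.0                 # J/(kg*K), specific heat of concrete (assumed)
volume = 10.0 * 5.0 * 0.1   # m^3: a 10 m x 5 m slab, 10 cm thick
m = density * volume        # mass of the slab, kg
C_th = m * c_p              # thermal capacitance, J/K
dT = 8.0                    # K: assumed day-night temperature swing
Q = C_th * dT               # heat absorbed/released over the swing, J
```

For these assumed numbers the slab buffers roughly 84 MJ per 8 K swing, which is the "flywheel" effect expressed in units of energy.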
In building services engineering, the use of dynamic simulation computational modelling software has allowed for the accurate calculation of the environmental performance within buildings with different constructions and for different annual climate data sets. This allows the architect or engineer to explore in detail the relationship between heavy-weight and light-weight constructions, as well as insulation levels, in reducing energy consumption for mechanical heating or cooling systems, or even removing the need for such systems altogether.

Properties required for good thermal mass

Use of thermal mass in different climates

Temperate and cold temperate climates

Solar-exposed thermal mass

Any form of thermal mass can be used. A concrete slab foundation, either left exposed or covered with conductive materials such as tiles, is one easy solution. Another novel method is to place the masonry facade of a timber-framed house on the inside ('reverse-brick veneer'). Thermal mass in this situation is best applied over a large area rather than in large volumes or thicknesses; 7.5–10 cm (3″–4″) is often adequate.

Thermal mass for limiting summertime overheating

Hot, arid climates (e.g. desert)

This is a classical use of thermal mass. Examples include adobe, rammed earth, or limestone block houses. Its function is highly dependent on marked diurnal temperature variations. The wall predominantly acts to retard heat transfer from the exterior to the interior during the day. The high volumetric heat capacity and thickness prevent thermal energy from reaching the inner surface. When temperatures fall at night, the walls re-radiate the thermal energy back into the night sky. In this application it is important for such walls to be massive to prevent heat transfer into the interior.

Hot humid climates (e.g.
sub-tropical and tropical)

Materials commonly used for thermal mass

Concrete, clay bricks and other forms of masonry: the thermal conductivity of concrete depends on its composition and curing technique. Concretes with stones are more thermally conductive than concretes with ash, perlite, fibers, and other insulating aggregates. Concrete's thermal mass properties save 5–8% in annual energy costs compared to softwood lumber.[5]

Seasonal energy storage

^ "Thermal mass | YourHome".
^ "Thermal Mass – Energy Savings Potential in Residential Buildings". Archived from the original on 2004-06-16. Retrieved 2018-12-12.
§ Splitting of semidirect products in terms of projections

Say we have an exact sequence that splits: 0 \rightarrow N \xrightarrow{i} G \xrightarrow{\pi} K \rightarrow 0 , with the section s: K \rightarrow G satisfying \pi(s(k)) = k for all k \in K . Then we can consider the map \pi_k \equiv s \circ \pi: G \rightarrow G . See that this first projects down to K , and then re-embeds the value in G . The cool thing is that this is in fact idempotent (so it's a projection!). Compute: \begin{aligned} \pi_k \circ \pi_k &= (s \circ \pi) \circ (s \circ \pi) \\ &= s \circ (\pi \circ s) \circ \pi \\ &= s \circ id \circ \pi \\ &= s \circ \pi = \pi_k \end{aligned} So this "projects onto the K value". We can then extract out the N component as \pi_n: G \rightarrow G; \pi_n(g) \equiv g \cdot \pi_k(g)^{-1} . This does land in N : since \pi(\pi_k(g)) = \pi(s(\pi(g))) = \pi(g) , we get \pi(\pi_n(g)) = \pi(g) \cdot \pi(g)^{-1} = e , so \pi_n(g) \in \ker \pi = N .
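These projections can be checked on a concrete example. The sketch below (Python; the encoding of the dihedral group of the square as \mathbb{Z}_4 \rtimes \mathbb{Z}_2 is my own illustration, not from the text) realizes \pi_k and \pi_n and lets us verify idempotence and the factorization g = \pi_n(g) \cdot \pi_k(g) by brute force:

```python
# Hypothetical concrete model: G = Z_4 ⋊ Z_2 (dihedral group of the square).
# Elements are pairs (n, k); the flip k acts on the rotation n by negation.
def mul(g, h):
    (n1, k1), (n2, k2) = g, h
    return ((n1 + (-1) ** k1 * n2) % 4, (k1 + k2) % 2)

def inv(g):
    n, k = g
    return ((-((-1) ** k) * n) % 4, k)   # satisfies mul(g, inv(g)) == (0, 0)

def pi(g):                               # quotient map G -> K, (n, k) |-> k
    return g[1]

def s(k):                                # section K -> G; pi(s(k)) == k
    return (0, k)

def pi_K(g):                             # s ∘ pi : G -> G, the projection onto K
    return s(pi(g))

def pi_N(g):                             # g * pi_K(g)^{-1}, the N component
    return mul(g, inv(pi_K(g)))
```

Running over all eight elements confirms pi_K ∘ pi_K == pi_K, that pi_N always lands in N = Z_4 × {0}, and that every g factors as its N part times its K part.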